ECS
***

Client
======

class ECS.Client

A low-level client representing Amazon EC2 Container Service (ECS).

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers. You can host your cluster on a serverless infrastructure that's managed by Amazon ECS by launching your services or tasks on Fargate. For more control, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) or External (on-premises) instances that you manage.

Amazon ECS makes it easy to launch and stop container-based applications with simple API calls. This makes it easy to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features.

You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. With Amazon ECS, you don't need to operate your own cluster management and configuration management systems, and you don't need to worry about scaling your management infrastructure.
import boto3

client = boto3.client('ecs')

These are the available methods:

* can_paginate
* close
* create_capacity_provider
* create_cluster
* create_service
* create_task_set
* delete_account_setting
* delete_attributes
* delete_capacity_provider
* delete_cluster
* delete_service
* delete_task_definitions
* delete_task_set
* deregister_container_instance
* deregister_task_definition
* describe_capacity_providers
* describe_clusters
* describe_container_instances
* describe_service_deployments
* describe_service_revisions
* describe_services
* describe_task_definition
* describe_task_sets
* describe_tasks
* discover_poll_endpoint
* execute_command
* get_paginator
* get_task_protection
* get_waiter
* list_account_settings
* list_attributes
* list_clusters
* list_container_instances
* list_service_deployments
* list_services
* list_services_by_namespace
* list_tags_for_resource
* list_task_definition_families
* list_task_definitions
* list_tasks
* put_account_setting
* put_account_setting_default
* put_attributes
* put_cluster_capacity_providers
* register_container_instance
* register_task_definition
* run_task
* start_task
* stop_service_deployment
* stop_task
* submit_attachment_state_changes
* submit_container_state_change
* submit_task_state_change
* tag_resource
* untag_resource
* update_capacity_provider
* update_cluster
* update_cluster_settings
* update_container_agent
* update_container_instances_state
* update_service
* update_service_primary_task_set
* update_task_protection
* update_task_set

Paginators
==========

Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide.
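Before reaching for paginators, note that the list operations can also be driven by hand via their "nextToken" fields. The helper below is a minimal sketch, not part of the documented API: the helper name is illustrative, and the `client` argument stands in for whatever `boto3.client('ecs')` returns.

```python
def list_clusters_manually(client):
    """Collect all cluster ARNs by following nextToken by hand.

    `client` is expected to behave like boto3.client('ecs'):
    list_clusters() returns a dict with 'clusterArns' and, while more
    pages remain, a 'nextToken' to pass into the next call.
    """
    arns = []
    kwargs = {}
    while True:
        page = client.list_clusters(**kwargs)
        arns.extend(page.get('clusterArns', []))
        token = page.get('nextToken')
        if not token:
            return arns
        # Resume where the previous page left off.
        kwargs['nextToken'] = token
```

The paginators described below encapsulate exactly this loop, so they are usually the better choice.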
The available paginators are:

* ListAccountSettings
* ListAttributes
* ListClusters
* ListContainerInstances
* ListServices
* ListServicesByNamespace
* ListTaskDefinitionFamilies
* ListTaskDefinitions
* ListTasks

Waiters
=======

Waiters are available on a client instance via the "get_waiter" method. For more detailed instructions and examples on the usage of waiters, see the waiters user guide.

The available waiters are:

* ServicesInactive
* ServicesStable
* TasksRunning
* TasksStopped

ECS / Waiter / TasksRunning

TasksRunning
************

class ECS.Waiter.TasksRunning

waiter = client.get_waiter('tasks_running')

wait(**kwargs)

Polls "ECS.Client.describe_tasks()" every 6 seconds until a successful state is reached. An error is raised after 100 failed checks.

See also: AWS API Documentation

**Request Syntax**

waiter.wait(
    cluster='string',
    tasks=[
        'string',
    ],
    include=[
        'TAGS',
    ],
    WaiterConfig={
        'Delay': 123,
        'MaxAttempts': 123
    }
)

Parameters:

* **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task or tasks to describe. If you do not specify a cluster, the default cluster is assumed. If you do not specify a value, the "default" cluster is used.
* **tasks** (*list*) -- **[REQUIRED]** A list of up to 100 task IDs or full ARN entries.
  * *(string) --*
* **include** (*list*) -- Specifies whether you want to see the resource tags for the task. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
  * *(string) --*
* **WaiterConfig** (*dict*) -- A dictionary that provides parameters to control waiting behavior.
  * **Delay** *(integer) --* The amount of time in seconds to wait between attempts. Default: 6
  * **MaxAttempts** *(integer) --* The maximum number of attempts to be made.
Default: 100

Returns: None

ECS / Waiter / ServicesStable

ServicesStable
**************

class ECS.Waiter.ServicesStable

waiter = client.get_waiter('services_stable')

wait(**kwargs)

Polls "ECS.Client.describe_services()" every 15 seconds until a successful state is reached. An error is raised after 40 failed checks.

See also: AWS API Documentation

**Request Syntax**

waiter.wait(
    cluster='string',
    services=[
        'string',
    ],
    include=[
        'TAGS',
    ],
    WaiterConfig={
        'Delay': 123,
        'MaxAttempts': 123
    }
)

Parameters:

* **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to describe. If you do not specify a cluster, the default cluster is assumed. This parameter is required if the service or services you are describing were launched in any cluster other than the default cluster.
* **services** (*list*) -- **[REQUIRED]** A list of services to describe. You may specify up to 10 services to describe in a single operation.
  * *(string) --*
* **include** (*list*) -- Determines whether you want to see the resource tags for the service. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
  * *(string) --*
* **WaiterConfig** (*dict*) -- A dictionary that provides parameters to control waiting behavior.
  * **Delay** *(integer) --* The amount of time in seconds to wait between attempts. Default: 15
  * **MaxAttempts** *(integer) --* The maximum number of attempts to be made. Default: 40

Returns: None

ECS / Waiter / TasksStopped

TasksStopped
************

class ECS.Waiter.TasksStopped

waiter = client.get_waiter('tasks_stopped')

wait(**kwargs)

Polls "ECS.Client.describe_tasks()" every 6 seconds until a successful state is reached. An error is raised after 100 failed checks.
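As a sketch of driving these waiters in application code, the helper below wraps the tasks_running waiter; the helper name and defaults are illustrative, and the `client` argument stands in for `boto3.client('ecs')`.

```python
def wait_for_tasks_running(client, task_arns, cluster='default',
                           delay=6, max_attempts=100):
    """Block until the given tasks reach a successful (RUNNING) state.

    Mirrors the request syntax above: `tasks` is required, `cluster`
    falls back to the "default" cluster, and WaiterConfig overrides
    the 6-second / 100-attempt polling defaults.
    """
    waiter = client.get_waiter('tasks_running')
    waiter.wait(
        cluster=cluster,
        tasks=task_arns,
        WaiterConfig={'Delay': delay, 'MaxAttempts': max_attempts},
    )
    # Worst-case number of seconds spent polling before an error is raised.
    return delay * max_attempts
```

If the tasks never reach the desired state, botocore raises a WaiterError after `max_attempts` failed checks; the same pattern applies to the services_stable, tasks_stopped, and services_inactive waiters.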
See also: AWS API Documentation

**Request Syntax**

waiter.wait(
    cluster='string',
    tasks=[
        'string',
    ],
    include=[
        'TAGS',
    ],
    WaiterConfig={
        'Delay': 123,
        'MaxAttempts': 123
    }
)

Parameters:

* **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task or tasks to describe. If you do not specify a cluster, the default cluster is assumed. If you do not specify a value, the "default" cluster is used.
* **tasks** (*list*) -- **[REQUIRED]** A list of up to 100 task IDs or full ARN entries.
  * *(string) --*
* **include** (*list*) -- Specifies whether you want to see the resource tags for the task. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
  * *(string) --*
* **WaiterConfig** (*dict*) -- A dictionary that provides parameters to control waiting behavior.
  * **Delay** *(integer) --* The amount of time in seconds to wait between attempts. Default: 6
  * **MaxAttempts** *(integer) --* The maximum number of attempts to be made. Default: 100

Returns: None

ECS / Waiter / ServicesInactive

ServicesInactive
****************

class ECS.Waiter.ServicesInactive

waiter = client.get_waiter('services_inactive')

wait(**kwargs)

Polls "ECS.Client.describe_services()" every 15 seconds until a successful state is reached. An error is raised after 40 failed checks.

See also: AWS API Documentation

**Request Syntax**

waiter.wait(
    cluster='string',
    services=[
        'string',
    ],
    include=[
        'TAGS',
    ],
    WaiterConfig={
        'Delay': 123,
        'MaxAttempts': 123
    }
)

Parameters:

* **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to describe. If you do not specify a cluster, the default cluster is assumed. This parameter is required if the service or services you are describing were launched in any cluster other than the default cluster.
* **services** (*list*) -- **[REQUIRED]** A list of services to describe.
You may specify up to 10 services to describe in a single operation. * *(string) --* * **include** (*list*) -- Determines whether you want to see the resource tags for the service. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response. * *(string) --* * **WaiterConfig** (*dict*) -- A dictionary that provides parameters to control waiting behavior. * **Delay** *(integer) --* The amount of time in seconds to wait between attempts. Default: 15 * **MaxAttempts** *(integer) --* The maximum number of attempts to be made. Default: 40 Returns: None ECS / Paginator / ListTaskDefinitionFamilies ListTaskDefinitionFamilies ************************** class ECS.Paginator.ListTaskDefinitionFamilies paginator = client.get_paginator('list_task_definition_families') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_task_definition_families()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( familyPrefix='string', status='ACTIVE'|'INACTIVE'|'ALL', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **familyPrefix** (*string*) -- The "familyPrefix" is a string that's used to filter the results of "ListTaskDefinitionFamilies". If you specify a "familyPrefix", only task definition family names that begin with the "familyPrefix" string are returned. * **status** (*string*) -- The task definition family status to filter the "ListTaskDefinitionFamilies" results with. By default, both "ACTIVE" and "INACTIVE" task definition families are listed. If this parameter is set to "ACTIVE", only task definition families that have an "ACTIVE" task definition revision are returned. If this parameter is set to "INACTIVE", only task definition families that do not have any "ACTIVE" task definition revisions are returned. 
If you paginate the resulting output, be sure to keep the "status" value constant in each subsequent request. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'families': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **families** *(list) --* The list of task definition family names that match the "ListTaskDefinitionFamilies" request. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListServicesByNamespace ListServicesByNamespace *********************** class ECS.Paginator.ListServicesByNamespace paginator = client.get_paginator('list_services_by_namespace') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_services_by_namespace()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( namespace='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **namespace** (*string*) -- **[REQUIRED]** The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace to list the services in. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. 
For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'serviceArns': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **serviceArns** *(list) --* The list of full ARN entries for each service that's associated with the specified namespace. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListAccountSettings ListAccountSettings ******************* class ECS.Paginator.ListAccountSettings paginator = client.get_paginator('list_account_settings') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_account_settings()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( name='serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', value='string', principalArn='string', effectiveSettings=True|False, PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **name** (*string*) -- The name of the account setting you want to list the settings for. * **value** (*string*) -- The value of the account settings to filter results with. You must also specify an account setting name to use this parameter. 
* **principalArn** (*string*) -- The ARN of the principal, which can be a user, role, or the root user. If this field is omitted, the account settings are listed only for the authenticated user. In order to use this parameter, you must be the root user, or the principal. Note: Federated users assume the account setting of the root user and can't have explicit account settings set for them. * **effectiveSettings** (*boolean*) -- Determines whether to return the effective settings. If "true", the account settings for the root user or the default setting for the "principalArn" are returned. If "false", the account settings for the "principalArn" are returned if they're set. Otherwise, no account settings are returned. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'settings': [ { 'name': 'serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', 'value': 'string', 'principalArn': 'string', 'type': 'user'|'aws_managed' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **settings** *(list) --* The account settings for the resource. * *(dict) --* The current account setting for a resource. * **name** *(string) --* The Amazon ECS resource name. * **value** *(string) --* Determines whether the account setting is on or off for the specified resource. 
* **principalArn** *(string) --* The ARN of the principal. It can be a user, role, or the root user. If this field is omitted, the authenticated user is assumed. * **type** *(string) --* Indicates whether Amazon Web Services manages the account setting, or if the user manages it. "aws_managed" account settings are read-only, because Amazon Web Services manages these settings on the customer's behalf. Currently, the "guardDutyActivate" account setting is the only one Amazon Web Services manages. * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListServices ListServices ************ class ECS.Paginator.ListServices paginator = client.get_paginator('list_services') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_services()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( cluster='string', launchType='EC2'|'FARGATE'|'EXTERNAL', schedulingStrategy='REPLICA'|'DAEMON', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to use when filtering the "ListServices" results. If you do not specify a cluster, the default cluster is assumed. * **launchType** (*string*) -- The launch type to use when filtering the "ListServices" results. * **schedulingStrategy** (*string*) -- The scheduling strategy to use when filtering the "ListServices" results. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating.
This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'serviceArns': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **serviceArns** *(list) --* The list of full ARN entries for each service that's associated with the specified cluster. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListAttributes ListAttributes ************** class ECS.Paginator.ListAttributes paginator = client.get_paginator('list_attributes') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_attributes()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( cluster='string', targetType='container-instance', attributeName='string', attributeValue='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to list attributes. If you do not specify a cluster, the default cluster is assumed. * **targetType** (*string*) -- **[REQUIRED]** The type of the target to list attributes with. * **attributeName** (*string*) -- The name of the attribute to filter the results with. * **attributeValue** (*string*) -- The value of the attribute to filter results with. You must also specify an attribute name to use this parameter. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. 
Return type: dict Returns: **Response Syntax** { 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **attributes** *(list) --* A list of attribute objects that meet the criteria of the request. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListTaskDefinitions ListTaskDefinitions ******************* class ECS.Paginator.ListTaskDefinitions paginator = client.get_paginator('list_task_definitions') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_task_definitions()".
See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( familyPrefix='string', status='ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', sort='ASC'|'DESC', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **familyPrefix** (*string*) -- The full family name to filter the "ListTaskDefinitions" results with. Specifying a "familyPrefix" limits the listed task definitions to task definition revisions that belong to that family. * **status** (*string*) -- The task definition status to filter the "ListTaskDefinitions" results with. By default, only "ACTIVE" task definitions are listed. By setting this parameter to "INACTIVE", you can view task definitions that are "INACTIVE" as long as an active task or service still references them. If you paginate the resulting output, be sure to keep the "status" value constant in each subsequent request. * **sort** (*string*) -- The order to sort the results in. Valid values are "ASC" and "DESC". By default ("ASC"), task definitions are listed lexicographically by family name and in ascending numerical order by revision, so that the newest task definitions in a family are listed last. Setting this parameter to "DESC" reverses the sort order on family name and revision, so that the newest task definitions in a family are listed first. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.
Return type: dict Returns: **Response Syntax** { 'taskDefinitionArns': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **taskDefinitionArns** *(list) --* The list of task definition Amazon Resource Name (ARN) entries for the "ListTaskDefinitions" request. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListClusters ListClusters ************ class ECS.Paginator.ListClusters paginator = client.get_paginator('list_clusters') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_clusters()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'clusterArns': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **clusterArns** *(list) --* The list of full Amazon Resource Name (ARN) entries for each cluster that's associated with your account. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListTasks ListTasks ********* class ECS.Paginator.ListTasks paginator = client.get_paginator('list_tasks') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_tasks()".
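The ListClusters paginator documented above accepts a PaginationConfig; as a hedged sketch (the helper name and defaults are illustrative, and the `client` argument stands in for `boto3.client('ecs')`):

```python
def list_cluster_arns(client, page_size=50, max_items=None):
    """Collect cluster ARNs across all pages of ListClusters.

    PageSize caps each underlying API call; MaxItems, when given,
    caps the total number of items returned by the iterator.
    """
    config = {'PageSize': page_size}
    if max_items is not None:
        config['MaxItems'] = max_items
    paginator = client.get_paginator('list_clusters')
    arns = []
    for page in paginator.paginate(PaginationConfig=config):
        arns.extend(page.get('clusterArns', []))
    return arns
```

The iterator yields one response dict per page; extending a list with each page's 'clusterArns' flattens the result.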
See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( cluster='string', containerInstance='string', family='string', startedBy='string', serviceName='string', desiredStatus='RUNNING'|'PENDING'|'STOPPED', launchType='EC2'|'FARGATE'|'EXTERNAL', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to use when filtering the "ListTasks" results. If you do not specify a cluster, the default cluster is assumed. * **containerInstance** (*string*) -- The container instance ID or full ARN of the container instance to use when filtering the "ListTasks" results. Specifying a "containerInstance" limits the results to tasks that belong to that container instance. * **family** (*string*) -- The name of the task definition family to use when filtering the "ListTasks" results. Specifying a "family" limits the results to tasks that belong to that family. * **startedBy** (*string*) -- The "startedBy" value to filter the task results with. Specifying a "startedBy" value limits the results to tasks that were started with that value. When you specify "startedBy" as the filter, it must be the only filter that you use. * **serviceName** (*string*) -- The name of the service to use when filtering the "ListTasks" results. Specifying a "serviceName" limits the results to tasks that belong to that service. * **desiredStatus** (*string*) -- The task desired status to use when filtering the "ListTasks" results. Specifying a "desiredStatus" of "STOPPED" limits the results to tasks that Amazon ECS has set the desired status to "STOPPED". This can be useful for debugging tasks that aren't starting properly or have died or finished. The default status filter is "RUNNING", which shows tasks that Amazon ECS has set the desired status to "RUNNING". 
Note: Although you can filter results based on a desired status of "PENDING", this doesn't return any results. Amazon ECS never sets the desired status of a task to that value (only a task's "lastStatus" may have a value of "PENDING"). * **launchType** (*string*) -- The launch type to use when filtering the "ListTasks" results. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'taskArns': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **taskArns** *(list) --* The list of task ARN entries for the "ListTasks" request. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Paginator / ListContainerInstances ListContainerInstances ********************** class ECS.Paginator.ListContainerInstances paginator = client.get_paginator('list_container_instances') paginate(**kwargs) Creates an iterator that will paginate through responses from "ECS.Client.list_container_instances()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( cluster='string', filter='string', status='ACTIVE'|'DRAINING'|'REGISTERING'|'DEREGISTERING'|'REGISTRATION_FAILED', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the container instances to list. If you do not specify a cluster, the default cluster is assumed. 
* **filter** (*string*) -- You can filter the results of a "ListContainerInstances" operation with cluster query language statements. For more information, see Cluster Query Language in the *Amazon Elastic Container Service Developer Guide*. * **status** (*string*) -- Filters the container instances by status. For example, if you specify the "DRAINING" status, the results include only container instances that have been set to "DRAINING" using UpdateContainerInstancesState. If you don't specify this parameter, the default is to include container instances set to all states other than "INACTIVE". * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'containerInstanceArns': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **containerInstanceArns** *(list) --* The list of container instances with full ARN entries for each container instance associated with the specified cluster. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. ECS / Client / get_paginator get_paginator ************* ECS.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". 
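Following the "create_foo" naming convention just described, a generic helper can guard "get_paginator" with "can_paginate" and fall back to a single call. This is a sketch, not part of the client API; the `result_key` argument names the list field in each response (for example 'serviceArns' for list_services) and varies per operation.

```python
def list_all(client, operation_name, result_key, **kwargs):
    """Exhaust a list operation, using a paginator when one is available.

    `client` stands in for boto3.client('ecs'); kwargs are passed
    through to the operation (e.g. cluster='default').
    """
    if client.can_paginate(operation_name):
        items = []
        for page in client.get_paginator(operation_name).paginate(**kwargs):
            items.extend(page.get(result_key, []))
        return items
    # Fall back to a single, unpaginated call.
    return getattr(client, operation_name)(**kwargs).get(result_key, [])
```

Note that calling get_paginator for a non-pageable operation would raise OperationNotPageableError, which is why the can_paginate check comes first.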
Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. ECS / Client / create_cluster create_cluster ************** ECS.Client.create_cluster(**kwargs) Creates a new Amazon ECS cluster. By default, your account receives a "default" cluster when you launch your first container instance. However, you can create your own cluster with a unique name. Note: When you call the CreateCluster API operation, Amazon ECS attempts to create the Amazon ECS service-linked role for your account. This is so that it can manage required resources in other Amazon Web Services services on your behalf. However, if the user that makes the call doesn't have permissions to create the service-linked role, it isn't created. For more information, see Using service-linked roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.create_cluster( clusterName='string', tags=[ { 'key': 'string', 'value': 'string' }, ], settings=[ { 'name': 'containerInsights', 'value': 'string' }, ], configuration={ 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, capacityProviders=[ 'string', ], defaultCapacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], serviceConnectDefaults={ 'namespace': 'string' } ) Parameters: * **clusterName** (*string*) -- The name of your cluster. If you don't specify a name for your cluster, you create a cluster that's named "default". 
Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. * **tags** (*list*) -- The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** (*list*) -- The setting to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster. If this value is specified, it overrides the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays these critical performance data in curated dashboards removing the heavy lifting in observability set-up. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights" . * **value** *(string) --* The value to set for the cluster setting. 
The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **configuration** (*dict*) -- The "execute" command configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. * **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. 
* **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **capacityProviders** (*list*) -- The short name of one or more capacity providers to associate with the cluster. A capacity provider must be associated with a cluster before it can be included as part of the default capacity provider strategy of the cluster or used in a capacity provider strategy when calling the CreateService or RunTask actions. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must be created but not associated with another cluster. 
New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used. The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created. * *(string) --* * **defaultCapacityProviderStrategy** (*list*) -- The capacity provider strategy to set as the default for the cluster. After a default capacity provider strategy is set for a cluster, when you call the CreateService or RunTask APIs with no capacity provider strategy or launch type specified, the default capacity provider strategy for the cluster is used. If a default capacity provider strategy isn't defined for a cluster when it was created, it can be defined later with the PutClusterCapacityProviders API operation. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateService APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. 
With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* **[REQUIRED]** The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. 
Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **serviceConnectDefaults** (*dict*) -- Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **namespace** *(string) --* **[REQUIRED]** The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace that's used when you create a service and don't specify a Service Connect configuration. The namespace name can include up to 1024 characters. The name is case-sensitive. The name can't include greater than (>), less than (<), double quotation marks ("), or slash (/). If you enter an existing namespace name or ARN, then that namespace will be used. Any namespace type is supported. The namespace must be in this account and this Amazon Web Services Region. If you enter a new name, a Cloud Map namespace will be created. Amazon ECS creates a Cloud Map namespace with the "API calls" method of instance discovery only. This instance discovery method is the "HTTP" namespace type in the Command Line Interface. 
Other types of instance discovery aren't used by Service Connect. If you update the cluster with an empty string "" for the namespace name, the cluster configuration for Service Connect is removed. Note that the namespace will remain in Cloud Map and must be deleted separately. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. Return type: dict Returns: **Response Syntax** { 'cluster': { 'clusterArn': 'string', 'clusterName': 'string', 'configuration': { 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, 'status': 'string', 'registeredContainerInstancesCount': 123, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'activeServicesCount': 123, 'statistics': [ { 'name': 'string', 'value': 'string' }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'settings': [ { 'name': 'containerInsights', 'value': 'string' }, ], 'capacityProviders': [ 'string', ], 'defaultCapacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attachmentsStatus': 'string', 'serviceConnectDefaults': { 'namespace': 'string' } } } **Response Structure** * *(dict) --* * **cluster** *(dict) --* The full description of your new cluster. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **clusterName** *(string) --* A user-generated string that you use to identify your cluster. 
* **configuration** *(dict) --* The execute command and managed storage configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. * **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. 
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **status** *(string) --* The status of the cluster. The following are the possible states that are returned. ACTIVE The cluster is ready to accept tasks and if applicable you can register container instances with the cluster. PROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created. DEPROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an "INACTIVE" status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. 
We don't recommend that you rely on "INACTIVE" clusters persisting. * **registeredContainerInstancesCount** *(integer) --* The number of container instances registered into the cluster. This includes container instances in both "ACTIVE" and "DRAINING" status. * **runningTasksCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingTasksCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **activeServicesCount** *(integer) --* The number of services that are running on the cluster in an "ACTIVE" state. You can view these services with ListServices. * **statistics** *(list) --* Additional information about your clusters that are separated by launch type. They include the following: * runningEC2TasksCount * runningFargateTasksCount * pendingEC2TasksCount * pendingFargateTasksCount * activeEC2ServiceCount * activeFargateServiceCount * drainingEC2ServiceCount * drainingFargateServiceCount * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** *(list) --* The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is on or off for a cluster. 
* *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays these critical performance data in curated dashboards removing the heavy lifting in observability set-up. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights" . * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **capacityProviders** *(list) --* The capacity providers associated with the cluster. * *(string) --* * **defaultCapacityProviderStrategy** *(list) --* The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateService APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **attachments** *(list) --* The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. 
For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attachmentsStatus** *(string) --* The status of the capacity providers associated with the cluster. The following are the states that are returned. UPDATE_IN_PROGRESS The available capacity providers for the cluster are updating. UPDATE_COMPLETE The capacity providers have successfully updated. UPDATE_FAILED The capacity provider updates failed. * **serviceConnectDefaults** *(dict) --* Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. 
* **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.NamespaceNotFoundException" **Examples** This example creates a cluster in your default region. response = client.create_cluster( clusterName='my_cluster', ) print(response) Expected Output: { 'cluster': { 'activeServicesCount': 0, 'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/my_cluster', 'clusterName': 'my_cluster', 'pendingTasksCount': 0, 'registeredContainerInstancesCount': 0, 'runningTasksCount': 0, 'status': 'ACTIVE', }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / delete_account_setting delete_account_setting ********************** ECS.Client.delete_account_setting(**kwargs) Disables an account setting for a specified user, role, or the root user for an account. See also: AWS API Documentation **Request Syntax** response = client.delete_account_setting( name='serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', principalArn='string' ) Parameters: * **name** (*string*) -- **[REQUIRED]** The resource name to disable the account setting for. If "serviceLongArnFormat" is specified, the ARN for your Amazon ECS services is affected. If "taskLongArnFormat" is specified, the ARN and resource ID for your Amazon ECS tasks is affected. If "containerInstanceLongArnFormat" is specified, the ARN and resource ID for your Amazon ECS container instances is affected. If "awsvpcTrunking" is specified, the ENI limit for your Amazon ECS container instances is affected. 
* **principalArn** (*string*) -- The Amazon Resource Name (ARN) of the principal. It can be a user, role, or the root user. If you specify the root user, it disables the account setting for all users, roles, and the root user of the account unless a user or role explicitly overrides these settings. If this field is omitted, the setting is changed only for the authenticated user. To use this parameter, you must be the root user or the principal. Return type: dict Returns: **Response Syntax** { 'setting': { 'name': 'serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', 'value': 'string', 'principalArn': 'string', 'type': 'user'|'aws_managed' } } **Response Structure** * *(dict) --* * **setting** *(dict) --* The account setting for the specified principal ARN. * **name** *(string) --* The Amazon ECS resource name. * **value** *(string) --* Determines whether the account setting is on or off for the specified resource. * **principalArn** *(string) --* The ARN of the principal. It can be a user, role, or the root user. If this field is omitted, the authenticated user is assumed. * **type** *(string) --* Indicates whether Amazon Web Services manages the account setting, or if the user manages it. "aws_managed" account settings are read-only, as Amazon Web Services manages these settings on the customer's behalf. Currently, the "guardDutyActivate" account setting is the only one Amazon Web Services manages. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example deletes the account setting for your user for the specified resource type. 
response = client.delete_account_setting( name='serviceLongArnFormat', ) print(response) Expected Output: { 'setting': { 'name': 'serviceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, 'ResponseMetadata': { '...': '...', }, } This example deletes the account setting for a specific IAM user or IAM role for the specified resource type. Only the root user can view or modify the account settings for another user. response = client.delete_account_setting( name='containerInstanceLongArnFormat', principalArn='arn:aws:iam:::user/principalName', ) print(response) Expected Output: { 'setting': { 'name': 'containerInstanceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / stop_service_deployment stop_service_deployment *********************** ECS.Client.stop_service_deployment(**kwargs) Stops an ongoing service deployment. The following stop types are available: * ROLLBACK - This option rolls back the service deployment to the previous service revision. You can use this option even if you didn't configure the service deployment for the rollback option. For more information, see Stopping Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.stop_service_deployment( serviceDeploymentArn='string', stopType='ABORT'|'ROLLBACK' ) Parameters: * **serviceDeploymentArn** (*string*) -- **[REQUIRED]** The ARN of the service deployment that you want to stop. * **stopType** (*string*) -- How you want Amazon ECS to stop the service. The only valid value is "ROLLBACK". Return type: dict Returns: **Response Syntax** { 'serviceDeploymentArn': 'string' } **Response Structure** * *(dict) --* * **serviceDeploymentArn** *(string) --* The ARN of the stopped service deployment. 
**Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ConflictException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ServiceDeploymentNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / create_task_set create_task_set *************** ECS.Client.create_task_set(**kwargs) Create a task set in the specified cluster and service. This is used when a service uses the "EXTERNAL" deployment controller type. For more information, see Amazon ECS deployment types in the *Amazon Elastic Container Service Developer Guide*. Note: On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition. For information about the maximum number of task sets and other quotas, see Amazon ECS service quotas in the *Amazon Elastic Container Service Developer Guide*. 
See also: AWS API Documentation **Request Syntax** response = client.create_task_set( service='string', cluster='string', externalId='string', taskDefinition='string', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], launchType='EC2'|'FARGATE'|'EXTERNAL', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], platformVersion='string', scale={ 'value': 123.0, 'unit': 'PERCENT' }, clientToken='string', tags=[ { 'key': 'string', 'value': 'string' }, ] ) Parameters: * **service** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the service to create the task set in. * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to create the task set in. * **externalId** (*string*) -- An optional non-unique tag that identifies this task set in external systems. If the task set is associated with a service discovery registry, the tasks in this task set will have the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute set to the provided value. * **taskDefinition** (*string*) -- **[REQUIRED]** The task definition for the tasks in the task set to use. If a revision isn't specified, the latest "ACTIVE" revision is used. * **networkConfiguration** (*dict*) -- An object representing the network configuration for a task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. 
Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* **[REQUIRED]** The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** (*list*) -- A load balancer object representing the load balancer to use with the task set. The supported load balancer types are either an Application Load Balancer or a Network Load Balancer. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. 
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. 
Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** (*list*) -- The details of the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. 
If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **launchType** (*string*) -- The launch type that new tasks in the task set use. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. If a "launchType" is specified, the "capacityProviderStrategy" parameter must be omitted. * **capacityProviderStrategy** (*list*) -- The capacity provider strategy to use for the task set. A capacity provider strategy consists of one or more capacity providers along with the "base" and "weight" to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an "ACTIVE" or "UPDATING" status can be used. If a "capacityProviderStrategy" is specified, the "launchType" parameter must be omitted. 
If no "capacityProviderStrategy" or "launchType" is specified, the "defaultCapacityProviderStrategy" for the cluster is used. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used. The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. 
"FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* **[REQUIRED]** The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** (*string*) -- The platform version that the tasks in the task set use. 
A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the "LATEST" platform version is used. * **scale** (*dict*) -- A floating-point percentage of the desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **clientToken** (*string*) -- An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 36 ASCII characters in the range of 33-126 (inclusive) are allowed. * **tags** (*list*) -- The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. When a service is deleted, the tags are deleted. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. 
Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). 
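The tag restrictions above can be enforced client-side before calling "create_task_set", which turns an "InvalidParameterException" round trip into an immediate local error. This is a minimal sketch under stated assumptions: "validate_tags" is a hypothetical helper name, and the limits it encodes (50 tags per resource, 128-character keys, 256-character values, unique keys, reserved "aws:" prefix) are the ones listed in this section.

```python
def validate_tags(tags):
    """Check a list of {'key': ..., 'value': ...} dicts against the
    documented ECS tag restrictions before sending them to the API."""
    if len(tags) > 50:
        raise ValueError('a resource may carry at most 50 tags')
    seen_keys = set()
    for tag in tags:
        key = tag['key']
        value = tag.get('value', '')  # the value part of a tag is optional
        if key in seen_keys:
            raise ValueError(f'duplicate tag key: {key!r}')
        seen_keys.add(key)
        if len(key) > 128:
            raise ValueError(f'tag key exceeds 128 characters: {key!r}')
        if len(value) > 256:
            raise ValueError(f'tag value exceeds 256 characters for key {key!r}')
        # The "aws:" prefix, in any casing, is reserved for Amazon Web Services.
        if key.lower().startswith('aws:') or value.lower().startswith('aws:'):
            raise ValueError(f'tag {key!r} uses the reserved "aws:" prefix')
    return tags

tags = validate_tags([
    {'key': 'team', 'value': 'payments'},
    {'key': 'stage', 'value': 'canary'},
])
```

Because the helper returns the list unchanged on success, it can wrap the argument in place: "client.create_task_set(..., tags=validate_tags(tags))".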
Return type: dict Returns: **Response Syntax** { 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } } **Response Structure** * *(dict) --* * **taskSet** *(dict) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. A task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. 
* **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. 
* **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. 
A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. 
* **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. 
For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. 
For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. 
* **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". 
* The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. 
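The "STEADY_STATE" conditions listed above can be checked mechanically against a task set returned by the API. A minimal sketch; the helper name is our own, the field names ("runningCount", "computedDesiredCount", "pendingCount", "stabilityStatus") come from the task-set structure, and the draining and health-check conditions are only observable through "stabilityStatus" itself:

```python
def is_steady_state(task_set):
    """Return True when a task-set dict meets the documented STEADY_STATE checks.

    `task_set` is shaped like one entry of the `taskSets` list returned by
    describe_task_sets. The load-balancer/health-check and DRAINING conditions
    are rolled up by the service into `stabilityStatus`, so this sketch checks
    the task counts plus the reported status.
    """
    return (
        task_set.get("runningCount") == task_set.get("computedDesiredCount")
        and task_set.get("pendingCount") == 0
        and task_set.get("stabilityStatus") == "STEADY_STATE"
    )
```

Passing each entry of the "taskSets" list from "describe_task_sets" through a helper like this gives a quick per-task-set readiness check.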
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.PlatformUnknownException" * "ECS.Client.exceptions.PlatformTaskDefinitionIncompatibilityException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.ServiceNotActiveException" * "ECS.Client.exceptions.NamespaceNotFoundException" ECS / Client / update_container_agent update_container_agent ********************** ECS.Client.update_container_agent(**kwargs) Updates the Amazon ECS container agent on a specified container instance. Updating the Amazon ECS container agent doesn't interrupt running tasks or services on the container instance. The process for updating the agent differs depending on whether your container instance was launched with the Amazon ECS-optimized AMI or another operating system. Note: The "UpdateContainerAgent" API isn't supported for container instances using the Amazon ECS-optimized Amazon Linux 2 (arm64) AMI. To update the container agent, you can update the "ecs-init" package. This updates the agent. For more information, see Updating the Amazon ECS container agent in the *Amazon Elastic Container Service Developer Guide*. Note: Agent updates with the "UpdateContainerAgent" API operation do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters. The "UpdateContainerAgent" API requires an Amazon ECS-optimized AMI or Amazon Linux AMI with the "ecs-init" service installed and running. For help updating the Amazon ECS container agent on other operating systems, see Manually updating the Amazon ECS container agent in the *Amazon Elastic Container Service Developer Guide*. 
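Because "NoUpdateAvailableException" and "UpdateInProgressException" describe routine outcomes rather than failures, callers typically catch them. A hedged sketch; the wrapper function and its return-value convention are our own, not part of the API:

```python
def request_agent_update(ecs, container_instance, cluster="default"):
    """Request an agent update on one container instance.

    `ecs` is a boto3 ECS client. Returns the reported agentUpdateStatus,
    mapping the two "nothing to do" exceptions onto equivalent statuses.
    """
    try:
        resp = ecs.update_container_agent(
            cluster=cluster, containerInstance=container_instance
        )
        return resp["containerInstance"].get("agentUpdateStatus")
    except ecs.exceptions.NoUpdateAvailableException:
        return "UPDATED"  # agent is already at the latest version
    except ecs.exceptions.UpdateInProgressException:
        return "UPDATING"  # an update was already requested for this instance
```

Create the client with "boto3.client('ecs')" and pass it in; the modeled exception classes are available on "ecs.exceptions".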
See also: AWS API Documentation **Request Syntax** response = client.update_container_agent( cluster='string', containerInstance='string' ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that your container instance is running on. If you do not specify a cluster, the default cluster is assumed. * **containerInstance** (*string*) -- **[REQUIRED]** The container instance ID or full ARN entries for the container instance where you would like to update the Amazon ECS container agent. Return type: dict Returns: **Response Syntax** { 'containerInstance': { 'containerInstanceArn': 'string', 'ec2InstanceId': 'string', 'capacityProviderName': 'string', 'version': 123, 'versionInfo': { 'agentVersion': 'string', 'agentHash': 'string', 'dockerVersion': 'string' }, 'remainingResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'registeredResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'status': 'string', 'statusReason': 'string', 'agentConnected': True|False, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED', 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'registeredAt': datetime(2015, 1, 1), 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'healthStatus': { 'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'details': [ { 'type': 'CONTAINER_RUNTIME', 'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'lastUpdated': datetime(2015, 1, 1), 'lastStatusChange': datetime(2015, 1, 1) }, ] } } } **Response 
Structure** * *(dict) --* * **containerInstance** *(dict) --* The container instance that the container agent was updated for. * **containerInstanceArn** *(string) --* The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **ec2InstanceId** *(string) --* The ID of the container instance. For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID. * **capacityProviderName** *(string) --* The capacity provider that's associated with the container instance. * **version** *(integer) --* The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the "detail" object) to verify that the version in your event stream is current. * **versionInfo** *(dict) --* The version information for the Amazon ECS container agent and Docker daemon running on the container instance. * **agentVersion** *(string) --* The version number of the Amazon ECS container agent. * **agentHash** *(string) --* The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository. * **dockerVersion** *(string) --* The Docker version that's running on the container instance. * **remainingResources** *(list) --* For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. 
For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the "host" or "bridge" network mode). Any port that's not specified here is available for new tasks. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **registeredResources** *(list) --* For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". 
* **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **status** *(string) --* The status of the container instance. The valid values are "REGISTERING", "REGISTRATION_FAILED", "ACTIVE", "INACTIVE", "DEREGISTERING", or "DRAINING". If your account has opted in to the "awsvpcTrunking" account setting, then any newly registered container instance will transition to a "REGISTERING" status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a "REGISTRATION_FAILED" status. You can describe the container instance and see the reason for failure in the "statusReason" parameter. Once the container instance is terminated, the instance transitions to a "DEREGISTERING" status while the trunk elastic network interface is deprovisioned. The instance then transitions to an "INACTIVE" status. The "ACTIVE" status indicates that the container instance can accept tasks. The "DRAINING" status indicates that new tasks aren't placed on the container instance and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the *Amazon Elastic Container Service Developer Guide*. * **statusReason** *(string) --* The reason that the container instance reached its current status. * **agentConnected** *(boolean) --* This parameter returns "true" if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns "false". 
Only instances connected to an agent can accept task placement requests. * **runningTasksCount** *(integer) --* The number of tasks on the container instance that have a desired status ( "desiredStatus") of "RUNNING". * **pendingTasksCount** *(integer) --* The number of tasks on the container instance that are in the "PENDING" status. * **agentUpdateStatus** *(string) --* The status of the most recent agent update. If an update wasn't ever requested, this value is "NULL". * **attributes** *(list) --* The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **registeredAt** *(datetime) --* The Unix timestamp for the time when the container instance was registered. 
* **attachments** *(list) --* The resources attached to a container instance, such as an elastic network interface. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **healthStatus** *(dict) --* An object representing the health status of the container instance. 
* **overallStatus** *(string) --* The overall health status of the container instance. This is an aggregate status of all container instance health checks. * **details** *(list) --* An array of objects representing the details of the container instance health status. * *(dict) --* An object representing the result of a container instance health status check. * **type** *(string) --* The type of container instance health status that was verified. * **status** *(string) --* The container instance health status. * **lastUpdated** *(datetime) --* The Unix timestamp for when the container instance health status was last updated. * **lastStatusChange** *(datetime) --* The Unix timestamp for when the container instance health status last changed. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UpdateInProgressException" * "ECS.Client.exceptions.NoUpdateAvailableException" * "ECS.Client.exceptions.MissingVersionException" ECS / Client / delete_cluster delete_cluster ************** ECS.Client.delete_cluster(**kwargs) Deletes the specified cluster. The cluster transitions to the "INACTIVE" state. Clusters with an "INACTIVE" status might remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on "INACTIVE" clusters persisting. You must deregister all container instances from this cluster before you may delete it. You can list the container instances in a cluster with ListContainerInstances and deregister them with DeregisterContainerInstance. See also: AWS API Documentation **Request Syntax** response = client.delete_cluster( cluster='string' ) Parameters: **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster to delete. 
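The documented teardown order can be scripted: list the cluster's container instances, deregister each one, then delete the cluster. A sketch, assuming force-deregistration is acceptable for your instances; the helper name is illustrative:

```python
def drain_and_delete_cluster(ecs, cluster):
    """Deregister all container instances from `cluster`, then delete it.

    `ecs` is a boto3 ECS client. delete_cluster fails while instances are
    still registered, so every page of list_container_instances is walked
    first and each instance is deregistered.
    """
    paginator = ecs.get_paginator("list_container_instances")
    for page in paginator.paginate(cluster=cluster):
        for arn in page["containerInstanceArns"]:
            ecs.deregister_container_instance(
                cluster=cluster, containerInstance=arn, force=True
            )
    return ecs.delete_cluster(cluster=cluster)
```

"force=True" deregisters an instance even if it still has running tasks (those tasks keep running until the underlying instance stops); drop it if you would rather have the call fail in that situation.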
Return type: dict Returns: **Response Syntax** { 'cluster': { 'clusterArn': 'string', 'clusterName': 'string', 'configuration': { 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, 'status': 'string', 'registeredContainerInstancesCount': 123, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'activeServicesCount': 123, 'statistics': [ { 'name': 'string', 'value': 'string' }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'settings': [ { 'name': 'containerInsights', 'value': 'string' }, ], 'capacityProviders': [ 'string', ], 'defaultCapacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attachmentsStatus': 'string', 'serviceConnectDefaults': { 'namespace': 'string' } } } **Response Structure** * *(dict) --* * **cluster** *(dict) --* The full description of the deleted cluster. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **clusterName** *(string) --* A user-generated string that you use to identify your cluster. * **configuration** *(dict) --* The execute command and managed storage configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. 
* **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. 
For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **status** *(string) --* The status of the cluster. The following are the possible states that are returned. ACTIVE The cluster is ready to accept tasks and if applicable you can register container instances with the cluster. PROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created. DEPROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an "INACTIVE" status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on "INACTIVE" clusters persisting. * **registeredContainerInstancesCount** *(integer) --* The number of container instances registered into the cluster. This includes container instances in both "ACTIVE" and "DRAINING" status. * **runningTasksCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. 
* **pendingTasksCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **activeServicesCount** *(integer) --* The number of services that are running on the cluster in an "ACTIVE" state. You can view these services with ListServices. * **statistics** *(list) --* Additional information about your clusters that are separated by launch type. They include the following: * runningEC2TasksCount * runningFargateTasksCount * pendingEC2TasksCount * pendingFargateTasksCount * activeEC2ServiceCount * activeFargateServiceCount * drainingEC2ServiceCount * drainingFargateServiceCount * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. 
Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** *(list) --* The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is on or off for a cluster. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. 
This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting in observability set-up. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **capacityProviders** *(list) --* The capacity providers associated with the cluster. * *(string) --* * **defaultCapacityProviderStrategy** *(list) --* The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateService APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. 
The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption-tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. 
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **attachments** *(list) --* The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. 
For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attachmentsStatus** *(string) --* The status of the capacity providers associated with the cluster. The following are the states that are returned.

UPDATE_IN_PROGRESS
The available capacity providers for the cluster are updating.

UPDATE_COMPLETE
The capacity providers have successfully updated.

UPDATE_FAILED
The capacity provider updates failed.

* **serviceConnectDefaults** *(dict) --* Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used. 
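As a sketch of how the fields in this response structure fit together, the following illustrative helper summarizes a cluster dict shaped like the structure above. The helper name and the sample values are made up for this example, not real API output:

```python
def summarize_cluster(cluster):
    # "statistics" entries are name/value pairs with string values, keyed by
    # names such as runningFargateTasksCount (see the field list above).
    stats = {s['name']: int(s['value']) for s in cluster.get('statistics', [])}
    return {
        'name': cluster['clusterName'],
        'status': cluster['status'],
        'totalTasks': cluster['runningTasksCount'] + cluster['pendingTasksCount'],
        'fargateRunning': stats.get('runningFargateTasksCount', 0),
    }

# Illustrative sample shaped like the cluster structure documented above.
sample = {
    'clusterName': 'my_cluster',
    'status': 'ACTIVE',
    'runningTasksCount': 3,
    'pendingTasksCount': 1,
    'statistics': [{'name': 'runningFargateTasksCount', 'value': '2'}],
}
print(summarize_cluster(sample))
# → {'name': 'my_cluster', 'status': 'ACTIVE', 'totalTasks': 4, 'fargateRunning': 2}
```

In a real program the cluster dict would come from a call such as "describe_clusters" with "include=['STATISTICS']"; only then are the "statistics" entries populated.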
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ClusterContainsContainerInstancesException" * "ECS.Client.exceptions.ClusterContainsServicesException" * "ECS.Client.exceptions.ClusterContainsTasksException" * "ECS.Client.exceptions.UpdateInProgressException" **Examples** This example deletes an empty cluster in your default region.

response = client.delete_cluster(
    cluster='my_cluster',
)
print(response)

Expected Output:

{
    'cluster': {
        'activeServicesCount': 0,
        'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/my_cluster',
        'clusterName': 'my_cluster',
        'pendingTasksCount': 0,
        'registeredContainerInstancesCount': 0,
        'runningTasksCount': 0,
        'status': 'INACTIVE',
    },
    'ResponseMetadata': {
        '...': '...',
    },
}

ECS / Client / can_paginate

can_paginate
************

ECS.Client.can_paginate(operation_name)

Check if an operation can be paginated.

Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")".

Returns: "True" if the operation can be paginated, "False" otherwise.

ECS / Client / describe_task_definition

describe_task_definition
************************

ECS.Client.describe_task_definition(**kwargs)

Describes a task definition. You can specify a "family" and "revision" to find information about a specific task definition, or you can simply specify the family to find the latest "ACTIVE" revision in that family.

Note: You can only describe "INACTIVE" task definitions while an active task or service references them. 
See also: AWS API Documentation **Request Syntax** response = client.describe_task_definition( taskDefinition='string', include=[ 'TAGS', ] ) Parameters: * **taskDefinition** (*string*) -- **[REQUIRED]** The "family" for the latest "ACTIVE" revision, "family" and "revision" ( "family:revision") for a specific revision in the family, or full Amazon Resource Name (ARN) of the task definition to describe. * **include** (*list*) -- Determines whether to see the resource tags for the task definition. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response. * *(string) --* Return type: dict Returns: **Response Syntax** { 'taskDefinition': { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 
'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 
'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False }, 'tags': [ { 'key': 'string', 'value': 'string' }, ] } **Response Structure** * *(dict) --* * **taskDefinition** *(dict) --* The full task definition description. * **taskDefinitionArn** *(string) --* The full Amazon Resource Name (ARN) of the task definition. * **containerDefinitions** *(list) --* A list of container definitions in JSON format that describe the different containers that make up your task. 
For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* Container definitions are used in task definitions to describe the different containers that are launched as part of a task. * **name** *(string) --* The name of a container. If you're linking multiple containers together in a task definition, the "name" of one container can be entered in the "links" of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "name" in the docker container create command and the "--name" option to docker run. * **image** *(string) --* The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either "repository-url/image:tag" or "repository-url/image@digest". For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to "Image" in the docker container create command and the "IMAGE" parameter of docker run. * When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks. 
* Images in Amazon ECR repositories can be specified by either using the full "registry/repository:tag" or "registry/repository@digest". For example, "012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest" or "012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE". * Images in official repositories on Docker Hub use a single name (for example, "ubuntu" or "mongo"). * Images in other repositories on Docker Hub are qualified with an organization name (for example, "amazon/amazon-ecs-agent"). * Images in other online repositories are qualified further by a domain name (for example, "quay.io/assemblyline/ubuntu"). * **repositoryCredentials** *(dict) --* The private repository authentication credentials to use. * **credentialsParameter** *(string) --* The Amazon Resource Name (ARN) of the secret containing the private repository credentials. Note: When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret. * **cpu** *(integer) --* The number of "cpu" units reserved for the container. This parameter maps to "CpuShares" in the docker container create command and the "--cpu-shares" option to docker run. This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level "cpu" value. Note: You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024. Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. 
For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units. On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version: * **Agent versions less than or equal to 1.1.0:** Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares. * **Agent versions greater than or equal to 1.2.0:** Null, zero, and CPU values of 1 are passed to Docker as 2. * **Agent versions greater than or equal to 1.84.0:** CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares. On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as "0", which Windows interprets as 1% of one CPU. 
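The vCPU-to-CPU-unit arithmetic described in the note and example above can be sketched as follows. The 2-vCPU instance and the two 512-unit reservations are illustrative assumptions, not values from the API:

```python
def cpu_units(vcpus):
    # Each vCPU on an EC2 container instance corresponds to 1,024 CPU units.
    return vcpus * 1024

total = cpu_units(2)        # a hypothetical 2-vCPU instance: 2,048 units
reservations = [512, 512]   # two containers, each reserving 512 CPU units

# On Linux, containers share unallocated units in the same ratio as their
# reservations, so when both containers are fully busy the instance is
# split 1:1 and each is limited to its guaranteed share.
guaranteed = [r * total // sum(reservations) for r in reservations]
print(total, guaranteed)  # → 2048 [1024, 1024]
```

When only one container is busy, it can float above its guaranteed share and use the idle container's units, as the single-core example above describes.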
* **memory** *(integer) --* The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task "memory" value, if one is specified. This parameter maps to "Memory" in the docker container create command and the "--memory" option to docker run. If using the Fargate launch type, this parameter is optional. If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level "memory" and "memoryReservation" value, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the "memory" parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to "MemoryReservation" in the docker container create command and the "--memory-reservation" option to docker run. If a task-level memory value is not specified, you must specify a non-zero integer for one or both of "memory" or "memoryReservation" in a container definition. 
If you specify both, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a "memoryReservation" of 128 MiB, and a "memory" hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **links** *(list) --* The "links" parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is "bridge". The "name:internalName" construct is analogous to "name:alias" in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.. This parameter maps to "Links" in the docker container create command and the "-- link" option to docker run. Note: This parameter is not supported for Windows containers. Warning: Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings. * *(string) --* * **portMappings** *(list) --* The list of port mappings for the container. 
Port mappings allow containers to access ports on the host container instance to send or receive traffic. For task definitions that use the "awsvpc" network mode, only specify the "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Port mappings on Windows use the "NetNAT" gateway address rather than "localhost". There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself. This parameter maps to "PortBindings" in the docker container create command and the "--publish" option to docker run. If the network mode of a task definition is set to "none", then you can't specify port mappings. If the network mode of a task definition is set to "host", then host ports must either be undefined or they must match the container port in the port mapping. Note: After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the **Network Bindings** section of a container description for a selected task in the Amazon ECS console. The assignments are also visible in the "networkBindings" section of DescribeTasks responses. * *(dict) --* Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Most fields of this parameter ("containerPort", "hostPort", "protocol") map to "PortBindings" in the docker container create command and the "--publish" option to "docker run". If the network mode of a task definition is set to "host", host ports must either be undefined or match the container port in the port mapping. Note: You can't expose the same container port for multiple protocols. 
If you attempt this, an error is returned. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **containerPort** *(integer) --* The port number on the container that's bound to the user-specified or automatically assigned host port. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". If you use containers in a task with the "bridge" network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see "hostPort". Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance. * **hostPort** *(integer) --* The port number on the container instance to reserve for your container. If you specify a "containerPortRange", leave this field empty and the value of the "hostPort" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPort" is set to the same value as the "containerPort". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy. If you use containers in a task with the "awsvpc" or "host" network mode, the "hostPort" can either be left blank or set to the same value as the "containerPort". If you use containers in a task with the "bridge" network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the "hostPort" (or set it to "0") while specifying a "containerPort" and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version. 
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under "/proc/sys/net/ipv4/ip_local_port_range". If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range. The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the "remainingResources" of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota. * **protocol** *(string) --* The protocol used for the port mapping. Valid values are "tcp" and "udp". The default is "tcp". "protocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. * **name** *(string) --* The name that's used for the port mapping. This parameter is the name that you use in the "serviceConnectConfiguration" and the "vpcLatticeConfigurations" of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. * **appProtocol** *(string) --* The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. 
If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy and protocol-specific telemetry in the Amazon ECS console and CloudWatch. If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP. "appProtocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy.
* For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than last port in the range. * Docker recommends that you turn off the docker- proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange" which are the host ports that are bound to the container ports. * **essential** *(boolean) --* If the "essential" parameter of a container is marked as "true", and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the "essential" parameter of a container is marked as "false", its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the *Amazon Elastic Container Service Developer Guide*. * **restartPolicy** *(dict) --* The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. 
For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether a restart policy is enabled for the container. * **ignoredExitCodes** *(list) --* A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes. * *(integer) --* * **restartAttemptPeriod** *(integer) --* A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every "restartAttemptPeriod" seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum "restartAttemptPeriod" of 60 seconds and a maximum "restartAttemptPeriod" of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted. * **entryPoint** *(list) --* Warning: Early versions of the Amazon ECS container agent don't properly handle "entryPoint" parameters. If you have problems using "entryPoint", update your container agent or enter your commands and arguments as "command" array items instead. The entry point that's passed to the container. This parameter maps to "Entrypoint" in the docker container create command and the "--entrypoint" option to docker run. * *(string) --* * **command** *(list) --* The command that's passed to the container. This parameter maps to "Cmd" in the docker container create command and the "COMMAND" parameter to docker run. If there are multiple arguments, each argument is a separated string in the array. * *(string) --* * **environment** *(list) --* The environment variables to pass to a container. This parameter maps to "Env" in the docker container create command and the "--env" option to docker run. 
Warning: We don't recommend that you use plaintext environment variables for sensitive information, such as credential data. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container. This parameter maps to the "--env-file" option to docker run. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file contains an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down.
We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **mountPoints** *(list) --* The mount points for data volumes in your container. This parameter maps to "Volumes" in the docker container create command and the "--volume" option to docker run. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't span drives. * *(dict) --* The details for a volume mount point that's used in a container definition. * **sourceVolume** *(string) --* The name of the volume to mount. Must be a volume name referenced in the "name" parameter of task definition "volume". * **containerPath** *(string) --* The path on the container to mount the host volume at. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **volumesFrom** *(list) --* Data volumes to mount from another container.
This parameter maps to "VolumesFrom" in the docker container create command and the "--volumes-from" option to docker run. * *(dict) --* Details on a data volume from another container in the same task definition. * **sourceContainer** *(string) --* The name of another container within the same task definition to mount volumes from. * **readOnly** *(boolean) --* If this value is "true", the container has read- only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **linuxParameters** *(dict) --* Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities. Note: This parameter is not supported for Windows containers. * **capabilities** *(dict) --* The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. Note: For tasks that use the Fargate launch type, "capabilities" is supported for all platform versions but the "add" parameter is only supported if using platform version 1.4.0 or later. * **add** *(list) --* The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to "CapAdd" in the docker container create command and the "--cap- add" option to docker run. Note: Tasks launched on Fargate only support adding the "SYS_PTRACE" kernel capability. 
Valid values: ""ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"" * *(string) --* * **drop** *(list) --* The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to "CapDrop" in the docker container create command and the "--cap-drop" option to docker run. Valid values: ""ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"" * *(string) --* * **devices** *(list) --* Any host devices to expose to the container. This parameter maps to "Devices" in the docker container create command and the "--device" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "devices" parameter isn't supported. * *(dict) --* An object representing a container instance host device. * **hostPath** *(string) --* The path for the device on the host container instance. * **containerPath** *(string) --* The path inside the container at which to expose the host device. 
* **permissions** *(list) --* The explicit permissions to provide to the container for the device. By default, the container has permissions for "read", "write", and "mknod" for the device. * *(string) --* * **initProcessEnabled** *(boolean) --* Run an "init" process inside the container that forwards signals and reaps processes. This parameter maps to the "--init" option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * **sharedMemorySize** *(integer) --* The value for the size (in MiB) of the "/dev/shm" volume. This parameter maps to the "--shm-size" option to docker run. Note: If you are using tasks that use the Fargate launch type, the "sharedMemorySize" parameter is not supported. * **tmpfs** *(list) --* The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the "--tmpfs" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "tmpfs" parameter isn't supported. * *(dict) --* The container path, mount options, and size of the tmpfs mount. * **containerPath** *(string) --* The absolute file path where the tmpfs volume is to be mounted. * **size** *(integer) --* The maximum size (in MiB) of the tmpfs volume. * **mountOptions** *(list) --* The list of tmpfs volume mount options.
Valid values: ""defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"" * *(string) --* * **maxSwap** *(integer) --* The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the " --memory-swap" option to docker run where the value would be the sum of the container memory plus the "maxSwap" value. If a "maxSwap" value of "0" is specified, the container will not use swap. Accepted values are "0" or any positive integer. If the "maxSwap" parameter is omitted, the container will use the swap configuration for the container instance it is running on. A "maxSwap" value must be set for the "swappiness" parameter to be used. Note: If you're using tasks that use the Fargate launch type, the "maxSwap" parameter isn't supported.If you're using tasks on Amazon Linux 2023 the "swappiness" parameter isn't supported. * **swappiness** *(integer) --* This allows you to tune a container's memory swappiness behavior. A "swappiness" value of "0" will cause swapping to not happen unless absolutely necessary. A "swappiness" value of "100" will cause pages to be swapped very aggressively. Accepted values are whole numbers between "0" and "100". If the "swappiness" parameter is not specified, a default value of "60" is used. If a value is not specified for "maxSwap" then this parameter is ignored. This parameter maps to the "--memory- swappiness" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "swappiness" parameter isn't supported.If you're using tasks on Amazon Linux 2023 the "swappiness" parameter isn't supported. 
* **secrets** *(list) --* The secrets to pass to the container. For more information, see Specifying Sensitive Data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **dependsOn** *(list) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown. For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version.
For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. * *(dict) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown. Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. Note: For tasks that use the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later.
For more information about how to create a container dependency, see Container dependency in the *Amazon Elastic Container Service Developer Guide*. * **containerName** *(string) --* The name of a container. * **condition** *(string) --* The dependency condition of the container. The following are the available conditions and their behavior: * "START" - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start. * "COMPLETE" - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container. * "SUCCESS" - This condition is the same as "COMPLETE", but it also requires that the container exits with a "zero" status. This condition can't be set on an essential container. * "HEALTHY" - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup. * **startTimeout** *(integer) --* Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a "COMPLETE", "SUCCESS", or "HEALTHY" status. If a "startTimeout" value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a "STOPPED" state. Note: When the "ECS_CONTAINER_START_TIMEOUT" container agent configuration variable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks using the EC2 launch type, your container instances require at least version "1.26.0" of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version "1.26.0-1" of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **stopTimeout** *(integer) --* Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds, and if the parameter is not specified, the default value of 30 seconds is used. For tasks that use the EC2 launch type, if the "stopTimeout" parameter isn't specified, the value set for the Amazon ECS container agent configuration variable "ECS_CONTAINER_STOP_TIMEOUT" is used. If neither the "stopTimeout" parameter nor the "ECS_CONTAINER_STOP_TIMEOUT" agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers.
Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **versionConsistency** *(string) --* Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is "enabled". If you set the value for a container as "disabled", Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the *Amazon ECS Developer Guide*. * **hostname** *(string) --* The hostname to use for your container. This parameter maps to "Hostname" in the docker container create command and the "--hostname" option to docker run. Note: The "hostname" parameter is not supported if you're using the "awsvpc" network mode. * **user** *(string) --* The user to use inside the container. This parameter maps to "User" in the docker container create command and the "--user" option to docker run. Warning: When running tasks using the "host" network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. 
You can specify the "user" using the following formats. If specifying a UID or GID, you must specify it as a positive integer. * "user" * "user:group" * "uid" * "uid:gid" * "user:gid" * "uid:group" Note: This parameter is not supported for Windows containers. * **workingDirectory** *(string) --* The working directory to run commands inside the container in. This parameter maps to "WorkingDir" in the docker container create command and the "-- workdir" option to docker run. * **disableNetworking** *(boolean) --* When this parameter is true, networking is off within the container. This parameter maps to "NetworkDisabled" in the docker container create command. Note: This parameter is not supported for Windows containers. * **privileged** *(boolean) --* When this parameter is true, the container is given elevated privileges on the host container instance (similar to the "root" user). This parameter maps to "Privileged" in the docker container create command and the "--privileged" option to docker run Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **readonlyRootFilesystem** *(boolean) --* When this parameter is true, the container is given read-only access to its root file system. This parameter maps to "ReadonlyRootfs" in the docker container create command and the "--read-only" option to docker run. Note: This parameter is not supported for Windows containers. * **dnsServers** *(list) --* A list of DNS servers that are presented to the container. This parameter maps to "Dns" in the docker container create command and the "--dns" option to docker run. Note: This parameter is not supported for Windows containers. * *(string) --* * **dnsSearchDomains** *(list) --* A list of DNS search domains that are presented to the container. This parameter maps to "DnsSearch" in the docker container create command and the "--dns-search" option to docker run. Note: This parameter is not supported for Windows containers. 
* *(string) --* * **extraHosts** *(list) --* A list of hostnames and IP address mappings to append to the "/etc/hosts" file on the container. This parameter maps to "ExtraHosts" in the docker container create command and the "--add-host" option to docker run. Note: This parameter isn't supported for Windows containers or tasks that use the "awsvpc" network mode. * *(dict) --* Hostnames and IP address entries that are added to the "/etc/hosts" file of a container via the "extraHosts" parameter of its ContainerDefinition. * **hostname** *(string) --* The hostname to use in the "/etc/hosts" entry. * **ipAddress** *(string) --* The IP address to use in the "/etc/hosts" entry. * **dockerSecurityOptions** *(list) --* A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type. For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems. For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the *Amazon Elastic Container Service Developer Guide*. This parameter maps to "SecurityOpt" in the docker container create command and the "--security-opt" option to docker run. Note: The Amazon ECS container agent running on a container instance must register with the "ECS_SELINUX_CAPABLE=true" or "ECS_APPARMOR_CAPABLE=true" environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. 
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath" * *(string) --* * **interactive** *(boolean) --* When this parameter is "true", you can deploy containerized applications that require "stdin" or a "tty" to be allocated. This parameter maps to "OpenStdin" in the docker container create command and the "--interactive" option to docker run. * **pseudoTerminal** *(boolean) --* When this parameter is "true", a TTY is allocated. This parameter maps to "Tty" in the docker container create command and the "--tty" option to docker run. * **dockerLabels** *(dict) --* A key/value map of labels to add to the container. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **ulimits** *(list) --* A list of "ulimits" to set in the container. If a "ulimit" value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to "Ulimits" in the docker container create command and the "--ulimit" option to docker run. Valid naming values are displayed in the Ulimit data type. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: This parameter is not supported for Windows containers. * *(dict) --* The "ulimit" settings to pass to the container. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". You can specify the "ulimit" settings for a container in a task definition. * **name** *(string) --* The "type" of the "ulimit". * **softLimit** *(integer) --* The soft limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **hardLimit** *(integer) --* The hard limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **logConfiguration** *(dict) --* The log configuration specification for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). Note: Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). 
Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. If you do so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. 
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. 
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
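Taken together, the "awslogs" options above form the "options" map of a container definition's "logConfiguration". A minimal sketch in boto3 terms follows; the log group name, Region, prefix, and buffer size are illustrative placeholders, not required values:

```python
# Sketch of a "logConfiguration" block for a container definition, as it
# would appear in a register_task_definition call. All option values here
# (group name, Region, prefix, buffer size) are illustrative placeholders.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-create-group": "true",          # needs logs:CreateLogGroup in IAM
        "awslogs-group": "/ecs/example-app",     # must exist unless created above
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "example-app",  # -> example-app/<container>/<task-id>
        "mode": "non-blocking",                  # don't block app writes to stdout/stderr
        "max-buffer-size": "25m",                # in-memory buffer for non-blocking mode
    },
}

# The stream name format produced by the prefix option
# ("prefix-name/container-name/ecs-task-id"), with placeholder values:
stream = "{}/{}/{}".format(
    log_configuration["options"]["awslogs-stream-prefix"], "web", "0123456789abcdef"
)
```

Here "non-blocking" with an explicit "max-buffer-size" trades a bounded amount of possible log loss for application availability, as described above.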
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **healthCheck** *(dict) --* The container health check command and associated configuration parameters for the container. This parameter maps to "HealthCheck" in the docker container create command and the "HEALTHCHECK" parameter of docker run. * **command** *(list) --* A string array representing the command that the container runs to determine if it is healthy. The string array must start with "CMD" to run the command arguments directly, or "CMD-SHELL" to run the command with the container's default shell. When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets. 
"[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]" You don't include the double quotes and brackets when you use the Amazon Web Services Management Console. "CMD-SHELL, curl -f http://localhost/ || exit 1" An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see "HealthCheck" in the docker container create command. * *(string) --* * **interval** *(integer) --* The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a "command". * **timeout** *(integer) --* The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a "command". * **retries** *(integer) --* The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a "command". * **startPeriod** *(integer) --* The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the "startPeriod" is off. This value applies only when you specify a "command". Note: If a health check succeeds within the "startPeriod", then the container is considered healthy and any subsequent failures count toward the maximum number of retries. * **systemControls** *(list) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer lived connections. * *(dict) --* A list of namespaced kernel parameters to set in the container. 
This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer lived connections. We don't recommend that you specify network-related "systemControls" parameters for multiple containers in a single task that also uses either the "awsvpc" or "host" network mode. Doing this has the following disadvantages: * For tasks that use the "awsvpc" network mode including Fargate, if you set "systemControls" for any container, it applies to all containers in the task. If you set different "systemControls" for multiple containers in a single task, the container that's started last determines which "systemControls" take effect. * For tasks that use the "host" network mode, the network namespace "systemControls" aren't supported. If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode. * For tasks that use the "host" IPC mode, IPC namespace "systemControls" aren't supported. * For tasks that use the "task" IPC mode, IPC namespace "systemControls" values apply to all containers within a task. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **namespace** *(string) --* The namespaced kernel parameter to set a "value" for. * **value** *(string) --* The namespaced kernel parameter to set a "value" for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and "Sysctls" that start with "fs.mqueue.*" Valid network namespace values: "Sysctls" that start with "net.*". 
Only namespaced "Sysctls" that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* The type of resource to assign to a container. * **firelensConfiguration** *(dict) --* The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The log router to use. The valid values are "fluentd" or "fluentbit". * **options** *(dict) --* The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is ""options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}". 
For more information, see Creating a task definition that uses a FireLens configuration in the *Amazon Elastic Container Service Developer Guide*. Note: Tasks hosted on Fargate only support the "file" configuration file type. * *(string) --* * *(string) --* * **credentialSpecs** *(list) --* A list of ARNs in SSM or Amazon S3 to a credential spec ( "CredSpec") file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the "dockerSecurityOptions". The maximum number of ARNs is 1. There are two formats for each ARN. credentialspecdomainless:MyARN You use "credentialspecdomainless:MyARN" to provide a "CredSpec" with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret. Each task that runs on any container instance can join different domains. You can use this format without joining the container instance to a domain. credentialspec:MyARN You use "credentialspec:MyARN" to provide a "CredSpec" for a single domain. You must join the container instance to the domain before you start any tasks that use this task definition. In both formats, replace "MyARN" with the ARN in SSM or Amazon S3. If you provide a "credentialspecdomainless:MyARN", the "credspec" must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers. * *(string) --* * **family** *(string) --* The name of a family that this task definition is registered to. Up to 255 characters are allowed. 
Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed. A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add. * **taskRoleArn** *(string) --* The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **networkMode** *(string) --* The Docker networking mode to use for the containers in the task. The valid values are "none", "bridge", "awsvpc", and "host". If no network mode is specified, the default is "bridge". For Amazon ECS tasks on Fargate, the "awsvpc" network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, "<default>" or "awsvpc" can be used. If the network mode is set to "none", you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The "host" and "awsvpc" network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the "bridge" mode. 
With the "host" and "awsvpc" network modes, exposed container ports are mapped directly to the corresponding host port (for the "host" network mode) or the attached elastic network interface port (for the "awsvpc" network mode), so you cannot take advantage of dynamic host port mappings. Warning: When using the "host" network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user. If the network mode is "awsvpc", the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the *Amazon Elastic Container Service Developer Guide*. If the network mode is "host", you cannot run multiple instantiations of the same task on a single container instance when port mappings are used. * **revision** *(integer) --* The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is "1". Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is true even if you deregistered previous revisions in this family. * **volumes** *(list) --* The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the *Amazon Elastic Container Service Developer Guide*. Note: The "host" and "sourcePath" parameters aren't supported for tasks run on Fargate. * *(dict) --* The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. 
Each volume defined in the volume configuration may only specify a "name" and one of either "configuredAtLaunch", "dockerVolumeConfiguration", "efsVolumeConfiguration", "fsxWindowsFileServerVolumeConfiguration", or "host". If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks. * **name** *(string) --* The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the "name" is required and must also be specified as the volume name in the "ServiceVolumeConfiguration" or "TaskVolumeConfiguration" parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the "sourceVolume" parameter of the "mountPoints" object in the container definition. When a volume is using the "efsVolumeConfiguration", the name is required. * **host** *(dict) --* This parameter is specified when you use bind mount host volumes. The contents of the "host" parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the "host" parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount "C:\my\path:C:\my\path" and "D:\:D:\", but not "D:\my\path:C:\my\path" or "D:\:C:\my\path". * **sourcePath** *(string) --* When the "host" parameter is used, specify a "sourcePath" to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. 
If the "host" parameter contains a "sourcePath" file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the "sourcePath" value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the "sourcePath" parameter is not supported. * **dockerVolumeConfiguration** *(dict) --* This parameter is specified when you use Docker volumes. Windows containers only support the use of the "local" driver. To use bind mounts, specify the "host" parameter instead. Note: Docker volumes aren't supported by tasks run on Fargate. * **scope** *(string) --* The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a "task" are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as "shared" persist after the task stops. * **autoprovision** *(boolean) --* If this value is "true", the Docker volume is created if it doesn't already exist. Note: This field is only used if the "scope" is "shared". * **driver** *(string) --* The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use "docker plugin ls" to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to "Driver" in the docker container create command and the "--driver" option to docker volume create. * **driverOpts** *(dict) --* A map of Docker driver-specific options passed through. This parameter maps to "DriverOpts" in the docker create-volume command and the "--opt" option to docker volume create. 
* *(string) --* * *(string) --* * **labels** *(dict) --* Custom metadata to add to your Docker volume. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker volume create. * *(string) --* * *(string) --* * **efsVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon Elastic File System file system for task storage. * **fileSystemId** *(string) --* The Amazon EFS file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying "/" will have the same effect as omitting this parameter. Warning: If an EFS access point is specified in the "authorizationConfig", the root directory parameter must either be omitted or set to "/" which will enforce the path set on the EFS access point. * **transitEncryption** *(string) --* Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Encrypting data in transit in the *Amazon Elastic File System User Guide*. * **transitEncryptionPort** *(integer) --* The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the *Amazon Elastic File System User Guide*. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon EFS file system. * **accessPointId** *(string) --* The Amazon EFS access point ID to use. 
If an access point is specified, the root directory value specified in the "EFSVolumeConfiguration" must either be omitted or set to "/" which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the "EFSVolumeConfiguration". For more information, see Working with Amazon EFS access points in the *Amazon Elastic File System User Guide*. * **iam** *(string) --* Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the "EFSVolumeConfiguration". If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Using Amazon EFS access points in the *Amazon Elastic Container Service Developer Guide*. * **fsxWindowsFileServerVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon FSx for Windows File Server file system for task storage. * **fileSystemId** *(string) --* The Amazon FSx for Windows File Server file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon FSx for Windows File Server file system. * **credentialsParameter** *(string) --* The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or SSM Parameter Store parameter. The ARN refers to the stored credentials. * **domain** *(string) --* A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2. * **configuredAtLaunch** *(boolean) --* Indicates whether the volume should be configured at launch time. 
This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration. To configure a volume at launch time, use this task definition revision and specify a "volumeConfigurations" object when calling the "CreateService", "UpdateService", "RunTask" or "StartTask" APIs. * **status** *(string) --* The status of the task definition. * **requiresAttributes** *(list) --* The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. Note: This parameter isn't supported for tasks run on Fargate. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. 
* **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **placementConstraints** *(list) --* An array of placement constraint objects to use for tasks. Note: This parameter isn't supported for tasks run on Fargate. * *(dict) --* The constraint on task placement in the task definition. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: Task placement constraints aren't supported for tasks run on Fargate. * **type** *(string) --* The type of constraint. The "MemberOf" constraint restricts selection to be from a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **compatibilities** *(list) --* Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * *(string) --* * **runtimePlatform** *(dict) --* The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type. When you specify a task in a service, this value must match the "runtimePlatform" value of the service. * **cpuArchitecture** *(string) --* The CPU architecture. You can run your Linux tasks on an ARM-based platform by setting the value to "ARM64". This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate. * **operatingSystemFamily** *(string) --* The operating system. 
* **requiresCompatibilities** *(list) --* The task launch types the task definition was validated against. The valid values are "EC2", "FARGATE", and "EXTERNAL". For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * *(string) --* * **cpu** *(string) --* The number of "cpu" units used by the task. If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ( "0.125" vCPUs) and "196608" CPU units ( "192" vCPUs). If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the "memory" parameter. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount (in MiB) of memory used by the task. If your task runs on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition. If your task runs on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the "cpu" parameter. 
* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **pidMode** *(string) --* The process namespace to use for the containers in the task. The valid values are "host" or "task". On Fargate for Linux containers, the only valid value is "task". For example, monitoring sidecars might need "pidMode" to access information about other containers running in the same task. If "host" is specified, all containers within the tasks that specified the "host" PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same process namespace. 
If no value is specified, the default is a private namespace for each container. If the "host" PID mode is used, there's a heightened risk of undesired process namespace exposure. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **ipcMode** *(string) --* The IPC resource namespace to use for the containers in the task. The valid values are "host", "task", or "none". If "host" is specified, then all containers within the tasks that specified the "host" IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same IPC resources. If "none" is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. If the "host" IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure. If you are setting namespaced kernel parameters using "systemControls" for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the *Amazon Elastic Container Service Developer Guide*. * For tasks that use the "host" IPC mode, IPC namespace related "systemControls" are not supported. * For tasks that use the "task" IPC mode, IPC namespace related "systemControls" will apply to all containers within a task. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **proxyConfiguration** *(dict) --* The configuration details for the App Mesh proxy. 
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the "ecs-init" package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version "20190301" or later, they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The proxy type. The only supported value is "APPMESH". * **containerName** *(string) --* The name of the container that will serve as the App Mesh proxy. * **properties** *(list) --* The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs. * "IgnoredUID" - (Required) The user ID (UID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredGID" is specified, this field can be empty. * "IgnoredGID" - (Required) The group ID (GID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredUID" is specified, this field can be empty. * "AppPorts" - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the "ProxyIngressPort" and "ProxyEgressPort". * "ProxyIngressPort" - (Required) Specifies the port that incoming traffic to the "AppPorts" is directed to. * "ProxyEgressPort" - (Required) Specifies the port that outgoing traffic from the "AppPorts" is directed to. * "EgressIgnoredPorts" - (Required) The egress traffic going to the specified ports is ignored and not redirected to the "ProxyEgressPort". It can be an empty list. * "EgressIgnoredIPs" - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the "ProxyEgressPort". 
It can be an empty list. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **registeredAt** *(datetime) --* The Unix timestamp for the time when the task definition was registered. * **deregisteredAt** *(datetime) --* The Unix timestamp for the time when the task definition was deregistered. * **registeredBy** *(string) --* The principal that registered the task definition. * **ephemeralStorage** *(dict) --* The ephemeral storage settings to use for tasks run with the task definition. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **enableFaultInjection** *(boolean) --* Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is "false". * **tags** *(list) --* The metadata that's applied to the task definition to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example provides a description of the specified task definition. 
response = client.describe_task_definition( taskDefinition='hello_world:8', ) print(response) Expected Output: { 'taskDefinition': { 'containerDefinitions': [ { 'name': 'wordpress', 'cpu': 10, 'environment': [ ], 'essential': True, 'image': 'wordpress', 'links': [ 'mysql', ], 'memory': 500, 'mountPoints': [ ], 'portMappings': [ { 'containerPort': 80, 'hostPort': 80, }, ], 'volumesFrom': [ ], }, { 'name': 'mysql', 'cpu': 10, 'environment': [ { 'name': 'MYSQL_ROOT_PASSWORD', 'value': 'password', }, ], 'essential': True, 'image': 'mysql', 'memory': 500, 'mountPoints': [ ], 'portMappings': [ ], 'volumesFrom': [ ], }, ], 'family': 'hello_world', 'revision': 8, 'taskDefinitionArn': 'arn:aws:ecs:us-east-1::task-definition/hello_world:8', 'volumes': [ ], }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / describe_tasks describe_tasks ************** ECS.Client.describe_tasks(**kwargs) Describes a specified task or tasks. Currently, stopped tasks appear in the returned results for at least one hour. If you have tasks with tags, and then delete the cluster, the tagged tasks are returned in the response. If you create a new cluster with the same name as the deleted cluster, the tagged tasks are not included in the response. See also: AWS API Documentation **Request Syntax** response = client.describe_tasks( cluster='string', tasks=[ 'string', ], include=[ 'TAGS', ] ) type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task or tasks to describe. If you do not specify a cluster, the default cluster is assumed. If you do not specify a value, the "default" cluster is used. type tasks: list param tasks: **[REQUIRED]** A list of up to 100 task IDs or full ARN entries. * *(string) --* type include: list param include: Specifies whether you want to see the resource tags for the task. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response. 
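Because "tasks" accepts at most 100 task IDs or ARNs per request, callers with more tasks must split them into batches. A minimal sketch of that batching (the helper name and the cluster name in the usage comment are illustrative, not part of the API):

```python
def chunk_task_ids(task_ids, batch_size=100):
    """Split a list of task IDs or ARNs into batches acceptable to
    describe_tasks, which allows up to 100 entries per request."""
    return [task_ids[i:i + batch_size]
            for i in range(0, len(task_ids), batch_size)]

# Usage sketch (requires AWS credentials and a real cluster):
# client = boto3.client('ecs')
# for batch in chunk_task_ids(all_task_arns):
#     response = client.describe_tasks(
#         cluster='my-cluster', tasks=batch, include=['TAGS'])
```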
* *(string) --* rtype: dict returns: **Response Syntax** { 'tasks': [ { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 
'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **tasks** *(list) --* The list of tasks. * *(dict) --* Details on a task in a cluster. * **attachments** *(list) --* The Elastic Network Adapter that's associated with the task if the task uses the "awsvpc" network mode. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. 
For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attributes** *(list) --* The attributes of the task. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **availabilityZone** *(string) --* The Availability Zone for the task. 
* **capacityProviderName** *(string) --* The capacity provider that's associated with the task. * **clusterArn** *(string) --* The ARN of the cluster that hosts the task. * **connectivity** *(string) --* The connectivity status of a task. * **connectivityAt** *(datetime) --* The Unix timestamp for the time when the task last went into "CONNECTED" status. * **containerInstanceArn** *(string) --* The ARN of the container instance that hosts the task. * **containers** *(list) --* The containers associated with the task. * *(dict) --* A Docker container that's part of a task. * **containerArn** *(string) --* The Amazon Resource Name (ARN) of the container. * **taskArn** *(string) --* The ARN of the task. * **name** *(string) --* The name of the container. * **image** *(string) --* The image used for the container. * **imageDigest** *(string) --* The container image manifest digest. * **runtimeId** *(string) --* The ID of the Docker container. * **lastStatus** *(string) --* The last known status of the container. * **exitCode** *(integer) --* The exit code returned from the container. * **reason** *(string) --* A short (1024 max characters) human-readable string to provide additional details about a running or stopped container. * **networkBindings** *(list) --* The network bindings associated with the container. * *(dict) --* Details on the network bindings between a container and its host container instance. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **bindIP** *(string) --* The IP address that the container is bound to on the container instance. * **containerPort** *(integer) --* The port number on the container that's used with the network binding. * **hostPort** *(integer) --* The port number on the host that's used with the network binding. * **protocol** *(string) --* The protocol used for the network binding. 
* **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind them to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which lists the host ports that are bound to the container ports. * **hostPortRange** *(string) --* The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent. 
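The "containerPortRange" rules above (ports between 1 and 65535, first port strictly less than the last) can be checked client-side before registering a task definition. A sketch, assuming the usual "lowPort-highPort" string form (the helper name is illustrative):

```python
def valid_container_port_range(port_range):
    """Check a containerPortRange string against the documented rules:
    both ports must fall between 1 and 65535, and the first port in the
    range must be less than the last. Assumes the 'low-high' string form."""
    try:
        low_s, high_s = port_range.split('-')
        low, high = int(low_s), int(high_s)
    except ValueError:
        # Wrong number of parts, or a part that isn't an integer.
        return False
    return 1 <= low < high <= 65535
```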
* **networkInterfaces** *(list) --* The network interfaces associated with the container. * *(dict) --* An object representing the elastic network interface for tasks that use the "awsvpc" network mode. * **attachmentId** *(string) --* The attachment ID for the network interface. * **privateIpv4Address** *(string) --* The private IPv4 address for the network interface. * **ipv6Address** *(string) --* The private IPv6 address for the network interface. * **healthStatus** *(string) --* The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as "UNKNOWN". * **managedAgents** *(list) --* The details of any Amazon ECS managed agents associated with the container. * *(dict) --* Details about the managed agent status for the container. * **lastStartedAt** *(datetime) --* The Unix timestamp for the time when the managed agent was last started. * **name** *(string) --* The name of the managed agent. When the execute command feature is turned on, the managed agent name is "ExecuteCommandAgent". * **reason** *(string) --* The reason for why the managed agent is in the state it is in. * **lastStatus** *(string) --* The last known status of the managed agent. * **cpu** *(string) --* The number of CPU units set for the container. The value is "0" if no value was specified in the container definition when the task definition was registered. * **memory** *(string) --* The hard limit (in MiB) of memory set for the container. * **memoryReservation** *(string) --* The soft limit (in MiB) of memory set for the container. * **gpuIds** *(list) --* The IDs of each GPU assigned to the container. * *(string) --* * **cpu** *(string) --* The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, "1024"). It can also be expressed as a string using vCPUs (for example, "1 vCPU" or "1 vcpu"). 
String values are converted to an integer that indicates the CPU units when the task definition is registered. If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ( "0.125" vCPUs) and "196608" CPU units ( "192" vCPUs). If you do not specify a value, the parameter is ignored. This field is required for Fargate. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the "PENDING" state. * **desiredStatus** *(string) --* The desired status of the task. For more information, see Task Lifecycle. * **enableExecuteCommand** *(boolean) --* Determines whether execute command functionality is turned on for this task. If "true", execute command functionality is turned on for all the containers in the task. * **executionStoppedAt** *(datetime) --* The Unix timestamp for the time when the task execution stopped. * **group** *(string) --* The name of the task group that's associated with the task. * **healthStatus** *(string) --* The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as "HEALTHY", the task status also reports as "HEALTHY". If any essential containers in the task are reporting as "UNHEALTHY" or "UNKNOWN", the task status also reports as "UNHEALTHY" or "UNKNOWN". Note: The Amazon ECS container agent doesn't monitor or report on Docker health checks that are embedded in a container image and not specified in the container definition. For example, this includes those specified in a parent image or from the image's Dockerfile. Health check parameters that are specified in a container definition override any Docker health checks that are found in the container image. 
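The container-level "healthStatus" fields described above can be summarized from a describe_tasks response dict. A minimal sketch (the helper name is illustrative; the response shape matches the Response Syntax shown earlier):

```python
def unhealthy_containers(describe_tasks_response):
    """Return (task ARN, container name, health status) tuples for every
    container whose reported healthStatus is not HEALTHY. Containers with
    no configured health check report UNKNOWN and are included."""
    results = []
    for task in describe_tasks_response.get('tasks', []):
        for container in task.get('containers', []):
            status = container.get('healthStatus', 'UNKNOWN')
            if status != 'HEALTHY':
                results.append((task.get('taskArn'),
                                container.get('name'), status))
    return results

# Usage sketch (requires credentials; cluster name is hypothetical):
# response = client.describe_tasks(cluster='my-cluster', tasks=task_arns)
# for arn, name, status in unhealthy_containers(response):
#     print(f'{arn}: container {name} is {status}')
```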
* **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **lastStatus** *(string) --* The last known status for the task. For more information, see Task Lifecycle. * **launchType** *(string) --* The infrastructure where your task runs. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, "1024"). If it's expressed as a string using GB (for example, "1GB" or "1 GB"), it's converted to an integer indicating the MiB when the task definition is registered. If you use the EC2 launch type, this field is optional. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the "cpu" parameter. 
* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **overrides** *(dict) --* One or more container overrides. * **containerOverrides** *(list) --* One or more container overrides that are sent to a task. * *(dict) --* The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is "{"containerOverrides": [ ] }". If a non-empty container override is specified, the "name" parameter must be included. You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide. * **name** *(string) --* The name of the container that receives the override. This parameter is required if any override is specified. * **command** *(list) --* The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. * *(string) --* * **environment** *(list) --* The environment variables to send to the container. 
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container, instead of the value from the container definition. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. 
* The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **cpu** *(integer) --* The number of "cpu" units reserved for the container, instead of the default value from the task definition. You must also specify a container name. * **memory** *(integer) --* The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide* * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. 
* **type** *(string) --* The type of resource to assign to a container. * **cpu** *(string) --* The CPU override for the task. * **inferenceAcceleratorOverrides** *(list) --* The Elastic Inference accelerator override for the task. * *(dict) --* Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name to override for the task. This parameter must match a "deviceName" specified in the task definition. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The memory override for the task. * **taskRoleArn** *(string) --* The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **ephemeralStorage** *(dict) --* The ephemeral storage setting override for the task. Note: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **platformVersion** *(string) --* The platform version that your task runs on. 
A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX."). * **pullStartedAt** *(datetime) --* The Unix timestamp for the time when the container image pull began. * **pullStoppedAt** *(datetime) --* The Unix timestamp for the time when the container image pull completed. * **startedAt** *(datetime) --* The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the "PENDING" state to the "RUNNING" state. * **startedBy** *(string) --* The tag specified when a task is started. If an Amazon ECS service started the task, the "startedBy" parameter contains the deployment ID of that service. * **stopCode** *(string) --* The stop code indicating why a task was stopped. The "stoppedReason" might contain additional details. For more information about stop code, see Stopped tasks error codes in the *Amazon ECS Developer Guide*. * **stoppedAt** *(datetime) --* The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the "RUNNING" state to the "STOPPED" state. * **stoppedReason** *(string) --* The reason that the task was stopped. * **stoppingAt** *(datetime) --* The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the "RUNNING" state to "STOPPING". * **tags** *(list) --* The metadata that you apply to the task to help you categorize and organize the task. 
Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. 
You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **taskArn** *(string) --* The Amazon Resource Name (ARN) of the task. * **taskDefinitionArn** *(string) --* The ARN of the task definition that creates the task. * **version** *(integer) --* The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the "detail" object) to verify that the version in your event stream is current. * **ephemeralStorage** *(dict) --* The ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is "20" GiB and the maximum supported value is "200" GiB. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for the task. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. 
* **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" **Examples** This example provides a description of the specified task, using the task UUID as an identifier. response = client.describe_tasks( tasks=[ 'c5cba4eb-5dad-405e-96db-71ef8eefe6a8', ], ) print(response) Expected Output: { 'failures': [ ], 'tasks': [ { 'clusterArn': 'arn:aws:ecs:::cluster/default', 'containerInstanceArn': 'arn:aws:ecs:::container-instance/18f9eda5-27d7-4c19-b133-45adc516e8fb', 'containers': [ { 'name': 'ecs-demo', 'containerArn': 'arn:aws:ecs:::container/7c01765b-c588-45b3-8290-4ba38bd6c5a6', 'lastStatus': 'RUNNING', 'networkBindings': [ { 'bindIP': '0.0.0.0', 'containerPort': 80, 'hostPort': 80, }, ], 'taskArn': 'arn:aws:ecs:::task/c5cba4eb-5dad-405e-96db-71ef8eefe6a8', }, ], 'desiredStatus': 'RUNNING', 'lastStatus': 'RUNNING', 'overrides': { 'containerOverrides': [ { 'name': 'ecs-demo', }, ], }, 'startedBy': 'ecs-svc/9223370608528463088', 'taskArn': 'arn:aws:ecs:::task/c5cba4eb-5dad-405e-96db-71ef8eefe6a8', 'taskDefinitionArn': 'arn:aws:ecs:::task-definition/amazon-ecs-sample:1', }, ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / deregister_container_instance deregister_container_instance ***************************** ECS.Client.deregister_container_instance(**kwargs) Deregisters an Amazon ECS container instance from the specified cluster. This instance is no longer available to run tasks. If you intend to use the container instance for some other purpose after deregistration, we recommend that you stop all of the tasks running on the container instance before deregistration. That prevents any orphaned tasks from consuming resources. 
Deregistering a container instance removes the instance from a cluster, but it doesn't terminate the EC2 instance. If you are finished using the instance, be sure to terminate it in the Amazon EC2 console to stop billing. Note: If you terminate a running container instance, Amazon ECS automatically deregisters the instance from your cluster (stopped container instances or instances with disconnected agents aren't automatically deregistered when terminated). See also: AWS API Documentation **Request Syntax** response = client.deregister_container_instance( cluster='string', containerInstance='string', force=True|False ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the container instance to deregister. If you do not specify a cluster, the default cluster is assumed. * **containerInstance** (*string*) -- **[REQUIRED]** The container instance ID or full ARN of the container instance to deregister. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **force** (*boolean*) -- Forces the container instance to be deregistered. If you have tasks running on the container instance when you deregister it with the "force" option, these tasks remain running until you terminate the instance or the tasks stop through some other means, but they're orphaned (no longer monitored or accounted for by Amazon ECS). If an orphaned task on your container instance is part of an Amazon ECS service, then the service scheduler starts another copy of that task, on a different container instance if possible. Any containers in orphaned service tasks that are registered with a Classic Load Balancer or an Application Load Balancer target group are deregistered. They begin connection draining according to the settings on the load balancer or target group. 
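If you would rather not rely on the "force" option described above, you can stop the instance's tasks yourself before deregistering. A minimal sketch under those assumptions (the "ecs" argument is a boto3 ECS client, and the cluster and instance identifiers are placeholders):

```python
def drain_and_deregister(ecs, cluster, container_instance):
    """Stop all tasks on a container instance, then deregister it.

    ``ecs`` is a boto3 ECS client (or a compatible stub). Stopping the
    tasks first avoids orphaning them, so ``force=True`` isn't needed.
    """
    # Find every task still placed on this container instance.
    task_arns = ecs.list_tasks(
        cluster=cluster, containerInstance=container_instance
    )["taskArns"]
    for arn in task_arns:
        ecs.stop_task(cluster=cluster, task=arn,
                      reason="instance being deregistered")
    # With the instance empty, deregistration leaves nothing orphaned.
    return ecs.deregister_container_instance(
        cluster=cluster, containerInstance=container_instance
    )

# Usage (requires AWS credentials):
# import boto3
# drain_and_deregister(boto3.client("ecs"), "default", "container_instance_UUID")
```

Note that if the stopped tasks belong to a service, the service scheduler may immediately restart them on other instances, which is usually the desired behavior.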
Return type: dict Returns: **Response Syntax** { 'containerInstance': { 'containerInstanceArn': 'string', 'ec2InstanceId': 'string', 'capacityProviderName': 'string', 'version': 123, 'versionInfo': { 'agentVersion': 'string', 'agentHash': 'string', 'dockerVersion': 'string' }, 'remainingResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'registeredResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'status': 'string', 'statusReason': 'string', 'agentConnected': True|False, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED', 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'registeredAt': datetime(2015, 1, 1), 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'healthStatus': { 'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'details': [ { 'type': 'CONTAINER_RUNTIME', 'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'lastUpdated': datetime(2015, 1, 1), 'lastStatusChange': datetime(2015, 1, 1) }, ] } } } **Response Structure** * *(dict) --* * **containerInstance** *(dict) --* The container instance that was deregistered. * **containerInstanceArn** *(string) --* The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **ec2InstanceId** *(string) --* The ID of the container instance. For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID. 
* **capacityProviderName** *(string) --* The capacity provider that's associated with the container instance. * **version** *(integer) --* The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the "detail" object) to verify that the version in your event stream is current. * **versionInfo** *(dict) --* The version information for the Amazon ECS container agent and Docker daemon running on the container instance. * **agentVersion** *(string) --* The version number of the Amazon ECS container agent. * **agentHash** *(string) --* The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository. * **dockerVersion** *(string) --* The Docker version that's running on the container instance. * **remainingResources** *(list) --* For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the "host" or "bridge" network mode). Any port that's not specified here is available for new tasks. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". 
* **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **registeredResources** *(list) --* For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **status** *(string) --* The status of the container instance. 
The valid values are "REGISTERING", "REGISTRATION_FAILED", "ACTIVE", "INACTIVE", "DEREGISTERING", or "DRAINING". If your account has opted in to the "awsvpcTrunking" account setting, then any newly registered container instance will transition to a "REGISTERING" status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a "REGISTRATION_FAILED" status. You can describe the container instance and see the reason for failure in the "statusReason" parameter. Once the container instance is terminated, the instance transitions to a "DEREGISTERING" status while the trunk elastic network interface is deprovisioned. The instance then transitions to an "INACTIVE" status. The "ACTIVE" status indicates that the container instance can accept tasks. The "DRAINING" status indicates that new tasks aren't placed on the container instance and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the *Amazon Elastic Container Service Developer Guide*. * **statusReason** *(string) --* The reason that the container instance reached its current status. * **agentConnected** *(boolean) --* This parameter returns "true" if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns "false". Only instances connected to an agent can accept task placement requests. * **runningTasksCount** *(integer) --* The number of tasks on the container instance that have a desired status ( "desiredStatus") of "RUNNING". * **pendingTasksCount** *(integer) --* The number of tasks on the container instance that are in the "PENDING" status. * **agentUpdateStatus** *(string) --* The status of the most recent agent update. If an update wasn't ever requested, this value is "NULL". 
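The "DRAINING" status described above is set with "update_container_instances_state" and can be polled with "describe_container_instances". A minimal sketch, using placeholder names and a boto3-compatible client, that drains an instance and waits for its task counts to reach zero:

```python
import time

def drain_instance(ecs, cluster, container_instance, poll_seconds=15):
    """Put a container instance into DRAINING and wait for it to empty.

    ``ecs`` is a boto3 ECS client (or a compatible stub). DRAINING stops
    new task placement; the service scheduler replaces running service
    tasks on other instances where possible.
    """
    ecs.update_container_instances_state(
        cluster=cluster,
        containerInstances=[container_instance],
        status="DRAINING",
    )
    while True:
        desc = ecs.describe_container_instances(
            cluster=cluster, containerInstances=[container_instance]
        )
        inst = desc["containerInstances"][0]
        # Done once nothing is running or pending on the instance.
        if inst["runningTasksCount"] == 0 and inst["pendingTasksCount"] == 0:
            return inst
        time.sleep(poll_seconds)
```

In production you would likely add a timeout or maximum poll count rather than looping indefinitely.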
* **attributes** *(list) --* The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **registeredAt** *(datetime) --* The Unix timestamp for the time when the container instance was registered. * **attachments** *(list) --* The resources attached to a container instance, such as an elastic network interface. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. 
Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. 
* *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **healthStatus** *(dict) --* An object representing the health status of the container instance. * **overallStatus** *(string) --* The overall health status of the container instance. This is an aggregate status of all container instance health checks. * **details** *(list) --* An array of objects representing the details of the container instance health status. * *(dict) --* An object representing the result of a container instance health status check. * **type** *(string) --* The type of container instance health status that was verified. * **status** *(string) --* The container instance health status. 
* **lastUpdated** *(datetime) --* The Unix timestamp for when the container instance health status was last updated. * **lastStatusChange** *(datetime) --* The Unix timestamp for when the container instance health status last changed. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" **Examples** This example deregisters a container instance from the specified cluster in your default region. If there are still tasks running on the container instance, you must either stop those tasks before deregistering, or use the force option. response = client.deregister_container_instance( cluster='default', containerInstance='container_instance_UUID', force=True, ) print(response) Expected Output: { 'ResponseMetadata': { '...': '...', }, } ECS / Client / describe_services describe_services ***************** ECS.Client.describe_services(**kwargs) Describes the specified services running in your cluster. See also: AWS API Documentation **Request Syntax** response = client.describe_services( cluster='string', services=[ 'string', ], include=[ 'TAGS', ] ) type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to describe. If you do not specify a cluster, the default cluster is assumed. This parameter is required if the service or services you are describing were launched in any cluster other than the default cluster. type services: list param services: **[REQUIRED]** A list of services to describe. You may specify up to 10 services to describe in a single operation. * *(string) --* type include: list param include: Determines whether you want to see the resource tags for the service. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response. 
* *(string) --* rtype: dict returns: **Response Syntax** { 'services': [ { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 
'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 
'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **services** *(list) --* The list of services 
described. * *(dict) --* Details on a service within a cluster. * **serviceArn** *(string) --* The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **serviceName** *(string) --* The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that hosts the service. * **loadBalancers** *(list) --* A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. 
For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. 
* **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this service. For more information, see Service Discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **status** *(string) --* The status of the service. The valid values are "ACTIVE", "DRAINING", or "INACTIVE". * **desiredCount** *(integer) --* The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService. * **runningCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **launchType** *(string) --* The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy. * **capacityProviderStrategy** *(list) --* The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX"). * **taskDefinition** *(string) --* The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService. 
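The base and weight behavior described for "capacityProviderStrategy" above can be sketched in a few lines of Python. This is only an illustration of the documented ratios, not Amazon ECS's actual scheduler logic; the provider names and task counts below are hypothetical:

```python
def split_tasks(total_tasks, providers):
    """Illustrative only: distribute tasks per a capacity provider strategy.

    providers: list of dicts with 'capacityProvider', 'weight', and 'base'
    keys, mirroring the strategy items in the response above. The base
    (at most one provider defines one) is satisfied first; remaining tasks
    are kept proportional to the weights, and weight-0 providers receive
    no weighted tasks.
    """
    placed = {p["capacityProvider"]: 0 for p in providers}
    weighted = {p["capacityProvider"]: 0 for p in providers}
    remaining = total_tasks
    # Satisfy the base value first.
    for p in providers:
        take = min(p.get("base", 0), remaining)
        placed[p["capacityProvider"]] += take
        remaining -= take
    # Split the rest by weight; at least one provider must have weight > 0.
    eligible = [p for p in providers if p.get("weight", 0) > 0]
    for _ in range(remaining):
        target = min(eligible,
                     key=lambda p: weighted[p["capacityProvider"]] / p["weight"])
        weighted[target["capacityProvider"]] += 1
        placed[target["capacityProvider"]] += 1
    return placed

strategy = [
    {"capacityProvider": "FARGATE", "weight": 1, "base": 2},
    {"capacityProvider": "FARGATE_SPOT", "weight": 4, "base": 0},
]
print(split_tasks(12, strategy))  # -> {'FARGATE': 4, 'FARGATE_SPOT': 8}
```

With a base of 2 on "FARGATE", the first two tasks land there; the remaining ten are split 1:4, matching the weight example in the text.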
* **deploymentConfiguration** *(dict) --* Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide* * **enable** *(boolean) --* Determines whether to use the deployment circuit breaker logic for the service. * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). 
The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. * **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. 
If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one, using the "minimumHealthyPercent" as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings. For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. 
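The rounding rules above (maximum percent rounded down, minimum healthy percent rounded up) determine the task-count envelope during a rolling-update deployment. A minimal sketch of that arithmetic, using the worked example from the text:

```python
import math

def deployment_task_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    """Task-count envelope during a rolling update ("ECS") deployment.

    Returns (lower, upper): the scheduler keeps at least `lower` tasks
    RUNNING and allows at most `upper` tasks in RUNNING or PENDING.
    """
    # Lower bound: desiredCount * minimumHealthyPercent / 100, rounded UP.
    lower = math.ceil(desired_count * minimum_healthy_percent / 100)
    # Upper bound: desiredCount * maximumPercent / 100, rounded DOWN.
    upper = math.floor(desired_count * maximum_percent / 100)
    return lower, upper

# The documented example: desiredCount=4, minimumHealthyPercent=50%,
# maximumPercent=200% -> the scheduler may stop down to 2 existing tasks
# and may run up to 8 tasks during the deployment.
print(deployment_task_bounds(4, 50, 200))  # -> (2, 8)
```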
The default "minimumHealthyPercent" value for a service using the "DAEMON" service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms. * **alarmNames** *(list) --* One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. 
Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. * **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. 
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * "RECONCILE_SERVICE" - The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * "PRE_SCALE_UP" - The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * "POST_SCALE_UP" - The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * "TEST_TRAFFIC_SHIFT" - The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. * "POST_TEST_TRAFFIC_SHIFT" - The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * "PRODUCTION_TRAFFIC_SHIFT" - Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage. * "POST_PRODUCTION_TRAFFIC_SHIFT" - The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* * **taskSets** *(list) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. 
An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * *(dict) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that hosts the service the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. 
This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. 
To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. 
Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. 
Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on the load balancers that are used with the task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance.
This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
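Putting the load balancer fields above together, a single target-group entry for a service behind an Application Load Balancer might be sketched as follows. The ARN, container name, and port are hypothetical placeholders, not values from this documentation:

```python
# A sketch of a loadBalancers entry for a service that uses the "ECS"
# deployment controller behind an Application Load Balancer.
# All identifiers below are hypothetical placeholders.
load_balancers = [
    {
        # With an ALB or NLB you specify targetGroupArn and omit
        # loadBalancerName, per the field descriptions above.
        "targetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/example/0123456789abcdef"
        ),
        "containerName": "web",  # as it appears in the container definition
        "containerPort": 80,     # must match a containerPort in the task definition
    }
]

# This list would be passed as the loadBalancers parameter, e.g.:
#   client.create_service(..., loadBalancers=load_balancers)
```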
* **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters.
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set.
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **deployments** *(list) --* The current state of deployments for the service. * *(dict) --* The details of an Amazon ECS service deployment. This is used only when a service uses the "ECS" deployment controller type. * **id** *(string) --* The ID of the deployment. * **status** *(string) --* The status of the deployment. The following describes each state. PRIMARY The most recent deployment of a service. ACTIVE A service deployment that still has running tasks, but is in the process of being replaced with a new "PRIMARY" deployment. INACTIVE A deployment that has been completely replaced. * **taskDefinition** *(string) --* The most recent task definition that was specified for the tasks in the service to use. * **desiredCount** *(integer) --* The most recent desired count of tasks that was specified for the service to deploy or maintain. * **pendingCount** *(integer) --* The number of tasks in the deployment that are in the "PENDING" status. * **runningCount** *(integer) --* The number of tasks in the deployment that are in the "RUNNING" status. * **failedTasks** *(integer) --* The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a "RUNNING" state, or if it fails any of its defined health checks and is stopped. Note: Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service deployment was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the service deployment was last updated. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that the deployment is using. * *(dict) --* The details of a capacity provider strategy.
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used.
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **launchType** *(string) --* The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the *Amazon Elastic Container Service Developer Guide*. * **platformVersion** *(string) --* The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same "platformFamily" value as the service, for example, "LINUX". * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **rolloutState** *(string) --* Note: The "rolloutState" of a service is only returned for services that use the rolling update ( "ECS") deployment type that aren't behind a Classic Load Balancer. The rollout state of the deployment. When a service deployment is started, it begins in an "IN_PROGRESS" state. When the service reaches a steady state, the deployment transitions to a "COMPLETED" state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a "FAILED" state. A deployment in "FAILED" state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
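The "awsvpcConfiguration" limits described above (at most 16 subnets, at most 5 security groups, "assignPublicIp" restricted to "ENABLED" or "DISABLED") can be checked locally before calling the API. This is a sketch with hypothetical subnet and security group IDs, not an official validator:

```python
def validate_awsvpc_configuration(cfg):
    """Apply the documented awsvpcConfiguration limits."""
    if not 1 <= len(cfg["subnets"]) <= 16:
        raise ValueError("specify between 1 and 16 subnets (same VPC)")
    if len(cfg.get("securityGroups", [])) > 5:
        raise ValueError("at most 5 security groups can be specified (same VPC)")
    if cfg.get("assignPublicIp", "DISABLED") not in ("ENABLED", "DISABLED"):
        raise ValueError("assignPublicIp must be ENABLED or DISABLED")
    return cfg

# Hypothetical IDs; passed as the networkConfiguration parameter of
# create_service, update_service, run_task, or create_task_set.
network_configuration = {
    "awsvpcConfiguration": validate_awsvpc_configuration({
        "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroups": ["sg-cccc3333"],
        "assignPublicIp": "DISABLED",  # required when deploymentController is ECS
    })
}
```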
* **rolloutStateReason** *(string) --* A description of the rollout state of a deployment. * **serviceConnectConfiguration** *(dict) --* The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments. The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. 
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace.
Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. 
* **header** *(dict) --* The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. * **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/ "HTTP2"/ "GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour.
* **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". * This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
* For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.

awslogs-datetime-format

Required: No

This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.

One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format.

You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options.

Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.

awslogs-multiline-pattern

Required: No

This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern.

This option is ignored if "awslogs-datetime-format" is also configured.

You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options.

Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.

The following options apply to all supported log drivers.

mode

Required: No

Valid values: "non-blocking" | "blocking"

This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.

If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.

You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*.

Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following:

* Set the "mode" option in your container definition's "logConfiguration" as "blocking".

* Set the "defaultLogDriverMode" account setting to "blocking".

max-buffer-size

Required: No

Default value: "1m"

When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
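Putting the options above together, a container definition's "logConfiguration" for the "awslogs" driver might look like the following sketch. The log group name, Region, and stream prefix are illustrative placeholders, not values taken from this reference:

```python
# Sketch: a logConfiguration block for the "awslogs" log driver using
# "non-blocking" delivery with an explicit intermediate buffer size.
# The log group, Region, and prefix values are placeholders.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-create-group': 'true',   # requires logs:CreateLogGroup in the IAM policy
        'awslogs-group': '/ecs/my-app',
        'awslogs-region': 'us-east-1',
        'awslogs-stream-prefix': 'my-service',
        'mode': 'non-blocking',
        'max-buffer-size': '25m',
    },
}

# This dict would be embedded in a container definition, for example:
# client.register_task_definition(
#     family='my-app',
#     containerDefinitions=[{..., 'logConfiguration': log_configuration}],
# )
```

With the service name as the stream prefix, the resulting log streams take the form "my-service/container-name/ecs-task-id", which makes them traceable back to the service, container, and task.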
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url".

When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might otherwise exhaust the memory available for the buffer inside Docker.

Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream".

When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream".

When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks.

When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options.

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*.
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **serviceConnectResources** *(list) --* The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name. * *(dict) --* The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service. A task can resolve the "dnsName" for each of the "clientAliases" of a service. However, a task can't resolve the discovery names.
If you want to connect to a service, refer to the "ServiceConnectConfiguration" of that service for the list of "clientAliases" that you can use. * **discoveryName** *(string) --* The discovery name of this Service Connect resource. The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **discoveryArn** *(string) --* The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS. * **volumeConfigurations** *(list) --* The details of the volume that was "configuredAtLaunch". You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
* **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default.
You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. 
This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case- sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. 
* **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the deployment. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **vpcLatticeConfigurations** *(list) --* The VPC Lattice configuration for the service deployment. * *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to. * **roleArn** *(string) --* The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
* **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition. * **roleArn** *(string) --* The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer. * **events** *(list) --* The event stream for your service. A maximum of 100 of the latest events are displayed. * *(dict) --* The details for an event that's associated with a service. * **id** *(string) --* The ID string for the event. * **createdAt** *(datetime) --* The Unix timestamp for the time when the event was triggered. * **message** *(string) --* The event message. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service was created. * **placementConstraints** *(list) --* The placement constraints for the tasks in the service. * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". 
For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **placementStrategy** *(list) --* The placement strategy that determines how tasks for the service are placed. * *(dict) --* The task placement strategy for a task or service. For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs.availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. 
* *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **healthCheckGracePeriodSeconds** *(integer) --* The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. * **schedulingStrategy** *(string) --* The scheduling strategy to use for the service. For more information, see Services. There are two service scheduler strategies available. * "REPLICA"-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. * "DAEMON"-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints. Note: Fargate tasks don't support the "DAEMON" scheduling strategy. * **deploymentController** *(dict) --* The deployment controller type the service is using. * **type** *(string) --* The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service.
The valid options are: * ECS When you create a service which uses the "ECS" deployment controller, you can choose between the following deployment strategies: * "ROLLING": When you create a service which uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. * Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). * Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds. * Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. 
* "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant roll back: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. * Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. * **tags** *(list) --* The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters.
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **createdBy** *(string) --* The principal that created the service. 
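The tag restrictions above are straightforward to check client-side before calling "tag_resource" or passing "tags" to "create_service". The helper below is a sketch of such a pre-flight check written for this page, not part of the boto3 API:

```python
def validate_tags(tags):
    """Check a list of {'key': ..., 'value': ...} tags against the documented limits.

    Returns a list of human-readable problems; an empty list means the
    tags satisfy the basic restrictions (count, uniqueness, lengths,
    reserved "aws:" prefix).
    """
    errors = []
    if len(tags) > 50:
        errors.append('more than 50 tags on one resource')
    seen = set()
    for tag in tags:
        key = tag.get('key', '')
        value = tag.get('value', '')
        if key in seen:
            errors.append(f'duplicate key: {key}')
        seen.add(key)
        if len(key) > 128:
            errors.append(f'key longer than 128 characters: {key}')
        if len(value) > 256:
            errors.append(f'value longer than 256 characters for key: {key}')
        if key.lower().startswith('aws:'):
            # Any case combination of "aws:" is reserved for Amazon Web Services use.
            errors.append(f'reserved prefix on key: {key}')
    return errors
```

For example, `validate_tags([{'key': 'aws:team', 'value': 'x'}])` flags the reserved prefix, while a well-formed tag list returns an empty list.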
* **enableECSManagedTags** *(boolean) --* Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. * **enableExecuteCommand** *(boolean) --* Determines whether the execute command functionality is turned on for the service. If "true", the execute command functionality is turned on for all containers in tasks as part of the service. * **availabilityZoneRebalancing** *(string) --* Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the *Amazon Elastic Container Service Developer Guide* . * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" **Examples** This example provides descriptive information about the service named "ecs-simple-service". 
response = client.describe_services(
    services=[
        'ecs-simple-service',
    ],
)

print(response)

Expected Output:

{
    'failures': [
    ],
    'services': [
        {
            'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/default',
            'createdAt': datetime(2016, 8, 29, 16, 25, 52, 0, 242, 0),
            'deploymentConfiguration': {
                'maximumPercent': 200,
                'minimumHealthyPercent': 100,
            },
            'deployments': [
                {
                    'createdAt': datetime(2016, 8, 29, 16, 25, 52, 0, 242, 0),
                    'desiredCount': 1,
                    'id': 'ecs-svc/9223370564341623665',
                    'pendingCount': 0,
                    'runningCount': 0,
                    'status': 'PRIMARY',
                    'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6',
                    'updatedAt': datetime(2016, 8, 29, 16, 25, 52, 0, 242, 0),
                },
            ],
            'desiredCount': 1,
            'events': [
                {
                    'createdAt': datetime(2016, 8, 29, 16, 25, 58, 0, 242, 0),
                    'id': '38c285e5-d335-4b68-8b15-e46dedc8e88d',
                    # In this example, there is a service event that shows unavailable cluster resources.
                    'message': '(service ecs-simple-service) was unable to place a task because no container instance met all of its requirements. The closest matching (container-instance 3f4de1c5-ffdd-4954-af7e-75b4be0c8841) is already using a port required by your task. For more information, see the Troubleshooting section of the Amazon ECS Developer Guide.',
                },
            ],
            'loadBalancers': [
            ],
            'pendingCount': 0,
            'runningCount': 0,
            'serviceArn': 'arn:aws:ecs:us-east-1:012345678910:service/ecs-simple-service',
            'serviceName': 'ecs-simple-service',
            'status': 'ACTIVE',
            'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6',
        },
    ],
    'ResponseMetadata': {
        '...': '...',
    },
}

describe_capacity_providers
***************************

ECS.Client.describe_capacity_providers(**kwargs)

Describes one or more of your capacity providers.
See also: AWS API Documentation **Request Syntax** response = client.describe_capacity_providers( capacityProviders=[ 'string', ], include=[ 'TAGS', ], maxResults=123, nextToken='string' ) Parameters: * **capacityProviders** (*list*) -- The short name or full Amazon Resource Name (ARN) of one or more capacity providers. Up to "100" capacity providers can be described in an action. * *(string) --* * **include** (*list*) -- Specifies whether or not you want to see the resource tags for the capacity provider. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response. * *(string) --* * **maxResults** (*integer*) -- The maximum number of account setting results returned by "DescribeCapacityProviders" in paginated output. When this parameter is used, "DescribeCapacityProviders" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "DescribeCapacityProviders" request with the returned "nextToken" value. This value can be between 1 and 10. If this parameter is not used, then "DescribeCapacityProviders" returns up to 10 results and a "nextToken" value if applicable. * **nextToken** (*string*) -- The "nextToken" value returned from a previous paginated "DescribeCapacityProviders" request where "maxResults" was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the "nextToken" value. Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. 
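The "maxResults"/"nextToken" contract above can be sketched as a small paging helper. This is an illustration, not part of boto3: the function name and the stub in the usage notes are hypothetical, while "describe_capacity_providers", "maxResults", and "nextToken" are the real operation and parameters. With real credentials you would pass "boto3.client('ecs')" as the client.

```python
# Illustrative helper (not part of boto3): pages through
# describe_capacity_providers by hand, following nextToken.
# With boto3 you would pass a real client: client = boto3.client('ecs')
def describe_all_capacity_providers(client, page_size=10):
    """Collect every capacity provider, requesting page_size (1-10) per call."""
    providers = []
    kwargs = {'maxResults': page_size}
    while True:
        page = client.describe_capacity_providers(**kwargs)
        providers.extend(page.get('capacityProviders', []))
        token = page.get('nextToken')
        if not token:  # no nextToken means the last page was reached
            return providers
        kwargs['nextToken'] = token  # opaque token; pass it back unchanged
```

For operations with a registered paginator, the client's "get_paginator" method (see the Paginators section) provides this loop for you.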
Return type: dict Returns: **Response Syntax** { 'capacityProviders': [ { 'capacityProviderArn': 'string', 'name': 'string', 'status': 'ACTIVE'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ] }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **capacityProviders** *(list) --* The list of capacity providers. * *(dict) --* The details for a capacity provider. * **capacityProviderArn** *(string) --* The Amazon Resource Name (ARN) that identifies the capacity provider. * **name** *(string) --* The name of the capacity provider. * **status** *(string) --* The current status of the capacity provider. Only capacity providers in an "ACTIVE" state can be used in a cluster. When a capacity provider is successfully deleted, it has an "INACTIVE" status. * **autoScalingGroupProvider** *(dict) --* The Auto Scaling group settings for the capacity provider. * **autoScalingGroupArn** *(string) --* The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name. * **managedScaling** *(dict) --* The managed scaling settings for the Auto Scaling group capacity provider. * **status** *(string) --* Determines whether to use managed scaling for the capacity provider. * **targetCapacity** *(integer) --* The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than "0" and less than or equal to "100". 
For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a "targetCapacity" of "90". The default value of "100" percent results in the Amazon EC2 instances in your Auto Scaling group being completely used. * **minimumScalingStepSize** *(integer) --* The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of "1" is used. When additional capacity is required, Amazon ECS will scale up by at least the minimum scaling step size, even if the actual demand is less than the minimum scaling step size. If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand. * **maximumScalingStepSize** *(integer) --* The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of "10000" is used. * **instanceWarmupPeriod** *(integer) --* The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of "300" seconds is used. * **managedTerminationProtection** *(string) --* The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off. Warning: When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work. When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action.
The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the *Auto Scaling User Guide*. When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in. * **managedDraining** *(string) --* The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider. * **updateStatus** *(string) --* The update status of the capacity provider. The following are the possible states that are returned. DELETE_IN_PROGRESS The capacity provider is in the process of being deleted. DELETE_COMPLETE The capacity provider was successfully deleted and has an "INACTIVE" status. DELETE_FAILED The capacity provider can't be deleted. The update status reason provides further details about why the delete failed. * **updateStatusReason** *(string) --* The update status reason. This provides further details about the update status for the capacity provider. * **tags** *(list) --* The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive.
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. 
* **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. * **nextToken** *(string) --* The "nextToken" value to include in a future "DescribeCapacityProviders" request. When the results of a "DescribeCapacityProviders" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / update_service update_service ************** ECS.Client.update_service(**kwargs) Modifies the parameters of a service. Note: On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition. For services using the rolling update ( "ECS") you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration. You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the *Amazon Elastic Container Service Developer Guide*. You can update your volume configurations and trigger a new deployment. "volumeConfigurations" is only supported for REPLICA service and not DAEMON service. If you leave "volumeConfigurations" "null", it doesn't trigger a new deployment. For more information on volumes, see Amazon EBS volumes in the *Amazon Elastic Container Service Developer Guide*. 
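As a minimal sketch of a rolling-update call, the wrapper below changes only the desired count. The function name is hypothetical; "update_service", "cluster", "service", and "desiredCount" are the real API parameters, and with boto3 you would pass "boto3.client('ecs')" as the client.

```python
# Illustrative wrapper (the function name is hypothetical; update_service,
# cluster, service, and desiredCount are the real API names).
# In practice: client = boto3.client('ecs')
def scale_service(client, cluster, service, desired_count):
    """Trigger a rolling update that changes only the service's desired count."""
    return client.update_service(
        cluster=cluster,             # the default cluster is assumed if omitted
        service=service,             # required: the service to update
        desiredCount=desired_count,  # new number of task instantiations
    )
```

Because "volumeConfigurations" is left out entirely (not passed as "null"), a call like this does not alter the service's volume configuration.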
For services using the blue/green ( "CODE_DEPLOY") deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags can be updated using this API. If the network configuration, platform version, task definition, or load balancer need to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the *CodeDeploy API Reference*. For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, create a new task set. For more information, see CreateTaskSet. You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new "desiredCount" parameter. You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the *Amazon Elastic Container Service Developer Guide*. If you have updated the container image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy. Note: If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, "my_image:latest"), you don't need to create a new revision of your task definition. You can update the service using the "forceNewDeployment" option.
The new tasks launched by the deployment pull the current image/tag combination from your repository when they start. You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, "minimumHealthyPercent" and "maximumPercent", to determine the deployment strategy. * If "minimumHealthyPercent" is below 100%, the scheduler can ignore "desiredCount" temporarily during a deployment. For example, if "desiredCount" is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the "RUNNING" state. Tasks for services that use a load balancer are considered healthy if they're in the "RUNNING" state and are reported as healthy by the load balancer. * The "maximumPercent" parameter represents an upper limit on the number of running tasks during a deployment. You can use it to define the deployment batch size. For example, if "desiredCount" is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). When UpdateService stops a task during a deployment, the equivalent of "docker stop" is issued to the containers running in the task. This results in a "SIGTERM" and a 30-second timeout. After this, "SIGKILL" is sent and the containers are forcibly stopped. If the container handles the "SIGTERM" gracefully and exits within 30 seconds from receiving it, no "SIGKILL" is sent. When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic. * Determine which of the container instances in your cluster can support your service's task definition. For example, they have the required CPU, memory, ports, and container instance attributes. 
* By default, the service scheduler attempts to balance tasks across Availability Zones in this manner even though you can choose a different placement strategy. * Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement. * Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service. When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic: * Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination. * Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service. 
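The zone-balancing placement logic above can be sketched in plain Python. This is an illustrative model of the documented tie-breaking order, not the actual ECS scheduler, and the instance records are hypothetical:

```python
# Illustrative model of the placement logic described above: favor the
# Availability Zone with the fewest running tasks for this service, then
# the least-loaded valid container instance within that zone.
def pick_placement(valid_instances):
    """valid_instances: dicts with 'id', 'zone', and 'running_tasks'
    (tasks for this service already running on that instance)."""
    # Total tasks for this service per Availability Zone.
    zone_load = {}
    for inst in valid_instances:
        zone_load[inst['zone']] = zone_load.get(inst['zone'], 0) + inst['running_tasks']
    # Sort key mirrors the documented ordering: zone load first, instance load second.
    best = min(valid_instances,
               key=lambda inst: (zone_load[inst['zone']], inst['running_tasks']))
    return best['id']
```

Stopping tasks reverses the ordering: the scheduler sorts by the largest per-zone task count and stops a task in the most-loaded zone first.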
See also: AWS API Documentation **Request Syntax** response = client.update_service( cluster='string', service='string', desiredCount=123, taskDefinition='string', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], deploymentConfiguration={ 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, availabilityZoneRebalancing='ENABLED'|'DISABLED', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], platformVersion='string', forceNewDeployment=True|False, healthCheckGracePeriodSeconds=123, deploymentController={ 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, enableExecuteCommand=True|False, enableECSManagedTags=True|False, loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], serviceConnectConfiguration={ 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 
'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], vpcLatticeConfigurations=[ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] ) type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster that your service runs on. If you do not specify a cluster, the default cluster is assumed. You can't change the cluster name. type service: string param service: **[REQUIRED]** The name of the service to update. type desiredCount: integer param desiredCount: The number of instantiations of the task to place and keep running in your service. type taskDefinition: string param taskDefinition: The "family" and "revision" ( "family:revision") or full ARN of the task definition to run in your service. If a "revision" is not specified, the latest "ACTIVE" revision is used. 
If you modify the task definition with "UpdateService", Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running. type capacityProviderStrategy: list param capacityProviderStrategy: The details of a capacity provider strategy. You can set a capacity provider when you create a cluster, run a task, or update a service. When you use Fargate, the capacity providers are "FARGATE" or "FARGATE_SPOT". When you use Amazon EC2, the capacity providers are Auto Scaling groups. You can change capacity providers for rolling deployments and blue/green deployments. The following list provides the valid transitions: * Update the Fargate launch type to an Auto Scaling group capacity provider. * Update the Amazon EC2 launch type to a Fargate capacity provider. * Update the Fargate capacity provider to an Auto Scaling group capacity provider. * Update the Amazon EC2 capacity provider to a Fargate capacity provider. * Update the Auto Scaling group or Fargate capacity provider back to the launch type. Pass an empty list in the "capacityProviderStrategy" parameter. For information about Amazon Web Services CDK considerations, see Amazon Web Services CDK considerations. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* **[REQUIRED]** The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. 
Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. type deploymentConfiguration: dict param deploymentConfiguration: Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide* * **enable** *(boolean) --* **[REQUIRED]** Determines whether to use the deployment circuit breaker logic for the service. * **rollback** *(boolean) --* **[REQUIRED]** Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. 
* **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. 
* **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one, using the "minimumHealthyPercent" as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. The default "minimumHealthyPercent" value for a service using the "DAEMON" service scheduler is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms.
* **alarmNames** *(list) --* **[REQUIRED]** One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* **[REQUIRED]** Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* **[REQUIRED]** Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. * **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. 
For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. * POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. 
You can use a lifecycle hook for this stage. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* type availabilityZoneRebalancing: string param availabilityZoneRebalancing: Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the *Amazon Elastic Container Service Developer Guide* . type networkConfiguration: dict param networkConfiguration: An object representing the network configuration for the service. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* **[REQUIRED]** The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". type placementConstraints: list param placementConstraints: An array of task placement constraint objects to update the service to use. If no value is specified, the existing placement constraints for the service will remain unchanged. 
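The deployment configuration fields described above can be sketched as plain Python data before passing them to "update_service". This is an illustrative sketch only: the alarm name is a placeholder, and the arithmetic mirrors the rounding rule stated above (minimum healthy tasks = "desiredCount" times "minimumHealthyPercent"/100, rounded up).

```python
import math

# Illustrative values only: a replica service with four desired tasks and a
# 50% minimum healthy percent, as in the example in the docs above.
desired_count = 4
minimum_healthy_percent = 50

# Minimum healthy tasks = ceil(desiredCount * minimumHealthyPercent / 100).
min_healthy_tasks = math.ceil(desired_count * minimum_healthy_percent / 100)
print(min_healthy_tasks)  # 2

# A matching deploymentConfiguration value; "web-alarm" is a placeholder
# CloudWatch alarm name, not a value from this reference.
deployment_configuration = {
    "minimumHealthyPercent": minimum_healthy_percent,
    "maximumPercent": 200,
    "strategy": "ROLLING",
    "alarms": {
        "alarmNames": ["web-alarm"],
        "enable": True,
        "rollback": True,  # roll back to the last successful deployment on alarm
    },
}
```

With credentials configured, this dict would be passed as "deploymentConfiguration=deployment_configuration" in an "update_service" call.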
If this value is specified, it will override any existing placement constraints defined for the service. To remove all existing placement constraints, specify an empty array. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime. * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. type placementStrategy: list param placementStrategy: The task placement strategy objects to update the service to use. If no value is specified, the existing placement strategy for the service will remain unchanged. If this value is specified, it will override the existing placement strategy defined for the service. To remove an existing placement strategy, specify an empty object. You can specify a maximum of five strategy rules for each service. * *(dict) --* The task placement strategy for a task or service. For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. 
The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs.availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. type platformVersion: string param platformVersion: The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If a platform version is not specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. type forceNewDeployment: boolean param forceNewDeployment: Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination ( "my_image:latest") or to roll Fargate tasks onto a newer platform version. type healthCheckGracePeriodSeconds: integer param healthCheckGracePeriodSeconds: The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. 
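The network configuration and placement strategy parameters above can be sketched as follows. The subnet and security-group IDs are placeholders; in a real call, all of them must come from the same VPC.

```python
# Placeholder subnet and security-group IDs for illustration only.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],     # up to 16 subnets, same VPC
        "securityGroups": ["sg-0123456789abcdef0"],  # up to 5 groups, same VPC
        "assignPublicIp": "DISABLED",                # the update-service default
    },
}

# Spread tasks across Availability Zones first, then binpack on memory.
# A service allows at most five strategy rules.
placement_strategy = [
    {"type": "spread", "field": "attribute:ecs.availability-zone"},
    {"type": "binpack", "field": "memory"},
]
```

These would be passed as the "networkConfiguration" and "placementStrategy" arguments of "update_service".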
If you don't specify a health check grace period value, the default value of "0" is used. If you don't use any of the health checks, then "healthCheckGracePeriodSeconds" is unused. If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 68 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. type deploymentController: dict param deploymentController: The deployment controller to use for the service. * **type** *(string) --* **[REQUIRED]** The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are: * ECS When you create a service that uses the "ECS" deployment controller, you can choose between the following deployment strategies: * "ROLLING": When you create a service that uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. * Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). * Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant rollback: Your service can tolerate a rollback process that takes minutes rather than seconds. 
* Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. * "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant rollback: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. * Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. 
The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. type enableExecuteCommand: boolean param enableExecuteCommand: If "true", this enables execute command functionality on all task containers. If you do not want to override the value that was set when the service was created, you can set this to "null" when performing this action. type enableECSManagedTags: boolean param enableECSManagedTags: Determines whether to turn on Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. Only tasks launched after the update will reflect the update. To update the tags on all tasks, set "forceNewDeployment" to "true", so that Amazon ECS starts new tasks with the updated tags. type loadBalancers: list param loadBalancers: Note: You must have a service-linked role when you update this property. A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition. When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running. For services that use rolling updates, you can add, update, or remove Elastic Load Balancing target groups. You can update from a single target group to multiple target groups and from multiple target groups to a single target group. For services that use blue/green deployments, you can update Elastic Load Balancing target groups by using "CreateDeployment" through CodeDeploy. Note that multiple target groups are not supported for blue/green deployments. 
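The deployment controller, health check grace period, and execute-command parameters above can be combined into a single set of keyword arguments. This is a sketch only: the cluster and service names are placeholders, and applying it requires AWS credentials (for example via "boto3.client('ecs').update_service(**update_kwargs)").

```python
# Placeholder cluster/service names; applying this dict requires an
# authenticated boto3 ECS client.
update_kwargs = {
    "cluster": "my-cluster",
    "service": "web",
    "deploymentController": {"type": "ECS"},  # rolling or blue/green strategies
    "healthCheckGracePeriodSeconds": 120,     # ignore health checks for 2 minutes
    "enableExecuteCommand": True,             # allow ECS Exec into task containers
    "enableECSManagedTags": True,
    "forceNewDeployment": True,               # so existing tasks pick up the change
}
```

Setting "forceNewDeployment" alongside "enableECSManagedTags" follows the note above: only tasks launched after the update reflect tag changes, so forcing a new deployment rolls all tasks onto the new setting.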
For more information, see Register multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services that use the external deployment controller, you can add, update, or remove load balancers by using CreateTaskSet. Note that multiple target groups are not supported for external deployments. For more information, see Register multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. You can remove existing "loadBalancers" by passing an empty list. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. 
Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. 
* **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. type propagateTags: string param propagateTags: Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. Only tasks launched after the update will reflect the update. To update the tags on all tasks, set "forceNewDeployment" to "true", so that Amazon ECS starts new tasks with the updated tags. type serviceRegistries: list param serviceRegistries: Note: You must have a service-linked role when you update this property. For more information about the role, see the "CreateService" request parameter role. The details for the service discovery registries to assign to this service. For more information, see Service Discovery. When you add, update, or remove the service registries configuration, Amazon ECS starts new tasks with the updated service registries configuration, and then stops the old tasks when the new tasks are running. You can remove existing "serviceRegistries" by passing an empty list. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. 
* **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. type serviceConnectConfiguration: dict param serviceConnectConfiguration: The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. 
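The load balancer and service registry parameters above can be sketched as follows. Every ARN, name, and port is a placeholder; the container name and port must match the service's container definition, and the registry ARN would come from Cloud Map.

```python
# Placeholder ARNs, names, and ports for illustration only. targetGroupArn
# is only set when using an Application Load Balancer or Network Load Balancer.
load_balancers = [
    {
        "targetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/web/0123456789abcdef"
        ),
        "containerName": "web",  # as it appears in the container definition
        "containerPort": 8080,   # must match a containerPort in the task definition
    },
]

# One registry per service; Cloud Map is the supported registry. With the
# awsvpc network mode and an SRV record, specify either containerName and
# containerPort together, or port, but not both.
service_registries = [
    {
        "registryArn": (
            "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-example"
        ),
        "containerName": "web",
        "containerPort": 8080,
    },
]
```

These would be passed as the "loadBalancers" and "serviceRegistries" arguments of "update_service"; passing an empty list for either removes the existing configuration, per the notes above.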
For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* **[REQUIRED]** Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* **[REQUIRED]** The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). 
The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* **[REQUIRED]** The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. 
The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. * **header** *(dict) --* **[REQUIRED]** The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* **[REQUIRED]** The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* **[REQUIRED]** The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. 
* **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/"HTTP2"/"GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour. * **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* **[REQUIRED]** The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. 
This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". * This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. * For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, you might need the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to. * **logDriver** *(string) --* **[REQUIRED]** The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". 
For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. 
awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. That way, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. 
This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. 
If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". 
When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the secret. * **valueFrom** *(string) --* **[REQUIRED]** The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. 
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. type volumeConfigurations: list param volumeConfigurations: The details of the volume that was "configuredAtLaunch". You can configure the size, volumeType, IOPS, throughput, snapshot, and encryption in ServiceManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. If set to null, no new deployment is triggered. Otherwise, if this configuration differs from the existing one, it triggers a new deployment. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* **[REQUIRED]** The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. * **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. 
This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. 
* "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. 
This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* **[REQUIRED]** The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". 
If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". type vpcLatticeConfigurations: list param vpcLatticeConfigurations: An object representing the VPC Lattice configuration for the service being updated. * *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure. * **targetGroupArn** *(string) --* **[REQUIRED]** The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* **[REQUIRED]** The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition. 
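As a sketch of how the request parameters above fit together, the following builds an "update_service" request that enables Service Connect with an "awslogs" log configuration in "non-blocking" mode, attaches a service-managed Amazon EBS volume, and registers the tasks with a VPC Lattice target group. This is a minimal illustration, not a complete recipe; every name, namespace, and ARN is a hypothetical placeholder you would replace with your own resources.

```python
# Hedged sketch of UpdateService parameters. All "example-..." names and
# the account/ARN values are hypothetical placeholders.
params = {
    "cluster": "example-cluster",
    "service": "example-service",
    "serviceConnectConfiguration": {
        "enabled": True,
        "namespace": "example-namespace",
        "logConfiguration": {
            "logDriver": "awslogs",  # supported on both Fargate and EC2
            "options": {
                "awslogs-group": "/ecs/example-service",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "example-service",
                "mode": "non-blocking",     # buffer logs instead of blocking stdout/stderr
                "max-buffer-size": "25m",   # in-memory buffer used by non-blocking mode
            },
        },
    },
    "volumeConfigurations": [
        {
            # Must match the name of a configuredAtLaunch volume
            # in the task definition.
            "name": "example-data-volume",
            "managedEBSVolume": {
                "encrypted": True,
                "volumeType": "gp3",
                "sizeInGiB": 100,   # required here because no snapshotId is given
                "iops": 3000,       # the gp3 default baseline
                "filesystemType": "xfs",
                "roleArn": "arn:aws:iam::111122223333:role/exampleEcsInfraRole",
            },
        }
    ],
    "vpcLatticeConfigurations": [
        {
            "roleArn": "arn:aws:iam::111122223333:role/exampleEcsInfraRole",
            "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-example",
            # Must match a portMapping name from the task definition.
            "portName": "example-port",
        }
    ],
}

# With boto3 installed and credentials configured, this would be sent as:
# client = boto3.client("ecs")
# response = client.update_service(**params)
```

Because the volume configuration specifies "sizeInGiB" rather than "snapshotId", a fresh 100 GiB volume is created for each task; changing this configuration on an existing service triggers a new deployment, as described above.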
rtype: dict returns: **Response Syntax** { 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 
'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 
'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' } } **Response Structure** * *(dict) --* * **service** *(dict) --* The full description of your service following the update call. 
* **serviceArn** *(string) --* The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **serviceName** *(string) --* The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that hosts the service. * **loadBalancers** *(list) --* A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. 
For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. 
* **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this service. For more information, see Service Discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **status** *(string) --* The status of the service. The valid values are "ACTIVE", "DRAINING", or "INACTIVE". * **desiredCount** *(integer) --* The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService. * **runningCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **launchType** *(string) --* The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy. * **capacityProviderStrategy** *(list) --* The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX"). * **taskDefinition** *(string) --* The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService. 
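The base and weight semantics above can be sketched as a small allocation function. This is an illustration of the documented rules (satisfy each base first, then split the remainder by weight), not the scheduler's actual placement algorithm:

```python
def split_tasks(total, strategy):
    """Allocate `total` tasks across a capacity provider strategy list:
    satisfy each provider's `base` first, then split the remainder in
    proportion to `weight` (integer division; the real scheduler also
    handles rounding leftovers and placement constraints)."""
    counts = {}
    remaining = total
    for item in strategy:
        b = min(item.get("base", 0), remaining)
        counts[item["capacityProvider"]] = b
        remaining -= b
    total_weight = sum(item.get("weight", 0) for item in strategy)
    for item in strategy:
        if total_weight:
            counts[item["capacityProvider"]] += remaining * item.get("weight", 0) // total_weight
    return counts

# weight 1 vs 4: for every task on A, four run on B
print(split_tasks(5, [{"capacityProvider": "A", "weight": 1},
                      {"capacityProvider": "B", "weight": 4}]))  # {'A': 1, 'B': 4}
```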
* **deploymentConfiguration** *(dict) --* Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide*. * **enable** *(boolean) --* Determines whether to use the deployment circuit breaker logic for the service. * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). 
The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. * **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. 
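Under stated assumptions (rolling "ECS" deployment, "REPLICA" scheduler), the two percentages above bound the task count during a deployment. A minimal sketch of the documented rounding rules; the function name is our own:

```python
import math

def deployment_task_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    """Lower and upper task-count limits during a rolling (ECS) deployment:
    minimumHealthyPercent rounds up, maximumPercent rounds down."""
    lower = math.ceil(desired_count * minimum_healthy_percent / 100)
    upper = desired_count * maximum_percent // 100
    return lower, upper

# desiredCount=4, minimumHealthyPercent=50%, maximumPercent=200%:
# the scheduler may run as few as 2 and as many as 8 tasks mid-deployment.
print(deployment_task_bounds(4, 50, 200))  # (2, 8)
```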
If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the "minimumHealthyPercent" as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings. For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. 
The default "minimumHealthyPercent" value for a service using the "DAEMON" service scheduler is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms. * **alarmNames** *(list) --* One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. 
Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. * **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. 
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. * POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* * **taskSets** *(list) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. 
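The blue/green fields above ( "strategy", "bakeTimeInMinutes", "lifecycleHooks") fit together as shown in this illustrative configuration fragment; the ARNs and function names are placeholders, not real resources:

```python
# Hypothetical blue/green deployment configuration; every ARN below is a
# placeholder. bakeTimeInMinutes and hookTargetArn are required as noted above.
deployment_configuration = {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 10,  # blue and green run side by side for 10 minutes
    "lifecycleHooks": [
        {
            "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:post-shift-check",
            "roleArn": "arn:aws:iam::111122223333:role/ecs-lifecycle-hook-role",
            "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"],  # run after test traffic shifts
        }
    ],
}
```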
An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * *(dict) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. 
This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. 
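The "computedDesiredCount" rule above (the task set's "scale" percentage of the service's "desiredCount", always rounded up) in a one-line sketch; the function name is our own:

```python
import math

def computed_desired_count(service_desired_count, scale_percent):
    """Task set desired count: service desiredCount times the task set's
    scale percentage, rounded up per the documented behavior."""
    return math.ceil(service_desired_count * scale_percent / 100)

print(computed_desired_count(4, 30))  # 4 * 30% = 1.2, rounds up to 2
```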
To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. 
Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. 
Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. 
This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. 
* **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. 
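A minimal client-side check of the tag restrictions listed above can look like the sketch below. The function name and error messages are our own, and this covers only the count, uniqueness, length, and reserved-prefix rules, not the full character set:

```python
def validate_tags(tags):
    """Check a tag list against the documented restrictions: at most 50 tags,
    unique keys, key <= 128 and value <= 256 characters, and no reserved
    aws:/AWS: prefix. Raises ValueError on the first violation."""
    if len(tags) > 50:
        raise ValueError("maximum of 50 tags per resource")
    seen = set()
    for tag in tags:
        key, value = tag["key"], tag.get("value", "")
        if key in seen:
            raise ValueError("duplicate tag key: " + key)
        seen.add(key)
        if len(key) > 128:
            raise ValueError("tag key exceeds 128 characters")
        if len(value) > 256:
            raise ValueError("tag value exceeds 256 characters")
        if key.lower().startswith("aws:"):
            raise ValueError("the aws: prefix is reserved")
```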
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **deployments** *(list) --* The current state of deployments for the service. * *(dict) --* The details of an Amazon ECS service deployment. This is used only when a service uses the "ECS" deployment controller type. * **id** *(string) --* The ID of the deployment. * **status** *(string) --* The status of the deployment. The following describes each state. PRIMARY The most recent deployment of a service. ACTIVE A service deployment that still has running tasks, but is in the process of being replaced with a new "PRIMARY" deployment. INACTIVE A deployment that has been completely replaced. * **taskDefinition** *(string) --* The most recent task definition that was specified for the tasks in the service to use. * **desiredCount** *(integer) --* The most recent desired count of tasks that was specified for the service to deploy or maintain. * **pendingCount** *(integer) --* The number of tasks in the deployment that are in the "PENDING" status. * **runningCount** *(integer) --* The number of tasks in the deployment that are in the "RUNNING" status. * **failedTasks** *(integer) --* The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a "RUNNING" state, or if it fails any of its defined health checks and is stopped. Note: Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service deployment was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the service deployment was last updated. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that the deployment is using. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs, or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **launchType** *(string) --* The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the *Amazon Elastic Container Service Developer Guide*. * **platformVersion** *(string) --* The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service, or tasks are running on. A platform family is specified only for tasks using the Fargate launch type. 
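The base-then-weight arithmetic described above can be sketched in a few lines. This is an illustration of the documented behavior only — the real ECS scheduler's placement and rounding may differ:

```python
def split_by_strategy(total_tasks, strategy):
    """Approximate how tasks are apportioned across a capacityProviderStrategy:
    each provider's base is satisfied first, then the remainder is split by
    weight ratio. Illustrative sketch, not the actual ECS scheduler."""
    counts, remaining = {}, total_tasks
    # Satisfy each provider's base first (at most one provider defines a base).
    for p in strategy:
        b = min(p.get("base", 0), remaining)
        counts[p["capacityProvider"]] = b
        remaining -= b
    total_weight = sum(p.get("weight", 0) for p in strategy)
    if total_weight == 0:
        # Matches the documented failure: all-zero weights can't place tasks.
        raise ValueError("at least one capacity provider needs a weight > 0")
    # Split what's left proportionally by weight.
    for p in strategy:
        counts[p["capacityProvider"]] += round(
            remaining * p.get("weight", 0) / total_weight)
    return counts
```

With the documented example — weight "1" for *capacityProviderA* and "4" for *capacityProviderB* — ten tasks split 2 and 8, a one-to-four ratio.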
All tasks that run as part of this service must use the same "platformFamily" value as the service, for example, "LINUX". * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **rolloutState** *(string) --* Note: The "rolloutState" of a service is only returned for services that use the rolling update ( "ECS") deployment type that aren't behind a Classic Load Balancer. The rollout state of the deployment. When a service deployment is started, it begins in an "IN_PROGRESS" state. When the service reaches a steady state, the deployment transitions to a "COMPLETED" state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a "FAILED" state. A deployment in "FAILED" state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker. 
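As a parameter, the "awsvpcConfiguration" shape described above looks like the following. The subnet and security group IDs here are placeholders — substitute resources from your own VPC:

```python
# Placeholder IDs; all subnets and security groups must be from the same VPC.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],     # up to 16 subnets
        "securityGroups": ["sg-0123456789abcdef0"],  # up to 5 security groups
        "assignPublicIp": "DISABLED",                # or "ENABLED"
    }
}

# Client-side sanity checks mirroring the documented limits.
cfg = network_configuration["awsvpcConfiguration"]
assert len(cfg["subnets"]) <= 16
assert len(cfg["securityGroups"]) <= 5
```

The same dict is passed as "networkConfiguration" to calls such as "create_service", "update_service", and "run_task" for "awsvpc"-mode task definitions.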
* **rolloutStateReason** *(string) --* A description of the rollout state of a deployment. * **serviceConnectConfiguration** *(dict) --* The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments. The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. 
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. 
Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. 
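Assembled from the fields above, a "serviceConnectConfiguration" parameter might look like the following. The namespace, port name, and DNS names here are hypothetical — "portName" must match a "portMappings" name in your task definition, and the namespace must already exist in Cloud Map in the same Region:

```python
# Hypothetical Service Connect configuration for a frontend service.
service_connect_configuration = {
    "enabled": True,
    "namespace": "internal",            # placeholder Cloud Map namespace
    "services": [
        {
            "portName": "web",          # must match a task definition portMapping
            "discoveryName": "frontend",
            "clientAliases": [
                # At most one alias; clients connect to frontend:80.
                {"port": 80, "dnsName": "frontend"}
            ],
        }
    ],
}

# The documented limit: one client alias per Service Connect service.
for svc in service_connect_configuration["services"]:
    assert len(svc["clientAliases"]) <= 1
```

Other tasks in the "internal" namespace could then reach this service by the short name "frontend" on port 80 through the managed proxy.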
* **header** *(dict) --* The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. * **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/ "HTTP2"/ "GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour. 
* **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". * This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
* For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. 
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from container is interrupted. 
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
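Putting the "awslogs" options above together, a container definition's "logConfiguration" might look like the following. The log group name, Region, and stream prefix are placeholders:

```python
# Placeholder group name, Region, and prefix; adjust for your service.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",     # must exist in the Region below
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",  # required on Fargate
        "awslogs-create-group": "true",         # needs logs:CreateLogGroup
        "mode": "non-blocking",                 # the current default mode
        "max-buffer-size": "25m",               # only used with non-blocking
    },
}

# The two multiline options are mutually exclusive; neither is set here.
opts = log_configuration["options"]
assert not ("awslogs-datetime-format" in opts
            and "awslogs-multiline-pattern" in opts)
```

With the "my-service" prefix, log streams take the documented form "my-service/container-name/ecs-task-id".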
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **serviceConnectResources** *(list) --* The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name. * *(dict) --* The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service. A task can resolve the "dnsName" for each of the "clientAliases" of a service. However a task can't resolve the discovery names. 
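A "secretOptions" entry pairs a driver option name with a "valueFrom" ARN, as described above. A hypothetical example routing a Splunk token from Secrets Manager into a "splunk" log configuration (the account ID, secret ARN, and Splunk URL are placeholders):

```python
# Placeholder ARN and endpoint; "splunk-token" is resolved from Secrets
# Manager at task launch rather than stored in the task definition.
secret_options = [
    {
        "name": "splunk-token",
        "valueFrom": (
            "arn:aws:secretsmanager:us-east-1:123456789012:"
            "secret:splunk-token-AbCdEf"
        ),
    }
]

splunk_log_configuration = {
    "logDriver": "splunk",
    "options": {"splunk-url": "https://splunk.example.com:8088"},
    "secretOptions": secret_options,
}
```

This keeps the token itself out of the plain-text "options" map, which is visible to anyone who can describe the task definition.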
If you want to connect to a service, refer to the "ServiceConnectConfiguration" of that service for the list of "clientAliases" that you can use. * **discoveryName** *(string) --* The discovery name of this Service Connect resource. The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **discoveryArn** *(string) --* The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS. * **volumeConfigurations** *(list) --* The details of the volume that was "configuredAtLaunch". You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. 
* **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. 
You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. 
This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values.
* **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the deployment. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **vpcLatticeConfigurations** *(list) --* The VPC Lattice configuration for the service deployment. * *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to. * **roleArn** *(string) --* The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
* **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition. * **roleArn** *(string) --* The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer. * **events** *(list) --* The event stream for your service. A maximum of 100 of the latest events are displayed. * *(dict) --* The details for an event that's associated with a service. * **id** *(string) --* The ID string for the event. * **createdAt** *(datetime) --* The Unix timestamp for the time when the event was triggered. * **message** *(string) --* The event message. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service was created. * **placementConstraints** *(list) --* The placement constraints for the tasks in the service. * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". 
For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **placementStrategy** *(list) --* The placement strategy that determines how tasks for the service are placed. * *(dict) --* The task placement strategy for a task or service. For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs.availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC.
* *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **healthCheckGracePeriodSeconds** *(integer) --* The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. * **schedulingStrategy** *(string) --* The scheduling strategy to use for the service. For more information, see Services. There are two service scheduler strategies available. * "REPLICA"-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. * "DAEMON"-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints. Note: Fargate tasks don't support the "DAEMON" scheduling strategy. * **deploymentController** *(dict) --* The deployment controller type the service is using. * **type** *(string) --* The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service.
The valid options are: * ECS When you create a service which uses the "ECS" deployment controller, you can choose between the following deployment strategies: * "ROLLING": When you create a service which uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. * Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). * Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds. * Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. 
* "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant roll back: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. * Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. * **tags** *(list) --* The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters.
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **createdBy** *(string) --* The principal that created the service.
* **enableECSManagedTags** *(boolean) --* Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. * **enableExecuteCommand** *(boolean) --* Determines whether the execute command functionality is turned on for the service. If "true", the execute command functionality is turned on for all containers in tasks as part of the service. * **availabilityZoneRebalancing** *(string) --* Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the *Amazon Elastic Container Service Developer Guide*. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.ServiceNotActiveException" * "ECS.Client.exceptions.PlatformUnknownException" * "ECS.Client.exceptions.PlatformTaskDefinitionIncompatibilityException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.NamespaceNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" **Examples** This example updates the my-http-service service to use the amazon-ecs-sample task definition. response = client.update_service( service='my-http-service', taskDefinition='amazon-ecs-sample', ) print(response) Expected Output: { 'ResponseMetadata': { '...': '...', }, } This example updates the desired count of the my-http-service service to 10.
response = client.update_service( desiredCount=10, service='my-http-service', ) print(response) Expected Output: { 'ResponseMetadata': { '...': '...', }, } ECS / Client / update_capacity_provider update_capacity_provider ************************ ECS.Client.update_capacity_provider(**kwargs) Modifies the parameters for a capacity provider. See also: AWS API Documentation **Request Syntax** response = client.update_capacity_provider( name='string', autoScalingGroupProvider={ 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' } ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name of the capacity provider to update. * **autoScalingGroupProvider** (*dict*) -- **[REQUIRED]** An object that represents the parameters to update for the Auto Scaling group capacity provider. * **managedScaling** *(dict) --* The managed scaling settings for the Auto Scaling group capacity provider. * **status** *(string) --* Determines whether to use managed scaling for the capacity provider. * **targetCapacity** *(integer) --* The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than "0" and less than or equal to "100". For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a "targetCapacity" of "90". The default value of "100" percent results in the Amazon EC2 instances in your Auto Scaling group being completely used. * **minimumScalingStepSize** *(integer) --* The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of "1" is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size. If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand. * **maximumScalingStepSize** *(integer) --* The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of "10000" is used. * **instanceWarmupPeriod** *(integer) --* The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of "300" seconds is used. * **managedTerminationProtection** *(string) --* The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. Warning: When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work. When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on. For more information, see Instance Protection in the *Auto Scaling User Guide*. When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in. * **managedDraining** *(string) --* The managed draining option for the Auto Scaling group capacity provider.
When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider. Return type: dict Returns: **Response Syntax** { 'capacityProvider': { 'capacityProviderArn': 'string', 'name': 'string', 'status': 'ACTIVE'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ] } } **Response Structure** * *(dict) --* * **capacityProvider** *(dict) --* Details about the capacity provider. * **capacityProviderArn** *(string) --* The Amazon Resource Name (ARN) that identifies the capacity provider. * **name** *(string) --* The name of the capacity provider. * **status** *(string) --* The current status of the capacity provider. Only capacity providers in an "ACTIVE" state can be used in a cluster. When a capacity provider is successfully deleted, it has an "INACTIVE" status. * **autoScalingGroupProvider** *(dict) --* The Auto Scaling group settings for the capacity provider. * **autoScalingGroupArn** *(string) --* The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name. * **managedScaling** *(dict) --* The managed scaling settings for the Auto Scaling group capacity provider. * **status** *(string) --* Determines whether to use managed scaling for the capacity provider. * **targetCapacity** *(integer) --* The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than "0" and less than or equal to "100". 
For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a "targetCapacity" of "90". The default value of "100" percent results in the Amazon EC2 instances in your Auto Scaling group being completely used. * **minimumScalingStepSize** *(integer) --* The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of "1" is used. When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size. If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand. * **maximumScalingStepSize** *(integer) --* The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of "10000" is used. * **instanceWarmupPeriod** *(integer) --* The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of "300" seconds is used. * **managedTerminationProtection** *(string) --* The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off. Warning: When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work. When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action.
The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the *Auto Scaling User Guide*. When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in. * **managedDraining** *(string) --* The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider. * **updateStatus** *(string) --* The update status of the capacity provider. The following are the possible states that are returned. DELETE_IN_PROGRESS The capacity provider is in the process of being deleted. DELETE_COMPLETE The capacity provider was successfully deleted and has an "INACTIVE" status. DELETE_FAILED The capacity provider can't be deleted. The update status reason provides further details about why the delete failed. * **updateStatusReason** *(string) --* The update status reason. This provides further details about the update status for the capacity provider. * **tags** *(list) --* The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive.
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / update_task_set update_task_set *************** ECS.Client.update_task_set(**kwargs) Modifies a task set.
This is used when a service uses the "EXTERNAL" deployment controller type. For more information, see Amazon ECS Deployment Types in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.update_task_set( cluster='string', service='string', taskSet='string', scale={ 'value': 123.0, 'unit': 'PERCENT' } ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set is found in. * **service** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the service that the task set is found in. * **taskSet** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the task set to update. * **scale** (*dict*) -- **[REQUIRED]** A floating-point percentage of the desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. 
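The "scale" value is a percentage of the service's "desiredCount": Amazon ECS multiplies the two and rounds the result up to arrive at the task set's "computedDesiredCount". A minimal sketch of that arithmetic (the helper below is our own illustration for clarity, not part of the boto3 API):

```python
import math

def computed_desired_count(service_desired_count: int, scale_value: float) -> int:
    """Mirror how Amazon ECS derives a task set's computedDesiredCount:
    the service's desiredCount multiplied by the scale percentage,
    with the result always rounded up (e.g. 1.2 rounds up to 2)."""
    return math.ceil(service_desired_count * scale_value / 100.0)

# A service with desiredCount=3 and a task set scaled to 50 PERCENT
# runs ceil(3 * 0.5) = 2 tasks in that task set.
print(computed_desired_count(3, 50.0))   # -> 2
print(computed_desired_count(10, 25.0))  # -> 3
```

So a call such as update_task_set(..., scale={'value': 50.0, 'unit': 'PERCENT'}) on a service with a "desiredCount" of 3 leaves 2 tasks running in that task set.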
Return type: dict Returns: **Response Syntax** { 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } } **Response Structure** * *(dict) --* * **taskSet** *(dict) --* Details about the task set. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. 
If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated.
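The "computedDesiredCount" rounding described above (desiredCount times the scale percentage, rounded up) can be sketched in a few lines. This is an illustration of the documented arithmetic, not code from the service:

```python
import math

def computed_desired_count(desired_count, scale_percent):
    """A task set's computedDesiredCount: desiredCount * scale%, rounded up."""
    return math.ceil(desired_count * scale_percent / 100.0)

# A service with desiredCount=3 and a task set scaled to 40% yields
# 3 * 0.4 = 1.2, which rounds up to 2 tasks.
```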
* **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateService APIs, or as the default capacity provider strategy for a cluster with the CreateCluster API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider.
* **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value.
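The base/weight semantics above can be illustrated with a small simulation. This is a hedged sketch of the documented behavior (base satisfied first, remainder split in proportion to weight), not ECS's actual scheduler, and `split_tasks` is a hypothetical helper:

```python
def split_tasks(total_tasks, strategy):
    """Distribute tasks across capacityProviderStrategy entries: satisfy each
    provider's base first, then divide the remainder proportionally by weight
    (largest-remainder rounding). Illustrative only."""
    counts = {item["capacityProvider"]: 0 for item in strategy}
    remaining = total_tasks
    for item in strategy:
        base = min(item.get("base", 0), remaining)
        counts[item["capacityProvider"]] += base
        remaining -= base
    weighted = [(i["capacityProvider"], i["weight"]) for i in strategy if i.get("weight", 0) > 0]
    total_weight = sum(w for _, w in weighted)
    if total_weight == 0:
        return counts  # all weights are 0: no tasks can be placed beyond base
    shares = {name: remaining * w / total_weight for name, w in weighted}
    floors = {name: int(s) for name, s in shares.items()}
    leftover = remaining - sum(floors.values())
    # hand leftover tasks to the providers with the largest fractional shares
    for name in sorted(shares, key=lambda n: shares[n] - floors[n], reverse=True)[:leftover]:
        floors[name] += 1
    for name, extra in floors.items():
        counts[name] += extra
    return counts
```

With weights 1 and 4 and no base, 10 tasks split 2:8, matching the docs' *capacityProviderA*/*capacityProviderB* example.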
* **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. 
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. 
Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition.
If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING".
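The count-based steady-state conditions above can be expressed as a small predicate over the task set dict returned by this operation. This is a hedged sketch: it covers only the "runningCount"/"computedDesiredCount"/"pendingCount" conditions, since checking draining container instances and load balancer or container health requires other API calls.

```python
def count_based_stability(task_set):
    """Classify a task set dict using only the count conditions for
    STEADY_STATE; the draining and health conditions are out of scope here."""
    steady = (
        task_set["runningCount"] == task_set["computedDesiredCount"]
        and task_set["pendingCount"] == 0
    )
    return "STEADY_STATE" if steady else "STABILIZING"
```

In practice you would read the authoritative value from the "stabilityStatus" field itself; a predicate like this is useful only for local reasoning or tests.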
* **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.ServiceNotActiveException" * "ECS.Client.exceptions.TaskSetNotFoundException" ECS / Client / put_cluster_capacity_providers put_cluster_capacity_providers ****************************** ECS.Client.put_cluster_capacity_providers(**kwargs) Modifies the available capacity providers and the default capacity provider strategy for a cluster. You must specify both the available capacity providers and a default capacity provider strategy for the cluster. If the specified cluster has existing capacity providers associated with it, you must specify all existing capacity providers in addition to any new ones you want to add. Any existing capacity providers that are associated with a cluster that are omitted from a PutClusterCapacityProviders API call will be disassociated from the cluster.
You can only disassociate an existing capacity provider from a cluster if it's not being used by any existing tasks. When creating a service or running a task on a cluster, if no capacity provider or launch type is specified, then the cluster's default capacity provider strategy is used. We recommend that you define a default capacity provider strategy for your cluster. However, you must specify an empty array ( "[]") to bypass defining a default strategy. See also: AWS API Documentation **Request Syntax** response = client.put_cluster_capacity_providers( cluster='string', capacityProviders=[ 'string', ], defaultCapacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ] ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster to modify the capacity provider settings for. If you don't specify a cluster, the default cluster is assumed. * **capacityProviders** (*list*) -- **[REQUIRED]** The name of one or more capacity providers to associate with the cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used. * *(string) --* * **defaultCapacityProviderStrategy** (*list*) -- **[REQUIRED]** The capacity provider strategy to use by default for the cluster. When creating a service or running a task on a cluster, if no capacity provider or launch type is specified then the default capacity provider strategy for the cluster is used. A capacity provider strategy consists of one or more capacity providers along with the "base" and "weight" to assign to them. 
A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an "ACTIVE" or "UPDATING" status can be used. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateService APIs, or as the default capacity provider strategy for a cluster with the CreateCluster API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning.
"FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* **[REQUIRED]** The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used.
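Because capacity providers omitted from a PutClusterCapacityProviders call are disassociated from the cluster, a common pattern is to merge the cluster's existing providers with any new ones before calling the API. The following is a hedged sketch; `merge_capacity_providers` is a hypothetical helper and the provider lists are illustrative.

```python
def merge_capacity_providers(existing, new):
    """Combine existing and new capacity provider names, deduplicated with
    order preserved, so no existing provider is accidentally disassociated."""
    merged = []
    for name in list(existing) + list(new):
        if name not in merged:
            merged.append(name)
    return merged

capacity_providers = merge_capacity_providers(
    ["FARGATE"],        # e.g. from describe_clusters(...)['clusters'][0]['capacityProviders']
    ["FARGATE_SPOT"],   # the provider being added
)
# You would then pass capacity_providers, together with a
# defaultCapacityProviderStrategy covering them, to
# client.put_cluster_capacity_providers(cluster='my-cluster', ...)
```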
Return type: dict Returns: **Response Syntax** { 'cluster': { 'clusterArn': 'string', 'clusterName': 'string', 'configuration': { 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, 'status': 'string', 'registeredContainerInstancesCount': 123, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'activeServicesCount': 123, 'statistics': [ { 'name': 'string', 'value': 'string' }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'settings': [ { 'name': 'containerInsights', 'value': 'string' }, ], 'capacityProviders': [ 'string', ], 'defaultCapacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attachmentsStatus': 'string', 'serviceConnectDefaults': { 'namespace': 'string' } } } **Response Structure** * *(dict) --* * **cluster** *(dict) --* Details about the cluster. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **clusterName** *(string) --* A user-generated string that you use to identify your cluster. * **configuration** *(dict) --* The execute command and managed storage configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container.
* **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. 
For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **status** *(string) --* The status of the cluster. The following are the possible states that are returned. ACTIVE The cluster is ready to accept tasks and if applicable you can register container instances with the cluster. PROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created. DEPROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an "INACTIVE" status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on "INACTIVE" clusters persisting. * **registeredContainerInstancesCount** *(integer) --* The number of container instances registered into the cluster. This includes container instances in both "ACTIVE" and "DRAINING" status. * **runningTasksCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. 
* **pendingTasksCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **activeServicesCount** *(integer) --* The number of services that are running on the cluster in an "ACTIVE" state. You can view these services with ListServices. * **statistics** *(list) --* Additional information about your clusters that are separated by launch type. They include the following: * runningEC2TasksCount * runningFargateTasksCount * pendingEC2TasksCount * pendingFargateTasksCount * activeEC2ServiceCount * activeFargateServiceCount * drainingEC2ServiceCount * drainingFargateServiceCount * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix.
Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** *(list) --* The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is on or off for a cluster. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. 
This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting from observability setup. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **capacityProviders** *(list) --* The capacity providers associated with the cluster. * *(string) --* * **defaultCapacityProviderStrategy** *(list) --* The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. 
The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption-tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. 
An example scenario for using weights is a strategy that contains two capacity providers, both with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **attachments** *(list) --* The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. 
For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attachmentsStatus** *(string) --* The status of the capacity providers associated with the cluster. The following are the states that are returned. UPDATE_IN_PROGRESS The available capacity providers for the cluster are updating. UPDATE_COMPLETE The capacity providers have successfully updated. UPDATE_FAILED The capacity provider updates failed. * **serviceConnectDefaults** *(dict) --* Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used. 
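The base and weight semantics described above can be illustrated with a small helper. This is a hypothetical sketch of the documented placement rule (base satisfied first, remainder split in proportion to weights), not how the ECS scheduler is actually implemented; the `split_tasks` function and its dict shape are assumptions for illustration only.

```python
from math import floor

def split_tasks(total, providers):
    """Illustrative split of `total` tasks across capacity providers.

    `providers` mirrors the capacityProviderStrategy shape shown above:
    a list of dicts with 'capacityProvider', 'base', and 'weight' keys.
    Sketch only -- ECS performs this placement server-side.
    """
    placement = {}
    remaining = total
    # The base value is satisfied first (only one provider may define one).
    for p in providers:
        base = min(p.get('base', 0), remaining)
        placement[p['capacityProvider']] = base
        remaining -= base
    # The remainder is divided in proportion to the weight values.
    total_weight = sum(p.get('weight', 0) for p in providers)
    if total_weight > 0 and remaining > 0:
        for p in providers:
            share = floor(remaining * p.get('weight', 0) / total_weight)
            placement[p['capacityProvider']] += share
    return placement

# A weight of 1 vs. 4 yields a 1:4 split once any base is met.
strategy = [
    {'capacityProvider': 'capacityProviderA', 'weight': 1, 'base': 0},
    {'capacityProvider': 'capacityProviderB', 'weight': 4, 'base': 0},
]
print(split_tasks(10, strategy))  # {'capacityProviderA': 2, 'capacityProviderB': 8}
```

This mirrors the 1:4 weighting example in the strategy description: for every task placed on *capacityProviderA*, four go to *capacityProviderB*.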
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ResourceInUseException" * "ECS.Client.exceptions.UpdateInProgressException" ECS / Client / list_services list_services ************* ECS.Client.list_services(**kwargs) Returns a list of services. You can filter the results by cluster, launch type, and scheduling strategy. See also: AWS API Documentation **Request Syntax** response = client.list_services( cluster='string', nextToken='string', maxResults=123, launchType='EC2'|'FARGATE'|'EXTERNAL', schedulingStrategy='REPLICA'|'DAEMON' ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to use when filtering the "ListServices" results. If you do not specify a cluster, the default cluster is assumed. * **nextToken** (*string*) -- The "nextToken" value returned from a "ListServices" request indicating that more results are available to fulfill the request and further calls will be needed. If "maxResults" was provided, it is possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of service results that "ListServices" returns in paginated output. When this parameter is used, "ListServices" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListServices" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListServices" returns up to 10 results and a "nextToken" value if applicable. 
* **launchType** (*string*) -- The launch type to use when filtering the "ListServices" results. * **schedulingStrategy** (*string*) -- The scheduling strategy to use when filtering the "ListServices" results. Return type: dict Returns: **Response Syntax** { 'serviceArns': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **serviceArns** *(list) --* The list of full ARN entries for each service that's associated with the specified cluster. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListServices" request. When the results of a "ListServices" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" **Examples** This example lists the services running in the default cluster for an account. response = client.list_services( ) print(response) Expected Output: { 'serviceArns': [ 'arn:aws:ecs:us-east-1:012345678910:service/my-http-service', ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / describe_task_sets describe_task_sets ****************** ECS.Client.describe_task_sets(**kwargs) Describes the task sets in the specified cluster and service. This is used when a service uses the "EXTERNAL" deployment controller type. For more information, see Amazon ECS Deployment Types in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.describe_task_sets( cluster='string', service='string', taskSets=[ 'string', ], include=[ 'TAGS', ] ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task sets exist in. 
* **service** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the service that the task sets exist in. * **taskSets** (*list*) -- The ID or full Amazon Resource Name (ARN) of task sets to describe. * *(string) --* * **include** (*list*) -- Specifies whether to see the resource tags for the task set. If "TAGS" is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response. * *(string) --* Return type: dict Returns: **Response Syntax** { 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **taskSets** 
*(list) --* The list of task sets described. * *(dict) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. 
For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. 
The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption-tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is a strategy that contains two capacity providers, both with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. 
* **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. 
* *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted. 
* **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. 
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. 
Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. 
Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. 
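The "computedDesiredCount" rule described above (the service's "desiredCount" multiplied by the task set's "scale" percentage, always rounded up) can be sketched as a one-line helper. This is an illustrative assumption of the documented arithmetic; ECS computes this value server-side and the function name is hypothetical.

```python
from math import ceil

def computed_desired_count(desired_count, scale_percent):
    """Sketch of the documented computedDesiredCount rule: multiply the
    service's desiredCount by the task set's scale percentage, rounding up.
    Illustrative only -- ECS returns this value in DescribeTaskSets."""
    return ceil(desired_count * scale_percent / 100)

# A desiredCount of 4 scaled to 30% gives 1.2, which rounds up to 2 tasks.
print(computed_desired_count(4, 30))  # 2
```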
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.ServiceNotActiveException" ECS / Client / list_service_deployments list_service_deployments ************************ ECS.Client.list_service_deployments(**kwargs) This operation lists all the service deployments that meet the specified filter criteria. A service deployment happens when you release a software update for the service. You route traffic from the running service revisions to the new service revision and control the number of running tasks. This API returns the values that you use for the request parameters in DescribeServiceRevisions. See also: AWS API Documentation **Request Syntax** response = client.list_service_deployments( service='string', cluster='string', status=[ 'PENDING'|'SUCCESSFUL'|'STOPPED'|'STOP_REQUESTED'|'IN_PROGRESS'|'ROLLBACK_REQUESTED'|'ROLLBACK_IN_PROGRESS'|'ROLLBACK_SUCCESSFUL'|'ROLLBACK_FAILED', ], createdAt={ 'before': datetime(2015, 1, 1), 'after': datetime(2015, 1, 1) }, nextToken='string', maxResults=123 ) Parameters: * **service** (*string*) -- **[REQUIRED]** The ARN or name of the service. * **cluster** (*string*) -- The cluster that hosts the service. This can either be the cluster name or ARN. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. If you don't specify a cluster, "default" is used. * **status** (*list*) -- An optional filter you can use to narrow the results. If you do not specify a status, then all status values are included in the result. 
* *(string) --* * **createdAt** (*dict*) -- An optional filter you can use to narrow the results by the service creation date. If you do not specify a value, the result includes all services created before the current time. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **before** *(datetime) --* Include service deployments in the result that were created before this time. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **after** *(datetime) --* Include service deployments in the result that were created after this time. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **nextToken** (*string*) -- The "nextToken" value returned from a "ListServiceDeployments" request indicating that more results are available to fulfill the request and further calls are needed. If you provided "maxResults", it's possible the number of results is fewer than "maxResults". * **maxResults** (*integer*) -- The maximum number of service deployment results that "ListServiceDeployments" returned in paginated output. When this parameter is used, "ListServiceDeployments" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListServiceDeployments" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListServiceDeployments" returns up to 20 results and a "nextToken" value if applicable. 
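The "nextToken"/"maxResults" contract described above can be driven with a simple loop. A sketch, assuming a client created with `boto3.client('ecs')` and service/cluster names of your own; the helper function name is not part of the boto3 API:

```python
def list_all_service_deployments(client, service, cluster="default", page_size=100):
    """Follow nextToken until all service deployment summaries are collected."""
    deployments = []
    kwargs = {"service": service, "cluster": cluster, "maxResults": page_size}
    while True:
        page = client.list_service_deployments(**kwargs)
        deployments.extend(page.get("serviceDeployments", []))
        token = page.get("nextToken")
        if not token:  # an absent/null token means there are no more pages
            return deployments
        kwargs["nextToken"] = token

# Usage (requires AWS credentials; names are placeholders):
#   import boto3
#   deployments = list_all_service_deployments(boto3.client("ecs"), "my-service")
```

The token is treated as an opaque identifier, per the note on "nextToken" elsewhere in this document, and is only ever passed back unchanged.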
Return type: dict Returns: **Response Syntax** { 'serviceDeployments': [ { 'serviceDeploymentArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedAt': datetime(2015, 1, 1), 'createdAt': datetime(2015, 1, 1), 'finishedAt': datetime(2015, 1, 1), 'targetServiceRevisionArn': 'string', 'status': 'PENDING'|'SUCCESSFUL'|'STOPPED'|'STOP_REQUESTED'|'IN_PROGRESS'|'ROLLBACK_REQUESTED'|'ROLLBACK_IN_PROGRESS'|'ROLLBACK_SUCCESSFUL'|'ROLLBACK_FAILED', 'statusReason': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **serviceDeployments** *(list) --* An overview of the service deployment, including the following properties: * The ARN of the service deployment. * The ARN of the service being deployed. * The ARN of the cluster that hosts the service in the service deployment. * The time that the service deployment started. * The time that the service deployment completed. * The service deployment status. * Information about why the service deployment is in the current state. * The ARN of the service revision that is being deployed. * *(dict) --* The service deployment properties that are returned when you call "ListServiceDeployments". This provides a high-level overview of the service deployment. * **serviceDeploymentArn** *(string) --* The ARN of the service deployment. * **serviceArn** *(string) --* The ARN of the service for this service deployment. * **clusterArn** *(string) --* The ARN of the cluster that hosts the service. * **startedAt** *(datetime) --* The time that the service deployment started. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **createdAt** *(datetime) --* The time that the service deployment was created. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **finishedAt** *(datetime) --* The time that the service deployment completed. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **targetServiceRevisionArn** *(string) --* The ARN of the service revision being deployed. 
* **status** *(string) --* The status of the service deployment * **statusReason** *(string) --* Information about why the service deployment is in the current status. For example, the circuit breaker detected a deployment failure. * **nextToken** *(string) --* The "nextToken" value to include in a future "ListServiceDeployments" request. When the results of a "ListServiceDeployments" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is null when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / list_container_instances list_container_instances ************************ ECS.Client.list_container_instances(**kwargs) Returns a list of container instances in a specified cluster. You can filter the results of a "ListContainerInstances" operation with cluster query language statements inside the "filter" parameter. For more information, see Cluster Query Language in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.list_container_instances( cluster='string', filter='string', nextToken='string', maxResults=123, status='ACTIVE'|'DRAINING'|'REGISTERING'|'DEREGISTERING'|'REGISTRATION_FAILED' ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the container instances to list. If you do not specify a cluster, the default cluster is assumed. * **filter** (*string*) -- You can filter the results of a "ListContainerInstances" operation with cluster query language statements. For more information, see Cluster Query Language in the *Amazon Elastic Container Service Developer Guide*. 
* **nextToken** (*string*) -- The "nextToken" value returned from a "ListContainerInstances" request indicating that more results are available to fulfill the request and further calls are needed. If "maxResults" was provided, it's possible the number of results is fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of container instance results that "ListContainerInstances" returned in paginated output. When this parameter is used, "ListContainerInstances" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListContainerInstances" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListContainerInstances" returns up to 100 results and a "nextToken" value if applicable. * **status** (*string*) -- Filters the container instances by status. For example, if you specify the "DRAINING" status, the results include only container instances that have been set to "DRAINING" using UpdateContainerInstancesState. If you don't specify this parameter, the default is to include container instances set to all states other than "INACTIVE". Return type: dict Returns: **Response Syntax** { 'containerInstanceArns': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **containerInstanceArns** *(list) --* The list of container instances with full ARN entries for each container instance associated with the specified cluster. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListContainerInstances" request. When the results of a "ListContainerInstances" request exceed "maxResults", this value can be used to retrieve the next page of results. 
This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" **Examples** This example lists all of your available container instances in the specified cluster in your default region. response = client.list_container_instances( cluster='default', ) print(response) Expected Output: { 'containerInstanceArns': [ 'arn:aws:ecs:us-east-1::container-instance/f6bbb147-5370-4ace-8c73-c7181ded911f', 'arn:aws:ecs:us-east-1::container-instance/ffe3d344-77e2-476c-a4d0-bf560ad50acb', ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / execute_command execute_command *************** ECS.Client.execute_command(**kwargs) Runs a command remotely on a container within a task. If you use a condition key in your IAM policy to refine the conditions for the policy statement, for example limit the actions to a specific cluster, you receive an "AccessDeniedException" when there is a mismatch between the condition key value and the corresponding parameter value. For information about required permissions and considerations, see Using Amazon ECS Exec for debugging in the *Amazon ECS Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.execute_command( cluster='string', container='string', command='string', interactive=True|False, task='string' ) Parameters: * **cluster** (*string*) -- The Amazon Resource Name (ARN) or short name of the cluster the task is running in. If you do not specify a cluster, the default cluster is assumed. * **container** (*string*) -- The name of the container to execute the command on. A container name only needs to be specified for tasks containing multiple containers. * **command** (*string*) -- **[REQUIRED]** The command to run on the container. 
* **interactive** (*boolean*) -- **[REQUIRED]** Use this flag to run your command in interactive mode. * **task** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) or ID of the task the container is part of. Return type: dict Returns: **Response Syntax** { 'clusterArn': 'string', 'containerArn': 'string', 'containerName': 'string', 'interactive': True|False, 'session': { 'sessionId': 'string', 'streamUrl': 'string', 'tokenValue': 'string' }, 'taskArn': 'string' } **Response Structure** * *(dict) --* * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster. * **containerArn** *(string) --* The Amazon Resource Name (ARN) of the container. * **containerName** *(string) --* The name of the container. * **interactive** *(boolean) --* Determines whether the execute command session is running in interactive mode. Amazon ECS only supports initiating interactive sessions, so you must specify "true" for this value. * **session** *(dict) --* The details of the SSM session that was created for this instance of execute-command. * **sessionId** *(string) --* The ID of the execute command session. * **streamUrl** *(string) --* A URL to the managed agent on the container that the SSM Session Manager client uses to send commands and receive output from the container. * **tokenValue** *(string) --* An encrypted token value containing session and caller information. It's used to authenticate the connection to the container. * **taskArn** *(string) --* The Amazon Resource Name (ARN) of the task. 
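Because "interactive" must always be "true" and "container" is only needed for tasks with multiple containers, it can help to assemble the request kwargs in one place. A hedged sketch; the `build_execute_command_request` helper is illustrative, not part of boto3:

```python
def build_execute_command_request(task, command, cluster=None, container=None):
    """Assemble kwargs for ECS.Client.execute_command. Amazon ECS only
    supports initiating interactive sessions, so interactive is always True."""
    kwargs = {"task": task, "command": command, "interactive": True}
    if cluster is not None:
        kwargs["cluster"] = cluster
    if container is not None:  # only required for multi-container tasks
        kwargs["container"] = container
    return kwargs

# Usage (requires credentials and ECS Exec enabled on the task; names are
# placeholders):
#   import boto3
#   resp = boto3.client("ecs").execute_command(
#       **build_execute_command_request("task-arn", "/bin/sh", container="app"))
```

Omitting "cluster" falls back to the default cluster, matching the parameter description above.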
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.TargetNotConnectedException" ECS / Client / update_cluster_settings update_cluster_settings *********************** ECS.Client.update_cluster_settings(**kwargs) Modifies the settings to use for a cluster. See also: AWS API Documentation **Request Syntax** response = client.update_cluster_settings( cluster='string', settings=[ { 'name': 'containerInsights', 'value': 'string' }, ] ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The name of the cluster to modify the settings for. * **settings** (*list*) -- **[REQUIRED]** The setting to use by default for a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster. If this value is specified, it overrides the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. Warning: Currently, if you delete an existing cluster that does not have Container Insights turned on, and then create a new cluster with the same name with Container Insights turned on, Container Insights will not actually be turned on. If you want to preserve the same name for your existing cluster and turn on Container Insights, you must wait 7 days before you can re-create it. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. 
After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting in observability set-up. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. 
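The "settings" parameter above takes a list with the single supported name "containerInsights". A sketch of turning on enhanced observability, with a client-side guard on the three documented values; the helper name is an assumption, not a boto3 API:

```python
VALID_CONTAINER_INSIGHTS = {"enhanced", "enabled", "disabled"}

def container_insights_settings(value):
    """Build the settings list for update_cluster_settings, rejecting
    values other than the documented enhanced/enabled/disabled."""
    if value not in VALID_CONTAINER_INSIGHTS:
        raise ValueError(f"unsupported containerInsights value: {value!r}")
    return [{"name": "containerInsights", "value": value}]

# Usage (requires credentials; cluster name is a placeholder):
#   import boto3
#   boto3.client("ecs").update_cluster_settings(
#       cluster="my-cluster", settings=container_insights_settings("enhanced"))
```

As the parameter description notes, a value set here overrides any "containerInsights" account setting made with PutAccountSetting or PutAccountSettingDefault.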
Return type: dict Returns: **Response Syntax** { 'cluster': { 'clusterArn': 'string', 'clusterName': 'string', 'configuration': { 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, 'status': 'string', 'registeredContainerInstancesCount': 123, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'activeServicesCount': 123, 'statistics': [ { 'name': 'string', 'value': 'string' }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'settings': [ { 'name': 'containerInsights', 'value': 'string' }, ], 'capacityProviders': [ 'string', ], 'defaultCapacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attachmentsStatus': 'string', 'serviceConnectDefaults': { 'namespace': 'string' } } } **Response Structure** * *(dict) --* * **cluster** *(dict) --* Details about the cluster. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **clusterName** *(string) --* A user-generated string that you use to identify your cluster. * **configuration** *(dict) --* The execute command and managed storage configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. 
* **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. 
For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **status** *(string) --* The status of the cluster. The following are the possible states that are returned. ACTIVE The cluster is ready to accept tasks and if applicable you can register container instances with the cluster. PROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created. DEPROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an "INACTIVE" status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on "INACTIVE" clusters persisting. * **registeredContainerInstancesCount** *(integer) --* The number of container instances registered into the cluster. This includes container instances in both "ACTIVE" and "DRAINING" status. * **runningTasksCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. 
* **pendingTasksCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **activeServicesCount** *(integer) --* The number of services that are running on the cluster in an "ACTIVE" state. You can view these services with ListServices. * **statistics** *(list) --* Additional information about your clusters that are separated by launch type. They include the following: * runningEC2TasksCount * runningFargateTasksCount * pendingEC2TasksCount * pendingFargateTasksCount * activeEC2ServiceCount * activeFargateServiceCount * drainingEC2ServiceCount * drainingFargateServiceCount * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. 
Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** *(list) --* The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is on or off for a cluster. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. 
This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays these critical performance data in curated dashboards removing the heavy lifting in observability set-up. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **capacityProviders** *(list) --* The capacity providers associated with the cluster. * *(string) --* * **defaultCapacityProviderStrategy** *(list) --* The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. 
The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. 
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **attachments** *(list) --* The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. 
For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attachmentsStatus** *(string) --* The status of the capacity providers associated with the cluster. The following are the states that are returned. UPDATE_IN_PROGRESS The available capacity providers for the cluster are updating. UPDATE_COMPLETE The capacity providers have successfully updated. UPDATE_FAILED The capacity provider updates failed. * **serviceConnectDefaults** *(dict) --* Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used. 
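The returned "cluster" dict nests the current Container Insights state under the "settings" list shown in the response structure above. A small accessor makes the check explicit; a sketch against that response shape, with the fallback to "disabled" for an absent setting being an assumption of this example:

```python
def get_container_insights(cluster):
    """Return the containerInsights value from a cluster description dict,
    or 'disabled' if the setting is absent (an assumption, not documented)."""
    for setting in cluster.get("settings", []):
        if setting.get("name") == "containerInsights":
            return setting.get("value", "disabled")
    return "disabled"

# Usage against an update_cluster_settings (or describe_clusters) response:
#   state = get_container_insights(response["cluster"])
```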
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" delete_service ************** ECS.Client.delete_service(**kwargs) Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you can't delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService. Note: When you delete a service, if there are still running tasks that require cleanup, the service status moves from "ACTIVE" to "DRAINING", and the service is no longer visible in the console or in the ListServices API operation. After all tasks have transitioned to either "STOPPING" or "STOPPED" status, the service status moves from "DRAINING" to "INACTIVE". Services in the "DRAINING" or "INACTIVE" status can still be viewed with the DescribeServices API operation. However, in the future, "INACTIVE" services may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices calls on those services return a "ServiceNotFoundException" error. Warning: If you attempt to create a new service with the same name as an existing service in either "ACTIVE" or "DRAINING" status, you receive an error. See also: AWS API Documentation **Request Syntax** response = client.delete_service( cluster='string', service='string', force=True|False ) type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to delete. If you do not specify a cluster, the default cluster is assumed. type service: string param service: **[REQUIRED]** The name of the service to delete. type force: boolean param force: If "true", allows you to delete a service even if it wasn't scaled down to zero tasks. 
It's only necessary to use this if the service uses the "REPLICA" scheduling strategy. rtype: dict returns: **Response Syntax** { 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 
'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 
'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' } } **Response Structure** * *(dict) --* * **service** *(dict) --* The full description of the deleted service. 
* **serviceArn** *(string) --* The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **serviceName** *(string) --* The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that hosts the service. * **loadBalancers** *(list) --* A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. 
For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. 
* **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this service. For more information, see Service Discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **status** *(string) --* The status of the service. The valid values are "ACTIVE", "DRAINING", or "INACTIVE". * **desiredCount** *(integer) --* The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService. * **runningCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **launchType** *(string) --* The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy. * **capacityProviderStrategy** *(list) --* The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX"). * **taskDefinition** *(string) --* The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService. 
* **deploymentConfiguration** *(dict) --* Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide* * **enable** *(boolean) --* Determines whether to use the deployment circuit breaker logic for the service. * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). 
The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. * **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. 
If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one, using the "minimumHealthyPercent" as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait is determined by the container health check settings. For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. 
The default "minimumHealthyPercent" value for a service using the "DAEMON" service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms. * **alarmNames** *(list) --* One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. 
Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. * **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. 
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than one service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. * POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* * **taskSets** *(list) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. 
An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * *(dict) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. 
This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. 
To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption-tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is a strategy that contains two capacity providers, each with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. 
Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. 
Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. 
This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. 
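The load balancer fields above can be sketched as the "loadBalancers" argument you might pass when creating a service. This is a minimal illustration, not a working deployment: the cluster name, container name, and every ARN below are hypothetical placeholders, and the "advancedConfiguration" block only applies to blue/green deployments.

```python
# Sketch of the "loadBalancers" shape described above, including the optional
# "advancedConfiguration" block for blue/green traffic shifting.
# Every ARN and name here is a hypothetical placeholder.
load_balancers = [
    {
        # Target group ARN; only valid with an Application or Network Load Balancer.
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue-tg/0123456789abcdef",
        "containerName": "web",   # must match a container name in the task definition
        "containerPort": 80,      # must match a "containerPort" in the task definition
        "advancedConfiguration": {
            "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green-tg/abcdef0123456789",
            "productionListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/prod/1111/2222/3333",
            "testListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/prod/1111/2222/4444",
            "roleArn": "arn:aws:iam::111122223333:role/ecsElbRole",
        },
    }
]

# With real resources in place, this list would be passed through as-is:
# import boto3
# ecs = boto3.client("ecs")
# ecs.create_service(cluster="my-cluster", serviceName="web",
#                    taskDefinition="web:1", desiredCount=2,
#                    loadBalancers=load_balancers)
```

The same structure appears unchanged in the "taskSets" and "deployments" sections of describe responses, so it also serves as a reading aid for the fields above.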
* **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. 
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **deployments** *(list) --* The current state of deployments for the service. * *(dict) --* The details of an Amazon ECS service deployment. This is used only when a service uses the "ECS" deployment controller type. * **id** *(string) --* The ID of the deployment. * **status** *(string) --* The status of the deployment. The following describes each state. PRIMARY The most recent deployment of a service. ACTIVE A service deployment that still has running tasks, but is in the process of being replaced with a new "PRIMARY" deployment. INACTIVE A deployment that has been completely replaced. * **taskDefinition** *(string) --* The most recent task definition that was specified for the tasks in the service to use. * **desiredCount** *(integer) --* The most recent desired count of tasks that was specified for the service to deploy or maintain. * **pendingCount** *(integer) --* The number of tasks in the deployment that are in the "PENDING" status. * **runningCount** *(integer) --* The number of tasks in the deployment that are in the "RUNNING" status. * **failedTasks** *(integer) --* The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a "RUNNING" state, or if it fails any of its defined health checks and is stopped. Note: Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service deployment was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the service deployment was last updated. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that the deployment is using. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption-tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is a strategy that contains two capacity providers, each with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **launchType** *(string) --* The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the *Amazon Elastic Container Service Developer Guide*. * **platformVersion** *(string) --* The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type. 
All tasks that run as part of this service must use the same "platformFamily" value as the service, for example, "LINUX". * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **rolloutState** *(string) --* Note: The "rolloutState" of a service is only returned for services that use the rolling update ("ECS") deployment type that aren't behind a Classic Load Balancer. The rollout state of the deployment. When a service deployment is started, it begins in an "IN_PROGRESS" state. When the service reaches a steady state, the deployment transitions to a "COMPLETED" state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a "FAILED" state. A deployment in "FAILED" state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker. 
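As a sketch of how a client might consume the "deployments" fields above, the helper below pulls the "rolloutState" of the "PRIMARY" deployment out of a service description. The sample data is hand-written to match the documented response shape, not real API output.

```python
def primary_rollout_state(service):
    """Return the rolloutState of the PRIMARY deployment, or None.

    `service` is one entry from the "services" list of a describe_services
    response, shaped as documented above.
    """
    for deployment in service.get("deployments", []):
        if deployment.get("status") == "PRIMARY":
            # One of IN_PROGRESS, COMPLETED, or FAILED
            # (only present for rolling-update "ECS" deployments).
            return deployment.get("rolloutState")
    return None

# Hand-written sample shaped like the response structure documented above.
sample_service = {
    "deployments": [
        {"id": "ecs-svc/111", "status": "PRIMARY", "rolloutState": "IN_PROGRESS"},
        {"id": "ecs-svc/222", "status": "ACTIVE", "rolloutState": "COMPLETED"},
    ]
}
print(primary_rollout_state(sample_service))  # IN_PROGRESS
```

In practice the `service` dict would come from `client.describe_services(cluster=..., services=[...])["services"][0]`; polling this value (or using the "services_stable" waiter) is a common way to watch a rolling deployment converge.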
* **rolloutStateReason** *(string) --* A description of the rollout state of a deployment. * **serviceConnectConfiguration** *(dict) --* The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments. The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. 
An object selects a port from the task definition, assigns a name for the Cloud Map service, and specifies a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. 
Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. 
* **header** *(dict) --* The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. * **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/"HTTP2"/"GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour. 
* **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". * This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
* For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. 
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. 
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **serviceConnectResources** *(list) --* The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name. * *(dict) --* The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service. A task can resolve the "dnsName" for each of the "clientAliases" of a service. However, a task can't resolve the discovery names. 
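The "logConfiguration" and "secretOptions" structures described above can be sketched as a small builder. This is a hedged illustration only: the log group, Region, prefix, and ARN values are hypothetical placeholders, and the option names come from the awslogs documentation above.

```python
# Minimal sketch of a logConfiguration dict for the "awslogs" driver,
# following the options documented above. All concrete values here are
# hypothetical examples, not defaults of any real account.

def awslogs_configuration(group, region, stream_prefix):
    """Build a logConfiguration for the awslogs log driver."""
    return {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": group,                  # Required: Yes
            "awslogs-region": region,                # Required: Yes
            "awslogs-stream-prefix": stream_prefix,  # Required on Fargate
            "awslogs-create-group": "false",         # "true" needs logs:CreateLogGroup
            "mode": "non-blocking",                  # the default since June 25, 2025
            "max-buffer-size": "1m",                 # buffer for non-blocking mode
        },
    }

log_config = awslogs_configuration("/ecs/my-app", "us-east-1", "my-service")

# To reference sensitive driver options, add secretOptions entries whose
# "valueFrom" is a Secrets Manager secret ARN or SSM parameter ARN.
# The option name and ARN below are hypothetical.
log_config["secretOptions"] = [
    {
        "name": "example-option",
        "valueFrom": "arn:aws:ssm:us-east-1:111122223333:parameter/example",
    },
]
```

A dict like this would be set as the "logConfiguration" field of a container definition passed to "register_task_definition".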
If you want to connect to a service, refer to the "ServiceConnectConfiguration" of that service for the list of "clientAliases" that you can use. * **discoveryName** *(string) --* The discovery name of this Service Connect resource. The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **discoveryArn** *(string) --* The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS. * **volumeConfigurations** *(list) --* The details of the volume that was "configuredAtLaunch". You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. 
* **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. 
You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. 
This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. 
* **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the deployment. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **vpcLatticeConfigurations** *(list) --* The VPC Lattice configuration for the service deployment. * *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to. * **roleArn** *(string) --* The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure. 
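The "volumeConfigurations" entry described above can be sketched as a builder that enforces the documented rule that either "sizeInGiB" or "snapshotId" must be specified. This is an illustrative sketch only; the helper name and the role ARN used in the usage note are hypothetical, and the defaults chosen (gp3, xfs, encrypted) are just the documented common values.

```python
# Minimal sketch of one "volumeConfigurations" list entry with a
# ServiceManagedEBSVolumeConfiguration, following the fields above.
# The volume name must match a configuredAtLaunch volume in the task
# definition; role_arn is the Amazon ECS infrastructure IAM role.

def ebs_volume_configuration(name, role_arn, size_gib=None, snapshot_id=None):
    """Build a volumeConfigurations entry for a service-managed EBS volume."""
    if size_gib is None and snapshot_id is None:
        # Per the documentation, either sizeInGiB or snapshotId is required.
        raise ValueError("Specify a sizeInGiB, a snapshotId, or both")
    ebs = {
        "volumeType": "gp3",      # gp3 defaults to 3,000 IOPS
        "encrypted": True,
        "filesystemType": "xfs",  # the documented Linux default
        "roleArn": role_arn,
    }
    if size_gib is not None:
        ebs["sizeInGiB"] = size_gib  # gp3 supports 1-16,384 GiB
    if snapshot_id is not None:
        ebs["snapshotId"] = snapshot_id
    return {"name": name, "managedEBSVolume": ebs}
```

An entry like this would be passed in the "volumeConfigurations" list of "create_service" or "update_service".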
* **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition. * **roleArn** *(string) --* The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer. * **events** *(list) --* The event stream for your service. A maximum of 100 of the latest events are displayed. * *(dict) --* The details for an event that's associated with a service. * **id** *(string) --* The ID string for the event. * **createdAt** *(datetime) --* The Unix timestamp for the time when the event was triggered. * **message** *(string) --* The event message. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service was created. * **placementConstraints** *(list) --* The placement constraints for the tasks in the service. * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". 
For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **placementStrategy** *(list) --* The placement strategy that determines how tasks for the service are placed. * *(dict) --* The task placement strategy for a task or service. For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs.availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. 
* *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **healthCheckGracePeriodSeconds** *(integer) --* The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. * **schedulingStrategy** *(string) --* The scheduling strategy to use for the service. For more information, see Services. There are two service scheduler strategies available. * "REPLICA"-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. * "DAEMON"-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints. Note: Fargate tasks don't support the "DAEMON" scheduling strategy. * **deploymentController** *(dict) --* The deployment controller type the service is using. * **type** *(string) --* The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service. 
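The "awsvpc" network configuration described above can be sketched as a helper that applies the documented limits of 16 subnets and 5 security groups. The helper name is hypothetical and the IDs in the test values are placeholders; all subnets and security groups must come from the same VPC, which this sketch does not verify.

```python
# Minimal sketch of the networkConfiguration structure for tasks that
# use the "awsvpc" networking mode, following the fields above.

def awsvpc_network_configuration(subnets, security_groups, public_ip=False):
    """Build a networkConfiguration dict for awsvpc-mode tasks."""
    if len(subnets) > 16:
        raise ValueError("At most 16 subnets can be specified")
    if len(security_groups) > 5:
        raise ValueError("At most 5 security groups can be specified")
    return {
        "awsvpcConfiguration": {
            "subnets": subnets,
            "securityGroups": security_groups,
            # The documented default for create-service/update-service
            # is DISABLED.
            "assignPublicIp": "ENABLED" if public_ip else "DISABLED",
        }
    }
```

A dict like this would be passed as the "networkConfiguration" parameter of "create_service", "update_service", or "run_task".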
The valid options are: * ECS When you create a service which uses the "ECS" deployment controller, you can choose between the following deployment strategies: * "ROLLING": When you create a service which uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. * Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). * Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds. * Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. 
* "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant roll back: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. * Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. * **tags** *(list) --* The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **createdBy** *(string) --* The principal that created the service. 
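The tag restrictions listed above (50 tags per resource, unique keys, 128-character keys, 256-character values, and the reserved "aws:" prefix in any casing) can be sketched as a small client-side check. This is an illustrative helper, not part of the boto3 API, and it does not reproduce the full allowed-character rules.

```python
# Minimal sketch of a client-side validator for the documented tag
# restrictions. Tags are dicts with a "key" and an optional "value".

def validate_tags(tags):
    """Check a tag list against the basic restrictions described above."""
    if len(tags) > 50:
        raise ValueError("Maximum number of tags per resource is 50")
    seen = set()
    for tag in tags:
        key, value = tag["key"], tag.get("value", "")
        if key in seen:  # keys are case-sensitive, so compare as-is
            raise ValueError("Each tag key must be unique")
        seen.add(key)
        if len(key) > 128 or len(value) > 256:
            raise ValueError("Tag key or value exceeds the maximum length")
        # The aws: prefix is reserved in any upper/lowercase combination.
        if key.lower().startswith("aws:") or value.lower().startswith("aws:"):
            raise ValueError("The aws: prefix is reserved for Amazon Web Services use")
    return tags
```

A list that passes this check could then be supplied as the "tags" parameter of calls such as "tag_resource" or "create_service".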
* **enableECSManagedTags** *(boolean) --* Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. * **enableExecuteCommand** *(boolean) --* Determines whether the execute command functionality is turned on for the service. If "true", the execute command functionality is turned on for all containers in tasks as part of the service. * **availabilityZoneRebalancing** *(string) --* Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the *Amazon Elastic Container Service Developer Guide* . **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ServiceNotFoundException" **Examples** This example deletes the my-http-service service. The service must have a desired count and running count of 0 before you can delete it. response = client.delete_service( service='my-http-service', ) print(response) Expected Output: { 'ResponseMetadata': { '...': '...', }, } ECS / Client / register_task_definition register_task_definition ************************ ECS.Client.register_task_definition(**kwargs) Registers a new task definition from the supplied "family" and "containerDefinitions". Optionally, you can add data volumes to your containers with the "volumes" parameter. For more information about task definition parameters and defaults, see Amazon ECS Task Definitions in the *Amazon Elastic Container Service Developer Guide*. 
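Before wading into the full request syntax, a minimal registration may help orient. The "webserver" family name, image, and sizes here are illustrative, and the client call itself is left commented out:

```python
# A minimal register_task_definition request, sketched as a kwargs dict.
# The family name, image, and CPU/memory sizes are illustrative.
task_def = {
    "family": "webserver",
    "networkMode": "awsvpc",               # required for Fargate tasks
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",       # task-level CPU units, passed as a string
    "memory": "512",    # task-level memory in MiB, passed as a string
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

# client = boto3.client("ecs")
# response = client.register_task_definition(**task_def)
```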
You can specify a role for your task with the "taskRoleArn" parameter. When you specify a role for a task, its containers can then use the latest versions of the CLI or SDKs to make API requests to the Amazon Web Services services that are specified in the policy that's associated with the role. For more information, see IAM Roles for Tasks in the *Amazon Elastic Container Service Developer Guide*. You can specify a Docker networking mode for the containers in your task definition with the "networkMode" parameter. If you specify the "awsvpc" network mode, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration when you create a service or run a task with the task definition. For more information, see Task Networking in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.register_task_definition( family='string', taskRoleArn='string', executionRoleArn='string', networkMode='bridge'|'host'|'awsvpc'|'none', containerDefinitions=[ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 
'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], volumes=[ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 
'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], placementConstraints=[ { 'type': 'memberOf', 'expression': 'string' }, ], requiresCompatibilities=[ 'EC2'|'FARGATE'|'EXTERNAL', ], cpu='string', memory='string', tags=[ { 'key': 'string', 'value': 'string' }, ], pidMode='host'|'task', ipcMode='host'|'task'|'none', proxyConfiguration={ 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, inferenceAccelerators=[ { 'deviceName': 'string', 'deviceType': 'string' }, ], ephemeralStorage={ 'sizeInGiB': 123 }, runtimePlatform={ 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, enableFaultInjection=True|False ) Parameters: * **family** (*string*) -- **[REQUIRED]** You must specify a "family" for a task definition. You can use it to track multiple versions of the same task definition. The "family" is used as a name for your task definition. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. * **taskRoleArn** (*string*) -- The short name or full Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role.
For more information, see IAM Roles for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **executionRoleArn** (*string*) -- The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **networkMode** (*string*) -- The Docker networking mode to use for the containers in the task. The valid values are "none", "bridge", "awsvpc", and "host". If no network mode is specified, the default is "bridge". For Amazon ECS tasks on Fargate, the "awsvpc" network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, "<default>" or "awsvpc" can be used. If the network mode is set to "none", you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The "host" and "awsvpc" network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the "bridge" mode. With the "host" and "awsvpc" network modes, exposed container ports are mapped directly to the corresponding host port (for the "host" network mode) or the attached elastic network interface port (for the "awsvpc" network mode), so you cannot take advantage of dynamic host port mappings. Warning: When using the "host" network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user. If the network mode is "awsvpc", the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition.
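When the network mode is "awsvpc", the NetworkConfiguration mentioned above takes the following shape in create_service and run_task. The subnet and security-group IDs are placeholders:

```python
# NetworkConfiguration required by "awsvpc" tasks; the subnet and
# security-group IDs below are placeholders, not real resources.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],     # placeholder
        "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
        "assignPublicIp": "DISABLED",
    }
}

# client = boto3.client("ecs")
# client.run_task(
#     cluster="default",
#     taskDefinition="my-task-family",   # placeholder family name
#     launchType="FARGATE",
#     networkConfiguration=network_configuration,
# )
```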
For more information, see Task Networking in the *Amazon Elastic Container Service Developer Guide*. If the network mode is "host", you cannot run multiple instantiations of the same task on a single container instance when port mappings are used. * **containerDefinitions** (*list*) -- **[REQUIRED]** A list of container definitions in JSON format that describe the different containers that make up your task. * *(dict) --* Container definitions are used in task definitions to describe the different containers that are launched as part of a task. * **name** *(string) --* The name of a container. If you're linking multiple containers together in a task definition, the "name" of one container can be entered in the "links" of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "name" in the docker container create command and the "--name" option to docker run. * **image** *(string) --* The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either "repository-url/image:tag" or "repository-url/image@digest". For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to "Image" in the docker container create command and the "IMAGE" parameter of docker run.
* When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks. * Images in Amazon ECR repositories can be specified by either using the full "registry/repository:tag" or "registry/repository@digest". For example, "012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest" or "012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE". * Images in official repositories on Docker Hub use a single name (for example, "ubuntu" or "mongo"). * Images in other repositories on Docker Hub are qualified with an organization name (for example, "amazon/amazon-ecs-agent"). * Images in other online repositories are qualified further by a domain name (for example, "quay.io/assemblyline/ubuntu"). * **repositoryCredentials** *(dict) --* The private repository authentication credentials to use. * **credentialsParameter** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the secret containing the private repository credentials. Note: When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret. * **cpu** *(integer) --* The number of "cpu" units reserved for the container. This parameter maps to "CpuShares" in the docker container create command and the "--cpu-shares" option to docker run. This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level "cpu" value.
Note: You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024. Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units. On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version: * **Agent versions less than or equal to 1.1.0:** Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares. * **Agent versions greater than or equal to 1.2.0:** Null, zero, and CPU values of 1 are passed to Docker as 2. 
* **Agent versions greater than or equal to 1.84.0:** CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares. On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as "0", which Windows interprets as 1% of one CPU. * **memory** *(integer) --* The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task "memory" value, if one is specified. This parameter maps to "Memory" in the docker container create command and the "--memory" option to docker run. If using the Fargate launch type, this parameter is optional. If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level "memory" and "memoryReservation" value, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. 
However, your container can consume more memory when it needs to, up to either the hard limit specified with the "memory" parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to "MemoryReservation" in the docker container create command and the "--memory-reservation" option to docker run. If a task-level memory value is not specified, you must specify a non-zero integer for one or both of "memory" or "memoryReservation" in a container definition. If you specify both, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a "memoryReservation" of 128 MiB, and a "memory" hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **links** *(list) --* The "links" parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is "bridge". The "name:internalName" construct is analogous to "name:alias" in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
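The 128 MiB / 300 MiB bursting example above translates to a container definition like this; the container name and image are illustrative:

```python
# A soft limit ("memoryReservation") of 128 MiB with a hard limit
# ("memory") of 300 MiB, matching the bursting example described above.
container_definition = {
    "name": "app",              # illustrative
    "image": "my-app:latest",   # illustrative
    "memoryReservation": 128,   # MiB reserved on the container instance
    "memory": 300,              # hard cap; the container is killed above this
    "essential": True,
}

# When both are set, "memory" must be greater than "memoryReservation".
assert container_definition["memory"] > container_definition["memoryReservation"]
```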
This parameter maps to "Links" in the docker container create command and the "--link" option to docker run. Note: This parameter is not supported for Windows containers. Warning: Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings. * *(string) --* * **portMappings** *(list) --* The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic. For task definitions that use the "awsvpc" network mode, only specify the "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Port mappings on Windows use the "NetNAT" gateway address rather than "localhost". There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself. This parameter maps to "PortBindings" in the the docker container create command and the "--publish" option to docker run. If the network mode of a task definition is set to "none", then you can't specify port mappings. If the network mode of a task definition is set to "host", then host ports must either be undefined or they must match the container port in the port mapping. Note: After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the **Network Bindings** section of a container description for a selected task in the Amazon ECS console. The assignments are also visible in the "networkBindings" section DescribeTasks responses. * *(dict) --* Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. 
If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Most fields of this parameter ("containerPort", "hostPort", "protocol") map to "PortBindings" in the docker container create command and the "--publish" option to "docker run". If the network mode of a task definition is set to "host", host ports must either be undefined or match the container port in the port mapping. Note: You can't expose the same container port for multiple protocols. If you attempt this, an error is returned. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **containerPort** *(integer) --* The port number on the container that's bound to the user-specified or automatically assigned host port. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". If you use containers in a task with the "bridge" network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see "hostPort". Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance. * **hostPort** *(integer) --* The port number on the container instance to reserve for your container. If you specify a "containerPortRange", leave this field empty and the value of the "hostPort" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPort" is set to the same value as the "containerPort". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports.
This is a dynamic mapping strategy. If you use containers in a task with the "awsvpc" or "host" network mode, the "hostPort" can either be left blank or set to the same value as the "containerPort". If you use containers in a task with the "bridge" network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the "hostPort" (or set it to "0") while specifying a "containerPort" and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version. The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under "/proc/sys/net/ipv4/ip_local_port_range". If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range. The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the "remainingResources" of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota. * **protocol** *(string) --* The protocol used for the port mapping. Valid values are "tcp" and "udp". The default is "tcp". "protocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. * **name** *(string) --* The name that's used for the port mapping. 
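The hostPort behavior described above boils down to two styles of port mapping; the port numbers here are illustrative:

```python
# Bridge network mode: specify only containerPort and the ECS agent binds
# an open host port from the ephemeral range when the task starts.
dynamic_mapping = {"containerPort": 8080, "protocol": "tcp"}

# "awsvpc"/"host" network modes: hostPort must be left blank or be equal
# to containerPort (a static mapping).
static_mapping = {"containerPort": 8080, "hostPort": 8080, "protocol": "tcp"}
```

Dynamically assigned host ports can be read back from the "networkBindings" section of a DescribeTasks response once the task is RUNNING.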
This parameter is the name that you use in the "serviceConnectConfiguration" and the "vpcLatticeConfigurations" of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. * **appProtocol** *(string) --* The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch. If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP. "appProtocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems.
* The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which are the host ports that are bound to the container ports. * **essential** *(boolean) --* If the "essential" parameter of a container is marked as "true", and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the "essential" parameter of a container is marked as "false", its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All tasks must have at least one essential container.
If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the *Amazon Elastic Container Service Developer Guide*. * **restartPolicy** *(dict) --* The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* **[REQUIRED]** Specifies whether a restart policy is enabled for the container. * **ignoredExitCodes** *(list) --* A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes. * *(integer) --* * **restartAttemptPeriod** *(integer) --* A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every "restartAttemptPeriod" seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum "restartAttemptPeriod" of 60 seconds and a maximum "restartAttemptPeriod" of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted. * **entryPoint** *(list) --* Warning: Early versions of the Amazon ECS container agent don't properly handle "entryPoint" parameters. If you have problems using "entryPoint", update your container agent or enter your commands and arguments as "command" array items instead. The entry point that's passed to the container. This parameter maps to "Entrypoint" in the docker container create command and the "--entrypoint" option to docker run. 
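A restart policy matching the description above might look like the following; treating exit code 0 as ignorable is an illustrative choice, since by default Amazon ECS ignores no exit codes:

```python
# Restart the container in place (without replacing the task), at most
# once every 300 seconds, and never after a clean exit (code 0).
restart_policy = {
    "enabled": True,
    "ignoredExitCodes": [0],      # illustrative: don't restart clean exits
    "restartAttemptPeriod": 300,  # seconds; the valid range is 60-1800
}
```

This dict would go in the "restartPolicy" field of a container definition.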
* *(string) --* * **command** *(list) --* The command that's passed to the container. This parameter maps to "Cmd" in the docker container create command and the "COMMAND" parameter to docker run. If there are multiple arguments, each argument is a separated string in the array. * *(string) --* * **environment** *(list) --* The environment variables to pass to a container. This parameter maps to "Env" in the docker container create command and the "--env" option to docker run. Warning: We don't recommend that you use plaintext environment variables for sensitive information, such as credential data. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container. This parameter maps to the "--env-file" option to docker run. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file contains an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension.
Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* **[REQUIRED]** The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **mountPoints** *(list) --* The mount points for data volumes in your container. This parameter maps to "Volumes" in the docker container create command and the "--volume" option to docker run. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't be across drives. * *(dict) --* The details for a volume mount point that's used in a container definition. * **sourceVolume** *(string) --* The name of the volume to mount. 
Must be a volume name referenced in the "name" parameter of task definition "volume". * **containerPath** *(string) --* The path on the container to mount the host volume at. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **volumesFrom** *(list) --* Data volumes to mount from another container. This parameter maps to "VolumesFrom" in the docker container create command and the "--volumes-from" option to docker run. * *(dict) --* Details on a data volume from another container in the same task definition. * **sourceContainer** *(string) --* The name of another container within the same task definition to mount volumes from. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **linuxParameters** *(dict) --* Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities. Note: This parameter is not supported for Windows containers. * **capabilities** *(dict) --* The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. Note: For tasks that use the Fargate launch type, "capabilities" is supported for all platform versions but the "add" parameter is only supported if using platform version 1.4.0 or later. * **add** *(list) --* The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to "CapAdd" in the docker container create command and the "--cap-add" option to docker run. Note: Tasks launched on Fargate only support adding the "SYS_PTRACE" kernel capability. 
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM" * *(string) --* * **drop** *(list) --* The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to "CapDrop" in the docker container create command and the "--cap-drop" option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM" * *(string) --* * **devices** *(list) --* Any host devices to expose to the container. This parameter maps to "Devices" in the docker container create command and the "--device" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "devices" parameter isn't supported. * *(dict) --* An object representing a container instance host device. * **hostPath** *(string) --* **[REQUIRED]** The path for the device on the host container instance. * **containerPath** *(string) --* The path inside the container at which to expose the host device. 
* **permissions** *(list) --* The explicit permissions to provide to the container for the device. By default, the container has permissions for "read", "write", and "mknod" for the device. * *(string) --* * **initProcessEnabled** *(boolean) --* Run an "init" process inside the container that forwards signals and reaps processes. This parameter maps to the "--init" option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * **sharedMemorySize** *(integer) --* The value for the size (in MiB) of the "/dev/shm" volume. This parameter maps to the "--shm-size" option to docker run. Note: If you are using tasks that use the Fargate launch type, the "sharedMemorySize" parameter is not supported. * **tmpfs** *(list) --* The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the "--tmpfs" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "tmpfs" parameter isn't supported. * *(dict) --* The container path, mount options, and size of the tmpfs mount. * **containerPath** *(string) --* **[REQUIRED]** The absolute file path where the tmpfs volume is to be mounted. * **size** *(integer) --* **[REQUIRED]** The maximum size (in MiB) of the tmpfs volume. * **mountOptions** *(list) --* The list of tmpfs volume mount options. 
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol" * *(string) --* * **maxSwap** *(integer) --* The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the "--memory-swap" option to docker run where the value would be the sum of the container memory plus the "maxSwap" value. If a "maxSwap" value of "0" is specified, the container will not use swap. Accepted values are "0" or any positive integer. If the "maxSwap" parameter is omitted, the container will use the swap configuration for the container instance it is running on. A "maxSwap" value must be set for the "swappiness" parameter to be used. Note: If you're using tasks that use the Fargate launch type, the "maxSwap" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported. * **swappiness** *(integer) --* This allows you to tune a container's memory swappiness behavior. A "swappiness" value of "0" will cause swapping to not happen unless absolutely necessary. A "swappiness" value of "100" will cause pages to be swapped very aggressively. Accepted values are whole numbers between "0" and "100". If the "swappiness" parameter is not specified, a default value of "60" is used. If a value is not specified for "maxSwap" then this parameter is ignored. This parameter maps to the "--memory-swappiness" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "swappiness" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported. 
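Taken together, the Linux-specific settings above are supplied as a single "linuxParameters" object inside a container definition. A minimal sketch in Python follows; the values shown are illustrative choices rather than defaults, and, as noted above, several of them ("sharedMemorySize", "maxSwap", "tmpfs", "devices") aren't supported on Fargate.

```python
# Illustrative linuxParameters block for an ECS container definition.
# All values are example choices, not defaults or recommendations.
linux_parameters = {
    "capabilities": {
        "add": ["SYS_PTRACE"],   # the only capability Fargate tasks may add
        "drop": ["MKNOD"],
    },
    "initProcessEnabled": True,  # run an init process to forward signals and reap processes
    "sharedMemorySize": 256,     # /dev/shm size in MiB (EC2 launch type only)
    "tmpfs": [
        {
            "containerPath": "/scratch",
            "size": 64,          # maximum tmpfs size in MiB
            "mountOptions": ["rw", "noexec"],
        }
    ],
    "maxSwap": 512,              # MiB; must be set for swappiness to take effect
    "swappiness": 60,
}
```

This dict would be passed as the "linuxParameters" value of one element of "containerDefinitions" in a "register_task_definition" call.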
* **secrets** *(list) --* The secrets to pass to the container. For more information, see Specifying Sensitive Data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the secret. * **valueFrom** *(string) --* **[REQUIRED]** The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **dependsOn** *(list) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown. For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. 
For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. * *(dict) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown. Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. Note: For tasks that use the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. 
For more information about how to create a container dependency, see Container dependency in the *Amazon Elastic Container Service Developer Guide*. * **containerName** *(string) --* **[REQUIRED]** The name of a container. * **condition** *(string) --* **[REQUIRED]** The dependency condition of the container. The following are the available conditions and their behavior: * "START" - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start. * "COMPLETE" - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container. * "SUCCESS" - This condition is the same as "COMPLETE", but it also requires that the container exits with a "zero" status. This condition can't be set on an essential container. * "HEALTHY" - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup. * **startTimeout** *(integer) --* Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a "COMPLETE", "SUCCESS", or "HEALTHY" status. If a "startTimeout" value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a "STOPPED" state. Note: When the "ECS_CONTAINER_START_TIMEOUT" container agent configuration variable is used, it's enforced independently from this start timeout value. 
For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks using the EC2 launch type, your container instances require at least version "1.26.0" of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version "1.26.0-1" of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **stopTimeout** *(integer) --* Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used. For tasks that use the EC2 launch type, if the "stopTimeout" parameter isn't specified, the value set for the Amazon ECS container agent configuration variable "ECS_CONTAINER_STOP_TIMEOUT" is used. If neither the "stopTimeout" parameter nor the "ECS_CONTAINER_STOP_TIMEOUT" agent configuration variable is set, then the default values of 30 seconds for Linux containers and 30 seconds on Windows containers are used. 
Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **versionConsistency** *(string) --* Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is "enabled". If you set the value for a container as "disabled", Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the *Amazon ECS Developer Guide*. * **hostname** *(string) --* The hostname to use for your container. This parameter maps to "Hostname" in the docker container create command and the "--hostname" option to docker run. Note: The "hostname" parameter is not supported if you're using the "awsvpc" network mode. * **user** *(string) --* The user to use inside the container. This parameter maps to "User" in the docker container create command and the "--user" option to docker run. Warning: When running tasks using the "host" network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. 
You can specify the "user" using the following formats. If specifying a UID or GID, you must specify it as a positive integer. * "user" * "user:group" * "uid" * "uid:gid" * "user:gid" * "uid:group" Note: This parameter is not supported for Windows containers. * **workingDirectory** *(string) --* The working directory to run commands inside the container in. This parameter maps to "WorkingDir" in the docker container create command and the "--workdir" option to docker run. * **disableNetworking** *(boolean) --* When this parameter is true, networking is off within the container. This parameter maps to "NetworkDisabled" in the docker container create command. Note: This parameter is not supported for Windows containers. * **privileged** *(boolean) --* When this parameter is true, the container is given elevated privileges on the host container instance (similar to the "root" user). This parameter maps to "Privileged" in the docker container create command and the "--privileged" option to docker run. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **readonlyRootFilesystem** *(boolean) --* When this parameter is true, the container is given read-only access to its root file system. This parameter maps to "ReadonlyRootfs" in the docker container create command and the "--read-only" option to docker run. Note: This parameter is not supported for Windows containers. * **dnsServers** *(list) --* A list of DNS servers that are presented to the container. This parameter maps to "Dns" in the docker container create command and the "--dns" option to docker run. Note: This parameter is not supported for Windows containers. * *(string) --* * **dnsSearchDomains** *(list) --* A list of DNS search domains that are presented to the container. This parameter maps to "DnsSearch" in the docker container create command and the "--dns-search" option to docker run. Note: This parameter is not supported for Windows containers. 
* *(string) --* * **extraHosts** *(list) --* A list of hostnames and IP address mappings to append to the "/etc/hosts" file on the container. This parameter maps to "ExtraHosts" in the docker container create command and the "--add-host" option to docker run. Note: This parameter isn't supported for Windows containers or tasks that use the "awsvpc" network mode. * *(dict) --* Hostnames and IP address entries that are added to the "/etc/hosts" file of a container via the "extraHosts" parameter of its ContainerDefinition. * **hostname** *(string) --* **[REQUIRED]** The hostname to use in the "/etc/hosts" entry. * **ipAddress** *(string) --* **[REQUIRED]** The IP address to use in the "/etc/hosts" entry. * **dockerSecurityOptions** *(list) --* A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type. For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems. For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the *Amazon Elastic Container Service Developer Guide*. This parameter maps to "SecurityOpt" in the docker container create command and the "--security-opt" option to docker run. Note: The Amazon ECS container agent running on a container instance must register with the "ECS_SELINUX_CAPABLE=true" or "ECS_APPARMOR_CAPABLE=true" environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. 
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath" * *(string) --* * **interactive** *(boolean) --* When this parameter is "true", you can deploy containerized applications that require "stdin" or a "tty" to be allocated. This parameter maps to "OpenStdin" in the docker container create command and the "--interactive" option to docker run. * **pseudoTerminal** *(boolean) --* When this parameter is "true", a TTY is allocated. This parameter maps to "Tty" in the docker container create command and the "--tty" option to docker run. * **dockerLabels** *(dict) --* A key/value map of labels to add to the container. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **ulimits** *(list) --* A list of "ulimits" to set in the container. If a "ulimit" value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to "Ulimits" in the docker container create command and the "--ulimit" option to docker run. Valid naming values are displayed in the Ulimit data type. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: This parameter is not supported for Windows containers. * *(dict) --* The "ulimit" settings to pass to the container. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". You can specify the "ulimit" settings for a container in a task definition. * **name** *(string) --* **[REQUIRED]** The "type" of the "ulimit". * **softLimit** *(integer) --* **[REQUIRED]** The soft limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **hardLimit** *(integer) --* **[REQUIRED]** The hard limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **logConfiguration** *(dict) --* The log configuration specification for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). Note: Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). 
Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. * **logDriver** *(string) --* **[REQUIRED]** The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following:

awslogs-create-group

Required: No

Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group".

awslogs-region

Required: Yes

Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.

awslogs-group

Required: Yes

Make sure to specify a log group that the "awslogs" log driver sends its log streams to.

awslogs-stream-prefix

Required: Yes, when using Fargate. Optional when using EC2.

Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. 
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.

awslogs-datetime-format

Required: No

This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.

awslogs-multiline-pattern

Required: No

This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.

The following options apply to all supported log drivers.

mode

Required: No

Valid values: "non-blocking" | "blocking"

This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. 
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
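Taken together, the driver, mode, and buffer options above form the "logConfiguration" object of a container definition. The following is a minimal illustrative sketch; the log group name, Region, and stream prefix are placeholder assumptions, not prescribed values:

```python
# Hypothetical "logConfiguration" fragment for a container definition.
# The option names match the awslogs options described above; the group,
# Region, and prefix values are placeholders for your own.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-app",         # must exist unless awslogs-create-group is "true"
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",  # required when using Fargate
        "awslogs-create-group": "true",         # needs the logs:CreateLogGroup permission
        "mode": "non-blocking",                 # default mode since June 25, 2025
        "max-buffer-size": "25m",               # only applies in non-blocking mode
    },
}
```

You would place this dict under the "logConfiguration" key of a container definition passed to "register_task_definition".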
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might otherwise exhaust the memory available for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the secret. * **valueFrom** *(string) --* **[REQUIRED]** The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **healthCheck** *(dict) --* The container health check command and associated configuration parameters for the container. This parameter maps to "HealthCheck" in the docker container create command and the "HEALTHCHECK" parameter of docker run. * **command** *(list) --* **[REQUIRED]** A string array representing the command that the container runs to determine if it is healthy. The string array must start with "CMD" to run the command arguments directly, or "CMD-SHELL" to run the command with the container's default shell. 
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets. "[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]" You don't include the double quotes and brackets when you use the Amazon Web Services Management Console. "CMD-SHELL, curl -f http://localhost/ || exit 1" An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see "HealthCheck" in the docker container create command. * *(string) --* * **interval** *(integer) --* The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a "command". * **timeout** *(integer) --* The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5 seconds. This value applies only when you specify a "command". * **retries** *(integer) --* The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a "command". * **startPeriod** *(integer) --* The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the "startPeriod" is off. This value applies only when you specify a "command". Note: If a health check succeeds within the "startPeriod", then the container is considered healthy and any subsequent failures count toward the maximum number of retries. * **systemControls** *(list) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. 
For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer-lived connections. * *(dict) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer-lived connections. We don't recommend that you specify network-related "systemControls" parameters for multiple containers in a single task that also uses either the "awsvpc" or "host" network mode. Doing this has the following disadvantages: * For tasks that use the "awsvpc" network mode including Fargate, if you set "systemControls" for any container, it applies to all containers in the task. If you set different "systemControls" for multiple containers in a single task, the container that's started last determines which "systemControls" take effect. * For tasks that use the "host" network mode, the network namespace "systemControls" aren't supported. If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode. * For tasks that use the "host" IPC mode, IPC namespace "systemControls" aren't supported. * For tasks that use the "task" IPC mode, IPC namespace "systemControls" values apply to all containers within a task. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **namespace** *(string) --* The namespaced kernel parameter to set a "value" for. * **value** *(string) --* The namespaced kernel parameter to set a "value" for. 
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and "Sysctls" that start with "fs.mqueue.*". Valid network namespace values: "Sysctls" that start with "net.*". Only namespaced "Sysctls" that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* **[REQUIRED]** The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* **[REQUIRED]** The type of resource to assign to a container. * **firelensConfiguration** *(dict) --* The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* **[REQUIRED]** The log router to use. The valid values are "fluentd" or "fluentbit". * **options** *(dict) --* The options to use when configuring the log router. 
This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is ""options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}". For more information, see Creating a task definition that uses a FireLens configuration in the *Amazon Elastic Container Service Developer Guide*. Note: Tasks hosted on Fargate only support the "file" configuration file type. * *(string) --* * *(string) --* * **credentialSpecs** *(list) --* A list of ARNs in SSM or Amazon S3 to a credential spec ( "CredSpec") file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the "dockerSecurityOptions". The maximum number of ARNs is 1. There are two formats for each ARN. credentialspecdomainless:MyARN You use "credentialspecdomainless:MyARN" to provide a "CredSpec" with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret. Each task that runs on any container instance can join different domains. You can use this format without joining the container instance to a domain. credentialspec:MyARN You use "credentialspec:MyARN" to provide a "CredSpec" for a single domain. You must join the container instance to the domain before you start any tasks that use this task definition. In both formats, replace "MyARN" with the ARN in SSM or Amazon S3. If you provide a "credentialspecdomainless:MyARN", the "credspec" must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. 
You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers. * *(string) --* * **volumes** (*list*) -- A list of volume definitions in JSON format that containers in your task might use. * *(dict) --* The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes, but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a "name" and one of either "configuredAtLaunch", "dockerVolumeConfiguration", "efsVolumeConfiguration", "fsxWindowsFileServerVolumeConfiguration", or "host". If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks. * **name** *(string) --* The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the "name" is required and must also be specified as the volume name in the "ServiceVolumeConfiguration" or "TaskVolumeConfiguration" parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the "sourceVolume" parameter of the "mountPoints" object in the container definition. When a volume is using the "efsVolumeConfiguration", the name is required. * **host** *(dict) --* This parameter is specified when you use bind mount host volumes. The contents of the "host" parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the "host" parameter is empty, then the Docker daemon assigns a host path for your data volume. 
However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't span drives. For example, you can mount "C:\my\path:C:\my\path" and "D:\:D:\", but not "D:\my\path:C:\my\path" or "D:\:C:\my\path". * **sourcePath** *(string) --* When the "host" parameter is used, specify a "sourcePath" to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the "host" parameter contains a "sourcePath" file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the "sourcePath" value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the "sourcePath" parameter is not supported. * **dockerVolumeConfiguration** *(dict) --* This parameter is specified when you use Docker volumes. Windows containers only support the use of the "local" driver. To use bind mounts, specify the "host" parameter instead. Note: Docker volumes aren't supported by tasks run on Fargate. * **scope** *(string) --* The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a "task" are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as "shared" persist after the task stops. * **autoprovision** *(boolean) --* If this value is "true", the Docker volume is created if it doesn't already exist. Note: This field is only used if the "scope" is "shared". * **driver** *(string) --* The Docker volume driver to use. 
The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use "docker plugin ls" to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to "Driver" in the docker container create command and the "--driver" option to docker volume create. * **driverOpts** *(dict) --* A map of Docker driver-specific options passed through. This parameter maps to "DriverOpts" in the docker volume create command and the "--opt" option to docker volume create. * *(string) --* * *(string) --* * **labels** *(dict) --* Custom metadata to add to your Docker volume. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker volume create. * *(string) --* * *(string) --* * **efsVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon Elastic File System file system for task storage. * **fileSystemId** *(string) --* **[REQUIRED]** The Amazon EFS file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying "/" will have the same effect as omitting this parameter. Warning: If an EFS access point is specified in the "authorizationConfig", the root directory parameter must either be omitted or set to "/" which will enforce the path set on the EFS access point. * **transitEncryption** *(string) --* Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of "DISABLED" is used. 
For more information, see Encrypting data in transit in the *Amazon Elastic File System User Guide*. * **transitEncryptionPort** *(integer) --* The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the *Amazon Elastic File System User Guide*. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon EFS file system. * **accessPointId** *(string) --* The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the "EFSVolumeConfiguration" must either be omitted or set to "/" which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the "EFSVolumeConfiguration". For more information, see Working with Amazon EFS access points in the *Amazon Elastic File System User Guide*. * **iam** *(string) --* Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the "EFSVolumeConfiguration". If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Using Amazon EFS access points in the *Amazon Elastic Container Service Developer Guide*. * **fsxWindowsFileServerVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon FSx for Windows File Server file system for task storage. * **fileSystemId** *(string) --* **[REQUIRED]** The Amazon FSx for Windows File Server file system ID to use. * **rootDirectory** *(string) --* **[REQUIRED]** The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host. 
* **authorizationConfig** *(dict) --* **[REQUIRED]** The authorization configuration details for the Amazon FSx for Windows File Server file system. * **credentialsParameter** *(string) --* **[REQUIRED]** The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or SSM Parameter Store parameter. The ARN refers to the stored credentials. * **domain** *(string) --* **[REQUIRED]** A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2. * **configuredAtLaunch** *(boolean) --* Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration. To configure a volume at launch time, use this task definition revision and specify a "volumeConfigurations" object when calling the "CreateService", "UpdateService", "RunTask" or "StartTask" APIs. * **placementConstraints** (*list*) -- An array of placement constraint objects to use for the task. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime. * *(dict) --* The constraint on task placement in the task definition. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: Task placement constraints aren't supported for tasks run on Fargate. * **type** *(string) --* The type of constraint. The "MemberOf" constraint restricts selection to be from a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. 
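As a sketch of the "placementConstraints" shape described above (the instance-type expression is an illustrative assumption; placement constraints aren't supported on Fargate):

```python
# Hypothetical task-definition placement constraint. The "memberOf" type
# restricts placement to container instances that match a cluster query
# language expression. The expression below is only an example query.
placement_constraints = [
    {
        "type": "memberOf",
        "expression": "attribute:ecs.instance-type =~ t3.*",  # illustrative
    },
]
```

You would pass this list as the "placementConstraints" parameter of "register_task_definition"; up to 10 constraints are allowed per task, counting those added at runtime.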
* **requiresCompatibilities** (*list*) -- The task launch type that Amazon ECS validates the task definition against. A client exception is returned if the task definition doesn't validate against the compatibilities specified. If no value is specified, the parameter is omitted from the response. * *(string) --* * **cpu** (*string*) -- The number of CPU units used by the task. It can be expressed as an integer using CPU units (for example, "1024") or as a string using vCPUs (for example, "1 vCPU" or "1 vcpu") in a task definition. String values are converted to an integer indicating the CPU units when the task definition is registered. Note: Task-level CPU and memory parameters are ignored for Windows containers. We recommend specifying container-level resources for Windows containers. If you're using the EC2 launch type or external launch type, this field is optional. Supported values are between "128" CPU units ("0.125" vCPUs) and "196608" CPU units ("192" vCPUs). If you do not specify a value, the parameter is ignored. This field is required for Fargate. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **memory** (*string*) -- The amount of memory (in MiB) used by the task. It can be expressed as an integer using MiB (for example, "1024") or as a string using GB (for example, "1GB" or "1 GB") in a task definition. String values are converted to an integer indicating the MiB when the task definition is registered. Note: Task-level CPU and memory parameters are ignored for Windows containers. We recommend specifying container-level resources for Windows containers. If using the EC2 launch type, this field is optional. If using the Fargate launch type, this field is required and you must use one of the following values. This determines your range of supported values for the "cpu" parameter. The CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate. 
* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **tags** (*list*) -- The metadata that you apply to the task definition to help you categorize and organize it. Each tag consists of a key and an optional value. You define both of them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. 
* *(dict) --* The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **pidMode** (*string*) -- The process namespace to use for the containers in the task. The valid values are "host" or "task". On Fargate for Linux containers, the only valid value is "task". For example, monitoring sidecars might need "pidMode" to access information about other containers running in the same task. If "host" is specified, all containers within the tasks that specified the "host" PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same process namespace. 
If no value is specified, the default is a private namespace for each container. If the "host" PID mode is used, there's a heightened risk of undesired process namespace exposure. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **ipcMode** (*string*) -- The IPC resource namespace to use for the containers in the task. The valid values are "host", "task", or "none". If "host" is specified, then all containers within the tasks that specified the "host" IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same IPC resources. If "none" is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. If the "host" IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure. If you are setting namespaced kernel parameters using "systemControls" for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the *Amazon Elastic Container Service Developer Guide*. * For tasks that use the "host" IPC mode, IPC namespace related "systemControls" are not supported. * For tasks that use the "task" IPC mode, IPC namespace related "systemControls" will apply to all containers within a task. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **proxyConfiguration** (*dict*) -- The configuration details for the App Mesh proxy. 
For tasks hosted on Amazon EC2 instances, the container instances require at least version "1.26.0" of the container agent and at least version "1.26.0-1" of the "ecs-init" package to use a proxy configuration. If your container instances are launched from the Amazon ECS-optimized AMI version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized AMI versions in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The proxy type. The only supported value is "APPMESH". * **containerName** *(string) --* **[REQUIRED]** The name of the container that will serve as the App Mesh proxy. * **properties** *(list) --* The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs. * "IgnoredUID" - (Required) The user ID (UID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredGID" is specified, this field can be empty. * "IgnoredGID" - (Required) The group ID (GID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredUID" is specified, this field can be empty. * "AppPorts" - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the "ProxyIngressPort" and "ProxyEgressPort". * "ProxyIngressPort" - (Required) Specifies the port that incoming traffic to the "AppPorts" is directed to. * "ProxyEgressPort" - (Required) Specifies the port that outgoing traffic from the "AppPorts" is directed to. * "EgressIgnoredPorts" - (Required) The egress traffic going to the specified ports is ignored and not redirected to the "ProxyEgressPort". It can be an empty list. 
* "EgressIgnoredIPs" - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the "ProxyEgressPort". It can be an empty list. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **inferenceAccelerators** (*list*) -- The Elastic Inference accelerators to use for the containers in the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* **[REQUIRED]** The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* **[REQUIRED]** The Elastic Inference accelerator type to use. * **ephemeralStorage** (*dict*) -- The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate. For more information, see Using data volumes in tasks in the *Amazon ECS Developer Guide*. Note: For tasks using the Fargate launch type, the task requires the following platforms: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. * **sizeInGiB** *(integer) --* **[REQUIRED]** The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **runtimePlatform** (*dict*) -- The operating system that your task definitions run on. A platform family is specified only for tasks using the Fargate launch type. * **cpuArchitecture** *(string) --* The CPU architecture. 
You can run your Linux tasks on an ARM-based platform by setting the value to "ARM64". This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate. * **operatingSystemFamily** *(string) --* The operating system. * **enableFaultInjection** (*boolean*) -- Enables fault injection when you register your task definition and allows for fault injection requests to be accepted from the task's containers. The default value is "false". Return type: dict Returns: **Response Syntax** { 'taskDefinition': { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 
'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 
'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False }, 'tags': [ { 'key': 'string', 'value': 'string' }, ] } **Response Structure** * *(dict) --* * **taskDefinition** *(dict) --* The full description of the registered task definition. * **taskDefinitionArn** *(string) --* The full Amazon Resource Name (ARN) of the task definition. * **containerDefinitions** *(list) --* A list of container definitions in JSON format that describe the different containers that make up your task. 
For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* Container definitions are used in task definitions to describe the different containers that are launched as part of a task. * **name** *(string) --* The name of a container. If you're linking multiple containers together in a task definition, the "name" of one container can be entered in the "links" of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "name" in the docker container create command and the "--name" option to docker run. * **image** *(string) --* The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either "repository-url/image:tag" or "repository-url/image@digest". For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to "Image" in the docker container create command and the "IMAGE" parameter of docker run. * When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks. 
* Images in Amazon ECR repositories can be specified by either using the full "registry/repository:tag" or "registry/repository@digest". For example, "012345678910.dkr.ecr..amazonaws.com/:latest" or "012345678910.dkr.ecr..amazonaws.com/@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE". * Images in official repositories on Docker Hub use a single name (for example, "ubuntu" or "mongo"). * Images in other repositories on Docker Hub are qualified with an organization name (for example, "amazon/amazon-ecs-agent"). * Images in other online repositories are qualified further by a domain name (for example, "quay.io/assemblyline/ubuntu"). * **repositoryCredentials** *(dict) --* The private repository authentication credentials to use. * **credentialsParameter** *(string) --* The Amazon Resource Name (ARN) of the secret containing the private repository credentials. Note: When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret. * **cpu** *(integer) --* The number of "cpu" units reserved for the container. This parameter maps to "CpuShares" in the docker container create command and the "--cpu-shares" option to docker run. This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level "cpu" value. Note: You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024. Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. 
For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units. On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version: * **Agent versions less than or equal to 1.1.0:** Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares. * **Agent versions greater than or equal to 1.2.0:** Null, zero, and CPU values of 1 are passed to Docker as 2. * **Agent versions greater than or equal to 1.84.0:** CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares. On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as "0", which Windows interprets as 1% of one CPU. 
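The vCPU-to-unit and share-ratio arithmetic described above can be sketched in plain Python. This is an illustrative helper, not part of the boto3 API; the function names are our own.

```python
# Illustrative sketch of the CPU-unit arithmetic described above.
# These helpers are not part of boto3; the names are our own.
CPU_UNITS_PER_VCPU = 1024  # one vCPU corresponds to 1,024 CPU units


def instance_cpu_units(vcpus: int) -> int:
    """Total CPU units available on an instance with the given vCPU count."""
    return vcpus * CPU_UNITS_PER_VCPU


def guaranteed_units(task_units: int, total_units: int, task_count: int) -> int:
    """Minimum CPU units each identical task is guaranteed under full
    contention; on an otherwise idle instance a single task can burst
    beyond its reservation to all available units."""
    return min(task_units, total_units // task_count)


# A single-core instance exposes 1,024 CPU units. One 512-unit task alone
# may burst to the full 1,024 units; two copies of that task are each
# guaranteed 512 units when both are 100% active.
total = instance_cpu_units(1)
```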
* **memory** *(integer) --* The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task "memory" value, if one is specified. This parameter maps to "Memory" in the docker container create command and the "--memory" option to docker run. If using the Fargate launch type, this parameter is optional. If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level "memory" and "memoryReservation" value, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the "memory" parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to "MemoryReservation" in the docker container create command and the "--memory-reservation" option to docker run. If a task-level memory value is not specified, you must specify a non-zero integer for one or both of "memory" or "memoryReservation" in a container definition. 
If you specify both, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a "memoryReservation" of 128 MiB, and a "memory" hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **links** *(list) --* The "links" parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is "bridge". The "name:internalName" construct is analogous to "name:alias" in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "Links" in the docker container create command and the "--link" option to docker run. Note: This parameter is not supported for Windows containers. Warning: Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings. * *(string) --* * **portMappings** *(list) --* The list of port mappings for the container. 
Port mappings allow containers to access ports on the host container instance to send or receive traffic. For task definitions that use the "awsvpc" network mode, only specify the "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Port mappings on Windows use the "NetNAT" gateway address rather than "localhost". There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself. This parameter maps to "PortBindings" in the docker container create command and the "--publish" option to docker run. If the network mode of a task definition is set to "none", then you can't specify port mappings. If the network mode of a task definition is set to "host", then host ports must either be undefined or they must match the container port in the port mapping. Note: After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the **Network Bindings** section of a container description for a selected task in the Amazon ECS console. The assignments are also visible in the "networkBindings" section of DescribeTasks responses. * *(dict) --* Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Most fields of this parameter ("containerPort", "hostPort", "protocol") map to "PortBindings" in the docker container create command and the "--publish" option to docker run. If the network mode of a task definition is set to "host", host ports must either be undefined or match the container port in the port mapping. Note: You can't expose the same container port for multiple protocols. 
If you attempt this, an error is returned. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **containerPort** *(integer) --* The port number on the container that's bound to the user-specified or automatically assigned host port. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". If you use containers in a task with the "bridge" network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see "hostPort". Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance. * **hostPort** *(integer) --* The port number on the container instance to reserve for your container. If you specify a "containerPortRange", leave this field empty and the value of the "hostPort" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPort" is set to the same value as the "containerPort". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy. If you use containers in a task with the "awsvpc" or "host" network mode, the "hostPort" can either be left blank or set to the same value as the "containerPort". If you use containers in a task with the "bridge" network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the "hostPort" (or set it to "0") while specifying a "containerPort" and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version. 
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under "/proc/sys/net/ipv4/ip_local_port_range". If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range. The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the "remainingResources" of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota. * **protocol** *(string) --* The protocol used for the port mapping. Valid values are "tcp" and "udp". The default is "tcp". "protocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. * **name** *(string) --* The name that's used for the port mapping. This parameter is the name that you use in the "serviceConnectConfiguration" and the "vpcLatticeConfigurations" of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. * **appProtocol** *(string) --* The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. 
If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch. If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP. "appProtocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. 
* For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange" which are the host ports that are bound to the container ports. * **essential** *(boolean) --* If the "essential" parameter of a container is marked as "true", and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the "essential" parameter of a container is marked as "false", its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the *Amazon Elastic Container Service Developer Guide*. * **restartPolicy** *(dict) --* The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. 
For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether a restart policy is enabled for the container. * **ignoredExitCodes** *(list) --* A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes. * *(integer) --* * **restartAttemptPeriod** *(integer) --* A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every "restartAttemptPeriod" seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum "restartAttemptPeriod" of 60 seconds and a maximum "restartAttemptPeriod" of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted. * **entryPoint** *(list) --* Warning: Early versions of the Amazon ECS container agent don't properly handle "entryPoint" parameters. If you have problems using "entryPoint", update your container agent or enter your commands and arguments as "command" array items instead. The entry point that's passed to the container. This parameter maps to "Entrypoint" in the docker container create command and the "--entrypoint" option to docker run. * *(string) --* * **command** *(list) --* The command that's passed to the container. This parameter maps to "Cmd" in the docker container create command and the "COMMAND" parameter to docker run. If there are multiple arguments, each argument is a separated string in the array. * *(string) --* * **environment** *(list) --* The environment variables to pass to a container. This parameter maps to "Env" in the docker container create command and the "--env" option to docker run. 
Warning: We don't recommend that you use plaintext environment variables for sensitive information, such as credential data. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container. This parameter maps to the "--env-file" option to docker run. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file contains an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. 
We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **mountPoints** *(list) --* The mount points for data volumes in your container. This parameter maps to "Volumes" in the docker container create command and the "--volume" option to docker run. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't be across drives. * *(dict) --* The details for a volume mount point that's used in a container definition. * **sourceVolume** *(string) --* The name of the volume to mount. Must be a volume name referenced in the "name" parameter of task definition "volume". * **containerPath** *(string) --* The path on the container to mount the host volume at. * **readOnly** *(boolean) --* If this value is "true", the container has read- only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **volumesFrom** *(list) --* Data volumes to mount from another container. 
This parameter maps to "VolumesFrom" in the docker container create command and the "--volumes-from" option to docker run. * *(dict) --* Details on a data volume from another container in the same task definition. * **sourceContainer** *(string) --* The name of another container within the same task definition to mount volumes from. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **linuxParameters** *(dict) --* Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information, see KernelCapabilities. Note: This parameter is not supported for Windows containers. * **capabilities** *(dict) --* The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. Note: For tasks that use the Fargate launch type, "capabilities" is supported for all platform versions but the "add" parameter is only supported if using platform version 1.4.0 or later. * **add** *(list) --* The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to "CapAdd" in the docker container create command and the "--cap-add" option to docker run. Note: Tasks launched on Fargate only support adding the "SYS_PTRACE" kernel capability.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM" * *(string) --* * **drop** *(list) --* The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to "CapDrop" in the docker container create command and the "--cap-drop" option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM" * *(string) --* * **devices** *(list) --* Any host devices to expose to the container. This parameter maps to "Devices" in the docker container create command and the "--device" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "devices" parameter isn't supported. * *(dict) --* An object representing a container instance host device. * **hostPath** *(string) --* The path for the device on the host container instance. * **containerPath** *(string) --* The path inside the container at which to expose the host device.
* **permissions** *(list) --* The explicit permissions to provide to the container for the device. By default, the container has permissions for "read", "write", and "mknod" for the device. * *(string) --* * **initProcessEnabled** *(boolean) --* Run an "init" process inside the container that forwards signals and reaps processes. This parameter maps to the "--init" option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * **sharedMemorySize** *(integer) --* The value for the size (in MiB) of the "/dev/shm" volume. This parameter maps to the "--shm-size" option to docker run. Note: If you are using tasks that use the Fargate launch type, the "sharedMemorySize" parameter is not supported. * **tmpfs** *(list) --* The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the "--tmpfs" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "tmpfs" parameter isn't supported. * *(dict) --* The container path, mount options, and size of the tmpfs mount. * **containerPath** *(string) --* The absolute file path where the tmpfs volume is to be mounted. * **size** *(integer) --* The maximum size (in MiB) of the tmpfs volume. * **mountOptions** *(list) --* The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol" * *(string) --* * **maxSwap** *(integer) --* The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the "--memory-swap" option to docker run where the value would be the sum of the container memory plus the "maxSwap" value. If a "maxSwap" value of "0" is specified, the container will not use swap. Accepted values are "0" or any positive integer. If the "maxSwap" parameter is omitted, the container will use the swap configuration for the container instance it is running on. A "maxSwap" value must be set for the "swappiness" parameter to be used. Note: If you're using tasks that use the Fargate launch type, the "maxSwap" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported. * **swappiness** *(integer) --* This allows you to tune a container's memory swappiness behavior. A "swappiness" value of "0" will cause swapping to not happen unless absolutely necessary. A "swappiness" value of "100" will cause pages to be swapped very aggressively. Accepted values are whole numbers between "0" and "100". If the "swappiness" parameter is not specified, a default value of "60" is used. If a value is not specified for "maxSwap" then this parameter is ignored. This parameter maps to the "--memory-swappiness" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "swappiness" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported.
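Taken together, the "linuxParameters" settings above can be sketched as a fragment of a container definition for an EC2-hosted task. This is a minimal illustration, not a definitive configuration; the capability choices, mount path, and sizes are assumptions.

```python
# Illustrative "linuxParameters" fragment for an EC2-hosted task. On Fargate,
# "devices", "sharedMemorySize", "tmpfs", "maxSwap", and "swappiness" are not
# supported, and only the SYS_PTRACE capability may be added.
linux_parameters = {
    "capabilities": {
        "add": ["SYS_PTRACE"],         # maps to docker run --cap-add
        "drop": ["NET_RAW", "MKNOD"],  # maps to docker run --cap-drop
    },
    "initProcessEnabled": True,  # run an init process (--init)
    "sharedMemorySize": 256,     # MiB; size of /dev/shm (--shm-size)
    "tmpfs": [
        {
            "containerPath": "/run/scratch",  # hypothetical mount path
            "size": 128,                      # MiB; maximum tmpfs size
            "mountOptions": ["rw", "noexec", "nosuid"],
        }
    ],
    "maxSwap": 512,    # MiB; 0 would disable swap entirely
    "swappiness": 30,  # 0-100; ignored unless maxSwap is set
}
```

A fragment like this would be supplied as the "linuxParameters" key of a container definition passed to "register_task_definition".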
* **secrets** *(list) --* The secrets to pass to the container. For more information, see Specifying Sensitive Data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **dependsOn** *(list) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, for container shutdown it is reversed. For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version.
For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. * *(dict) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed. Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. Note: For tasks that use the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later.
For more information about how to create a container dependency, see Container dependency in the *Amazon Elastic Container Service Developer Guide*. * **containerName** *(string) --* The name of a container. * **condition** *(string) --* The dependency condition of the container. The following are the available conditions and their behavior: * "START" - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start. * "COMPLETE" - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container. * "SUCCESS" - This condition is the same as "COMPLETE", but it also requires that the container exits with a "zero" status. This condition can't be set on an essential container. * "HEALTHY" - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup. * **startTimeout** *(integer) --* Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a "COMPLETE", "SUCCESS", or "HEALTHY" status. If a "startTimeout" value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a "STOPPED" state. Note: When the "ECS_CONTAINER_START_TIMEOUT" container agent configuration variable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks using the EC2 launch type, your container instances require at least version "1.26.0" of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version "1.26.0-1" of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **stopTimeout** *(integer) --* Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used. For tasks that use the EC2 launch type, if the "stopTimeout" parameter isn't specified, the value set for the Amazon ECS container agent configuration variable "ECS_CONTAINER_STOP_TIMEOUT" is used. If neither the "stopTimeout" parameter nor the "ECS_CONTAINER_STOP_TIMEOUT" agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers.
Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **versionConsistency** *(string) --* Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is "enabled". If you set the value for a container as "disabled", Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the *Amazon ECS Developer Guide*. * **hostname** *(string) --* The hostname to use for your container. This parameter maps to "Hostname" in the docker container create command and the "--hostname" option to docker run. Note: The "hostname" parameter is not supported if you're using the "awsvpc" network mode. * **user** *(string) --* The user to use inside the container. This parameter maps to "User" in the docker container create command and the "--user" option to docker run. Warning: When running tasks using the "host" network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. 
You can specify the "user" using the following formats. If specifying a UID or GID, you must specify it as a positive integer. * "user" * "user:group" * "uid" * "uid:gid" * "user:gid" * "uid:group" Note: This parameter is not supported for Windows containers. * **workingDirectory** *(string) --* The working directory to run commands inside the container in. This parameter maps to "WorkingDir" in the docker container create command and the "--workdir" option to docker run. * **disableNetworking** *(boolean) --* When this parameter is true, networking is off within the container. This parameter maps to "NetworkDisabled" in the docker container create command. Note: This parameter is not supported for Windows containers. * **privileged** *(boolean) --* When this parameter is true, the container is given elevated privileges on the host container instance (similar to the "root" user). This parameter maps to "Privileged" in the docker container create command and the "--privileged" option to docker run. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **readonlyRootFilesystem** *(boolean) --* When this parameter is true, the container is given read-only access to its root file system. This parameter maps to "ReadonlyRootfs" in the docker container create command and the "--read-only" option to docker run. Note: This parameter is not supported for Windows containers. * **dnsServers** *(list) --* A list of DNS servers that are presented to the container. This parameter maps to "Dns" in the docker container create command and the "--dns" option to docker run. Note: This parameter is not supported for Windows containers. * *(string) --* * **dnsSearchDomains** *(list) --* A list of DNS search domains that are presented to the container. This parameter maps to "DnsSearch" in the docker container create command and the "--dns-search" option to docker run. Note: This parameter is not supported for Windows containers.
* *(string) --* * **extraHosts** *(list) --* A list of hostnames and IP address mappings to append to the "/etc/hosts" file on the container. This parameter maps to "ExtraHosts" in the docker container create command and the "--add-host" option to docker run. Note: This parameter isn't supported for Windows containers or tasks that use the "awsvpc" network mode. * *(dict) --* Hostnames and IP address entries that are added to the "/etc/hosts" file of a container via the "extraHosts" parameter of its ContainerDefinition. * **hostname** *(string) --* The hostname to use in the "/etc/hosts" entry. * **ipAddress** *(string) --* The IP address to use in the "/etc/hosts" entry. * **dockerSecurityOptions** *(list) --* A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type. For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems. For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the *Amazon Elastic Container Service Developer Guide*. This parameter maps to "SecurityOpt" in the docker container create command and the "--security-opt" option to docker run. Note: The Amazon ECS container agent running on a container instance must register with the "ECS_SELINUX_CAPABLE=true" or "ECS_APPARMOR_CAPABLE=true" environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. 
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath" * *(string) --* * **interactive** *(boolean) --* When this parameter is "true", you can deploy containerized applications that require "stdin" or a "tty" to be allocated. This parameter maps to "OpenStdin" in the docker container create command and the "--interactive" option to docker run. * **pseudoTerminal** *(boolean) --* When this parameter is "true", a TTY is allocated. This parameter maps to "Tty" in the docker container create command and the "--tty" option to docker run. * **dockerLabels** *(dict) --* A key/value map of labels to add to the container. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **ulimits** *(list) --* A list of "ulimits" to set in the container. If a "ulimit" value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to "Ulimits" in the docker container create command and the "--ulimit" option to docker run. Valid naming values are displayed in the Ulimit data type. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: This parameter is not supported for Windows containers. * *(dict) --* The "ulimit" settings to pass to the container. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". You can specify the "ulimit" settings for a container in a task definition. * **name** *(string) --* The "type" of the "ulimit". * **softLimit** *(integer) --* The soft limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **hardLimit** *(integer) --* The hard limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **logConfiguration** *(dict) --* The log configuration specification for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). Note: Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type).
Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
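As a sketch, the "awslogs" options described above fit together like this; the log group name and Region are hypothetical, and every option value must be a string:

```python
# Illustrative "logConfiguration" for the awslogs driver. The group name and
# Region are assumptions; "awslogs-create-group" requires the
# logs:CreateLogGroup IAM permission on the task execution role.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",  # hypothetical log group name
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "web",      # required on Fargate; streams become prefix-name/container-name/ecs-task-id
        "awslogs-create-group": "true",      # string "true", not a boolean
        "mode": "non-blocking",              # the current default delivery mode
        "max-buffer-size": "4m",             # in-memory buffer used by non-blocking mode
    },
}
```

A dict like this would be supplied as the "logConfiguration" key of a container definition passed to "register_task_definition".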
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available for the buffer inside Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **healthCheck** *(dict) --* The container health check command and associated configuration parameters for the container. This parameter maps to "HealthCheck" in the docker container create command and the "HEALTHCHECK" parameter of docker run. * **command** *(list) --* A string array representing the command that the container runs to determine if it is healthy. The string array must start with "CMD" to run the command arguments directly, or "CMD-SHELL" to run the command with the container's default shell. When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets. 
"[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]" You don't include the double quotes and brackets when you use the Amazon Web Services Management Console. "CMD-SHELL, curl -f http://localhost/ || exit 1" An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see "HealthCheck" in the docker container create command. * *(string) --* * **interval** *(integer) --* The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a "command". * **timeout** *(integer) --* The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5 seconds. This value applies only when you specify a "command". * **retries** *(integer) --* The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a "command". * **startPeriod** *(integer) --* The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the "startPeriod" is off. This value applies only when you specify a "command". Note: If a health check succeeds within the "startPeriod", then the container is considered healthy and any subsequent failures count toward the maximum number of retries. * **systemControls** *(list) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer-lived connections. * *(dict) --* A list of namespaced kernel parameters to set in the container. 
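The health check fields just described might be combined as in the following sketch; the curl command is the illustrative example from above, and every field applies only because "command" is set:

```python
# A healthCheck sketch using the CMD-SHELL form shown above.
health_check = {
    "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
    "interval": 30,     # seconds between checks (5-300; default 30)
    "timeout": 5,       # seconds before a check counts as failed (2-60; default 5)
    "retries": 3,       # failures before the container is unhealthy (1-10; default 3)
    "startPeriod": 60,  # bootstrap grace period in seconds (0-300; off by default)
}

# The string array must start with CMD or CMD-SHELL.
assert health_check["command"][0] in ("CMD", "CMD-SHELL")
```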
This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer-lived connections. We don't recommend that you specify network-related "systemControls" parameters for multiple containers in a single task that also uses either the "awsvpc" or "host" network mode. Doing this has the following disadvantages: * For tasks that use the "awsvpc" network mode including Fargate, if you set "systemControls" for any container, it applies to all containers in the task. If you set different "systemControls" for multiple containers in a single task, the container that's started last determines which "systemControls" take effect. * For tasks that use the "host" network mode, the network namespace "systemControls" aren't supported. If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode. * For tasks that use the "host" IPC mode, IPC namespace "systemControls" aren't supported. * For tasks that use the "task" IPC mode, IPC namespace "systemControls" values apply to all containers within a task. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **namespace** *(string) --* The namespaced kernel parameter to set a "value" for. * **value** *(string) --* The namespaced kernel parameter to set a "value" for. Valid IPC namespace values: ""kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced"", and "Sysctls" that start with ""fs.mqueue.*"" Valid network namespace values: "Sysctls" that start with ""net.*"". 
Only namespaced "Sysctls" that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* The type of resource to assign to a container. * **firelensConfiguration** *(dict) --* The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The log router to use. The valid values are "fluentd" or "fluentbit". * **options** *(dict) --* The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is ""options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}". 
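Rendered as a Python mapping rather than the quoted syntax string above, a "firelensConfiguration" might look like the following sketch; the bucket ARN is a placeholder:

```python
# A firelensConfiguration sketch; note that all option values are strings.
# The S3 ARN is a placeholder for a real Fluent Bit config object.
firelens_configuration = {
    "type": "fluentbit",
    "options": {
        "enable-ecs-log-metadata": "true",
        "config-file-type": "s3",  # Fargate tasks support only "file"
        "config-file-value": "arn:aws:s3:::mybucket/fluent.conf",
    },
}

assert firelens_configuration["type"] in ("fluentd", "fluentbit")
```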
For more information, see Creating a task definition that uses a FireLens configuration in the *Amazon Elastic Container Service Developer Guide*. Note: Tasks hosted on Fargate only support the "file" configuration file type. * *(string) --* * *(string) --* * **credentialSpecs** *(list) --* A list of ARNs in SSM or Amazon S3 to a credential spec ( "CredSpec") file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the "dockerSecurityOptions". The maximum number of ARNs is 1. There are two formats for each ARN. credentialspecdomainless:MyARN You use "credentialspecdomainless:MyARN" to provide a "CredSpec" with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret. Each task that runs on any container instance can join different domains. You can use this format without joining the container instance to a domain. credentialspec:MyARN You use "credentialspec:MyARN" to provide a "CredSpec" for a single domain. You must join the container instance to the domain before you start any tasks that use this task definition. In both formats, replace "MyARN" with the ARN in SSM or Amazon S3. If you provide a "credentialspecdomainless:MyARN", the "credspec" must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers. * *(string) --* * **family** *(string) --* The name of a family that this task definition is registered to. Up to 255 characters are allowed. 
Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed. A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add. * **taskRoleArn** *(string) --* The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **networkMode** *(string) --* The Docker networking mode to use for the containers in the task. The valid values are "none", "bridge", "awsvpc", and "host". If no network mode is specified, the default is "bridge". For Amazon ECS tasks on Fargate, the "awsvpc" network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, "" or "awsvpc" can be used. If the network mode is set to "none", you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The "host" and "awsvpc" network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the "bridge" mode. 
With the "host" and "awsvpc" network modes, exposed container ports are mapped directly to the corresponding host port (for the "host" network mode) or the attached elastic network interface port (for the "awsvpc" network mode), so you cannot take advantage of dynamic host port mappings. Warning: When using the "host" network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user. If the network mode is "awsvpc", the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the *Amazon Elastic Container Service Developer Guide*. If the network mode is "host", you cannot run multiple instantiations of the same task on a single container instance when port mappings are used. * **revision** *(integer) --* The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is "1". Each time that you register a new revision of a task definition in the same family, the revision value always increases by one, even if you deregistered previous revisions in this family. * **volumes** *(list) --* The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the *Amazon Elastic Container Service Developer Guide*. Note: The "host" and "sourcePath" parameters aren't supported for tasks run on Fargate. * *(dict) --* The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. 
Each volume defined in the volume configuration may only specify a "name" and one of either "configuredAtLaunch", "dockerVolumeConfiguration", "efsVolumeConfiguration", "fsxWindowsFileServerVolumeConfiguration", or "host". If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks. * **name** *(string) --* The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the "name" is required and must also be specified as the volume name in the "ServiceVolumeConfiguration" or "TaskVolumeConfiguration" parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the "sourceVolume" parameter of the "mountPoints" object in the container definition. When a volume is using the "efsVolumeConfiguration", the name is required. * **host** *(dict) --* This parameter is specified when you use bind mount host volumes. The contents of the "host" parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the "host" parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't span drives. For example, you can mount "C:\my\path:C:\my\path" and "D:\:D:\", but not "D:\my\path:C:\my\path" or "D:\:C:\my\path". * **sourcePath** *(string) --* When the "host" parameter is used, specify a "sourcePath" to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. 
If the "host" parameter contains a "sourcePath" file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the "sourcePath" value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the "sourcePath" parameter is not supported. * **dockerVolumeConfiguration** *(dict) --* This parameter is specified when you use Docker volumes. Windows containers only support the use of the "local" driver. To use bind mounts, specify the "host" parameter instead. Note: Docker volumes aren't supported by tasks run on Fargate. * **scope** *(string) --* The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a "task" are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as "shared" persist after the task stops. * **autoprovision** *(boolean) --* If this value is "true", the Docker volume is created if it doesn't already exist. Note: This field is only used if the "scope" is "shared". * **driver** *(string) --* The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use "docker plugin ls" to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to "Driver" in the docker container create command and the "--driver" option to docker volume create. * **driverOpts** *(dict) --* A map of Docker driver-specific options passed through. This parameter maps to "DriverOpts" in the docker volume create command and the "--opt" option to docker volume create. 
* *(string) --* * *(string) --* * **labels** *(dict) --* Custom metadata to add to your Docker volume. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker volume create. * *(string) --* * *(string) --* * **efsVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon Elastic File System file system for task storage. * **fileSystemId** *(string) --* The Amazon EFS file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying "/" will have the same effect as omitting this parameter. Warning: If an EFS access point is specified in the "authorizationConfig", the root directory parameter must either be omitted or set to "/" which will enforce the path set on the EFS access point. * **transitEncryption** *(string) --* Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Encrypting data in transit in the *Amazon Elastic File System User Guide*. * **transitEncryptionPort** *(integer) --* The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the *Amazon Elastic File System User Guide*. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon EFS file system. * **accessPointId** *(string) --* The Amazon EFS access point ID to use. 
If an access point is specified, the root directory value specified in the "EFSVolumeConfiguration" must either be omitted or set to "/" which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the "EFSVolumeConfiguration". For more information, see Working with Amazon EFS access points in the *Amazon Elastic File System User Guide*. * **iam** *(string) --* Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the "EFSVolumeConfiguration". If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Using Amazon EFS access points in the *Amazon Elastic Container Service Developer Guide*. * **fsxWindowsFileServerVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon FSx for Windows File Server file system for task storage. * **fileSystemId** *(string) --* The Amazon FSx for Windows File Server file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon FSx for Windows File Server file system. * **credentialsParameter** *(string) --* The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or SSM Parameter Store parameter. The ARN refers to the stored credentials. * **domain** *(string) --* A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2. * **configuredAtLaunch** *(boolean) --* Indicates whether the volume should be configured at launch time. 
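Tying the EFS constraints above together, a volume entry that uses an access point might look like this sketch (the file system and access point IDs are placeholders):

```python
# An efsVolumeConfiguration sketch. With an access point, rootDirectory is
# omitted or "/", and transit encryption must be enabled (it is also
# required when IAM authorization is turned on).
efs_volume = {
    "name": "app-data",                        # referenced by mountPoints.sourceVolume
    "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",         # placeholder file system ID
        "rootDirectory": "/",                  # the access point enforces the real path
        "transitEncryption": "ENABLED",
        "authorizationConfig": {
            "accessPointId": "fsap-12345678",  # placeholder access point ID
            "iam": "ENABLED",                  # use the task role when mounting
        },
    },
}

assert efs_volume["efsVolumeConfiguration"]["transitEncryption"] == "ENABLED"
```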
This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration. To configure a volume at launch time, use this task definition revision and specify a "volumeConfigurations" object when calling the "CreateService", "UpdateService", "RunTask" or "StartTask" APIs. * **status** *(string) --* The status of the task definition. * **requiresAttributes** *(list) --* The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. Note: This parameter isn't supported for tasks run on Fargate. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. 
* **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **placementConstraints** *(list) --* An array of placement constraint objects to use for tasks. Note: This parameter isn't supported for tasks run on Fargate. * *(dict) --* The constraint on task placement in the task definition. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: Task placement constraints aren't supported for tasks run on Fargate. * **type** *(string) --* The type of constraint. The "MemberOf" constraint restricts selection to be from a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **compatibilities** *(list) --* Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * *(string) --* * **runtimePlatform** *(dict) --* The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type. When you specify a task in a service, this value must match the "runtimePlatform" value of the service. * **cpuArchitecture** *(string) --* The CPU architecture. You can run your Linux tasks on an ARM-based platform by setting the value to "ARM64". This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate. * **operatingSystemFamily** *(string) --* The operating system. 
* **requiresCompatibilities** *(list) --* The task launch types the task definition was validated against. The valid values are "EC2", "FARGATE", and "EXTERNAL". For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * *(string) --* * **cpu** *(string) --* The number of "cpu" units used by the task. If you use the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ( "0.125" vCPUs) and "196608" CPU units ( "192" vCPUs). If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the "memory" parameter. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount (in MiB) of memory used by the task. If your tasks run on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition. If your tasks run on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the "cpu" parameter. 
* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **pidMode** *(string) --* The process namespace to use for the containers in the task. The valid values are "host" or "task". On Fargate for Linux containers, the only valid value is "task". For example, monitoring sidecars might need "pidMode" to access information about other containers running in the same task. If "host" is specified, all containers within the tasks that specified the "host" PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same process namespace. 
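Returning to the task-size list above, the Fargate memory/CPU pairings can be sketched as a lookup from "cpu" units to the valid task-level "memory" values in MiB. This is only an illustration of that list, not an API:

```python
# Valid Fargate task-level memory values (MiB) for a given "cpu" value,
# transcribed from the pairings listed above.
def valid_fargate_memory(cpu_units):
    if cpu_units == 256:
        return [512, 1024, 2048]
    if cpu_units == 512:
        return [1024, 2048, 3072, 4096]
    if cpu_units == 1024:
        return list(range(2048, 8193, 1024))      # 2 GB - 8 GB
    if cpu_units == 2048:
        return list(range(4096, 16385, 1024))     # 4 GB - 16 GB, 1 GB steps
    if cpu_units == 4096:
        return list(range(8192, 30721, 1024))     # 8 GB - 30 GB, 1 GB steps
    if cpu_units == 8192:                         # requires Linux platform 1.4.0+
        return list(range(16384, 61441, 4096))    # 16 GB - 60 GB, 4 GB steps
    if cpu_units == 16384:                        # requires Linux platform 1.4.0+
        return list(range(32768, 122881, 8192))   # 32 GB - 120 GB, 8 GB steps
    raise ValueError(f"unsupported Fargate cpu value: {cpu_units}")
```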
If no value is specified, the default is a private namespace for each container. If the "host" PID mode is used, there's a heightened risk of undesired process namespace exposure. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **ipcMode** *(string) --* The IPC resource namespace to use for the containers in the task. The valid values are "host", "task", or "none". If "host" is specified, then all containers within the tasks that specified the "host" IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same IPC resources. If "none" is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. If the "host" IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure. If you are setting namespaced kernel parameters using "systemControls" for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the *Amazon Elastic Container Service Developer Guide*. * For tasks that use the "host" IPC mode, IPC namespace related "systemControls" are not supported. * For tasks that use the "task" IPC mode, IPC namespace related "systemControls" will apply to all containers within a task. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **proxyConfiguration** *(dict) --* The configuration details for the App Mesh proxy. 
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the "ecs-init" package to use a proxy configuration. If your container instances are launched from the Amazon ECS-optimized AMI version "20190301" or later, they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The proxy type. The only supported value is "APPMESH". * **containerName** *(string) --* The name of the container that will serve as the App Mesh proxy. * **properties** *(list) --* The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs. * "IgnoredUID" - (Required) The user ID (UID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredGID" is specified, this field can be empty. * "IgnoredGID" - (Required) The group ID (GID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredUID" is specified, this field can be empty. * "AppPorts" - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the "ProxyIngressPort" and "ProxyEgressPort". * "ProxyIngressPort" - (Required) Specifies the port that incoming traffic to the "AppPorts" is directed to. * "ProxyEgressPort" - (Required) Specifies the port that outgoing traffic from the "AppPorts" is directed to. * "EgressIgnoredPorts" - (Required) The egress traffic going to the specified ports is ignored and not redirected to the "ProxyEgressPort". It can be an empty list. * "EgressIgnoredIPs" - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the "ProxyEgressPort". 
It can be an empty list. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **registeredAt** *(datetime) --* The Unix timestamp for the time when the task definition was registered. * **deregisteredAt** *(datetime) --* The Unix timestamp for the time when the task definition was deregistered. * **registeredBy** *(string) --* The principal that registered the task definition. * **ephemeralStorage** *(dict) --* The ephemeral storage settings to use for tasks run with the task definition. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **enableFaultInjection** *(boolean) --* Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is "false". * **tags** *(list) --* The list of tags associated with the task definition. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example registers a task definition to the specified family. response = client.register_task_definition( containerDefinitions=[ { 'name': 'sleep', 'command': [ 'sleep', '360', ], 'cpu': 10, 'essential': True, 'image': 'busybox', 'memory': 10, }, ], family='sleep360', taskRoleArn='', volumes=[ ], ) print(response) Expected Output: { 'taskDefinition': { 'containerDefinitions': [ { 'name': 'sleep', 'command': [ 'sleep', '360', ], 'cpu': 10, 'environment': [ ], 'essential': True, 'image': 'busybox', 'memory': 10, 'mountPoints': [ ], 'portMappings': [ ], 'volumesFrom': [ ], }, ], 'family': 'sleep360', 'revision': 1, 'taskDefinitionArn': 'arn:aws:ecs:us-east-1::task-definition/sleep360:19', 'volumes': [ ], }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / update_task_protection update_task_protection ********************** ECS.Client.update_task_protection(**kwargs) Updates the protection status of a task. You can set "protectionEnabled" to "true" to protect your task from termination during scale-in events from Service Auto Scaling or deployments. 
Task protection, by default, expires after 2 hours, at which point Amazon ECS clears the "protectionEnabled" property, making the task eligible for termination by a subsequent scale-in event. You can specify a custom expiration period for task protection from 1 minute up to 2,880 minutes (48 hours). To specify the custom expiration period, set the "expiresInMinutes" property. The "expiresInMinutes" property is always reset when you invoke this operation for a task that already has "protectionEnabled" set to "true". You can keep extending the protection expiration period of a task by invoking this operation repeatedly. To learn more about Amazon ECS task protection, see Task scale-in protection in the *Amazon Elastic Container Service Developer Guide*. Note: This operation is only supported for tasks belonging to an Amazon ECS service. Invoking this operation for a standalone task will result in a "TASK_NOT_VALID" failure. For more information, see API failure reasons. Warning: If you prefer to set task protection from within the container, we recommend using the Task scale-in protection endpoint. See also: AWS API Documentation **Request Syntax** response = client.update_task_protection( cluster='string', tasks=[ 'string', ], protectionEnabled=True|False, expiresInMinutes=123 ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task sets exist in. * **tasks** (*list*) -- **[REQUIRED]** A list of up to 10 task IDs or full ARN entries. * *(string) --* * **protectionEnabled** (*boolean*) -- **[REQUIRED]** Specify "true" to mark a task for protection and "false" to unset protection, making it eligible for termination. * **expiresInMinutes** (*integer*) -- If you set "protectionEnabled" to "true", you can specify the duration for task protection in minutes. You can specify a value from 1 minute up to 2,880 minutes (48 hours). 
During this time, your task will not be terminated by scale-in events from Service Auto Scaling or deployments. After this time period lapses, "protectionEnabled" will be reset to "false". If you don’t specify the time, then the task is automatically protected for 120 minutes (2 hours). Return type: dict Returns: **Response Syntax** { 'protectedTasks': [ { 'taskArn': 'string', 'protectionEnabled': True|False, 'expirationDate': datetime(2015, 1, 1) }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **protectedTasks** *(list) --* A list of tasks with the following information. * "taskArn": The task ARN. * "protectionEnabled": The protection status of the task. If scale-in protection is turned on for a task, the value is "true". Otherwise, it is "false". * "expirationDate": The epoch time when protection for the task will expire. * *(dict) --* An object representing the protection status details for a task. You can set the protection status with the UpdateTaskProtection API and get the status of tasks with the GetTaskProtection API. * **taskArn** *(string) --* The task ARN. * **protectionEnabled** *(boolean) --* The protection status of the task. If scale-in protection is on for a task, the value is "true". Otherwise, it is "false". * **expirationDate** *(datetime) --* The epoch time when protection for the task will expire. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. 
**Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ResourceNotFoundException" * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / list_account_settings list_account_settings ********************* ECS.Client.list_account_settings(**kwargs) Lists the account settings for a specified principal. See also: AWS API Documentation **Request Syntax** response = client.list_account_settings( name='serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', value='string', principalArn='string', effectiveSettings=True|False, nextToken='string', maxResults=123 ) Parameters: * **name** (*string*) -- The name of the account setting you want to list the settings for. * **value** (*string*) -- The value of the account settings to filter results with. You must also specify an account setting name to use this parameter. * **principalArn** (*string*) -- The ARN of the principal, which can be a user, role, or the root user. If this field is omitted, the account settings are listed only for the authenticated user. In order to use this parameter, you must be the root user, or the principal. Note: Federated users assume the account setting of the root user and can't have explicit account settings set for them. * **effectiveSettings** (*boolean*) -- Determines whether to return the effective settings. If "true", the account settings for the root user or the default setting for the "principalArn" are returned. If "false", the account settings for the "principalArn" are returned if they're set. Otherwise, no account settings are returned. 
* **nextToken** (*string*) -- The "nextToken" value returned from a "ListAccountSettings" request indicating that more results are available to fulfill the request and further calls will be needed. If "maxResults" was provided, it's possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of account setting results returned by "ListAccountSettings" in paginated output. When this parameter is used, "ListAccountSettings" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListAccountSettings" request with the returned "nextToken" value. This value can be between 1 and 10. If this parameter isn't used, then "ListAccountSettings" returns up to 10 results and a "nextToken" value if applicable. Return type: dict Returns: **Response Syntax** { 'settings': [ { 'name': 'serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', 'value': 'string', 'principalArn': 'string', 'type': 'user'|'aws_managed' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **settings** *(list) --* The account settings for the resource. * *(dict) --* The current account setting for a resource. * **name** *(string) --* The Amazon ECS resource name. * **value** *(string) --* Determines whether the account setting is on or off for the specified resource. * **principalArn** *(string) --* The ARN of the principal. It can be a user, role, or the root user. If this field is omitted, the authenticated user is assumed. 
* **type** *(string) --* Indicates whether Amazon Web Services manages the account setting, or if the user manages it. "aws_managed" account settings are read-only, as Amazon Web Services manages such settings on the customer's behalf. Currently, the "guardDutyActivate" account setting is the only one Amazon Web Services manages. * **nextToken** *(string) --* The "nextToken" value to include in a future "ListAccountSettings" request. When the results of a "ListAccountSettings" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example displays the effective account settings for your account. response = client.list_account_settings( effectiveSettings=True, ) print(response) Expected Output: { 'settings': [ { 'name': 'containerInstanceLongArnFormat', 'value': 'disabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, { 'name': 'serviceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, { 'name': 'taskLongArnFormat', 'value': 'disabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, ], 'ResponseMetadata': { '...': '...', }, } This example displays the effective account settings for the specified user or role. 
response = client.list_account_settings( effectiveSettings=True, principalArn='arn:aws:iam:::user/principalName', ) print(response) Expected Output: { 'settings': [ { 'name': 'containerInstanceLongArnFormat', 'value': 'disabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, { 'name': 'serviceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, { 'name': 'taskLongArnFormat', 'value': 'disabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / put_account_setting put_account_setting ******************* ECS.Client.put_account_setting(**kwargs) Modifies an account setting. Account settings are set on a per-Region basis. If you change the root user account setting, the default settings are reset for users and roles that do not have specified individual account settings. For more information, see Account Settings in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.put_account_setting( name='serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', value='string', principalArn='string' ) Parameters: * **name** (*string*) -- **[REQUIRED]** The Amazon ECS account setting name to modify. The following are the valid values for the account setting name. * "serviceLongArnFormat" - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. 
You must turn on this setting to use Amazon ECS features such as resource tagging. * "taskLongArnFormat" - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging. * "containerInstanceLongArnFormat" - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging. * "awsvpcTrunking" - When modified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If "awsvpcTrunking" is turned on, any new container instances that support the feature are launched with the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the *Amazon Elastic Container Service Developer Guide*. * "containerInsights" - Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. 
After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting of observability setup. To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * "dualStackIPv6" - When turned on, when using a VPC in dual-stack mode, your tasks using the "awsvpc" network mode can have an IPv6 address assigned. For more information on using IPv6 with tasks launched on Amazon EC2 instances, see Using a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Fargate, see Using a VPC in dual-stack mode. * "fargateTaskRetirementWaitPeriod" - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use "fargateTaskRetirementWaitPeriod" to configure the wait time to retire a Fargate task. For information about Fargate task maintenance, see Amazon Web Services Fargate task maintenance in the *Amazon ECS Developer Guide*. * "tagResourceAuthorization" - Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as "ecs:CreateCluster". If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the "ecs:TagResource" action. 
For more information, see Grant permission to tag resources on creation in the *Amazon ECS Developer Guide*. * "defaultLogDriverMode" - Amazon ECS supports setting a default delivery mode of log messages from a container to the "logDriver" that you specify in the container's "logConfiguration". The delivery mode affects application stability when the flow of logs from the container to the log driver is interrupted. The "defaultLogDriverMode" setting supports two values: "blocking" and "non-blocking". If you don't specify a delivery mode in your container definition's "logConfiguration", the mode you specify using this account setting will be used as the default. For more information about log delivery modes, see LogConfiguration. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". * "guardDutyActivate" - The "guardDutyActivate" parameter is read-only in Amazon ECS and indicates whether Amazon ECS Runtime Monitoring is enabled or disabled by your security administrator in your Amazon ECS account. Amazon GuardDuty controls this account setting on your behalf. For more information, see Protecting Amazon ECS workloads with Amazon ECS Runtime Monitoring. * **value** (*string*) -- **[REQUIRED]** The account setting value for the specified principal ARN. Accepted values are "enabled", "disabled", "enhanced", "on", and "off". When you specify "fargateTaskRetirementWaitPeriod" for the "name", the following are the valid values: * "0" - Amazon Web Services sends the notification, and immediately retires the affected tasks. * "7" - Amazon Web Services sends the notification, and waits 7 calendar days to retire the tasks. 
* "14" - Amazon Web Services sends the notification, and waits 14 calendar days to retire the tasks. * **principalArn** (*string*) -- The ARN of the principal, which can be a user, role, or the root user. If you specify the root user, it modifies the account setting for all users, roles, and the root user of the account unless a user or role explicitly overrides these settings. If this field is omitted, the setting is changed only for the authenticated user. In order to use this parameter, you must be the root user, or the principal. Note: You must use the root user when you set the Fargate wait time ( "fargateTaskRetirementWaitPeriod").Federated users assume the account setting of the root user and can't have explicit account settings set for them. Return type: dict Returns: **Response Syntax** { 'setting': { 'name': 'serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', 'value': 'string', 'principalArn': 'string', 'type': 'user'|'aws_managed' } } **Response Structure** * *(dict) --* * **setting** *(dict) --* The current account setting for a resource. * **name** *(string) --* The Amazon ECS resource name. * **value** *(string) --* Determines whether the account setting is on or off for the specified resource. * **principalArn** *(string) --* The ARN of the principal. It can be a user, role, or the root user. If this field is omitted, the authenticated user is assumed. * **type** *(string) --* Indicates whether Amazon Web Services manages the account setting, or if the user manages it. "aws_managed" account settings are read-only, as Amazon Web Services manages such on the customer's behalf. Currently, the "guardDutyActivate" account setting is the only one Amazon Web Services manages. 
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example modifies your account settings to opt in to the new ARN and resource ID format for Amazon ECS services. If you’re using this command as the root user, then changes apply to the entire AWS account, unless an IAM user or role explicitly overrides these settings for themselves. response = client.put_account_setting( name='serviceLongArnFormat', value='enabled', ) print(response) Expected Output: { 'setting': { 'name': 'serviceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, 'ResponseMetadata': { '...': '...', }, } This example modifies the account setting for a specific IAM user or IAM role to opt in to the new ARN and resource ID format for Amazon ECS container instances. If you’re using this command as the root user, then changes apply to the entire AWS account, unless an IAM user or role explicitly overrides these settings for themselves. response = client.put_account_setting( name='containerInstanceLongArnFormat', value='enabled', principalArn='arn:aws:iam:::user/principalName', ) print(response) Expected Output: { 'setting': { 'name': 'containerInstanceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::user/principalName', }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / list_tags_for_resource list_tags_for_resource ********************** ECS.Client.list_tags_for_resource(**kwargs) List the tags for an Amazon ECS resource. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( resourceArn='string' ) Parameters: **resourceArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) that identifies the resource to list the tags for. Currently, the supported resources are Amazon ECS tasks, services, task definitions, clusters, and container instances. 
Return type: dict Returns: **Response Syntax** { 'tags': [ { 'key': 'string', 'value': 'string' }, ] } **Response Structure** * *(dict) --* * **tags** *(list) --* The tags for the resource. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example lists the tags for the 'dev' cluster. 
response = client.list_tags_for_resource( resourceArn='arn:aws:ecs:region:aws_account_id:cluster/dev', ) print(response) Expected Output: { 'tags': [ { 'key': 'team', 'value': 'dev', }, ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / create_service create_service ************** ECS.Client.create_service(**kwargs) Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the "desiredCount", Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, use UpdateService. Note: On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition. Note: Amazon Elastic Inference (EI) is no longer available to customers. In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the *Amazon Elastic Container Service Developer Guide*. You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. "volumeConfigurations" is only supported for REPLICA service and not DAEMON service. For more information, see Amazon EBS volumes in the *Amazon Elastic Container Service Developer Guide*. Tasks for services that don't use a load balancer are considered healthy if they're in the "RUNNING" state. Tasks for services that use a load balancer are considered healthy if they're in the "RUNNING" state and are reported as healthy by the load balancer. There are two service scheduler strategies available: * "REPLICA" - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. 
By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the *Amazon Elastic Container Service Developer Guide*. * "DAEMON" - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Amazon ECS services in the *Amazon Elastic Container Service Developer Guide*. The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are: * ECS When you create a service which uses the "ECS" deployment controller, you can choose between the following deployment strategies (which you can set in the "strategy" field in "deploymentConfiguration"): * "ROLLING": When you create a service which uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. For more information, see Deploy Amazon ECS services by replacing tasks in the *Amazon Elastic Container Service Developer Guide*. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. 
* Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). * Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds. * Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. * "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. For more information, see Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. 
Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant roll back: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. * Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. When creating a service that uses the "EXTERNAL" deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the *Amazon Elastic Container Service Developer Guide*. When the service scheduler launches new tasks, it determines task placement.
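The "EXTERNAL" controller case above can be sketched as follows. This is a minimal, hypothetical example: the cluster and service names are placeholders, and the parameters are assembled as a plain dict rather than sent to the API.

```python
# Minimal parameters for a service that uses an external deployment
# controller: only the service name is required, because task sets are
# managed separately (for example, with create_task_set).
# The cluster and service names below are placeholders.
params = {
    "cluster": "my-cluster",
    "serviceName": "my-external-service",
    "deploymentController": {"type": "EXTERNAL"},
}

# With a configured client you would then call:
# client = boto3.client("ecs")
# response = client.create_service(**params)
```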
For information about task placement and task placement strategies, see Amazon ECS task placement in the *Amazon Elastic Container Service Developer Guide* See also: AWS API Documentation **Request Syntax** response = client.create_service( cluster='string', serviceName='string', taskDefinition='string', availabilityZoneRebalancing='ENABLED'|'DISABLED', loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], desiredCount=123, clientToken='string', launchType='EC2'|'FARGATE'|'EXTERNAL', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], platformVersion='string', role='string', deploymentConfiguration={ 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, healthCheckGracePeriodSeconds=123, schedulingStrategy='REPLICA'|'DAEMON', deploymentController={ 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 
tags=[ { 'key': 'string', 'value': 'string' }, ], enableECSManagedTags=True|False, propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', enableExecuteCommand=True|False, serviceConnectConfiguration={ 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], vpcLatticeConfigurations=[ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] ) type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed. type serviceName: string param serviceName: **[REQUIRED]** The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. 
Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions. type taskDefinition: string param taskDefinition: The "family" and "revision" ( "family:revision") or full ARN of the task definition to run in your service. If a "revision" isn't specified, the latest "ACTIVE" revision is used. A task definition must be specified if the service uses either the "ECS" or "CODE_DEPLOY" deployment controllers. For more information about deployment types, see Amazon ECS deployment types. type availabilityZoneRebalancing: string param availabilityZoneRebalancing: Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the *Amazon Elastic Container Service Developer Guide*. type loadBalancers: list param loadBalancers: A load balancer object representing the load balancers to use with your service. For more information, see Service load balancing in the *Amazon Elastic Container Service Developer Guide*. If the service uses the rolling update ( "ECS") deployment controller and uses either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. If the service uses the "CODE_DEPLOY" deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating a CodeDeploy deployment group, you specify two target groups (referred to as a "targetGroupPair"). During a deployment, CodeDeploy determines which task set in your service has the status "PRIMARY", and it associates one target group with it.
Then, it also associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that you can use to perform validation tests with Lambda functions before routing production traffic to it. If you use the "CODE_DEPLOY" deployment controller, these values can be changed when updating the service. For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group that's specified here. For Classic Load Balancers, this object must contain the load balancer name, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer that's specified here. Services with tasks that use the "awsvpc" network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers aren't supported. Also, when you create any target groups for these services, you must choose "ip" as the target type, not "instance". This is because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. * *(dict) --* The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
* **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. type serviceRegistries: list param serviceRegistries: The details of the service discovery registry to associate with this service. For more information, see Service discovery. Note: Each service may be associated with one service registry. Multiple service registries for each service aren't supported.
* *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. 
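A hedged sketch of the "serviceRegistries" shape described above, for a task definition that uses the "awsvpc" network mode with an SRV record. All names and the registry ARN are placeholders:

```python
# With awsvpc network mode and an SRV record, specify either a
# containerName/containerPort pair or a port value, but never both.
params = {
    "serviceName": "my-service",
    "taskDefinition": "my-family:1",
    "desiredCount": 2,
    "serviceRegistries": [
        {
            # Placeholder Cloud Map service ARN.
            "registryArn": "arn:aws:servicediscovery:region:aws_account_id:service/srv-example",
            "containerName": "web",
            "containerPort": 8080,
            # Alternatively: "port": 8080 (not together with the pair above).
        }
    ],
}
```

Only one registry entry is allowed per service, matching the note above.
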
type desiredCount: integer param desiredCount: The number of instantiations of the specified task definition to place and keep running in your service. This is required if "schedulingStrategy" is "REPLICA" or isn't specified. If "schedulingStrategy" is "DAEMON" then this isn't required. type clientToken: string param clientToken: An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 36 ASCII characters in the range of 33-126 (inclusive) are allowed. type launchType: string param launchType: The infrastructure that you run your service on. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. The "FARGATE" launch type runs your tasks on Fargate On-Demand infrastructure. Note: Fargate Spot infrastructure is available for use but a capacity provider strategy must be used. For more information, see Fargate capacity providers in the *Amazon ECS Developer Guide*. The "EC2" launch type runs your tasks on Amazon EC2 instances registered to your cluster. The "EXTERNAL" launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster. A service can use either a launch type or a capacity provider strategy. If a "launchType" is specified, the "capacityProviderStrategy" parameter must be omitted. type capacityProviderStrategy: list param capacityProviderStrategy: The capacity provider strategy to use for the service. If a "capacityProviderStrategy" is specified, the "launchType" parameter must be omitted. If no "capacityProviderStrategy" or "launchType" is specified, the "defaultCapacityProviderStrategy" for the cluster is used. A capacity provider strategy can contain a maximum of 20 capacity providers. * *(dict) --* The details of a capacity provider strategy.
A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* **[REQUIRED]** The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used.
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. type platformVersion: string param platformVersion: The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. type role: string param role: The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition doesn't use the "awsvpc" network mode. If you specify the "role" parameter, you must also specify a load balancer object with the "loadBalancers" parameter. 
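The base/weight arithmetic described for capacity provider strategies can be illustrated with a small sketch. The helper below is only an approximation of how the scheduler distributes tasks (bases are satisfied first, then the remainder splits by weight); it is not part of the API:

```python
# A 1:4 weight split across the Fargate capacity providers, with one
# guaranteed On-Demand task via base=1.
strategy = [
    {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
    {"capacityProvider": "FARGATE_SPOT", "base": 0, "weight": 4},
]

def expected_split(strategy, total_tasks):
    """Approximate distribution: bases first, then remainder by weight."""
    counts = {s["capacityProvider"]: s["base"] for s in strategy}
    remaining = total_tasks - sum(counts.values())
    total_weight = sum(s["weight"] for s in strategy)
    for s in strategy:
        counts[s["capacityProvider"]] += remaining * s["weight"] // total_weight
    return counts

# For 11 tasks: 1 base task on FARGATE, then the remaining 10 split 1:4.
print(expected_split(strategy, 11))  # {'FARGATE': 3, 'FARGATE_SPOT': 8}
```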
Warning: If your account has already created the Amazon ECS service-linked role, that role is used for your service unless you specify a role here. The service-linked role is required if your task definition uses the "awsvpc" network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you don't specify a role here. For more information, see Using service-linked roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. If your specified role has a path other than "/", then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name "bar" has a path of "/foo/" then you would specify "/foo/bar" as the role name. For more information, see Friendly names and paths in the *IAM User Guide*. type deploymentConfiguration: dict param deploymentConfiguration: Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide* * **enable** *(boolean) --* **[REQUIRED]** Determines whether to use the deployment circuit breaker logic for the service.
* **rollback** *(boolean) --* **[REQUIRED]** Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. 
If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. * **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one, using the "minimumHealthyPercent" as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks.
The amount of time the service scheduler can wait for is determined by the container health check settings. For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. The default "minimumHealthyPercent" value for a service using the "DAEMON" service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. 
If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms. * **alarmNames** *(list) --* **[REQUIRED]** One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* **[REQUIRED]** Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* **[REQUIRED]** Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. 
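The "deploymentConfiguration" options above can be combined as in the following sketch of a rolling update with the circuit breaker and alarm-based rollback enabled. The alarm name is a placeholder, and the derived bounds only restate the rounding rules described above:

```python
import math

deployment_configuration = {
    "strategy": "ROLLING",
    # With desiredCount=4, maximumPercent=200 allows up to 8 tasks in
    # the RUNNING or PENDING state during the rollout.
    "maximumPercent": 200,
    # At least half of desiredCount must stay RUNNING (rounded up).
    "minimumHealthyPercent": 50,
    "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    "alarms": {
        "alarmNames": ["my-service-5xx-alarm"],  # placeholder alarm name
        "enable": True,
        "rollback": True,
    },
}

# Bounds implied by the percentages (rounded as described above).
desired_count = 4
max_running = desired_count * deployment_configuration["maximumPercent"] // 100
min_healthy = math.ceil(
    desired_count * deployment_configuration["minimumHealthyPercent"] / 100
)
```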
* **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. 
* POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* type placementConstraints: list param placementConstraints: An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime. * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. type placementStrategy: list param placementStrategy: The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service. * *(dict) --* The task placement strategy for a task or service. 
For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs.availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. type networkConfiguration: dict param networkConfiguration: The network configuration for the service. This parameter is required for task definitions that use the "awsvpc" network mode to receive their own elastic network interface, and it isn't supported for other network modes. For more information, see Task networking in the *Amazon Elastic Container Service Developer Guide*. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* **[REQUIRED]** The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. 
* *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". type healthCheckGracePeriodSeconds: integer param healthCheckGracePeriodSeconds: The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you don't specify a health check grace period value, the default value of "0" is used. If you don't use any of the health checks, then "healthCheckGracePeriodSeconds" is unused. If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. type schedulingStrategy: string param schedulingStrategy: The scheduling strategy to use for the service. For more information, see Services. There are two service scheduler strategies available: * "REPLICA"-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. 
This scheduler strategy is required if the service uses the "CODE_DEPLOY" or "EXTERNAL" deployment controller types. * "DAEMON"-The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that don't meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. Note: Tasks using the Fargate launch type or the "CODE_DEPLOY" or "EXTERNAL" deployment controller types don't support the "DAEMON" scheduling strategy. type deploymentController: dict param deploymentController: The deployment controller to use for the service. If no deployment controller is specified, the default value of "ECS" is used. * **type** *(string) --* **[REQUIRED]** The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are: * ECS When you create a service which uses the "ECS" deployment controller, you can choose between the following deployment strategies: * "ROLLING": When you create a service which uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. * Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). 
* Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds. * Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. * "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant roll back: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. 
* Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. type tags: list param tags: The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). type enableECSManagedTags: boolean param enableECSManagedTags: Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the *Amazon Elastic Container Service Developer Guide*. When you use Amazon ECS managed tags, you must set the "propagateTags" request parameter. type propagateTags: string param propagateTags: Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action. You must set this to a value other than "NONE" when you use Cost Explorer. For more information, see Amazon ECS usage reports in the *Amazon Elastic Container Service Developer Guide*. The default is "NONE". 
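The tagging parameters above work together in a single request. A hedged sketch of how "tags", "propagateTags", and "enableECSManagedTags" might be combined (tag keys/values and resource names are hypothetical; the live API call is commented out):

```python
# Sketch: service tagging parameters for create_service. The keys and
# values below are hypothetical and must respect the documented limits
# (50 tags per resource, 128-char keys, 256-char values, no "aws:" prefix).
service_kwargs = {
    "tags": [
        {"key": "team", "value": "payments"},
        {"key": "environment", "value": "production"},
    ],
    # Propagate task definition tags onto each task; must be something
    # other than "NONE" if you rely on Cost Explorer usage reports.
    "propagateTags": "TASK_DEFINITION",
    "enableECSManagedTags": True,
}

# Local sanity check mirroring the documented restrictions.
for tag in service_kwargs["tags"]:
    assert not tag["key"].lower().startswith("aws:")
    assert len(tag["key"]) <= 128 and len(tag["value"]) <= 256

# import boto3
# boto3.client("ecs").create_service(
#     cluster="my-cluster",            # hypothetical
#     serviceName="my-service",        # hypothetical
#     taskDefinition="my-task-def:1",  # hypothetical
#     desiredCount=1,
#     **service_kwargs,
# )
```

Setting "propagateTags" explicitly alongside "enableECSManagedTags" follows the note above that managed tags require the "propagateTags" request parameter.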
type enableExecuteCommand: boolean param enableExecuteCommand: Determines whether the execute command functionality is turned on for the service. If "true", this enables execute command functionality on all containers in the service tasks. type serviceConnectConfiguration: dict param serviceConnectConfiguration: The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* **[REQUIRED]** Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. 
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* **[REQUIRED]** The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. 
Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* **[REQUIRED]** The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. 
* **header** *(dict) --* **[REQUIRED]** The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* **[REQUIRED]** The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* **[REQUIRED]** The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. * **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/"HTTP2"/"GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour. 
* **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* **[REQUIRED]** The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". * This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
* For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, this might be the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to. * **logDriver** *(string) --* **[REQUIRED]** The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. 
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. 
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
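The "awslogs" options described above can be collected into a single "logConfiguration" object. A sketch assuming a Fargate task (the log group, Region, and prefix values are hypothetical), with the "awslogs-stream-prefix" requirement for Fargate and the "non-blocking" mode made explicit:

```python
# Sketch: an "awslogs" logConfiguration. Group, Region, and prefix values
# below are hypothetical examples, not defaults.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",     # hypothetical log group
        "awslogs-create-group": "true",         # needs logs:CreateLogGroup
        "awslogs-region": "us-east-1",          # hypothetical Region
        "awslogs-stream-prefix": "my-service",  # required on Fargate
        "mode": "non-blocking",                 # the default since June 25, 2025
        "max-buffer-size": "25m",               # buffer used in non-blocking mode
    },
}

# All option values are passed to the log driver as strings,
# including booleans and sizes.
assert all(isinstance(v, str) for v in log_configuration["options"].values())
```

With this shape, the log streams take the format "my-service/container-name/ecs-task-id", which keeps streams traceable back to the service, container, and task as recommended above.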
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. This can help to resolve a potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the secret. * **valueFrom** *(string) --* **[REQUIRED]** The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. type volumeConfigurations: list param volumeConfigurations: The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* **[REQUIRED]** The name of the volume. This value must match the volume name from the "Volume" object in the task definition.
* **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. * **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. 
* General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type.
* "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* **[REQUIRED]** The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". type vpcLatticeConfigurations: list param vpcLatticeConfigurations: The VPC Lattice configuration for the service being created.
* *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure. * **targetGroupArn** *(string) --* **[REQUIRED]** The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* **[REQUIRED]** The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition. rtype: dict returns: **Response Syntax** { 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 
'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 
'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 
'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' } } **Response Structure** * *(dict) --* * **service** *(dict) --* The full description of your service following the create call. A service will return either a "capacityProviderStrategy" or "launchType" parameter, but not both, depending on where one was specified when it was created. If a service is using the "ECS" deployment controller, the "deploymentController" and "taskSets" parameters will not be returned. If the service uses the "CODE_DEPLOY" deployment controller, the "deploymentController", "taskSets", and "deployments" parameters will be returned; however, the "deployments" parameter will be an empty list. * **serviceArn** *(string) --* The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **serviceName** *(string) --* The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that hosts the service. * **loadBalancers** *(list) --* A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer.
The container name is as it appears in a container definition. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. 
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this service. For more information, see Service Discovery. * *(dict) --* The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **status** *(string) --* The status of the service. The valid values are "ACTIVE", "DRAINING", or "INACTIVE". 
* **desiredCount** *(integer) --* The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService. * **runningCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **launchType** *(string) --* The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy. * **capacityProviderStrategy** *(list) --* The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity.
When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. As an example scenario for using weights, suppose a strategy contains two capacity providers that both have a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used.
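The base-then-weight behavior described above can be sketched as a toy calculation. This illustrates only the arithmetic, not the actual ECS scheduler, which also considers available capacity and placement constraints; it assumes at least one provider has a nonzero weight:

```python
def split_tasks(total, strategy):
    """Toy model: satisfy each provider's 'base' first, then split the
    remainder in proportion to each provider's 'weight' (floor division,
    so a small remainder may be left unassigned in this sketch)."""
    placed = {}
    remaining = total
    for s in strategy:
        b = min(s.get('base', 0), remaining)   # only one base is allowed by ECS
        placed[s['capacityProvider']] = b
        remaining -= b
    total_weight = sum(s.get('weight', 0) for s in strategy)
    for s in strategy:
        placed[s['capacityProvider']] += remaining * s.get('weight', 0) // total_weight
    return placed

# The 1:4 weight example from the text:
strategy = [
    {'capacityProvider': 'capacityProviderA', 'weight': 1},
    {'capacityProvider': 'capacityProviderB', 'weight': 4},
]
print(split_tasks(10, strategy))  # -> {'capacityProviderA': 2, 'capacityProviderB': 8}
```

With a "base" of 2 on one provider and equal weights, that provider receives its two tasks first and the rest are split evenly.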
* **platformVersion** *(string) --* The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX"). * **taskDefinition** *(string) --* The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService. * **deploymentConfiguration** *(dict) --* Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide* * **enable** *(boolean) --* Determines whether to use the deployment circuit breaker logic for the service. * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. 
If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. 
If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. * **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the "minimumHealthyPercent" as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services . For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. 
The amount of time the service scheduler can wait for is determined by the container health check settings. For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. The default "minimumHealthyPercent" value for a service using the "DAEMON" service scheduler is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. 
If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms. * **alarmNames** *(list) --* One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. 
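The rounding rules for "minimumHealthyPercent" and "maximumPercent" described above can be sketched in a few lines. This is an illustration of the arithmetic only, not part of the API:

```python
import math

def deployment_task_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    """Task-count limits the scheduler respects during a rolling (ECS) deployment.

    Per the parameter descriptions above, the lower limit rounds UP and the
    upper limit rounds DOWN.
    """
    min_running = math.ceil(desired_count * minimum_healthy_percent / 100)
    max_total = math.floor(desired_count * maximum_percent / 100)
    return min_running, max_total

# desiredCount=4, minimumHealthyPercent=50%, maximumPercent=200%:
# the scheduler may stop down to 2 tasks and run up to 8 in total.
print(deployment_task_bounds(4, 50, 200))  # (2, 8)
```

With the replica-scheduler defaults (100% minimum, 200% maximum), the scheduler can only add tasks before removing old ones, which is why replacement tasks are started first when capacity allows.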
* **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. 
* POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* * **taskSets** *(list) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * *(dict) --* Information about a set of Amazon ECS tasks in either a CodeDeploy or an "EXTERNAL" deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. 
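As a rough sketch, the lifecycle-hook fields described above might be assembled into a "deploymentConfiguration" argument for "create_service" or "update_service" like this. The Lambda function and IAM role ARNs are placeholders, not real resources:

```python
# Hypothetical ARNs and stage choices, for illustration only.
deployment_configuration = {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 10,  # required when using the BLUE_GREEN strategy
    "lifecycleHooks": [
        {
            # hookTargetArn must currently be a Lambda function ARN.
            "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:deploy-hook",
            # Role that grants Amazon ECS permission to invoke the function.
            "roleArn": "arn:aws:iam::111122223333:role/ecsDeployHookRole",
            # Stages at which the hook runs, drawn from the list above.
            "lifecycleStages": ["PRE_SCALE_UP", "POST_TEST_TRAFFIC_SHIFT"],
        }
    ],
}
```

The same shape is returned under "deploymentConfiguration" when you describe the service, which is how the fields in this section appear in a response.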
If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the "RunTask" or "CreateService" APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. 
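A capacity provider strategy following the base and weight rules above might look like the sketch below. The validation function only mirrors the constraints stated in this section; it is not an API call:

```python
capacity_provider_strategy = [
    # base: run at least 2 tasks on FARGATE before weights are considered.
    {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
    # Remaining tasks split 1:4 in favor of FARGATE_SPOT.
    {"capacityProvider": "FARGATE_SPOT", "weight": 4},
]

def check_strategy(strategy):
    """Illustrative checks mirroring the constraints described above."""
    if len(strategy) > 20:
        raise ValueError("a strategy can contain at most 20 capacity providers")
    if not any(p.get("weight", 0) > 0 for p in strategy):
        raise ValueError("at least one capacity provider needs a nonzero weight")
    if sum(1 for p in strategy if "base" in p) > 1:
        raise ValueError("only one capacity provider may define a base")

check_strategy(capacity_provider_strategy)  # passes silently
```

A strategy whose providers all have a weight of "0" would make any "RunTask" or "CreateService" call using it fail, which is what the nonzero-weight check reflects.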
* **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. 
For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. 
* **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. 
If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. 
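The "computedDesiredCount" described earlier follows directly from the task set's "scale" value; a quick sketch of the arithmetic, for illustration only:

```python
import math

def computed_desired_count(service_desired_count, scale_value):
    """computedDesiredCount = desiredCount * scale%, always rounded up."""
    return math.ceil(service_desired_count * scale_value / 100)

# A scale of 30% against 4 desired tasks yields 1.2, which rounds up to 2,
# matching the example in the computedDesiredCount description above.
print(computed_desired_count(4, 30))  # 2
```

A task set reaches "STEADY_STATE" only once its "runningCount" equals this computed value (and the other stability conditions above hold).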
Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. 
Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **deployments** *(list) --* The current state of deployments for the service. * *(dict) --* The details of an Amazon ECS service deployment. This is used only when a service uses the "ECS" deployment controller type. * **id** *(string) --* The ID of the deployment. * **status** *(string) --* The status of the deployment. The following describes each state. PRIMARY The most recent deployment of a service. ACTIVE A service deployment that still has running tasks, but is in the process of being replaced with a new "PRIMARY" deployment. INACTIVE A deployment that has been completely replaced. * **taskDefinition** *(string) --* The most recent task definition that was specified for the tasks in the service to use. * **desiredCount** *(integer) --* The most recent desired count of tasks that was specified for the service to deploy or maintain. * **pendingCount** *(integer) --* The number of tasks in the deployment that are in the "PENDING" status. * **runningCount** *(integer) --* The number of tasks in the deployment that are in the "RUNNING" status. * **failedTasks** *(integer) --* The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a "RUNNING" state, or if it fails any of its defined health checks and is stopped. 
Note: Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service deployment was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the service deployment was last updated. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that the deployment is using. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the "RunTask" or "CreateService" APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. 
* **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **launchType** *(string) --* The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the *Amazon Elastic Container Service Developer Guide*. * **platformVersion** *(string) --* The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the "LATEST" platform version is used. 
For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that the tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service, for example, "LINUX". * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **rolloutState** *(string) --* Note: The "rolloutState" of a service is only returned for services that use the rolling update ( "ECS") deployment type that aren't behind a Classic Load Balancer. The rollout state of the deployment. When a service deployment is started, it begins in an "IN_PROGRESS" state. 
When the service reaches a steady state, the deployment transitions to a "COMPLETED" state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a "FAILED" state. A deployment in "FAILED" state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker. * **rolloutStateReason** *(string) --* A description of the rollout state of a deployment. * **serviceConnectConfiguration** *(dict) --* The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments. The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. 
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. 
Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. 
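The Service Connect fields described above ("namespace", "portName", "discoveryName", "clientAliases", "port", "dnsName") are the same fields you supply in the "serviceConnectConfiguration" argument of "create_service" or "update_service". A minimal sketch of such a configuration; the namespace "internal", port name "web", and alias "backend" are hypothetical placeholder values, not defaults:

```python
# Hypothetical Service Connect configuration; the namespace, port name, and
# alias below are placeholder values, not real resources.
service_connect_configuration = {
    "enabled": True,
    "namespace": "internal",  # Cloud Map namespace name or ARN, same Region as the cluster
    "services": [
        {
            "portName": "web",           # must match a portMappings name in the task definition
            "discoveryName": "backend",  # Cloud Map service name that Amazon ECS creates
            "clientAliases": [
                {
                    "port": 80,            # port the Service Connect proxy listens on
                    "dnsName": "backend",  # short name client tasks use to connect
                }
            ],
        }
    ],
}

# With credentials configured, this dict would be passed through as:
# boto3.client("ecs").update_service(
#     cluster="my-cluster", service="my-service",
#     serviceConnectConfiguration=service_connect_configuration)
```

The same shape is returned in the "serviceConnectConfiguration" field of a deployment, so it can also be compared across deployments when troubleshooting.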
* **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. * **header** *(dict) --* The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. * **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. 
A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/ "HTTP2"/ "GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour. * **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". 
* This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. * For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, you could install the Fluentd output aggregators or run a remote host with Logstash to send Gelf logs to. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. That way, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. 
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. 
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
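To make the options above concrete, here is a sketch of a "logConfiguration" block that combines the "awslogs" options with the delivery-mode options. The log group name, Region, and stream prefix are placeholder values; the same structure is used both in container definitions and in the Service Connect "logConfiguration" shown in this response:

```python
# Sketch of a logConfiguration block. The log group, Region, and stream
# prefix are placeholders; adjust for your account.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",     # must exist unless awslogs-create-group is "true"
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",  # required on Fargate; recommended on EC2
        "mode": "non-blocking",                 # buffer logs instead of blocking stdout/stderr writes
        "max-buffer-size": "25m",               # in-memory buffer used by non-blocking mode
    },
}
```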
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. 
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **serviceConnectResources** *(list) --* The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name. * *(dict) --* The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service. A task can resolve the "dnsName" for each of the "clientAliases" of a service. However, a task can't resolve the discovery names. 
If you want to connect to a service, refer to the "ServiceConnectConfiguration" of that service for the list of "clientAliases" that you can use. * **discoveryName** *(string) --* The discovery name of this Service Connect resource. The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **discoveryArn** *(string) --* The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS. * **volumeConfigurations** *(list) --* The details of the volume that was "configuredAtLaunch". You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. 
* **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. 
You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. 
This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. 
* **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the deployment. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment. * **vpcLatticeConfigurations** *(list) --* The VPC Lattice configuration for the service deployment. * *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group or groups that Amazon ECS tasks will be registered to. * **roleArn** *(string) --* The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure. 
* **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition. * **roleArn** *(string) --* The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer. * **events** *(list) --* The event stream for your service. A maximum of 100 of the latest events are displayed. * *(dict) --* The details for an event that's associated with a service. * **id** *(string) --* The ID string for the event. * **createdAt** *(datetime) --* The Unix timestamp for the time when the event was triggered. * **message** *(string) --* The event message. * **createdAt** *(datetime) --* The Unix timestamp for the time when the service was created. * **placementConstraints** *(list) --* The placement constraints for the tasks in the service. * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". 
For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **placementStrategy** *(list) --* The placement strategy that determines how tasks for the service are placed. * *(dict) --* The task placement strategy for a task or service. For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs .availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. * **networkConfiguration** *(dict) --* The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. 
* *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **healthCheckGracePeriodSeconds** *(integer) --* The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. * **schedulingStrategy** *(string) --* The scheduling strategy to use for the service. For more information, see Services. There are two service scheduler strategies available. * "REPLICA"-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. * "DAEMON"-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints. Note: Fargate tasks don't support the "DAEMON" scheduling strategy. * **deploymentController** *(dict) --* The deployment controller type the service is using. * **type** *(string) --* The deployment controller type to use. The deployment controller is the mechanism that determines how tasks are deployed for your service. 
The valid options are: * ECS When you create a service which uses the "ECS" deployment controller, you can choose between the following deployment strategies: * "ROLLING": When you create a service which uses the *rolling update* ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios: * Gradual service updates: You need to update your service incrementally without taking the entire service offline at once. * Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments). * Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one. * No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds. * Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners. * No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments). * Stateful applications: Your application maintains state that makes it difficult to run two parallel environments. * Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment. Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios. 
* "BLUE_GREEN": A *blue/green* deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios: * Service validation: When you need to validate new service revisions before directing production traffic to them * Zero downtime: When your service requires zero-downtime deployments * Instant roll back: When you need the ability to quickly roll back if issues are detected * Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect * External Use a third-party deployment controller. * Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it. * **tags** *(list) --* The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **createdBy** *(string) --* The principal that created the service. 
* **enableECSManagedTags** *(boolean) --* Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. * **enableExecuteCommand** *(boolean) --* Determines whether the execute command functionality is turned on for the service. If "true", the execute command functionality is turned on for all containers in tasks as part of the service. * **availabilityZoneRebalancing** *(string) --* Indicates whether to use Availability Zone rebalancing for the service. For more information, see Balancing an Amazon ECS service across Availability Zones in the *Amazon Elastic Container Service Developer Guide*. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.PlatformUnknownException" * "ECS.Client.exceptions.PlatformTaskDefinitionIncompatibilityException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.NamespaceNotFoundException" **Examples** This example creates a service in your default region called "ecs-simple-service". The service uses the "hello_world" task definition and it maintains 10 copies of that task. 
response = client.create_service( desiredCount=10, serviceName='ecs-simple-service', taskDefinition='hello_world', ) print(response) Expected Output: { 'service': { 'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/default', 'createdAt': datetime(2016, 8, 29, 16, 13, 47, 0, 242, 0), 'deploymentConfiguration': { 'maximumPercent': 200, 'minimumHealthyPercent': 100, }, 'deployments': [ { 'createdAt': datetime(2016, 8, 29, 16, 13, 47, 0, 242, 0), 'desiredCount': 10, 'id': 'ecs-svc/9223370564342348388', 'pendingCount': 0, 'runningCount': 0, 'status': 'PRIMARY', 'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6', 'updatedAt': datetime(2016, 8, 29, 16, 13, 47, 0, 242, 0), }, { 'createdAt': datetime(2016, 8, 29, 15, 52, 44, 0, 242, 0), 'desiredCount': 0, 'id': 'ecs-svc/9223370564343611322', 'pendingCount': 0, 'runningCount': 0, 'status': 'ACTIVE', 'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6', 'updatedAt': datetime(2016, 8, 29, 16, 11, 38, 0, 242, 0), }, ], 'desiredCount': 10, 'events': [ ], 'loadBalancers': [ ], 'pendingCount': 0, 'runningCount': 0, 'serviceArn': 'arn:aws:ecs:us-east-1:012345678910:service/ecs-simple-service', 'serviceName': 'ecs-simple-service', 'status': 'ACTIVE', 'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6', }, 'ResponseMetadata': { '...': '...', }, } This example creates a service in your default region called "ecs-simple-service-elb". The service uses the "ecs-demo" task definition and it maintains 10 copies of that task. You must reference an existing load balancer in the same region by its name. 
response = client.create_service( desiredCount=10, loadBalancers=[ { 'containerName': 'simple-app', 'containerPort': 80, 'loadBalancerName': 'EC2Contai-EcsElast-15DCDAURT3ZO2', }, ], role='ecsServiceRole', serviceName='ecs-simple-service-elb', taskDefinition='console-sample-app-static', ) print(response) Expected Output: { 'service': { 'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/default', 'createdAt': datetime(2016, 8, 29, 16, 2, 54, 0, 242, 0), 'deploymentConfiguration': { 'maximumPercent': 200, 'minimumHealthyPercent': 100, }, 'deployments': [ { 'createdAt': datetime(2016, 8, 29, 16, 2, 54, 0, 242, 0), 'desiredCount': 10, 'id': 'ecs-svc/9223370564343000923', 'pendingCount': 0, 'runningCount': 0, 'status': 'PRIMARY', 'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/console-sample-app-static:6', 'updatedAt': datetime(2016, 8, 29, 16, 2, 54, 0, 242, 0), }, ], 'desiredCount': 10, 'events': [ ], 'loadBalancers': [ { 'containerName': 'simple-app', 'containerPort': 80, 'loadBalancerName': 'EC2Contai-EcsElast-15DCDAURT3ZO2', }, ], 'pendingCount': 0, 'roleArn': 'arn:aws:iam::012345678910:role/ecsServiceRole', 'runningCount': 0, 'serviceArn': 'arn:aws:ecs:us-east-1:012345678910:service/ecs-simple-service-elb', 'serviceName': 'ecs-simple-service-elb', 'status': 'ACTIVE', 'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/console-sample-app-static:6', }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / untag_resource untag_resource ************** ECS.Client.untag_resource(**kwargs) Deletes specified tags from a resource. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( resourceArn='string', tagKeys=[ 'string', ] ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the resource to delete tags from. Currently, the supported resources are Amazon ECS capacity providers, tasks, services, task definitions, clusters, and container instances. 
* **tagKeys** (*list*) -- **[REQUIRED]** The keys of the tags to be removed. * *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ResourceNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example deletes the 'team' tag from the 'dev' cluster. response = client.untag_resource( resourceArn='arn:aws:ecs:region:aws_account_id:cluster/dev', tagKeys=[ 'team', ], ) print(response) Expected Output: { 'ResponseMetadata': { '...': '...', }, } ECS / Client / put_attributes put_attributes ************** ECS.Client.put_attributes(**kwargs) Create or update an attribute on an Amazon ECS resource. If the attribute doesn't exist, it's created. If the attribute exists, its value is replaced with the specified value. To delete an attribute, use DeleteAttributes. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.put_attributes( cluster='string', attributes=[ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ] ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that contains the resource to apply attributes. If you do not specify a cluster, the default cluster is assumed. * **attributes** (*list*) -- **[REQUIRED]** The attributes to apply to your resource. You can specify up to 10 custom attributes for each resource. You can specify up to 10 attributes in a single call. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. 
For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). Return type: dict Returns: **Response Syntax** { 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ] } **Response Structure** * *(dict) --* * **attributes** *(list) --* The attributes applied to your resource. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. 
It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). **Exceptions** * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.TargetNotFoundException" * "ECS.Client.exceptions.AttributeLimitExceededException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / create_capacity_provider create_capacity_provider ************************ ECS.Client.create_capacity_provider(**kwargs) Creates a new capacity provider. Capacity providers are associated with an Amazon ECS cluster and are used in capacity provider strategies to facilitate cluster auto scaling. Only capacity providers that use an Auto Scaling group can be created. Amazon ECS tasks on Fargate use the "FARGATE" and "FARGATE_SPOT" capacity providers. These providers are available to all accounts in the Amazon Web Services Regions that Fargate supports. See also: AWS API Documentation **Request Syntax** response = client.create_capacity_provider( name='string', autoScalingGroupProvider={ 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, tags=[ { 'key': 'string', 'value': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name of the capacity provider. Up to 255 characters are allowed. 
They include letters (both upper and lowercase letters), numbers, underscores (_), and hyphens (-). The name can't be prefixed with "aws", "ecs", or "fargate". * **autoScalingGroupProvider** (*dict*) -- **[REQUIRED]** The details of the Auto Scaling group for the capacity provider. * **autoScalingGroupArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name. * **managedScaling** *(dict) --* The managed scaling settings for the Auto Scaling group capacity provider. * **status** *(string) --* Determines whether to use managed scaling for the capacity provider. * **targetCapacity** *(integer) --* The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than "0" and less than or equal to "100". For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a "targetCapacity" of "90". The default value of "100" percent results in the Amazon EC2 instances in your Auto Scaling group being completely used. * **minimumScalingStepSize** *(integer) --* The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of "1" is used. When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size. If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand. * **maximumScalingStepSize** *(integer) --* The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. 
If this parameter is omitted, the default value of "10000" is used. * **instanceWarmupPeriod** *(integer) --* The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of "300" seconds is used. * **managedTerminationProtection** *(string) --* The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off. Warning: When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work. When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the *Auto Scaling User Guide*. When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in. * **managedDraining** *(string) --* The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider. * **tags** (*list*) -- The metadata that you apply to the capacity provider to categorize and organize them more conveniently. Each tag consists of a key and an optional value. You define both of them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. 
* **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'capacityProvider': { 'capacityProviderArn': 'string', 'name': 'string', 'status': 'ACTIVE'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ] } } **Response Structure** * *(dict) --* * **capacityProvider** *(dict) --* The full description of the new capacity provider. * **capacityProviderArn** *(string) --* The Amazon Resource Name (ARN) that identifies the capacity provider. * **name** *(string) --* The name of the capacity provider. * **status** *(string) --* The current status of the capacity provider. Only capacity providers in an "ACTIVE" state can be used in a cluster. When a capacity provider is successfully deleted, it has an "INACTIVE" status. * **autoScalingGroupProvider** *(dict) --* The Auto Scaling group settings for the capacity provider. * **autoScalingGroupArn** *(string) --* The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name. * **managedScaling** *(dict) --* The managed scaling settings for the Auto Scaling group capacity provider. * **status** *(string) --* Determines whether to use managed scaling for the capacity provider. * **targetCapacity** *(integer) --* The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than "0" and less than or equal to "100". 
For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a "targetCapacity" of "90". The default value of "100" percent results in the Amazon EC2 instances in your Auto Scaling group being completely used. * **minimumScalingStepSize** *(integer) --* The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of "1" is used. When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size. If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand. * **maximumScalingStepSize** *(integer) --* The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of "10000" is used. * **instanceWarmupPeriod** *(integer) --* The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of "300" seconds is used. * **managedTerminationProtection** *(string) --* The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off. Warning: When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work. When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. 
The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the *Auto Scaling User Guide*. When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in. * **managedDraining** *(string) --* The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider. * **updateStatus** *(string) --* The update status of the capacity provider. The following are the possible states that are returned. DELETE_IN_PROGRESS The capacity provider is in the process of being deleted. DELETE_COMPLETE The capacity provider was successfully deleted and has an "INACTIVE" status. DELETE_FAILED The capacity provider can't be deleted. The update status reason provides further details about why the delete failed. * **updateStatusReason** *(string) --* The update status reason. This provides further details about the update status for the capacity provider. * **tags** *(list) --* The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). 
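Putting the request parameters described above together, a minimal sketch of a "create_capacity_provider" request might look like the following. This is a hypothetical example: the provider name, account ID, Auto Scaling group ARN, and tag are placeholder values, and the call itself is left commented out because it requires live AWS credentials and an existing Auto Scaling group.

```python
# Hypothetical create_capacity_provider request; all names and ARNs below
# are placeholders, not values taken from a real account.
request = {
    # The name can't be prefixed with "aws", "ecs", or "fargate".
    'name': 'demo-capacity-provider',
    'autoScalingGroupProvider': {
        'autoScalingGroupArn': (
            'arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:'
            'example-uuid:autoScalingGroupName/demo-asg'
        ),
        'managedScaling': {
            'status': 'ENABLED',
            'targetCapacity': 90,          # keep roughly 10% spare capacity
            'minimumScalingStepSize': 1,   # the default when omitted
            'maximumScalingStepSize': 100,
            'instanceWarmupPeriod': 300,   # seconds; the default when omitted
        },
        # Managed termination protection only works when managed scaling
        # is also enabled.
        'managedTerminationProtection': 'ENABLED',
    },
    'tags': [{'key': 'team', 'value': 'dev'}],
}

# With credentials configured, the call would be:
# import boto3
# client = boto3.client('ecs')
# response = client.create_capacity_provider(**request)
```

On success, the returned "capacityProvider" carries a "status" field; only capacity providers in an "ACTIVE" state can be used in a cluster.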
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.LimitExceededException" * "ECS.Client.exceptions.UpdateInProgressException" ECS / Client / update_cluster update_cluster ************** ECS.Client.update_cluster(**kwargs) Updates the cluster. See also: AWS API Documentation **Request Syntax** response = client.update_cluster( cluster='string', settings=[ { 'name': 'containerInsights', 'value': 'string' }, ], configuration={ 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, serviceConnectDefaults={ 'namespace': 'string' } ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The name of the cluster to modify the settings for. * **settings** (*list*) -- The cluster settings for your cluster. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting of observability setup. 
For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **configuration** (*dict*) -- The execute command configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. * **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. 
Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **serviceConnectDefaults** (*dict*) -- Use this parameter to set a default Service Connect namespace. 
After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **namespace** *(string) --* **[REQUIRED]** The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace that's used when you create a service and don't specify a Service Connect configuration. The namespace name can include up to 1024 characters. The name is case-sensitive. The name can't include greater than (>), less than (<), double quotation marks ("), or slash (/). If you enter an existing namespace name or ARN, then that namespace will be used. Any namespace type is supported. The namespace must be in this account and this Amazon Web Services Region. If you enter a new name, a Cloud Map namespace will be created. Amazon ECS creates a Cloud Map namespace with the "API calls" method of instance discovery only. This instance discovery method is the "HTTP" namespace type in the Command Line Interface. Other types of instance discovery aren't used by Service Connect. If you update the cluster with an empty string ("") for the namespace name, the cluster configuration for Service Connect is removed. 
Note that the namespace will remain in Cloud Map and must be deleted separately. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. Return type: dict Returns: **Response Syntax** { 'cluster': { 'clusterArn': 'string', 'clusterName': 'string', 'configuration': { 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, 'status': 'string', 'registeredContainerInstancesCount': 123, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'activeServicesCount': 123, 'statistics': [ { 'name': 'string', 'value': 'string' }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'settings': [ { 'name': 'containerInsights', 'value': 'string' }, ], 'capacityProviders': [ 'string', ], 'defaultCapacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attachmentsStatus': 'string', 'serviceConnectDefaults': { 'namespace': 'string' } } } **Response Structure** * *(dict) --* * **cluster** *(dict) --* Details about the cluster. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **clusterName** *(string) --* A user-generated string that you use to identify your cluster. * **configuration** *(dict) --* The execute command and managed storage configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. 
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. * **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. 
The following data volumes are managed by Amazon ECS: Amazon EBS. For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **status** *(string) --* The status of the cluster. The following are the possible states that are returned. ACTIVE The cluster is ready to accept tasks and if applicable you can register container instances with the cluster. PROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created. DEPROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an "INACTIVE" status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on "INACTIVE" clusters persisting. * **registeredContainerInstancesCount** *(integer) --* The number of container instances registered into the cluster. This includes container instances in both "ACTIVE" and "DRAINING" status. 
* **runningTasksCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingTasksCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **activeServicesCount** *(integer) --* The number of services that are running on the cluster in an "ACTIVE" state. You can view these services with ListServices. * **statistics** *(list) --* Additional information about your clusters that are separated by launch type. They include the following: * runningEC2TasksCount * runningFargateTasksCount * pendingEC2TasksCount * pendingFargateTasksCount * activeEC2ServiceCount * activeFargateServiceCount * drainingEC2ServiceCount * drainingFargateServiceCount * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the cluster to help you categorize and organize it. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. 
You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** *(list) --* The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is on or off for a cluster. * *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. 
This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting of observability setup. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **capacityProviders** *(list) --* The capacity providers associated with the cluster. * *(string) --* * **defaultCapacityProviderStrategy** *(list) --* The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateService APIs, or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. 
The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption-tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. 
For example, if a strategy contains two capacity providers that both have a weight of "1", then once the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **attachments** *(list) --* The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. 
For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attachmentsStatus** *(string) --* The status of the capacity providers associated with the cluster. The following are the states that are returned. UPDATE_IN_PROGRESS The available capacity providers for the cluster are updating. UPDATE_COMPLETE The capacity providers have successfully updated. UPDATE_FAILED The capacity provider updates failed. * **serviceConnectDefaults** *(dict) --* Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used. 
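As a sketch of the request shape documented above, the following builds an "update_cluster" parameter set that turns on Container Insights with enhanced observability and routes execute command logs to a CloudWatch log group. The cluster name and log group name are hypothetical placeholders, and the actual call (commented out) requires boto3 and configured AWS credentials:

```python
# Request parameters for update_cluster, built from the fields documented
# above. "my-cluster" and "/ecs/execute-command" are hypothetical values.
params = {
    "cluster": "my-cluster",
    "settings": [
        # "enhanced" turns on Container Insights with enhanced observability.
        {"name": "containerInsights", "value": "enhanced"},
    ],
    "configuration": {
        "executeCommandConfiguration": {
            # With logging=OVERRIDE, logConfiguration is required.
            "logging": "OVERRIDE",
            "logConfiguration": {
                # The log group must already exist.
                "cloudWatchLogGroupName": "/ecs/execute-command",
                "cloudWatchEncryptionEnabled": True,
            },
        }
    },
}

# Requires boto3 and configured AWS credentials:
# import boto3
# client = boto3.client("ecs")
# response = client.update_cluster(**params)
# print(response["cluster"]["settings"])
```

The response carries the full cluster object, so the applied settings can be verified from the "settings" list in the returned "cluster" dict.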
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.NamespaceNotFoundException" ECS / Client / get_waiter get_waiter ********** ECS.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object. Return type: "botocore.waiter.Waiter" ECS / Client / stop_task stop_task ********* ECS.Client.stop_task(**kwargs) Stops a running task. Any tags associated with the task will be deleted. When you call "StopTask" on a task, the equivalent of "docker stop" is issued to the containers running in the task. This results in a "SIGTERM" value and a default 30-second timeout, after which the "SIGKILL" value is sent and the containers are forcibly stopped. If the container handles the "SIGTERM" value gracefully and exits within 30 seconds from receiving it, no "SIGKILL" value is sent. For Windows containers, POSIX signals do not work and the runtime stops the container by sending a "CTRL_SHUTDOWN_EVENT". For more information, see Unable to react to graceful shutdown of (Windows) container #25982 on GitHub. Note: The default 30-second timeout can be configured on the Amazon ECS container agent with the "ECS_CONTAINER_STOP_TIMEOUT" variable. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.stop_task( cluster='string', task='string', reason='string' ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task to stop. If you do not specify a cluster, the default cluster is assumed. 
* **task** (*string*) -- **[REQUIRED]** The full Amazon Resource Name (ARN) of the task. * **reason** (*string*) -- An optional message specified when a task is stopped. For example, if you're using a custom scheduler, you can use this parameter to specify the reason for stopping the task here, and the message appears in subsequent DescribeTasks API operations on this task. Return type: dict Returns: **Response Syntax** { 'task': { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 
'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } } } **Response Structure** * *(dict) --* * **task** *(dict) --* The task that was stopped. * **attachments** *(list) --* The Elastic Network Adapter that's associated with the task if the task uses the "awsvpc" network mode. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. 
Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attributes** *(list) --* The attributes of the task. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. 
This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **availabilityZone** *(string) --* The Availability Zone for the task. * **capacityProviderName** *(string) --* The capacity provider that's associated with the task. * **clusterArn** *(string) --* The ARN of the cluster that hosts the task. * **connectivity** *(string) --* The connectivity status of a task. * **connectivityAt** *(datetime) --* The Unix timestamp for the time when the task last went into "CONNECTED" status. * **containerInstanceArn** *(string) --* The ARN of the container instance that hosts the task. * **containers** *(list) --* The containers that are associated with the task. * *(dict) --* A Docker container that's part of a task. * **containerArn** *(string) --* The Amazon Resource Name (ARN) of the container. * **taskArn** *(string) --* The ARN of the task. * **name** *(string) --* The name of the container. * **image** *(string) --* The image used for the container. * **imageDigest** *(string) --* The container image manifest digest. * **runtimeId** *(string) --* The ID of the Docker container. * **lastStatus** *(string) --* The last known status of the container. * **exitCode** *(integer) --* The exit code returned from the container. * **reason** *(string) --* A short (1024 max characters) human-readable string to provide additional details about a running or stopped container. * **networkBindings** *(list) --* The network bindings associated with the container. * *(dict) --* Details on the network bindings between a container and its host container instance. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. 
* **bindIP** *(string) --* The IP address that the container is bound to on the container instance. * **containerPort** *(integer) --* The port number on the container that's used with the network binding. * **hostPort** *(integer) --* The port number on the host that's used with the network binding. * **protocol** *(string) --* The protocol used for the network binding. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. 
For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which are the host ports that are bound to the container ports. * **hostPortRange** *(string) --* The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent. * **networkInterfaces** *(list) --* The network interfaces associated with the container. * *(dict) --* An object representing the elastic network interface for tasks that use the "awsvpc" network mode. * **attachmentId** *(string) --* The attachment ID for the network interface. * **privateIpv4Address** *(string) --* The private IPv4 address for the network interface. * **ipv6Address** *(string) --* The private IPv6 address for the network interface. * **healthStatus** *(string) --* The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as "UNKNOWN". * **managedAgents** *(list) --* The details of any Amazon ECS managed agents associated with the container. * *(dict) --* Details about the managed agent status for the container. * **lastStartedAt** *(datetime) --* The Unix timestamp for the time when the managed agent was last started. * **name** *(string) --* The name of the managed agent. When the execute command feature is turned on, the managed agent name is "ExecuteCommandAgent". * **reason** *(string) --* The reason why the managed agent is in the state it is in. * **lastStatus** *(string) --* The last known status of the managed agent. * **cpu** *(string) --* The number of CPU units set for the container. The value is "0" if no value was specified in the container definition when the task definition was registered. * **memory** *(string) --* The hard limit (in MiB) of memory set for the container. 
* **memoryReservation** *(string) --* The soft limit (in MiB) of memory set for the container. * **gpuIds** *(list) --* The IDs of each GPU assigned to the container. * *(string) --* * **cpu** *(string) --* The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, "1024"). It can also be expressed as a string using vCPUs (for example, "1 vCPU" or "1 vcpu"). String values are converted to an integer that indicates the CPU units when the task definition is registered. If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ("0.125" vCPUs) and "196608" CPU units ("192" vCPUs). If you do not specify a value, the parameter is ignored. This field is required for Fargate. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the "PENDING" state. * **desiredStatus** *(string) --* The desired status of the task. For more information, see Task Lifecycle. * **enableExecuteCommand** *(boolean) --* Determines whether execute command functionality is turned on for this task. If "true", execute command functionality is turned on for all the containers in the task. * **executionStoppedAt** *(datetime) --* The Unix timestamp for the time when the task execution stopped. * **group** *(string) --* The name of the task group that's associated with the task. * **healthStatus** *(string) --* The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as "HEALTHY", the task status also reports as "HEALTHY". 
If any essential containers in the task are reporting as "UNHEALTHY" or "UNKNOWN", the task status also reports as "UNHEALTHY" or "UNKNOWN". Note: The Amazon ECS container agent doesn't monitor or report on Docker health checks that are embedded in a container image and not specified in the container definition. For example, this includes those specified in a parent image or from the image's Dockerfile. Health check parameters that are specified in a container definition override any Docker health checks that are found in the container image. * **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **lastStatus** *(string) --* The last known status for the task. For more information, see Task Lifecycle. * **launchType** *(string) --* The infrastructure your task runs on. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, "1024"). If it's expressed as a string using GB (for example, "1GB" or "1 GB"), it's converted to an integer indicating the MiB when the task definition is registered. If you use the EC2 launch type, this field is optional. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the "cpu" parameter. 
* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **overrides** *(dict) --* One or more container overrides. * **containerOverrides** *(list) --* One or more container overrides that are sent to a task. * *(dict) --* The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is "{"containerOverrides": [ ] }". If a non-empty container override is specified, the "name" parameter must be included. You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide. * **name** *(string) --* The name of the container that receives the override. This parameter is required if any override is specified. * **command** *(list) --* The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. * *(string) --* * **environment** *(list) --* The environment variables to send to the container. 
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container, instead of the value from the container definition. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. 
* The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **cpu** *(integer) --* The number of "cpu" units reserved for the container, instead of the default value from the task definition. You must also specify a container name. * **memory** *(integer) --* The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. 
* **type** *(string) --* The type of resource to assign to a container. * **cpu** *(string) --* The CPU override for the task. * **inferenceAcceleratorOverrides** *(list) --* The Elastic Inference accelerator override for the task. * *(dict) --* Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name to override for the task. This parameter must match a "deviceName" specified in the task definition. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The memory override for the task. * **taskRoleArn** *(string) --* The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **ephemeralStorage** *(dict) --* The ephemeral storage setting override for the task. Note: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **platformVersion** *(string) --* The platform version your task runs on. 
A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX."). * **pullStartedAt** *(datetime) --* The Unix timestamp for the time when the container image pull began. * **pullStoppedAt** *(datetime) --* The Unix timestamp for the time when the container image pull completed. * **startedAt** *(datetime) --* The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the "PENDING" state to the "RUNNING" state. * **startedBy** *(string) --* The tag specified when a task is started. If an Amazon ECS service started the task, the "startedBy" parameter contains the deployment ID of that service. * **stopCode** *(string) --* The stop code indicating why a task was stopped. The "stoppedReason" might contain additional details. For more information about stop code, see Stopped tasks error codes in the *Amazon ECS Developer Guide*. * **stoppedAt** *(datetime) --* The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the "RUNNING" state to the "STOPPED" state. * **stoppedReason** *(string) --* The reason that the task was stopped. * **stoppingAt** *(datetime) --* The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the "RUNNING" state to "STOPPING". * **tags** *(list) --* The metadata that you apply to the task to help you categorize and organize the task. 
Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. 
You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **taskArn** *(string) --* The Amazon Resource Name (ARN) of the task. * **taskDefinitionArn** *(string) --* The ARN of the task definition that creates the task. * **version** *(integer) --* The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the "detail" object) to verify that the version in your event stream is current. * **ephemeralStorage** *(dict) --* The ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is "20" GiB and the maximum supported value is "200" GiB. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for the task. 
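The fields above can be read straight off the response dictionary. The following sketch walks a hand-built dict that mirrors the documented shape; in real code the dict would come from "client.stop_task(cluster=..., task=..., reason=...)", and the ARN, container names, and exit codes below are illustrative placeholders only.

```python
# Illustrative sample only: a hand-built dict that mirrors the documented
# StopTask response structure. In real code this would be the return value
# of client.stop_task(cluster=..., task=..., reason=...).
sample_response = {
    "task": {
        "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/my-cluster/abc123",
        "lastStatus": "STOPPED",
        "stopCode": "UserInitiated",
        "stoppedReason": "Scaling activity initiated by user",
        "containers": [
            {"name": "web", "lastStatus": "STOPPED", "exitCode": 0},
            {"name": "sidecar", "lastStatus": "STOPPED", "exitCode": 137},
        ],
    },
}


def summarize_stopped_task(response):
    """Return (stopCode, {container name: exit code}) from a StopTask response."""
    task = response["task"]
    exit_codes = {
        c["name"]: c.get("exitCode")  # exitCode may be absent if a container never ran
        for c in task.get("containers", [])
    }
    return task.get("stopCode"), exit_codes


stop_code, exits = summarize_stopped_task(sample_response)
print(stop_code, exits)  # UserInitiated {'web': 0, 'sidecar': 137}
```

Note the defensive ".get(...)" calls: most fields in this structure are optional and only present when they apply (for example, "stopCode" appears only on stopped tasks).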
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" register_container_instance *************************** ECS.Client.register_container_instance(**kwargs) Note: This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent. Registers an EC2 instance into the specified cluster. This instance becomes available to place containers on. See also: AWS API Documentation **Request Syntax** response = client.register_container_instance( cluster='string', instanceIdentityDocument='string', instanceIdentityDocumentSignature='string', totalResources=[ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], versionInfo={ 'agentVersion': 'string', 'agentHash': 'string', 'dockerVersion': 'string' }, containerInstanceArn='string', attributes=[ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], platformDevices=[ { 'id': 'string', 'type': 'GPU' }, ], tags=[ { 'key': 'string', 'value': 'string' }, ] ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to register your container instance with. If you do not specify a cluster, the default cluster is assumed. * **instanceIdentityDocument** (*string*) -- The instance identity document for the EC2 instance to register. This document can be found by running the following command from the instance: "curl http://169.254.169.254/latest/dynamic/instance-identity/document/" * **instanceIdentityDocumentSignature** (*string*) -- The instance identity document signature for the EC2 instance to register. 
This signature can be found by running the following command from the instance: "curl http://169.254.169.254/latest/dynamic/instance-identity/signature/" * **totalResources** (*list*) -- The resources available on the instance. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **versionInfo** (*dict*) -- The version information for the Amazon ECS container agent and Docker daemon that runs on the container instance. * **agentVersion** *(string) --* The version number of the Amazon ECS container agent. * **agentHash** *(string) --* The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository. * **dockerVersion** *(string) --* The Docker version that's running on the container instance. * **containerInstanceArn** (*string*) -- The ARN of the container instance (if it was previously registered). * **attributes** (*list*) -- The container instance attributes that this container instance supports. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. 
For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **platformDevices** (*list*) -- The devices that are available on the container instance. The only supported device type is a GPU. * *(dict) --* The devices that are available on the container instance. The only supported device type is a GPU. * **id** *(string) --* **[REQUIRED]** The ID for the GPUs on the container instance. The available GPU IDs can also be obtained on the container instance in the "/var/lib/ecs/gpu/nvidia_gpu_info.json" file. * **type** *(string) --* **[REQUIRED]** The type of device that's available on the container instance. The only supported value is "GPU". * **tags** (*list*) -- The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. 
* **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'containerInstance': { 'containerInstanceArn': 'string', 'ec2InstanceId': 'string', 'capacityProviderName': 'string', 'version': 123, 'versionInfo': { 'agentVersion': 'string', 'agentHash': 'string', 'dockerVersion': 'string' }, 'remainingResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'registeredResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'status': 'string', 'statusReason': 'string', 'agentConnected': True|False, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED', 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'registeredAt': datetime(2015, 1, 1), 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'healthStatus': { 'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'details': [ { 'type': 'CONTAINER_RUNTIME', 'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'lastUpdated': datetime(2015, 1, 1), 'lastStatusChange': datetime(2015, 1, 1) }, ] } } } **Response Structure** * *(dict) --* * **containerInstance** *(dict) --* The container instance that was registered. * **containerInstanceArn** *(string) --* The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **ec2InstanceId** *(string) --* The ID of the container instance. 
For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID. * **capacityProviderName** *(string) --* The capacity provider that's associated with the container instance. * **version** *(integer) --* The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the "detail" object) to verify that the version in your event stream is current. * **versionInfo** *(dict) --* The version information for the Amazon ECS container agent and Docker daemon running on the container instance. * **agentVersion** *(string) --* The version number of the Amazon ECS container agent. * **agentHash** *(string) --* The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository. * **dockerVersion** *(string) --* The Docker version that's running on the container instance. * **remainingResources** *(list) --* For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the "host" or "bridge" network mode). Any port that's not specified here is available for new tasks. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. 
* **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **registeredResources** *(list) --* For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. 
* *(string) --* * **status** *(string) --* The status of the container instance. The valid values are "REGISTERING", "REGISTRATION_FAILED", "ACTIVE", "INACTIVE", "DEREGISTERING", or "DRAINING". If your account has opted in to the "awsvpcTrunking" account setting, then any newly registered container instance will transition to a "REGISTERING" status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a "REGISTRATION_FAILED" status. You can describe the container instance and see the reason for failure in the "statusReason" parameter. Once the container instance is terminated, the instance transitions to a "DEREGISTERING" status while the trunk elastic network interface is deprovisioned. The instance then transitions to an "INACTIVE" status. The "ACTIVE" status indicates that the container instance can accept tasks. The "DRAINING" status indicates that new tasks aren't placed on the container instance and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the *Amazon Elastic Container Service Developer Guide*. * **statusReason** *(string) --* The reason that the container instance reached its current status. * **agentConnected** *(boolean) --* This parameter returns "true" if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns "false". Only instances connected to an agent can accept task placement requests. * **runningTasksCount** *(integer) --* The number of tasks on the container instance that have a desired status ( "desiredStatus") of "RUNNING". * **pendingTasksCount** *(integer) --* The number of tasks on the container instance that are in the "PENDING" status. * **agentUpdateStatus** *(string) --* The status of the most recent agent update. If an update was never requested, this value is "NULL". 
* **attributes** *(list) --* The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **registeredAt** *(datetime) --* The Unix timestamp for the time when the container instance was registered. * **attachments** *(list) --* The resources attached to a container instance, such as an elastic network interface. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. 
Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. 
* *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **healthStatus** *(dict) --* An object representing the health status of the container instance. * **overallStatus** *(string) --* The overall health status of the container instance. This is an aggregate status of all container instance health checks. * **details** *(list) --* An array of objects representing the details of the container instance health status. * *(dict) --* An object representing the result of a container instance health status check. * **type** *(string) --* The type of container instance health status that was verified. * **status** *(string) --* The container instance health status. 
* **lastUpdated** *(datetime) --* The Unix timestamp for when the container instance health status was last updated. * **lastStatusChange** *(datetime) --* The Unix timestamp for when the container instance health status last changed. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / get_task_protection get_task_protection ******************* ECS.Client.get_task_protection(**kwargs) Retrieves the protection status of tasks in an Amazon ECS service. See also: AWS API Documentation **Request Syntax** response = client.get_task_protection( cluster='string', tasks=[ 'string', ] ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task sets exist in. * **tasks** (*list*) -- A list of up to 100 task IDs or full ARN entries. * *(string) --* Return type: dict Returns: **Response Syntax** { 'protectedTasks': [ { 'taskArn': 'string', 'protectionEnabled': True|False, 'expirationDate': datetime(2015, 1, 1) }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **protectedTasks** *(list) --* A list of tasks with the following information. * "taskArn": The task ARN. * "protectionEnabled": The protection status of the task. If scale-in protection is turned on for a task, the value is "true". Otherwise, it is "false". * "expirationDate": The epoch time when protection for the task will expire. * *(dict) --* An object representing the protection status details for a task. You can set the protection status with the UpdateTaskProtection API and get the status of tasks with the GetTaskProtection API. * **taskArn** *(string) --* The task ARN. * **protectionEnabled** *(boolean) --* The protection status of the task. If scale-in protection is on for a task, the value is "true". Otherwise, it is "false". 
* **expirationDate** *(datetime) --* The epoch time when protection for the task will expire. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ResourceNotFoundException" * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / describe_container_instances describe_container_instances **************************** ECS.Client.describe_container_instances(**kwargs) Describes one or more container instances. Returns metadata about each container instance requested. See also: AWS API Documentation **Request Syntax** response = client.describe_container_instances( cluster='string', containerInstances=[ 'string', ], include=[ 'TAGS'|'CONTAINER_INSTANCE_HEALTH', ] ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the container instances to describe. If you do not specify a cluster, the default cluster is assumed. This parameter is required if the container instance or container instances you are describing were launched in any cluster other than the default cluster. * **containerInstances** (*list*) -- **[REQUIRED]** A list of up to 100 container instance IDs or full Amazon Resource Name (ARN) entries. * *(string) --* * **include** (*list*) -- Specifies whether you want to see the resource tags for the container instance. 
If "TAGS" is specified, the tags are included in the response. If "CONTAINER_INSTANCE_HEALTH" is specified, the container instance health is included in the response. If this field is omitted, tags and container instance health status aren't included in the response. * *(string) --* Return type: dict Returns: **Response Syntax** { 'containerInstances': [ { 'containerInstanceArn': 'string', 'ec2InstanceId': 'string', 'capacityProviderName': 'string', 'version': 123, 'versionInfo': { 'agentVersion': 'string', 'agentHash': 'string', 'dockerVersion': 'string' }, 'remainingResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'registeredResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'status': 'string', 'statusReason': 'string', 'agentConnected': True|False, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED', 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'registeredAt': datetime(2015, 1, 1), 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'healthStatus': { 'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'details': [ { 'type': 'CONTAINER_RUNTIME', 'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'lastUpdated': datetime(2015, 1, 1), 'lastStatusChange': datetime(2015, 1, 1) }, ] } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **containerInstances** *(list) --* The list of container instances. 
* *(dict) --* An Amazon EC2 or External instance that's running the Amazon ECS agent and has been registered with a cluster. * **containerInstanceArn** *(string) --* The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **ec2InstanceId** *(string) --* The ID of the container instance. For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID. * **capacityProviderName** *(string) --* The capacity provider that's associated with the container instance. * **version** *(integer) --* The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the "detail" object) to verify that the version in your event stream is current. * **versionInfo** *(dict) --* The version information for the Amazon ECS container agent and Docker daemon running on the container instance. * **agentVersion** *(string) --* The version number of the Amazon ECS container agent. * **agentHash** *(string) --* The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository. * **dockerVersion** *(string) --* The Docker version that's running on the container instance. * **remainingResources** *(list) --* For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. 
For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the "host" or "bridge" network mode). Any port that's not specified here is available for new tasks. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **registeredResources** *(list) --* For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". 
* **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **status** *(string) --* The status of the container instance. The valid values are "REGISTERING", "REGISTRATION_FAILED", "ACTIVE", "INACTIVE", "DEREGISTERING", or "DRAINING". If your account has opted in to the "awsvpcTrunking" account setting, then any newly registered container instance will transition to a "REGISTERING" status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a "REGISTRATION_FAILED" status. You can describe the container instance and see the reason for failure in the "statusReason" parameter. Once the container instance is terminated, the instance transitions to a "DEREGISTERING" status while the trunk elastic network interface is deprovisioned. The instance then transitions to an "INACTIVE" status. The "ACTIVE" status indicates that the container instance can accept tasks. The "DRAINING" status indicates that new tasks aren't placed on the container instance and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the *Amazon Elastic Container Service Developer Guide*. * **statusReason** *(string) --* The reason that the container instance reached its current status. * **agentConnected** *(boolean) --* This parameter returns "true" if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns "false". 
Only instances connected to an agent can accept task placement requests. * **runningTasksCount** *(integer) --* The number of tasks on the container instance that have a desired status ( "desiredStatus") of "RUNNING". * **pendingTasksCount** *(integer) --* The number of tasks on the container instance that are in the "PENDING" status. * **agentUpdateStatus** *(string) --* The status of the most recent agent update. If an update was never requested, this value is "NULL". * **attributes** *(list) --* The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **registeredAt** *(datetime) --* The Unix timestamp for the time when the container instance was registered. 
* **attachments** *(list) --* The resources attached to a container instance, such as an elastic network interface. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **healthStatus** *(dict) --* An object representing the health status of the container instance. 
* **overallStatus** *(string) --* The overall health status of the container instance. This is an aggregate status of all container instance health checks. * **details** *(list) --* An array of objects representing the details of the container instance health status. * *(dict) --* An object representing the result of a container instance health status check. * **type** *(string) --* The type of container instance health status that was verified. * **status** *(string) --* The container instance health status. * **lastUpdated** *(datetime) --* The Unix timestamp for when the container instance health status was last updated. * **lastStatusChange** *(datetime) --* The Unix timestamp for when the container instance health status last changed. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" **Examples** This example provides a description of the specified container instance in your default region, using the container instance UUID as an identifier. 
response = client.describe_container_instances( cluster='default', containerInstances=[ 'f2756532-8f13-4d53-87c9-aed50dc94cd7', ], ) print(response) Expected Output: { 'containerInstances': [ { 'agentConnected': True, 'containerInstanceArn': 'arn:aws:ecs:us-east-1:012345678910:container-instance/f2756532-8f13-4d53-87c9-aed50dc94cd7', 'ec2InstanceId': 'i-807f3249', 'pendingTasksCount': 0, 'registeredResources': [ { 'name': 'CPU', 'type': 'INTEGER', 'doubleValue': 0.0, 'integerValue': 2048, 'longValue': 0, }, { 'name': 'MEMORY', 'type': 'INTEGER', 'doubleValue': 0.0, 'integerValue': 3768, 'longValue': 0, }, { 'name': 'PORTS', 'type': 'STRINGSET', 'doubleValue': 0.0, 'integerValue': 0, 'longValue': 0, 'stringSetValue': [ '2376', '22', '51678', '2375', ], }, ], 'remainingResources': [ { 'name': 'CPU', 'type': 'INTEGER', 'doubleValue': 0.0, 'integerValue': 1948, 'longValue': 0, }, { 'name': 'MEMORY', 'type': 'INTEGER', 'doubleValue': 0.0, 'integerValue': 3668, 'longValue': 0, }, { 'name': 'PORTS', 'type': 'STRINGSET', 'doubleValue': 0.0, 'integerValue': 0, 'longValue': 0, 'stringSetValue': [ '2376', '22', '80', '51678', '2375', ], }, ], 'runningTasksCount': 1, 'status': 'ACTIVE', }, ], 'failures': [ ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / deregister_task_definition deregister_task_definition ************************** ECS.Client.deregister_task_definition(**kwargs) Deregisters the specified task definition by family and revision. Upon deregistration, the task definition is marked as "INACTIVE". Existing tasks and services that reference an "INACTIVE" task definition continue to run without disruption. Existing services that reference an "INACTIVE" task definition can still scale up or down by modifying the service's desired count. If you want to delete a task definition revision, you must first deregister the task definition revision. 
You can't use an "INACTIVE" task definition to run new tasks or create new services, and you can't update an existing service to reference an "INACTIVE" task definition. However, there may be up to a 10-minute window following deregistration where these restrictions have not yet taken effect. Note: At this time, "INACTIVE" task definitions remain discoverable in your account indefinitely. However, this behavior is subject to change in the future. We don't recommend that you rely on "INACTIVE" task definitions persisting beyond the lifecycle of any associated tasks and services. You must deregister a task definition revision before you delete it. For more information, see DeleteTaskDefinitions. See also: AWS API Documentation **Request Syntax** response = client.deregister_task_definition( taskDefinition='string' ) Parameters: **taskDefinition** (*string*) -- **[REQUIRED]** The "family" and "revision" ( "family:revision") or full Amazon Resource Name (ARN) of the task definition to deregister. You must specify a "revision". 
Return type: dict Returns: **Response Syntax** { 'taskDefinition': { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 
'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 
'EC2'|'FARGATE'|'EXTERNAL', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False } } **Response Structure** * *(dict) --* * **taskDefinition** *(dict) --* The full description of the deregistered task. * **taskDefinitionArn** *(string) --* The full Amazon Resource Name (ARN) of the task definition. * **containerDefinitions** *(list) --* A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* Container definitions are used in task definitions to describe the different containers that are launched as part of a task. * **name** *(string) --* The name of a container. If you're linking multiple containers together in a task definition, the "name" of one container can be entered in the "links" of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. 
This parameter maps to "name" in the docker container create command and the "--name" option to docker run. * **image** *(string) --* The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either "repository-url/image:tag" or "repository-url/image@digest". For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to "Image" in the docker container create command and the "IMAGE" parameter of docker run. * When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks. * Images in Amazon ECR repositories can be specified by either using the full "registry/repository:tag" or "registry/repository@digest". For example, "012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest" or "012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE". * Images in official repositories on Docker Hub use a single name (for example, "ubuntu" or "mongo"). * Images in other repositories on Docker Hub are qualified with an organization name (for example, "amazon/amazon-ecs-agent"). * Images in other online repositories are qualified further by a domain name (for example, "quay.io/assemblyline/ubuntu").
* **repositoryCredentials** *(dict) --* The private repository authentication credentials to use. * **credentialsParameter** *(string) --* The Amazon Resource Name (ARN) of the secret containing the private repository credentials. Note: When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret. * **cpu** *(integer) --* The number of "cpu" units reserved for the container. This parameter maps to "CpuShares" in the docker container create command and the "--cpu-shares" option to docker run. This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level "cpu" value. Note: You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024. Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version: * **Agent versions less than or equal to 1.1.0:** Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares. * **Agent versions greater than or equal to 1.2.0:** Null, zero, and CPU values of 1 are passed to Docker as 2. * **Agent versions greater than or equal to 1.84.0:** CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares. On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as "0", which Windows interprets as 1% of one CPU. * **memory** *(integer) --* The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task "memory" value, if one is specified. This parameter maps to "Memory" in the docker container create command and the "--memory" option to docker run. If using the Fargate launch type, this parameter is optional. If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. 
If you specify both a container-level "memory" and "memoryReservation" value, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the "memory" parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to "MemoryReservation" in the docker container create command and the "--memory-reservation" option to docker run. If a task-level memory value is not specified, you must specify a non-zero integer for one or both of "memory" or "memoryReservation" in a container definition. If you specify both, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a "memoryReservation" of 128 MiB, and a "memory" hard limit of 300 MiB.
This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **links** *(list) --* The "links" parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is "bridge". The "name:internalName" construct is analogous to "name:alias" in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "Links" in the docker container create command and the "--link" option to docker run. Note: This parameter is not supported for Windows containers. Warning: Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings. * *(string) --* * **portMappings** *(list) --* The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic. For task definitions that use the "awsvpc" network mode, only specify the "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Port mappings on Windows use the "NetNAT" gateway address rather than "localhost". There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to "PortBindings" in the docker container create command and the "--publish" option to docker run. If the network mode of a task definition is set to "none", then you can't specify port mappings. If the network mode of a task definition is set to "host", then host ports must either be undefined or they must match the container port in the port mapping. Note: After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the **Network Bindings** section of a container description for a selected task in the Amazon ECS console. The assignments are also visible in the "networkBindings" section of DescribeTasks responses. * *(dict) --* Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Most fields of this parameter ( "containerPort", "hostPort", "protocol") map to "PortBindings" in the docker container create command and the "--publish" option to "docker run". If the network mode of a task definition is set to "host", host ports must either be undefined or match the container port in the port mapping. Note: You can't expose the same container port for multiple protocols. If you attempt this, an error is returned. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **containerPort** *(integer) --* The port number on the container that's bound to the user-specified or automatically assigned host port. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort".
If you use containers in a task with the "bridge" network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see "hostPort". Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance. * **hostPort** *(integer) --* The port number on the container instance to reserve for your container. If you specify a "containerPortRange", leave this field empty and the value of the "hostPort" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPort" is set to the same value as the "containerPort". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy. If you use containers in a task with the "awsvpc" or "host" network mode, the "hostPort" can either be left blank or set to the same value as the "containerPort". If you use containers in a task with the "bridge" network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the "hostPort" (or set it to "0") while specifying a "containerPort" and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version. The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under "/proc/sys/net/ipv4/ip_local_port_range". If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range. 
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the "remainingResources" of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota. * **protocol** *(string) --* The protocol used for the port mapping. Valid values are "tcp" and "udp". The default is "tcp". "protocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. * **name** *(string) --* The name that's used for the port mapping. This parameter is the name that you use in the "serviceConnectConfiguration" and the "vpcLatticeConfigurations" of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. * **appProtocol** *(string) --* The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch. If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP. "appProtocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. 
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website.
For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which lists the host ports that are bound to the container ports. * **essential** *(boolean) --* If the "essential" parameter of a container is marked as "true", and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the "essential" parameter of a container is marked as "false", its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the *Amazon Elastic Container Service Developer Guide*. * **restartPolicy** *(dict) --* The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether a restart policy is enabled for the container. * **ignoredExitCodes** *(list) --* A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes. * *(integer) --* * **restartAttemptPeriod** *(integer) --* A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every "restartAttemptPeriod" seconds.
If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum "restartAttemptPeriod" of 60 seconds and a maximum "restartAttemptPeriod" of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted. * **entryPoint** *(list) --* Warning: Early versions of the Amazon ECS container agent don't properly handle "entryPoint" parameters. If you have problems using "entryPoint", update your container agent or enter your commands and arguments as "command" array items instead. The entry point that's passed to the container. This parameter maps to "Entrypoint" in the docker container create command and the "--entrypoint" option to docker run. * *(string) --* * **command** *(list) --* The command that's passed to the container. This parameter maps to "Cmd" in the docker container create command and the "COMMAND" parameter to docker run. If there are multiple arguments, each argument is a separate string in the array. * *(string) --* * **environment** *(list) --* The environment variables to pass to a container. This parameter maps to "Env" in the docker container create command and the "--env" option to docker run. Warning: We don't recommend that you use plaintext environment variables for sensitive information, such as credential data. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container. This parameter maps to the "--env-file" option to docker run. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file contains an environment variable in "VARIABLE=VALUE" format.
Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
* **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **mountPoints** *(list) --* The mount points for data volumes in your container. This parameter maps to "Volumes" in the docker container create command and the "--volume" option to docker run. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't be across drives. * *(dict) --* The details for a volume mount point that's used in a container definition. * **sourceVolume** *(string) --* The name of the volume to mount. Must be a volume name referenced in the "name" parameter of task definition "volume". * **containerPath** *(string) --* The path on the container to mount the host volume at. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **volumesFrom** *(list) --* Data volumes to mount from another container. This parameter maps to "VolumesFrom" in the docker container create command and the "--volumes-from" option to docker run. * *(dict) --* Details on a data volume from another container in the same task definition. * **sourceContainer** *(string) --* The name of another container within the same task definition to mount volumes from. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **linuxParameters** *(dict) --* Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities. Note: This parameter is not supported for Windows containers.
* **capabilities** *(dict) --* The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. Note: For tasks that use the Fargate launch type, "capabilities" is supported for all platform versions but the "add" parameter is only supported if using platform version 1.4.0 or later. * **add** *(list) --* The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to "CapAdd" in the docker container create command and the "--cap-add" option to docker run. Note: Tasks launched on Fargate only support adding the "SYS_PTRACE" kernel capability. Valid values: ""ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"" * *(string) --* * **drop** *(list) --* The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to "CapDrop" in the docker container create command and the "--cap-drop" option to docker run. 
Valid values: ""ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"" * *(string) --* * **devices** *(list) --* Any host devices to expose to the container. This parameter maps to "Devices" in the docker container create command and the "--device" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "devices" parameter isn't supported. * *(dict) --* An object representing a container instance host device. * **hostPath** *(string) --* The path for the device on the host container instance. * **containerPath** *(string) --* The path inside the container at which to expose the host device. * **permissions** *(list) --* The explicit permissions to provide to the container for the device. By default, the container has permissions for "read", "write", and "mknod" for the device. * *(string) --* * **initProcessEnabled** *(boolean) --* Run an "init" process inside the container that forwards signals and reaps processes. This parameter maps to the "--init" option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * **sharedMemorySize** *(integer) --* The value for the size (in MiB) of the "/dev/shm" volume. This parameter maps to the "--shm-size" option to docker run. 
Note: If you are using tasks that use the Fargate launch type, the "sharedMemorySize" parameter is not supported. * **tmpfs** *(list) --* The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the "--tmpfs" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "tmpfs" parameter isn't supported. * *(dict) --* The container path, mount options, and size of the tmpfs mount. * **containerPath** *(string) --* The absolute file path where the tmpfs volume is to be mounted. * **size** *(integer) --* The maximum size (in MiB) of the tmpfs volume. * **mountOptions** *(list) --* The list of tmpfs volume mount options. Valid values: ""defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"" * *(string) --* * **maxSwap** *(integer) --* The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the "--memory-swap" option to docker run where the value would be the sum of the container memory plus the "maxSwap" value. If a "maxSwap" value of "0" is specified, the container will not use swap. Accepted values are "0" or any positive integer. If the "maxSwap" parameter is omitted, the container will use the swap configuration for the container instance it is running on. A "maxSwap" value must be set for the "swappiness" parameter to be used. Note: If you're using tasks that use the Fargate launch type, the "maxSwap" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported. 
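Pulling the Linux-specific parameters above together, here is a hedged sketch of a "linuxParameters" dict for an EC2-launch-type task. (On Fargate, "devices", "sharedMemorySize", "tmpfs", and "maxSwap" are not supported, and only "SYS_PTRACE" may be added to "capabilities".) The device path and sizes are illustrative, not prescribed values:

```python
# Hypothetical linuxParameters fragment for an EC2-launch-type container
# definition. Paths and sizes are placeholders.
linux_parameters = {
    "capabilities": {
        "add": ["NET_ADMIN"],    # extend the default Docker capability set
        "drop": ["MKNOD"],       # remove a capability from the default set
    },
    "devices": [
        {
            "hostPath": "/dev/fuse",
            "containerPath": "/dev/fuse",
            "permissions": ["read", "write"],
        },
    ],
    "initProcessEnabled": True,  # run an init process that reaps child processes
    "sharedMemorySize": 256,     # /dev/shm size in MiB
    "tmpfs": [
        {"containerPath": "/scratch", "size": 64, "mountOptions": ["rw", "noexec"]},
    ],
    "maxSwap": 512,              # total swap in MiB; 0 disables swap
    "swappiness": 60,            # only honored when maxSwap is set
}
```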
* **swappiness** *(integer) --* This allows you to tune a container's memory swappiness behavior. A "swappiness" value of "0" will cause swapping to not happen unless absolutely necessary. A "swappiness" value of "100" will cause pages to be swapped very aggressively. Accepted values are whole numbers between "0" and "100". If the "swappiness" parameter is not specified, a default value of "60" is used. If a value is not specified for "maxSwap" then this parameter is ignored. This parameter maps to the "--memory-swappiness" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "swappiness" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported. * **secrets** *(list) --* The secrets to pass to the container. For more information, see Specifying Sensitive Data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. 
Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **dependsOn** *(list) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, for container shutdown it is reversed. For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. * *(dict) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed. Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. 
For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. Note: For tasks that use the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For more information about how to create a container dependency, see Container dependency in the *Amazon Elastic Container Service Developer Guide*. * **containerName** *(string) --* The name of a container. * **condition** *(string) --* The dependency condition of the container. The following are the available conditions and their behavior: * "START" - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start. * "COMPLETE" - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container. * "SUCCESS" - This condition is the same as "COMPLETE", but it also requires that the container exits with a "zero" status. This condition can't be set on an essential container. * "HEALTHY" - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. 
This condition is confirmed only at task startup. * **startTimeout** *(integer) --* Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a "COMPLETE", "SUCCESS", or "HEALTHY" status. If a "startTimeout" value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a "STOPPED" state. Note: When the "ECS_CONTAINER_START_TIMEOUT" container agent configuration variable is used, it's enforced independently from this start timeout value. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks using the EC2 launch type, your container instances require at least version "1.26.0" of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version "1.26.0-1" of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **stopTimeout** *(integer) --* Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own. 
For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds, and if the parameter is not specified, the default value of 30 seconds is used. For tasks that use the EC2 launch type, if the "stopTimeout" parameter isn't specified, the value set for the Amazon ECS container agent configuration variable "ECS_CONTAINER_STOP_TIMEOUT" is used. If neither the "stopTimeout" parameter nor the "ECS_CONTAINER_STOP_TIMEOUT" agent configuration variable is set, then the default values of 30 seconds for Linux containers and 30 seconds on Windows containers are used. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **versionConsistency** *(string) --* Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is "enabled". If you set the value for a container as "disabled", Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. 
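The startup-ordering parameters described earlier ("dependsOn", "startTimeout", "stopTimeout") might be combined like this. A hedged sketch: the container names and image are placeholders, and the "db" container would need a "healthCheck" configured for the "HEALTHY" condition to be usable:

```python
# Hypothetical fragment: "web" starts only after "db" passes its Docker
# health check. Names and the image are placeholders.
web = {
    "name": "web",
    "image": "public.ecr.aws/nginx/nginx:latest",
    "dependsOn": [
        # "db" must be another container in the same task definition and
        # must have a healthCheck configured.
        {"containerName": "db", "condition": "HEALTHY"},
    ],
    "startTimeout": 120,  # seconds to wait on dependencies before giving up
    "stopTimeout": 60,    # seconds before the container is forcefully killed
}
```

On Fargate, both timeouts must fall in the 2-120 second range noted above.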
For more information about container image resolution, see Container image resolution in the *Amazon ECS Developer Guide*. * **hostname** *(string) --* The hostname to use for your container. This parameter maps to "Hostname" in the docker container create command and the "--hostname" option to docker run. Note: The "hostname" parameter is not supported if you're using the "awsvpc" network mode. * **user** *(string) --* The user to use inside the container. This parameter maps to "User" in the docker container create command and the "--user" option to docker run. Warning: When running tasks using the "host" network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. You can specify the "user" using the following formats. If specifying a UID or GID, you must specify it as a positive integer. * "user" * "user:group" * "uid" * "uid:gid" * "user:gid" * "uid:group" Note: This parameter is not supported for Windows containers. * **workingDirectory** *(string) --* The working directory to run commands inside the container in. This parameter maps to "WorkingDir" in the docker container create command and the "--workdir" option to docker run. * **disableNetworking** *(boolean) --* When this parameter is true, networking is off within the container. This parameter maps to "NetworkDisabled" in the docker container create command. Note: This parameter is not supported for Windows containers. * **privileged** *(boolean) --* When this parameter is true, the container is given elevated privileges on the host container instance (similar to the "root" user). This parameter maps to "Privileged" in the docker container create command and the "--privileged" option to docker run. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **readonlyRootFilesystem** *(boolean) --* When this parameter is true, the container is given read-only access to its root file system. 
This parameter maps to "ReadonlyRootfs" in the docker container create command and the "--read-only" option to docker run. Note: This parameter is not supported for Windows containers. * **dnsServers** *(list) --* A list of DNS servers that are presented to the container. This parameter maps to "Dns" in the docker container create command and the "--dns" option to docker run. Note: This parameter is not supported for Windows containers. * *(string) --* * **dnsSearchDomains** *(list) --* A list of DNS search domains that are presented to the container. This parameter maps to "DnsSearch" in the docker container create command and the "--dns-search" option to docker run. Note: This parameter is not supported for Windows containers. * *(string) --* * **extraHosts** *(list) --* A list of hostnames and IP address mappings to append to the "/etc/hosts" file on the container. This parameter maps to "ExtraHosts" in the docker container create command and the "--add-host" option to docker run. Note: This parameter isn't supported for Windows containers or tasks that use the "awsvpc" network mode. * *(dict) --* Hostnames and IP address entries that are added to the "/etc/hosts" file of a container via the "extraHosts" parameter of its ContainerDefinition. * **hostname** *(string) --* The hostname to use in the "/etc/hosts" entry. * **ipAddress** *(string) --* The IP address to use in the "/etc/hosts" entry. * **dockerSecurityOptions** *(list) --* A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type. For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems. For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. 
For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the *Amazon Elastic Container Service Developer Guide*. This parameter maps to "SecurityOpt" in the docker container create command and the "--security-opt" option to docker run. Note: The Amazon ECS container agent running on a container instance must register with the "ECS_SELINUX_CAPABLE=true" or "ECS_APPARMOR_CAPABLE=true" environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath" * *(string) --* * **interactive** *(boolean) --* When this parameter is "true", you can deploy containerized applications that require "stdin" or a "tty" to be allocated. This parameter maps to "OpenStdin" in the docker container create command and the "--interactive" option to docker run. * **pseudoTerminal** *(boolean) --* When this parameter is "true", a TTY is allocated. This parameter maps to "Tty" in the docker container create command and the "--tty" option to docker run. * **dockerLabels** *(dict) --* A key/value map of labels to add to the container. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **ulimits** *(list) --* A list of "ulimits" to set in the container. If a "ulimit" value is specified in a task definition, it overrides the default values set by Docker. 
This parameter maps to "Ulimits" in the docker container create command and the "--ulimit" option to docker run. Valid naming values are displayed in the Ulimit data type. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: This parameter is not supported for Windows containers. * *(dict) --* The "ulimit" settings to pass to the container. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". You can specify the "ulimit" settings for a container in a task definition. * **name** *(string) --* The "type" of the "ulimit". * **softLimit** *(integer) --* The soft limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **hardLimit** *(integer) --* The hard limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **logConfiguration** *(dict) --* The log configuration specification for the container. 
This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). Note: Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. 
For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. 
If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. 
This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. 
To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. 
You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options.

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'"

* *(string) --*

* *(string) --*

* **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*.

* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:

* To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter.

* To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter.

For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*.

* **name** *(string) --* The name of the secret.

* **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.

For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*.

Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
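To show how the "logConfiguration" options described above fit together, here is a minimal sketch of one entry. It is not taken from this documentation: the container name, log group, Region, and secret ARN are placeholder values, and the buffer size is an arbitrary example.

```python
# Sketch only: the logConfiguration portion of a container definition using
# the awslogs driver in non-blocking mode with a secret passed to the driver.
# All names, Regions, and ARNs below are placeholders.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/example-app",   # placeholder log group
        "awslogs-region": "us-east-1",         # placeholder Region
        "awslogs-stream-prefix": "web",
        "mode": "non-blocking",                # buffer logs instead of blocking stdout/stderr
        "max-buffer-size": "25m",              # raise the default "1m" intermediate buffer
    },
    "secretOptions": [
        {
            # Reference sensitive data in the log configuration rather than
            # exposing it as an environment variable.
            "name": "apikey",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:example",  # placeholder ARN
        }
    ],
}

# This dict would be supplied as the "logConfiguration" member of an entry in
# containerDefinitions when calling ECS.Client.register_task_definition.
```

If the account-level "defaultLogDriverMode" setting already matches what you want, the "mode" key can be omitted and the account default applies.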
* **healthCheck** *(dict) --* The container health check command and associated configuration parameters for the container. This parameter maps to "HealthCheck" in the docker container create command and the "HEALTHCHECK" parameter of docker run.

* **command** *(list) --* A string array representing the command that the container runs to determine if it is healthy. The string array must start with "CMD" to run the command arguments directly, or "CMD-SHELL" to run the command with the container's default shell.

When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.

"[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]"

You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.

"CMD-SHELL, curl -f http://localhost/ || exit 1"

An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see "HealthCheck" in the docker container create command.

* *(string) --*

* **interval** *(integer) --* The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a "command".

* **timeout** *(integer) --* The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a "command".

* **retries** *(integer) --* The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a "command".

* **startPeriod** *(integer) --* The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries.
You can specify between 0 and 300 seconds. By default, the "startPeriod" is off. This value applies only when you specify a "command".

Note: If a health check succeeds within the "startPeriod", then the container is considered healthy and any subsequent failures count toward the maximum number of retries.

* **systemControls** *(list) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer lived connections.

* *(dict) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer lived connections.

We don't recommend that you specify network-related "systemControls" parameters for multiple containers in a single task that also uses either the "awsvpc" or "host" network mode. Doing this has the following disadvantages:

* For tasks that use the "awsvpc" network mode including Fargate, if you set "systemControls" for any container, it applies to all containers in the task. If you set different "systemControls" for multiple containers in a single task, the container that's started last determines which "systemControls" take effect.

* For tasks that use the "host" network mode, the network namespace "systemControls" aren't supported.

If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.

* For tasks that use the "host" IPC mode, IPC namespace "systemControls" aren't supported.

* For tasks that use the "task" IPC mode, IPC namespace "systemControls" values apply to all containers within a task.
Note: This parameter is not supported for Windows containers.

Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate.

* **namespace** *(string) --* The namespaced kernel parameter to set a "value" for.

* **value** *(string) --* The namespaced kernel parameter to set a "value" for.

Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and "Sysctls" that start with "fs.mqueue.*"

Valid network namespace values: "Sysctls" that start with "net.*". Only namespaced "Sysctls" that exist within the container starting with "net.*" are accepted.

All of these values are supported by Fargate.

* **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container. The only supported resource is a GPU.

* *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*.

* **value** *(string) --* The value for the specified resource type.

When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.

When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition.

* **type** *(string) --* The type of resource to assign to a container.

* **firelensConfiguration** *(dict) --* The FireLens configuration for the container.
This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the *Amazon Elastic Container Service Developer Guide*.

* **type** *(string) --* The log router to use. The valid values are "fluentd" or "fluentbit".

* **options** *(dict) --* The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is ""options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}". For more information, see Creating a task definition that uses a FireLens configuration in the *Amazon Elastic Container Service Developer Guide*.

Note: Tasks hosted on Fargate only support the "file" configuration file type.

* *(string) --*

* *(string) --*

* **credentialSpecs** *(list) --* A list of ARNs in SSM or Amazon S3 to a credential spec ("CredSpec") file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the "dockerSecurityOptions". The maximum number of ARNs is 1.

There are two formats for each ARN.

credentialspecdomainless:MyARN

You use "credentialspecdomainless:MyARN" to provide a "CredSpec" with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret. Each task that runs on any container instance can join different domains. You can use this format without joining the container instance to a domain.

credentialspec:MyARN

You use "credentialspec:MyARN" to provide a "CredSpec" for a single domain. You must join the container instance to the domain before you start any tasks that use this task definition.

In both formats, replace "MyARN" with the ARN in SSM or Amazon S3.
If you provide a "credentialspecdomainless:MyARN", the "credspec" must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.

* *(string) --*

* **family** *(string) --* The name of a family that this task definition is registered to. Up to 255 characters are allowed. Letters (uppercase and lowercase), numbers, hyphens (-), and underscores (_) are allowed.

A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.

* **taskRoleArn** *(string) --* The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*.

* **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*.

* **networkMode** *(string) --* The Docker networking mode to use for the containers in the task. The valid values are "none", "bridge", "awsvpc", and "host". If no network mode is specified, the default is "bridge".
For Amazon ECS tasks on Fargate, the "awsvpc" network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, "<default>" or "awsvpc" can be used.

If the network mode is set to "none", you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity.

The "host" and "awsvpc" network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the "bridge" mode. With the "host" and "awsvpc" network modes, exposed container ports are mapped directly to the corresponding host port (for the "host" network mode) or the attached elastic network interface port (for the "awsvpc" network mode), so you cannot take advantage of dynamic host port mappings.

Warning: When using the "host" network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user.

If the network mode is "awsvpc", the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the *Amazon Elastic Container Service Developer Guide*.

If the network mode is "host", you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.

* **revision** *(integer) --* The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is "1". Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is even if you deregistered previous revisions in this family.

* **volumes** *(list) --* The list of data volume definitions for the task.
For more information, see Using data volumes in tasks in the *Amazon Elastic Container Service Developer Guide*. Note: The "host" and "sourcePath" parameters aren't supported for tasks run on Fargate. * *(dict) --* The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a "name" and one of either "configuredAtLaunch", "dockerVolumeConfiguration", "efsVolumeConfiguration", "fsxWindowsFileServerVolumeConfiguration", or "host". If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks. * **name** *(string) --* The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the "name" is required and must also be specified as the volume name in the "ServiceVolumeConfiguration" or "TaskVolumeConfiguration" parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the "sourceVolume" parameter of the "mountPoints" object in the container definition. When a volume is using the "efsVolumeConfiguration", the name is required. * **host** *(dict) --* This parameter is specified when you use bind mount host volumes. The contents of the "host" parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the "host" parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as "$env:ProgramData". 
Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount "C:\my\path:C:\my\path" and "D:\:D:\", but not "D:\my\path:C:\my\path" or "D:\:C:\my\path".

* **sourcePath** *(string) --* When the "host" parameter is used, specify a "sourcePath" to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the "host" parameter contains a "sourcePath" file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the "sourcePath" value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.

If you're using the Fargate launch type, the "sourcePath" parameter is not supported.

* **dockerVolumeConfiguration** *(dict) --* This parameter is specified when you use Docker volumes.

Windows containers only support the use of the "local" driver. To use bind mounts, specify the "host" parameter instead.

Note: Docker volumes aren't supported by tasks run on Fargate.

* **scope** *(string) --* The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a "task" are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as "shared" persist after the task stops.

* **autoprovision** *(boolean) --* If this value is "true", the Docker volume is created if it doesn't already exist.

Note: This field is only used if the "scope" is "shared".

* **driver** *(string) --* The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use "docker plugin ls" to retrieve the driver name from your container instance.
If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to "Driver" in the docker container create command and the "--driver" option to docker volume create.

* **driverOpts** *(dict) --* A map of Docker driver-specific options passed through. This parameter maps to "DriverOpts" in the docker create-volume command and the "--opt" option to docker volume create.

* *(string) --*

* *(string) --*

* **labels** *(dict) --* Custom metadata to add to your Docker volume. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker volume create.

* *(string) --*

* *(string) --*

* **efsVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon Elastic File System file system for task storage.

* **fileSystemId** *(string) --* The Amazon EFS file system ID to use.

* **rootDirectory** *(string) --* The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying "/" will have the same effect as omitting this parameter.

Warning: If an EFS access point is specified in the "authorizationConfig", the root directory parameter must either be omitted or set to "/" which will enforce the path set on the EFS access point.

* **transitEncryption** *(string) --* Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Encrypting data in transit in the *Amazon Elastic File System User Guide*.

* **transitEncryptionPort** *(integer) --* The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server.
If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the *Amazon Elastic File System User Guide*.

* **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon EFS file system.

* **accessPointId** *(string) --* The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the "EFSVolumeConfiguration" must either be omitted or set to "/" which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the "EFSVolumeConfiguration". For more information, see Working with Amazon EFS access points in the *Amazon Elastic File System User Guide*.

* **iam** *(string) --* Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the "EFSVolumeConfiguration". If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Using Amazon EFS access points in the *Amazon Elastic Container Service Developer Guide*.

* **fsxWindowsFileServerVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon FSx for Windows File Server file system for task storage.

* **fileSystemId** *(string) --* The Amazon FSx for Windows File Server file system ID to use.

* **rootDirectory** *(string) --* The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.

* **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon FSx for Windows File Server file system.

* **credentialsParameter** *(string) --* The authorization credential option to use.
The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or SSM Parameter Store parameter. The ARN refers to the stored credentials.

* **domain** *(string) --* A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2.

* **configuredAtLaunch** *(boolean) --* Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.

To configure a volume at launch time, use this task definition revision and specify a "volumeConfigurations" object when calling the "CreateService", "UpdateService", "RunTask" or "StartTask" APIs.

* **status** *(string) --* The status of the task definition.

* **requiresAttributes** *(list) --* The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*.

Note: This parameter isn't supported for tasks run on Fargate.

* *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*.

* **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters.
The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).

* **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.

* **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.

* **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).

* **placementConstraints** *(list) --* An array of placement constraint objects to use for tasks.

Note: This parameter isn't supported for tasks run on Fargate.

* *(dict) --* The constraint on task placement in the task definition. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*.

Note: Task placement constraints aren't supported for tasks run on Fargate.

* **type** *(string) --* The type of constraint. The "MemberOf" constraint restricts selection to be from a group of valid candidates.

* **expression** *(string) --* A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*.

* **compatibilities** *(list) --* Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*.

* *(string) --*

* **runtimePlatform** *(dict) --* The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type.
When you specify a task in a service, this value must match the "runtimePlatform" value of the service.

* **cpuArchitecture** *(string) --* The CPU architecture.

You can run your Linux tasks on an ARM-based platform by setting the value to "ARM64". This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate.

* **operatingSystemFamily** *(string) --* The operating system.

* **requiresCompatibilities** *(list) --* The task launch types the task definition was validated against. The valid values are "EC2", "FARGATE", and "EXTERNAL". For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*.

* *(string) --*

* **cpu** *(string) --* The number of "cpu" units used by the task.

If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ("0.125" vCPUs) and "196608" CPU units ("192" vCPUs).

If you use the Fargate launch type, this field is required. You must use one of the supported values, and the value that you choose determines your range of valid values for the "memory" parameter. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*.

* **memory** *(string) --* The amount (in MiB) of memory used by the task.

If your tasks run on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition.

If your tasks run on Fargate, this field is required. You must use one of the following values.
The value you choose determines your range of valid values for the "cpu" parameter.

* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU)

* 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU)

* 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU)

* Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU)

* Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU)

* Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later.

* Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later.

* **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task.

* *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*.

* **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement.

* **deviceType** *(string) --* The Elastic Inference accelerator type to use.

* **pidMode** *(string) --* The process namespace to use for the containers in the task. The valid values are "host" or "task". On Fargate for Linux containers, the only valid value is "task". For example, monitoring sidecars might need "pidMode" to access information about other containers running in the same task.

If "host" is specified, all containers within the tasks that specified the "host" PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.
If "task" is specified, all containers within the specified task share the same process namespace.

If no value is specified, the default is a private namespace for each container.

If the "host" PID mode is used, there's a heightened risk of undesired process namespace exposure.

Note: This parameter is not supported for Windows containers.

Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate.

* **ipcMode** *(string) --* The IPC resource namespace to use for the containers in the task. The valid values are "host", "task", or "none".

If "host" is specified, then all containers within the tasks that specified the "host" IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance.

If "task" is specified, all containers within the specified task share the same IPC resources.

If "none" is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance.

If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.

If the "host" IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.

If you are setting namespaced kernel parameters using "systemControls" for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the *Amazon Elastic Container Service Developer Guide*.

* For tasks that use the "host" IPC mode, IPC namespace related "systemControls" are not supported.

* For tasks that use the "task" IPC mode, IPC namespace related "systemControls" will apply to all containers within a task.

Note: This parameter is not supported for Windows containers or tasks run on Fargate.
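To illustrate that "pidMode" and "ipcMode" are task-level settings rather than per-container ones, here is a minimal, hypothetical sketch of the keyword arguments for "register_task_definition". The family and image names are placeholders, not values from this documentation.

```python
# Sketch only: task-level namespace settings in a register_task_definition
# request. "pidMode" and "ipcMode" sit beside containerDefinitions and apply
# to every container in the task. All names below are placeholders.
task_def = {
    "family": "example-sidecar-monitoring",  # placeholder family name
    "pidMode": "task",   # containers in this task share one process namespace
    "ipcMode": "none",   # IPC resources stay private to each container
    "containerDefinitions": [
        # A monitoring sidecar sharing the task's PID namespace can observe
        # processes in the "app" container.
        {"name": "app", "image": "example/app:latest", "essential": True},
        {"name": "monitor", "image": "example/monitor:latest", "essential": False},
    ],
}

# With credentials configured, this would be registered as:
#   client = boto3.client("ecs")
#   client.register_task_definition(**task_def)
```

Note the restrictions above still apply: neither setting is supported for Windows containers, and on Fargate "pidMode" only accepts "task" (platform version "1.4.0" or later, Linux).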
* **proxyConfiguration** *(dict) --* The configuration details for the App Mesh proxy.

Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the "ecs-init" package to use a proxy configuration. If your container instances are launched from the Amazon ECS-optimized AMI version "20190301" or later, they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*.

* **type** *(string) --* The proxy type. The only supported value is "APPMESH".

* **containerName** *(string) --* The name of the container that will serve as the App Mesh proxy.

* **properties** *(list) --* The set of network configuration parameters to provide to the Container Network Interface (CNI) plugin, specified as key-value pairs.

* "IgnoredUID" - (Required) The user ID (UID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredGID" is specified, this field can be empty.

* "IgnoredGID" - (Required) The group ID (GID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredUID" is specified, this field can be empty.

* "AppPorts" - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the "ProxyIngressPort" and "ProxyEgressPort".

* "ProxyIngressPort" - (Required) Specifies the port that incoming traffic to the "AppPorts" is directed to.

* "ProxyEgressPort" - (Required) Specifies the port that outgoing traffic from the "AppPorts" is directed to.

* "EgressIgnoredPorts" - (Required) The egress traffic going to the specified ports is ignored and not redirected to the "ProxyEgressPort". It can be an empty list.
* "EgressIgnoredIPs" - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the "ProxyEgressPort". It can be an empty list. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **registeredAt** *(datetime) --* The Unix timestamp for the time when the task definition was registered. * **deregisteredAt** *(datetime) --* The Unix timestamp for the time when the task definition was deregistered. * **registeredBy** *(string) --* The principal that registered the task definition. * **ephemeralStorage** *(dict) --* The ephemeral storage settings to use for tasks run with the task definition. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **enableFaultInjection** *(boolean) --* Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is "false". **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / describe_service_deployments describe_service_deployments **************************** ECS.Client.describe_service_deployments(**kwargs) Describes one or more of your service deployments. A service deployment happens when you release a software update for the service. For more information, see View service history using Amazon ECS service deployments. 
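For orientation, the sketch below shows one way to pull the status fields out of a "describe_service_deployments" response. The "sample_response" dict is hand-built to mirror the documented response shape, and the ARN in it is a placeholder; with real credentials you would obtain the response from "boto3.client('ecs').describe_service_deployments(serviceDeploymentArns=[...])" instead.

```python
# Illustrative sketch: summarize a describe_service_deployments-shaped
# response. The sample dict below is hand-built; it is not real API output.

def summarize_deployments(response):
    """Return (arn, status, lifecycleStage) tuples for each deployment."""
    return [
        (d['serviceDeploymentArn'], d['status'], d.get('lifecycleStage'))
        for d in response.get('serviceDeployments', [])
    ]

sample_response = {
    'serviceDeployments': [
        {
            # Placeholder ARN for illustration only.
            'serviceDeploymentArn': 'arn:aws:ecs:us-east-1:123456789012:service-deployment/example',
            'status': 'IN_PROGRESS',
            'lifecycleStage': 'PRODUCTION_TRAFFIC_SHIFT',
        },
    ],
    'failures': [],
}

print(summarize_deployments(sample_response))
```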
See also: AWS API Documentation **Request Syntax** response = client.describe_service_deployments( serviceDeploymentArns=[ 'string', ] ) Parameters: **serviceDeploymentArns** (*list*) -- **[REQUIRED]** The ARN of the service deployment. You can specify a maximum of 20 ARNs. * *(string) --* Return type: dict Returns: **Response Syntax** { 'serviceDeployments': [ { 'serviceDeploymentArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'createdAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'finishedAt': datetime(2015, 1, 1), 'stoppedAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'sourceServiceRevisions': [ { 'arn': 'string', 'requestedTaskCount': 123, 'runningTaskCount': 123, 'pendingTaskCount': 123 }, ], 'targetServiceRevision': { 'arn': 'string', 'requestedTaskCount': 123, 'runningTaskCount': 123, 'pendingTaskCount': 123 }, 'status': 'PENDING'|'SUCCESSFUL'|'STOPPED'|'STOP_REQUESTED'|'IN_PROGRESS'|'ROLLBACK_REQUESTED'|'ROLLBACK_IN_PROGRESS'|'ROLLBACK_SUCCESSFUL'|'ROLLBACK_FAILED', 'statusReason': 'string', 'lifecycleStage': 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT'|'BAKE_TIME'|'CLEAN_UP', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ] }, ] }, 'rollback': { 'reason': 'string', 'startedAt': datetime(2015, 1, 1), 'serviceRevisionArn': 'string' }, 'deploymentCircuitBreaker': { 'status': 
'TRIGGERED'|'MONITORING'|'MONITORING_COMPLETE'|'DISABLED', 'failureCount': 123, 'threshold': 123 }, 'alarms': { 'status': 'TRIGGERED'|'MONITORING'|'MONITORING_COMPLETE'|'DISABLED', 'alarmNames': [ 'string', ], 'triggeredAlarmNames': [ 'string', ] } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **serviceDeployments** *(list) --* The list of service deployments described. * *(dict) --* Information about the service deployment. Service deployments provide a comprehensive view of your deployments. For information about service deployments, see View service history using Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **serviceDeploymentArn** *(string) --* The ARN of the service deployment. * **serviceArn** *(string) --* The ARN of the service for this service deployment. * **clusterArn** *(string) --* The ARN of the cluster that hosts the service. * **createdAt** *(datetime) --* The time the service deployment was created. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **startedAt** *(datetime) --* The time the service deployment started. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **finishedAt** *(datetime) --* The time the service deployment finished. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **stoppedAt** *(datetime) --* The time the service deployment stopped. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. The service deployment stops when any of the following actions happen: * A user manually stops the deployment * The rollback option is not in use for the failure detection mechanism (the circuit breaker or alarm-based) and the service fails. * **updatedAt** *(datetime) --* The time that the service deployment was last updated. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **sourceServiceRevisions** *(list) --* The currently deployed workload configuration. 
* *(dict) --* The information about the number of requested, pending, and running tasks for a service revision. * **arn** *(string) --* The ARN of the service revision. * **requestedTaskCount** *(integer) --* The number of requested tasks for the service revision. * **runningTaskCount** *(integer) --* The number of running tasks for the service revision. * **pendingTaskCount** *(integer) --* The number of pending tasks for the service revision. * **targetServiceRevision** *(dict) --* The workload configuration being deployed. * **arn** *(string) --* The ARN of the service revision. * **requestedTaskCount** *(integer) --* The number of requested tasks for the service revision. * **runningTaskCount** *(integer) --* The number of running tasks for the service revision. * **pendingTaskCount** *(integer) --* The number of pending tasks for the service revision. * **status** *(string) --* The service deployment state. * **statusReason** *(string) --* Information about why the service deployment is in the current status. For example, the circuit breaker detected a failure. * **lifecycleStage** *(string) --* The current lifecycle stage of the deployment. Possible values include: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. * SCALE_UP The stage when the green service revision scales up to 100% and launches new tasks. The green service revision is not serving any traffic at this point. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. 
The green service revision is migrating from 0% to 100% of test traffic. * POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. * BAKE_TIME The stage when both blue and green service revisions are running simultaneously after the production traffic has shifted. * CLEAN_UP The stage when the blue service revision has completely scaled down to 0 running tasks. The green service revision is now the production service revision after this stage. * **deploymentConfiguration** *(dict) --* Optional deployment parameters that control how many tasks run during a deployment and the ordering of stopping and starting tasks. * **deploymentCircuitBreaker** *(dict) --* Note: The deployment circuit breaker can only be used for services using the rolling update ( "ECS") deployment type. The **deployment circuit breaker** determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the *Amazon Elastic Container Service Developer Guide* * **enable** *(boolean) --* Determines whether to use the deployment circuit breaker logic for the service. * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. 
* **maximumPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "maximumPercent" parameter represents an upper limit on the number of your service's tasks that are allowed in the "RUNNING" or "PENDING" state during a deployment, as a percentage of the "desiredCount" (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the "REPLICA" service scheduler and has a "desiredCount" of four tasks and a "maximumPercent" value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default "maximumPercent" value for a service using the "REPLICA" service scheduler is 200%. The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and tasks in the service use the EC2 launch type, the **maximum percent** value is set to the default value. The **maximum percent** value is used to define the upper limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "maximumPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If the service uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service. 
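The upper limit that "maximumPercent" describes can be computed directly. The sketch below is illustrative arithmetic, not part of the ECS API; the function name is a hypothetical helper:

```python
import math

def max_running_or_pending(desired_count, maximum_percent):
    """Upper limit on tasks in the RUNNING or PENDING state during a
    rolling update: desiredCount * maximumPercent / 100, rounded down
    to the nearest integer."""
    return math.floor(desired_count * maximum_percent / 100)

# The documented example: a desiredCount of 4 with the default
# maximumPercent of 200% allows up to 8 tasks, so the scheduler may
# start four new tasks before stopping the four older ones.
print(max_running_or_pending(4, 200))  # 8
```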
* **minimumHealthyPercent** *(integer) --* If a service is using the rolling update ( "ECS") deployment type, the "minimumHealthyPercent" represents a lower limit on the number of your service's tasks that must remain in the "RUNNING" state during a deployment, as a percentage of the "desiredCount" (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a "desiredCount" of four tasks and a "minimumHealthyPercent" of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. If any tasks are unhealthy and if "maximumPercent" doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the "minimumHealthyPercent" as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services . For services that *do not* use a load balancer, the following should be noted: * A service is considered healthy if all essential containers within the tasks in the service pass their health checks. * If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a "RUNNING" state before the task is counted towards the minimum healthy percent total. * If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings. 
For services that *do* use a load balancer, the following should be noted: * If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. * If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total. The default value for a replica service for "minimumHealthyPercent" is 100%. The default "minimumHealthyPercent" value for a service using the "DAEMON" service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console. The minimum number of healthy tasks during a deployment is the "desiredCount" multiplied by the "minimumHealthyPercent"/100, rounded up to the nearest integer value. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the EC2 launch type, the **minimum healthy percent** value is set to the default value. The **minimum healthy percent** value is used to define the lower limit on the number of the tasks in the service that remain in the "RUNNING" state while the container instances are in the "DRAINING" state. Note: You can't specify a custom "minimumHealthyPercent" value for a service that uses either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green ( "CODE_DEPLOY") or "EXTERNAL" deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service. * **alarms** *(dict) --* Information about the CloudWatch alarms. 
* **alarmNames** *(list) --* One or more CloudWatch alarm names. Use a "," to separate the alarms. * *(string) --* * **rollback** *(boolean) --* Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. * **enable** *(boolean) --* Determines whether to use the CloudWatch alarm option in the service deployment process. * **strategy** *(string) --* The deployment strategy for the service. Choose from these valid values: * "ROLLING" - When you create a service which uses the rolling update ( "ROLLING") deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. * "BLUE_GREEN" - A blue/green deployment strategy ( "BLUE_GREEN") is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. * **bakeTimeInMinutes** *(integer) --* The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted. You must provide this parameter when you use the "BLUE_GREEN" deployment strategy. * **lifecycleHooks** *(list) --* An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle. * *(dict) --* A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets. 
For more information, see Lifecycle hooks for Amazon ECS service deployments in the *Amazon Elastic Container Service Developer Guide*. * **hookTargetArn** *(string) --* The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported. You must provide this parameter when configuring a deployment lifecycle hook. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf. For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the *Amazon Elastic Container Service Developer Guide*. * **lifecycleStages** *(list) --* The lifecycle stages at which to run the hook. Choose from these valid values: * RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage. * PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage. * TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage. * POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage. * PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. 
You can use a lifecycle hook for this stage. * POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage. You must provide this parameter when configuring a deployment lifecycle hook. * *(string) --* * **rollback** *(dict) --* The rollback options the service deployment uses when the deployment fails. * **reason** *(string) --* The reason the rollback happened. For example, the circuit breaker initiated the rollback operation. * **startedAt** *(datetime) --* The time that the rollback started. The format is yyyy-MM-dd HH:mm:ss.SSSSSS. * **serviceRevisionArn** *(string) --* The ARN of the service revision deployed as part of the rollback. * **deploymentCircuitBreaker** *(dict) --* The circuit breaker configuration that determines whether a service deployment failed. * **status** *(string) --* The circuit breaker status. Amazon ECS is not using the circuit breaker for service deployment failures when the status is "DISABLED". * **failureCount** *(integer) --* The number of times the circuit breaker detected a service deployment failure. * **threshold** *(integer) --* The threshold that determines whether the service deployment failed. The deployment circuit breaker calculates the threshold value, and then uses the value to determine when to move the deployment to a FAILED state. The deployment circuit breaker has a minimum threshold of 3 and a maximum threshold of 200, and uses the following formula to determine the deployment failure: "0.5 * desired task count". * **alarms** *(dict) --* The CloudWatch alarms that determine when a service deployment fails. * **status** *(string) --* The status of the alarms check. Amazon ECS is not using alarms for service deployment failures when the status is "DISABLED". * **alarmNames** *(list) --* The name of the CloudWatch alarms that determine when a service deployment failed. A "," separates the alarms. 
* *(string) --* * **triggeredAlarmNames** *(list) --* One or more CloudWatch alarm names that have been triggered during the service deployment. A "," separates the alarm names. * *(string) --* * **failures** *(list) --* Any failures associated with the call. If you describe a deployment with a service revision created before October 25, 2024, the call fails. The failure includes the service revision ARN and the reason set to "MISSING". * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / delete_task_set delete_task_set *************** ECS.Client.delete_task_set(**kwargs) Deletes a specified task set within a service. This is used when a service uses the "EXTERNAL" deployment controller type. For more information, see Amazon ECS deployment types in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.delete_task_set( cluster='string', service='string', taskSet='string', force=True|False ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set is found in. * **service** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the service that hosts the task set to delete. 
* **taskSet** (*string*) -- **[REQUIRED]** The task set ID or full Amazon Resource Name (ARN) of the task set to delete. * **force** (*boolean*) -- If "true", you can delete a task set even if it hasn't been scaled down to zero. Return type: dict Returns: **Response Syntax** { 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } } **Response Structure** * *(dict) --* * **taskSet** *(dict) --* Details about the task set. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. 
* **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. 
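The rounding rule for "computedDesiredCount" described above can be written out directly. This is an illustrative sketch of the documented formula, not ECS code, and the function name is a hypothetical helper:

```python
import math

def computed_desired_count(service_desired_count, scale_percent):
    """computedDesiredCount: the service's desiredCount multiplied by the
    task set's scale percentage, rounded up to the nearest integer."""
    return math.ceil(service_desired_count * scale_percent / 100)

# The documented example: a raw value of 1.2 (e.g. a desiredCount of 3
# at a 40% scale) rounds up to 2 tasks.
print(computed_desired_count(3, 40))  # 2
```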
* **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. * **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that's associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateService APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. 
A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. 
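The "base" and "weight" semantics described above can be modeled roughly as follows. This is an illustrative approximation under simplifying assumptions (base satisfied first, remainder split in proportion to weight), not the actual ECS placement logic, and the provider names are placeholders:

```python
def split_tasks(total_tasks, providers):
    """Approximate base/weight task distribution across capacity providers.

    `providers` is a list of (name, base, weight) tuples. Base counts are
    satisfied first; the remaining tasks are split in proportion to weight.
    """
    counts = {name: min(base, total_tasks) for name, base, _ in providers}
    remaining = total_tasks - sum(counts.values())
    total_weight = sum(w for _, _, w in providers)
    for name, _, weight in providers:
        # Integer share of the remainder; any leftover from rounding is
        # ignored in this sketch.
        counts[name] += remaining * weight // total_weight
    return counts

# The documented 1:4 example: for every task run on capacityProviderA,
# four tasks run on capacityProviderB.
print(split_tasks(10, [('capacityProviderA', 0, 1), ('capacityProviderB', 0, 4)]))
```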
* **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. * **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. 
For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. 
For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. 
* **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". 
* The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". * **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.ServiceNotActiveException" * "ECS.Client.exceptions.TaskSetNotFoundException" ECS / Client / list_services_by_namespace list_services_by_namespace ************************** ECS.Client.list_services_by_namespace(**kwargs) This operation lists all of the services that are associated with a Cloud Map namespace. This list might include services in different clusters. 
In contrast, "ListServices" can only list services in one cluster at a time. If you need to filter the list of services in a single cluster by various parameters, use "ListServices". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.list_services_by_namespace( namespace='string', nextToken='string', maxResults=123 ) Parameters: * **namespace** (*string*) -- **[REQUIRED]** The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace to list the services in. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **nextToken** (*string*) -- The "nextToken" value that's returned from a "ListServicesByNamespace" request. It indicates that more results are available to fulfill the request and further calls are needed. If "maxResults" is returned, it is possible the number of results is less than "maxResults". * **maxResults** (*integer*) -- The maximum number of service results that "ListServicesByNamespace" returns in paginated output. When this parameter is used, "ListServicesByNamespace" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListServicesByNamespace" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListServicesByNamespace" returns up to 10 results and a "nextToken" value if applicable. 
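The "nextToken" contract described above can be driven with a plain loop. This is a sketch with an assumed helper name; it works with any object exposing "list_services_by_namespace", such as a boto3 ECS client:

```python
def collect_service_arns(client, namespace):
    """Follow nextToken until the service list for a Cloud Map
    namespace is exhausted (illustrative pagination loop)."""
    arns, token = [], None
    while True:
        kwargs = {"namespace": namespace, "maxResults": 100}
        if token:
            kwargs["nextToken"] = token
        page = client.list_services_by_namespace(**kwargs)
        arns.extend(page.get("serviceArns", []))
        token = page.get("nextToken")
        if not token:  # an absent/null nextToken means no more pages
            return arns
```

In practice, "client.get_paginator('list_services_by_namespace')" expresses the same loop; the manual version just makes the "nextToken" handoff explicit.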
Return type: dict Returns: **Response Syntax** { 'serviceArns': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **serviceArns** *(list) --* The list of full ARN entries for each service that's associated with the specified namespace. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListServicesByNamespace" request. When the results of a "ListServicesByNamespace" request exceed "maxResults", this value can be used to retrieve the next page of results. When there are no more results to return, this value is "null". **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.NamespaceNotFoundException" ECS / Client / delete_capacity_provider delete_capacity_provider ************************ ECS.Client.delete_capacity_provider(**kwargs) Deletes the specified capacity provider. Note: The "FARGATE" and "FARGATE_SPOT" capacity providers are reserved and can't be deleted. You can disassociate them from a cluster either by using PutClusterCapacityProviders or by deleting the cluster. Prior to a capacity provider being deleted, the capacity provider must be removed from the capacity provider strategy of all services. The UpdateService API can be used to remove a capacity provider from a service's capacity provider strategy. When updating a service, the "forceNewDeployment" option can be used to ensure that any tasks using the Amazon EC2 instance capacity provided by the capacity provider are transitioned to use the capacity from the remaining capacity providers. Only capacity providers that aren't associated with a cluster can be deleted. To remove a capacity provider from a cluster, you can either use PutClusterCapacityProviders or delete the cluster. 
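The teardown order described above can be sketched as follows. The helper name and flow are our own; the calls ("describe_services", "update_service", "delete_capacity_provider") are real boto3 ECS client operations:

```python
def retire_capacity_provider(ecs, cluster, services, provider):
    """Sketch of the documented teardown order: remove the provider
    from each service's strategy (forcing a new deployment so tasks
    migrate to the remaining providers), then delete the provider."""
    for service in services:
        desc = ecs.describe_services(cluster=cluster, services=[service])
        strategy = desc["services"][0].get("capacityProviderStrategy", [])
        trimmed = [s for s in strategy if s["capacityProvider"] != provider]
        ecs.update_service(
            cluster=cluster,
            service=service,
            capacityProviderStrategy=trimmed,
            forceNewDeployment=True,  # transition tasks off the old provider
        )
    return ecs.delete_capacity_provider(capacityProvider=provider)
```

A real teardown must also disassociate the provider from the cluster (via PutClusterCapacityProviders, or by deleting the cluster) before the delete succeeds, which this sketch omits.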
See also: AWS API Documentation **Request Syntax** response = client.delete_capacity_provider( capacityProvider='string' ) Parameters: **capacityProvider** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the capacity provider to delete. Return type: dict Returns: **Response Syntax** { 'capacityProvider': { 'capacityProviderArn': 'string', 'name': 'string', 'status': 'ACTIVE'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ] } } **Response Structure** * *(dict) --* * **capacityProvider** *(dict) --* The details of the capacity provider. * **capacityProviderArn** *(string) --* The Amazon Resource Name (ARN) that identifies the capacity provider. * **name** *(string) --* The name of the capacity provider. * **status** *(string) --* The current status of the capacity provider. Only capacity providers in an "ACTIVE" state can be used in a cluster. When a capacity provider is successfully deleted, it has an "INACTIVE" status. * **autoScalingGroupProvider** *(dict) --* The Auto Scaling group settings for the capacity provider. * **autoScalingGroupArn** *(string) --* The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name. * **managedScaling** *(dict) --* The managed scaling settings for the Auto Scaling group capacity provider. * **status** *(string) --* Determines whether to use managed scaling for the capacity provider. 
* **targetCapacity** *(integer) --* The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than "0" and less than or equal to "100". For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a "targetCapacity" of "90". The default value of "100" percent results in the Amazon EC2 instances in your Auto Scaling group being completely used. * **minimumScalingStepSize** *(integer) --* The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of "1" is used. When additional capacity is required, Amazon ECS will scale up by at least the minimum scaling step size, even if the actual demand is less than the minimum scaling step size. If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand. * **maximumScalingStepSize** *(integer) --* The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of "10000" is used. * **instanceWarmupPeriod** *(integer) --* The period of time, in seconds, after which a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of "300" seconds is used. * **managedTerminationProtection** *(string) --* The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off. 
Warning: When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work. When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the *Auto Scaling User Guide*. When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in. * **managedDraining** *(string) --* The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider. * **updateStatus** *(string) --* The update status of the capacity provider. The following are the possible states that are returned. DELETE_IN_PROGRESS The capacity provider is in the process of being deleted. DELETE_COMPLETE The capacity provider was successfully deleted and has an "INACTIVE" status. DELETE_FAILED The capacity provider can't be deleted. The update status reason provides further details about why the delete failed. * **updateStatusReason** *(string) --* The update status reason. This provides further details about the update status for the capacity provider. * **tags** *(list) --* The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. 
* **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / update_container_instances_state update_container_instances_state ******************************** ECS.Client.update_container_instances_state(**kwargs) Modifies the status of an Amazon ECS container instance. Once a container instance has reached an "ACTIVE" state, you can change the status of a container instance to "DRAINING" to manually remove an instance from a cluster, for example to perform system updates, update the Docker daemon, or scale down the cluster size. Warning: A container instance can't be changed to "DRAINING" until it has reached an "ACTIVE" status. If the instance is in any other status, an error will be received. When you set a container instance to "DRAINING", Amazon ECS prevents new tasks from being scheduled for placement on the container instance and replacement service tasks are started on other container instances in the cluster if the resources are available. Service tasks on the container instance that are in the "PENDING" state are stopped immediately. Service tasks on the container instance that are in the "RUNNING" state are stopped and replaced according to the service's deployment configuration parameters, "minimumHealthyPercent" and "maximumPercent". You can change the deployment configuration of your service using UpdateService. * If "minimumHealthyPercent" is below 100%, the scheduler can ignore "desiredCount" temporarily during task replacement. For example, if "desiredCount" is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. If the minimum is 100%, the service scheduler can't remove existing tasks until the replacement tasks are considered healthy. 
Tasks for services that do not use a load balancer are considered healthy if they're in the "RUNNING" state. Tasks for services that use a load balancer are considered healthy if they're in the "RUNNING" state and are reported as healthy by the load balancer. * The "maximumPercent" parameter represents an upper limit on the number of running tasks during task replacement. You can use this to define the replacement batch size. For example, if "desiredCount" is four tasks, a maximum of 200% starts four new tasks before stopping the four tasks to be drained, provided that the cluster resources required to do this are available. If the maximum is 100%, then replacement tasks can't start until the draining tasks have stopped. Any "PENDING" or "RUNNING" tasks that do not belong to a service aren't affected. You must wait for them to finish or stop them manually. A container instance has completed draining when it has no more "RUNNING" tasks. You can verify this using ListTasks. When a container instance has been drained, you can set a container instance to "ACTIVE" status and once it has reached that status the Amazon ECS scheduler can begin scheduling tasks on the instance again. See also: AWS API Documentation **Request Syntax** response = client.update_container_instances_state( cluster='string', containerInstances=[ 'string', ], status='ACTIVE'|'DRAINING'|'REGISTERING'|'DEREGISTERING'|'REGISTRATION_FAILED' ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the container instance to update. If you do not specify a cluster, the default cluster is assumed. * **containerInstances** (*list*) -- **[REQUIRED]** A list of up to 10 container instance IDs or full ARN entries. * *(string) --* * **status** (*string*) -- **[REQUIRED]** The container instance state to update the container instance with. The only valid values for this action are "ACTIVE" and "DRAINING". 
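The "minimumHealthyPercent" and "maximumPercent" arithmetic above can be checked directly. This sketch of the documented bounds uses an assumed function name and simple integer flooring:

```python
def replacement_window(desired_count, minimum_healthy_percent, maximum_percent):
    """Running-task window the scheduler may use while a container
    instance drains, per the documented deploymentConfiguration rules
    (illustrative arithmetic only, not the actual scheduler)."""
    lower = desired_count * minimum_healthy_percent // 100  # floor of min healthy
    upper = desired_count * maximum_percent // 100          # replacement ceiling
    return lower, upper

# desiredCount=4, min 50%, max 200%: may stop down to 2, start up to 8
replacement_window(4, 50, 200)   # -> (2, 8)
# min and max both 100%: replacements must be strictly one-for-one
replacement_window(4, 100, 100)  # -> (4, 4)
```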
A container instance can only be updated to "DRAINING" status once it has reached an "ACTIVE" state. If a container instance is in "REGISTERING", "DEREGISTERING", or "REGISTRATION_FAILED" state you can describe the container instance but can't update the container instance state. Return type: dict Returns: **Response Syntax** { 'containerInstances': [ { 'containerInstanceArn': 'string', 'ec2InstanceId': 'string', 'capacityProviderName': 'string', 'version': 123, 'versionInfo': { 'agentVersion': 'string', 'agentHash': 'string', 'dockerVersion': 'string' }, 'remainingResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'registeredResources': [ { 'name': 'string', 'type': 'string', 'doubleValue': 123.0, 'longValue': 123, 'integerValue': 123, 'stringSetValue': [ 'string', ] }, ], 'status': 'string', 'statusReason': 'string', 'agentConnected': True|False, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED', 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'registeredAt': datetime(2015, 1, 1), 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'healthStatus': { 'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'details': [ { 'type': 'CONTAINER_RUNTIME', 'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING', 'lastUpdated': datetime(2015, 1, 1), 'lastStatusChange': datetime(2015, 1, 1) }, ] } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **containerInstances** *(list) --* The list of container instances. 
* *(dict) --* An Amazon EC2 or External instance that's running the Amazon ECS agent and has been registered with a cluster. * **containerInstanceArn** *(string) --* The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **ec2InstanceId** *(string) --* The ID of the container instance. For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID. * **capacityProviderName** *(string) --* The capacity provider that's associated with the container instance. * **version** *(integer) --* The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the "detail" object) to verify that the version in your event stream is current. * **versionInfo** *(dict) --* The version information for the Amazon ECS container agent and Docker daemon running on the container instance. * **agentVersion** *(string) --* The version number of the Amazon ECS container agent. * **agentHash** *(string) --* The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository. * **dockerVersion** *(string) --* The Docker version that's running on the container instance. * **remainingResources** *(list) --* For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. 
For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the "host" or "bridge" network mode). Any port that's not specified here is available for new tasks. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". * **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **registeredResources** *(list) --* For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS. * *(dict) --* Describes the resources available for a container instance. * **name** *(string) --* The name of the resource, such as "CPU", "MEMORY", "PORTS", "PORTS_UDP", or a user-defined resource. * **type** *(string) --* The type of the resource. Valid values: "INTEGER", "DOUBLE", "LONG", or "STRINGSET". 
* **doubleValue** *(float) --* When the "doubleValue" type is set, the value of the resource must be a double precision floating-point type. * **longValue** *(integer) --* When the "longValue" type is set, the value of the resource must be an extended precision floating-point type. * **integerValue** *(integer) --* When the "integerValue" type is set, the value of the resource must be an integer. * **stringSetValue** *(list) --* When the "stringSetValue" type is set, the value of the resource must be a string type. * *(string) --* * **status** *(string) --* The status of the container instance. The valid values are "REGISTERING", "REGISTRATION_FAILED", "ACTIVE", "INACTIVE", "DEREGISTERING", or "DRAINING". If your account has opted in to the "awsvpcTrunking" account setting, then any newly registered container instance will transition to a "REGISTERING" status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a "REGISTRATION_FAILED" status. You can describe the container instance and see the reason for failure in the "statusReason" parameter. Once the container instance is terminated, the instance transitions to a "DEREGISTERING" status while the trunk elastic network interface is deprovisioned. The instance then transitions to an "INACTIVE" status. The "ACTIVE" status indicates that the container instance can accept tasks. The "DRAINING" status indicates that new tasks aren't placed on the container instance and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the *Amazon Elastic Container Service Developer Guide*. * **statusReason** *(string) --* The reason that the container instance reached its current status. * **agentConnected** *(boolean) --* This parameter returns "true" if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns "false". 
Only instances connected to an agent can accept task placement requests. * **runningTasksCount** *(integer) --* The number of tasks on the container instance that have a desired status ( "desiredStatus") of "RUNNING". * **pendingTasksCount** *(integer) --* The number of tasks on the container instance that are in the "PENDING" status. * **agentUpdateStatus** *(string) --* The status of the most recent agent update. If an update wasn't ever requested, this value is "NULL". * **attributes** *(list) --* The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **registeredAt** *(datetime) --* The Unix timestamp for the time when the container instance was registered. 
* **attachments** *(list) --* The resources attached to a container instance, such as an elastic network interface. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **healthStatus** *(dict) --* An object representing the health status of the container instance. 
* **overallStatus** *(string) --* The overall health status of the container instance. This is an aggregate status of all container instance health checks. * **details** *(list) --* An array of objects representing the details of the container instance health status. * *(dict) --* An object representing the result of a container instance health status check. * **type** *(string) --* The type of container instance health status that was verified. * **status** *(string) --* The container instance health status. * **lastUpdated** *(datetime) --* The Unix timestamp for when the container instance health status was last updated. * **lastStatusChange** *(datetime) --* The Unix timestamp for when the container instance health status last changed. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" ECS / Client / list_tasks list_tasks ********** ECS.Client.list_tasks(**kwargs) Returns a list of tasks. You can filter the results by cluster, task definition family, container instance, launch type, what IAM principal started the task, or by the desired status of the task. Recently stopped tasks might appear in the returned results. 
See also: AWS API Documentation **Request Syntax** response = client.list_tasks( cluster='string', containerInstance='string', family='string', nextToken='string', maxResults=123, startedBy='string', serviceName='string', desiredStatus='RUNNING'|'PENDING'|'STOPPED', launchType='EC2'|'FARGATE'|'EXTERNAL' ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to use when filtering the "ListTasks" results. If you do not specify a cluster, the default cluster is assumed. * **containerInstance** (*string*) -- The container instance ID or full ARN of the container instance to use when filtering the "ListTasks" results. Specifying a "containerInstance" limits the results to tasks that belong to that container instance. * **family** (*string*) -- The name of the task definition family to use when filtering the "ListTasks" results. Specifying a "family" limits the results to tasks that belong to that family. * **nextToken** (*string*) -- The "nextToken" value returned from a "ListTasks" request indicating that more results are available to fulfill the request and further calls will be needed. If "maxResults" was provided, it's possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of task results that "ListTasks" returns in paginated output. When this parameter is used, "ListTasks" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListTasks" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListTasks" returns up to 100 results and a "nextToken" value if applicable. 
* **startedBy** (*string*) -- The "startedBy" value to filter the task results with. Specifying a "startedBy" value limits the results to tasks that were started with that value. When you specify "startedBy" as the filter, it must be the only filter that you use. * **serviceName** (*string*) -- The name of the service to use when filtering the "ListTasks" results. Specifying a "serviceName" limits the results to tasks that belong to that service. * **desiredStatus** (*string*) -- The task desired status to use when filtering the "ListTasks" results. Specifying a "desiredStatus" of "STOPPED" limits the results to tasks for which Amazon ECS has set the desired status to "STOPPED". This can be useful for debugging tasks that aren't starting properly or have died or finished. The default status filter is "RUNNING", which shows tasks for which Amazon ECS has set the desired status to "RUNNING". Note: Although you can filter results based on a desired status of "PENDING", this doesn't return any results. Amazon ECS never sets the desired status of a task to that value (only a task's "lastStatus" may have a value of "PENDING"). * **launchType** (*string*) -- The launch type to use when filtering the "ListTasks" results. Return type: dict Returns: **Response Syntax** { 'taskArns': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **taskArns** *(list) --* The list of task ARN entries for the "ListTasks" request. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListTasks" request. When the results of a "ListTasks" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. 
**Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ServiceNotFoundException" **Examples** This example lists all of the tasks in a cluster. response = client.list_tasks( cluster='default', ) print(response) Expected Output: { 'taskArns': [ 'arn:aws:ecs:us-east-1:012345678910:task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84', 'arn:aws:ecs:us-east-1:012345678910:task/6b809ef6-c67e-4467-921f-ee261c15a0a1', ], 'ResponseMetadata': { '...': '...', }, } This example lists the tasks of a specified container instance. Specifying a "containerInstance" value limits the results to tasks that belong to that container instance. response = client.list_tasks( cluster='default', containerInstance='f6bbb147-5370-4ace-8c73-c7181ded911f', ) print(response) Expected Output: { 'taskArns': [ 'arn:aws:ecs:us-east-1:012345678910:task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84', ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / submit_container_state_change submit_container_state_change ***************************** ECS.Client.submit_container_state_change(**kwargs) Note: This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent. Sent to acknowledge that a container changed states. See also: AWS API Documentation **Request Syntax** response = client.submit_container_state_change( cluster='string', task='string', containerName='string', runtimeId='string', status='string', exitCode=123, reason='string', networkBindings=[ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ] ) Parameters: * **cluster** (*string*) -- The short name or full ARN of the cluster that hosts the container. 
* **task** (*string*) -- The task ID or full Amazon Resource Name (ARN) of the task that hosts the container. * **containerName** (*string*) -- The name of the container. * **runtimeId** (*string*) -- The ID of the Docker container. * **status** (*string*) -- The status of the state change request. * **exitCode** (*integer*) -- The exit code that's returned for the state change request. * **reason** (*string*) -- The reason for the state change request. * **networkBindings** (*list*) -- The network bindings of the container. * *(dict) --* Details on the network bindings between a container and its host container instance. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **bindIP** *(string) --* The IP address that the container is bound to on the container instance. * **containerPort** *(integer) --* The port number on the container that's used with the network binding. * **hostPort** *(integer) --* The port number on the host that's used with the network binding. * **protocol** *(string) --* The protocol used for the network binding. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". 
The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which lists the host ports that are bound to the container ports. * **hostPortRange** *(string) --* The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent. Return type: dict Returns: **Response Syntax** { 'acknowledgment': 'string' } **Response Structure** * *(dict) --* * **acknowledgment** *(string) --* Acknowledgement of the state change. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.AccessDeniedException" ECS / Client / put_account_setting_default put_account_setting_default *************************** ECS.Client.put_account_setting_default(**kwargs) Modifies an account setting for all users on an account for whom no individual account setting has been specified. Account settings are set on a per-Region basis. 
See also: AWS API Documentation **Request Syntax** response = client.put_account_setting_default( name='serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', value='string' ) Parameters: * **name** (*string*) -- **[REQUIRED]** The resource name for which to modify the account setting. The following are the valid values for the account setting name. * "serviceLongArnFormat" - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging. * "taskLongArnFormat" - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging. * "containerInstanceLongArnFormat" - When modified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt- in status of the user or role that created the resource. 
You must turn on this setting to use Amazon ECS features such as resource tagging. * "awsvpcTrunking" - When modified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If "awsvpcTrunking" is turned on, any new container instances that support the feature are launched with the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the *Amazon Elastic Container Service Developer Guide*. * "containerInsights" - Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting of observability set-up. To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * "dualStackIPv6" - When turned on, when using a VPC in dual stack mode, your tasks using the "awsvpc" network mode can have an IPv6 address assigned. For more information on using IPv6 with tasks launched on Amazon EC2 instances, see Using a VPC in dual-stack mode. For more information on using IPv6 with tasks launched on Fargate, see Using a VPC in dual-stack mode. * "fargateFIPSMode" - If you specify "fargateFIPSMode", Fargate FIPS 140 compliance is affected. 
* "fargateTaskRetirementWaitPeriod" - When Amazon Web Services determines that a security or infrastructure update is needed for an Amazon ECS task hosted on Fargate, the tasks need to be stopped and new tasks launched to replace them. Use "fargateTaskRetirementWaitPeriod" to configure the wait time to retire a Fargate task. For information about Fargate task maintenance, see Amazon Web Services Fargate task maintenance in the *Amazon ECS Developer Guide*. * "tagResourceAuthorization" - Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as "ecs:CreateCluster". If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the "ecs:TagResource" action. For more information, see Grant permission to tag resources on creation in the *Amazon ECS Developer Guide*. * "defaultLogDriverMode" - Amazon ECS supports setting a default delivery mode of log messages from a container to the "logDriver" that you specify in the container's "logConfiguration". The delivery mode affects application stability when the flow of logs from the container to the log driver is interrupted. The "defaultLogDriverMode" setting supports two values: "blocking" and "non-blocking". If you don't specify a delivery mode in your container definition's "logConfiguration", the mode you specify using this account setting will be used as the default. For more information about log delivery modes, see LogConfiguration. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". 
* Set the "defaultLogDriverMode" account setting to "blocking". * "guardDutyActivate" - The "guardDutyActivate" parameter is read-only in Amazon ECS and indicates whether Amazon ECS Runtime Monitoring is enabled or disabled by your security administrator in your Amazon ECS account. Amazon GuardDuty controls this account setting on your behalf. For more information, see Protecting Amazon ECS workloads with Amazon ECS Runtime Monitoring. * **value** (*string*) -- **[REQUIRED]** The account setting value for the specified principal ARN. Accepted values are "enabled", "disabled", "on", "enhanced", and "off". When you specify "fargateTaskRetirementWaitPeriod" for the "name", the following are the valid values: * "0" - Amazon Web Services sends the notification, and immediately retires the affected tasks. * "7" - Amazon Web Services sends the notification, and waits 7 calendar days to retire the tasks. * "14" - Amazon Web Services sends the notification, and waits 14 calendar days to retire the tasks. Return type: dict Returns: **Response Syntax** { 'setting': { 'name': 'serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights'|'fargateFIPSMode'|'tagResourceAuthorization'|'fargateTaskRetirementWaitPeriod'|'guardDutyActivate'|'defaultLogDriverMode', 'value': 'string', 'principalArn': 'string', 'type': 'user'|'aws_managed' } } **Response Structure** * *(dict) --* * **setting** *(dict) --* The current setting for a resource. * **name** *(string) --* The Amazon ECS resource name. * **value** *(string) --* Determines whether the account setting is on or off for the specified resource. * **principalArn** *(string) --* The ARN of the principal. It can be a user, role, or the root user. If this field is omitted, the authenticated user is assumed. * **type** *(string) --* Indicates whether Amazon Web Services manages the account setting, or if the user manages it. 
"aws_managed" account settings are read-only, as Amazon Web Services manages them on the customer's behalf. Currently, the "guardDutyActivate" account setting is the only one Amazon Web Services manages. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example modifies the default account setting for the specified resource for all IAM users or roles on an account. These changes apply to the entire AWS account, unless an IAM user or role explicitly overrides these settings for themselves. response = client.put_account_setting_default( name='serviceLongArnFormat', value='enabled', ) print(response) Expected Output: { 'setting': { 'name': 'serviceLongArnFormat', 'value': 'enabled', 'principalArn': 'arn:aws:iam:::root', }, 'ResponseMetadata': { '...': '...', }, } ECS / Client / discover_poll_endpoint discover_poll_endpoint ********************** ECS.Client.discover_poll_endpoint(**kwargs) Note: This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent. Returns an endpoint for the Amazon ECS agent to poll for updates. See also: AWS API Documentation **Request Syntax** response = client.discover_poll_endpoint( containerInstance='string', cluster='string' ) Parameters: * **containerInstance** (*string*) -- The container instance ID or full ARN of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that the container instance belongs to. Return type: dict Returns: **Response Syntax** { 'endpoint': 'string', 'telemetryEndpoint': 'string', 'serviceConnectEndpoint': 'string' } **Response Structure** * *(dict) --* * **endpoint** *(string) --* The endpoint for the Amazon ECS agent to poll. 
* **telemetryEndpoint** *(string) --* The telemetry endpoint for the Amazon ECS agent. * **serviceConnectEndpoint** *(string) --* The endpoint for the Amazon ECS agent to poll for Service Connect configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" ECS / Client / update_service_primary_task_set update_service_primary_task_set ******************************* ECS.Client.update_service_primary_task_set(**kwargs) Modifies which task set in a service is the primary task set. Any parameters that are updated on the primary task set in a service will transition to the service. This is used when a service uses the "EXTERNAL" deployment controller type. For more information, see Amazon ECS Deployment Types in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.update_service_primary_task_set( cluster='string', service='string', primaryTaskSet='string' ) Parameters: * **cluster** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set exists in. * **service** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the service that the task set exists in. * **primaryTaskSet** (*string*) -- **[REQUIRED]** The short name or full Amazon Resource Name (ARN) of the task set to set as the primary task set in the deployment. 
Return type: dict Returns: **Response Syntax** { 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } } **Response Structure** * *(dict) --* * **taskSet** *(dict) --* The details about the task set. * **id** *(string) --* The ID of the task set. * **taskSetArn** *(string) --* The Amazon Resource Name (ARN) of the task set. * **serviceArn** *(string) --* The Amazon Resource Name (ARN) of the service the task set exists in. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. * **startedBy** *(string) --* The tag specified when a task set is started. 
If a CodeDeploy deployment created the task set, the "startedBy" parameter is "CODE_DEPLOY". If an external deployment created the task set, the "startedBy" field isn't used. * **externalId** *(string) --* The external ID associated with the task set. If a CodeDeploy deployment created a task set, the "externalId" parameter contains the CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the "externalId" parameter contains the "ECS_TASK_SET_EXTERNAL_ID" Cloud Map attribute. * **status** *(string) --* The status of the task set. The following describes each state. PRIMARY The task set is serving production traffic. ACTIVE The task set isn't serving production traffic. DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. * **taskDefinition** *(string) --* The task definition that the task set is using. * **computedDesiredCount** *(integer) --* The computed desired count for the task set. This is calculated by multiplying the service's "desiredCount" by the task set's "scale" percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. * **pendingCount** *(integer) --* The number of tasks in the task set that are in the "PENDING" status during a deployment. A task in the "PENDING" state is preparing to enter the "RUNNING" state. A task set enters the "PENDING" status when it launches for the first time or when it's restarted after being in the "STOPPED" state. * **runningCount** *(integer) --* The number of tasks in the task set that are in the "RUNNING" status during a deployment. A task in the "RUNNING" state is running and ready for use. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task set was created. * **updatedAt** *(datetime) --* The Unix timestamp for the time when the task set was last updated. 
* **launchType** *(string) --* The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **capacityProviderStrategy** *(list) --* The capacity provider strategy that is associated with the task set. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. 
* **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **platformVersion** *(string) --* The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. 
* **networkConfiguration** *(dict) --* The network configuration for the task set. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **loadBalancers** *(list) --* Details on a load balancer that are used with a task set. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. 
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. * **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. 
Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The details for the service discovery registries to assign to this task set. For more information, see Service discovery. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. * **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. 
If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **scale** *(dict) --* A floating-point percentage of your desired number of tasks to place and keep running in the task set. * **value** *(float) --* The value, specified as a percent total of a service's "desiredCount", to scale the task set. Accepted values are numbers between 0 and 100. * **unit** *(string) --* The unit of measure for the scale value. * **stabilityStatus** *(string) --* The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in "STEADY_STATE": * The task "runningCount" is equal to the "computedDesiredCount". * The "pendingCount" is "0". * There are no tasks that are running on container instances in the "DRAINING" status. * All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns "STABILIZING". 
* **stabilityStatusAt** *(datetime) --* The Unix timestamp for the time when the task set stability status was retrieved. * **tags** *(list) --* The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task set. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.ServiceNotActiveException" * "ECS.Client.exceptions.TaskSetNotFoundException" * "ECS.Client.exceptions.AccessDeniedException" ECS / Client / list_attributes list_attributes *************** ECS.Client.list_attributes(**kwargs) Lists the attributes for Amazon ECS resources within a specified target type and cluster. When you specify a target type and cluster, "ListAttributes" returns a list of attribute objects, one for each attribute on each resource. You can filter the list of results to a single attribute name to only return results that have that name. You can also filter the results by attribute name and value. You can do this, for example, to see which container instances in a cluster are running a Linux AMI ( "ecs.os-type=linux"). 
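The attribute filters described above compose simply: "attributeName" narrows results to one attribute, and "attributeValue" further narrows by value but is only valid alongside "attributeName". A minimal sketch of a helper that builds the "list_attributes" keyword arguments and enforces that rule (the function name and the cluster name in the usage comment are hypothetical, not part of the API):

```python
def attribute_filter_kwargs(cluster, target_type="container-instance",
                            attribute_name=None, attribute_value=None):
    """Build keyword arguments for ECS list_attributes.

    attributeValue is only valid together with attributeName,
    so that combination is validated up front.
    """
    if attribute_value is not None and attribute_name is None:
        raise ValueError("attributeValue requires attributeName")
    kwargs = {"cluster": cluster, "targetType": target_type}
    if attribute_name is not None:
        kwargs["attributeName"] = attribute_name
    if attribute_value is not None:
        kwargs["attributeValue"] = attribute_value
    return kwargs

# Real usage (cluster name is hypothetical):
#   import boto3
#   ecs = boto3.client("ecs")
#   resp = ecs.list_attributes(**attribute_filter_kwargs(
#       "my-cluster", attribute_name="ecs.os-type", attribute_value="linux"))
```

This mirrors the "ecs.os-type=linux" example from the description: filtering by both name and value returns only container instances running a Linux AMI.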
See also: AWS API Documentation **Request Syntax** response = client.list_attributes( cluster='string', targetType='container-instance', attributeName='string', attributeValue='string', nextToken='string', maxResults=123 ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster to list attributes. If you do not specify a cluster, the default cluster is assumed. * **targetType** (*string*) -- **[REQUIRED]** The type of the target to list attributes with. * **attributeName** (*string*) -- The name of the attribute to filter the results with. * **attributeValue** (*string*) -- The value of the attribute to filter results with. You must also specify an attribute name to use this parameter. * **nextToken** (*string*) -- The "nextToken" value returned from a "ListAttributes" request indicating that more results are available to fulfill the request and further calls are needed. If "maxResults" was provided, it's possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of cluster results that "ListAttributes" returned in paginated output. When this parameter is used, "ListAttributes" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListAttributes" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListAttributes" returns up to 100 results and a "nextToken" value if applicable. 
Return type: dict Returns: **Response Syntax** { 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **attributes** *(list) --* A list of attribute objects that meet the criteria of the request. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **nextToken** *(string) --* The "nextToken" value to include in a future "ListAttributes" request. When the results of a "ListAttributes" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. 
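The "nextToken" flow described above follows the same pattern for every ECS list operation: repeat the call with the returned token until no token comes back. A minimal sketch of that loop (the helper name is hypothetical; the client is passed in, so in real use you would pass "boto3.client('ecs')"):

```python
def list_all_attributes(client, cluster="default",
                        target_type="container-instance"):
    """Collect every attribute page by following nextToken.

    `client` is any object exposing a list_attributes method;
    pass boto3.client("ecs") in real use.
    """
    attributes = []
    kwargs = {"cluster": cluster, "targetType": target_type}
    while True:
        page = client.list_attributes(**kwargs)
        attributes.extend(page.get("attributes", []))
        token = page.get("nextToken")
        if not token:  # a missing/null token means no more pages
            return attributes
        kwargs["nextToken"] = token
```

Note that boto3 also provides "client.get_paginator('list_attributes')", which implements this loop for you, as described in the paginators user guide.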
**Exceptions** * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / list_task_definition_families list_task_definition_families ***************************** ECS.Client.list_task_definition_families(**kwargs) Returns a list of task definition families that are registered to your account. This list includes task definition families that no longer have any "ACTIVE" task definition revisions. You can filter out task definition families that don't contain any "ACTIVE" task definition revisions by setting the "status" parameter to "ACTIVE". You can also filter the results with the "familyPrefix" parameter. See also: AWS API Documentation **Request Syntax** response = client.list_task_definition_families( familyPrefix='string', status='ACTIVE'|'INACTIVE'|'ALL', nextToken='string', maxResults=123 ) Parameters: * **familyPrefix** (*string*) -- The "familyPrefix" is a string that's used to filter the results of "ListTaskDefinitionFamilies". If you specify a "familyPrefix", only task definition family names that begin with the "familyPrefix" string are returned. * **status** (*string*) -- The task definition family status to filter the "ListTaskDefinitionFamilies" results with. By default, both "ACTIVE" and "INACTIVE" task definition families are listed. If this parameter is set to "ACTIVE", only task definition families that have an "ACTIVE" task definition revision are returned. If this parameter is set to "INACTIVE", only task definition families that do not have any "ACTIVE" task definition revisions are returned. If you paginate the resulting output, be sure to keep the "status" value constant in each subsequent request. * **nextToken** (*string*) -- The "nextToken" value returned from a "ListTaskDefinitionFamilies" request indicating that more results are available to fulfill the request and further calls will be needed. 
If "maxResults" was provided, it is possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of task definition family results that "ListTaskDefinitionFamilies" returned in paginated output. When this parameter is used, "ListTaskDefinitionFamilies" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListTaskDefinitionFamilies" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListTaskDefinitionFamilies" returns up to 100 results and a "nextToken" value if applicable. Return type: dict Returns: **Response Syntax** { 'families': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **families** *(list) --* The list of task definition family names that match the "ListTaskDefinitionFamilies" request. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListTaskDefinitionFamilies" request. When the results of a "ListTaskDefinitionFamilies" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example lists all of your registered task definition families. response = client.list_task_definition_families( ) print(response) Expected Output: { 'families': [ 'node-js-app', 'web-timer', 'hpcc', 'hpcc-c4-8xlarge', ], 'ResponseMetadata': { '...': '...', }, } This example lists the task definition families that start with "hpcc". 
response = client.list_task_definition_families( familyPrefix='hpcc', ) print(response) Expected Output: { 'families': [ 'hpcc', 'hpcc-c4-8xlarge', ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / close close ***** ECS.Client.close() Closes underlying endpoint connections. ECS / Client / list_task_definitions list_task_definitions ********************* ECS.Client.list_task_definitions(**kwargs) Returns a list of task definitions that are registered to your account. You can filter the results by family name with the "familyPrefix" parameter or by status with the "status" parameter. See also: AWS API Documentation **Request Syntax** response = client.list_task_definitions( familyPrefix='string', status='ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', sort='ASC'|'DESC', nextToken='string', maxResults=123 ) Parameters: * **familyPrefix** (*string*) -- The full family name to filter the "ListTaskDefinitions" results with. Specifying a "familyPrefix" limits the listed task definitions to task definition revisions that belong to that family. * **status** (*string*) -- The task definition status to filter the "ListTaskDefinitions" results with. By default, only "ACTIVE" task definitions are listed. By setting this parameter to "INACTIVE", you can view task definitions that are "INACTIVE" as long as an active task or service still references them. If you paginate the resulting output, be sure to keep the "status" value constant in each subsequent request. * **sort** (*string*) -- The order to sort the results in. Valid values are "ASC" and "DESC". By default, ( "ASC") task definitions are listed lexicographically by family name and in ascending numerical order by revision so that the newest task definitions in a family are listed last. Setting this parameter to "DESC" reverses the sort order on family name and revision. This is so that the newest task definitions in a family are listed first. 
* **nextToken** (*string*) -- The "nextToken" value returned from a "ListTaskDefinitions" request indicating that more results are available to fulfill the request and further calls will be needed. If "maxResults" was provided, it is possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of task definition results that "ListTaskDefinitions" returned in paginated output. When this parameter is used, "ListTaskDefinitions" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListTaskDefinitions" request with the returned "nextToken" value. This value can be between 1 and 100. If this parameter isn't used, then "ListTaskDefinitions" returns up to 100 results and a "nextToken" value if applicable. Return type: dict Returns: **Response Syntax** { 'taskDefinitionArns': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **taskDefinitionArns** *(list) --* The list of task definition Amazon Resource Name (ARN) entries for the "ListTaskDefinitions" request. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListTaskDefinitions" request. When the results of a "ListTaskDefinitions" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example lists all of your registered task definitions. 
response = client.list_task_definitions( ) print(response) Expected Output: { 'taskDefinitionArns': [ 'arn:aws:ecs:us-east-1::task-definition/sleep300:2', 'arn:aws:ecs:us-east-1::task-definition/sleep360:1', 'arn:aws:ecs:us-east-1::task-definition/wordpress:3', 'arn:aws:ecs:us-east-1::task-definition/wordpress:4', 'arn:aws:ecs:us-east-1::task-definition/wordpress:5', 'arn:aws:ecs:us-east-1::task-definition/wordpress:6', ], 'ResponseMetadata': { '...': '...', }, } This example lists the task definition revisions of a specified family. response = client.list_task_definitions( familyPrefix='wordpress', ) print(response) Expected Output: { 'taskDefinitionArns': [ 'arn:aws:ecs:us-east-1::task-definition/wordpress:3', 'arn:aws:ecs:us-east-1::task-definition/wordpress:4', 'arn:aws:ecs:us-east-1::task-definition/wordpress:5', 'arn:aws:ecs:us-east-1::task-definition/wordpress:6', ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / delete_task_definitions delete_task_definitions *********************** ECS.Client.delete_task_definitions(**kwargs) Deletes one or more task definitions. You must deregister a task definition revision before you delete it. For more information, see DeregisterTaskDefinition. When you delete a task definition revision, it immediately transitions from the "INACTIVE" status to "DELETE_IN_PROGRESS". Existing tasks and services that reference a "DELETE_IN_PROGRESS" task definition revision continue to run without disruption. Existing services that reference a "DELETE_IN_PROGRESS" task definition revision can still scale up or down by modifying the service's desired count. You can't use a "DELETE_IN_PROGRESS" task definition revision to run new tasks or create new services. You also can't update an existing service to reference a "DELETE_IN_PROGRESS" task definition revision. A task definition revision will stay in "DELETE_IN_PROGRESS" status until all the associated tasks and services have been terminated. 
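Because a revision must be deregistered (moving it to "INACTIVE") before it can be deleted, a cleanup routine chains "deregister_task_definition" and "delete_task_definitions". A minimal sketch, with the client injected so the call order can be verified without contacting AWS (the helper name and the revision in the usage comment are hypothetical):

```python
def deregister_then_delete(client, task_definition):
    """Deregister one task definition revision, then request deletion.

    Deregistering moves the revision from ACTIVE to INACTIVE;
    delete_task_definitions then moves it to DELETE_IN_PROGRESS.
    Returns the per-revision failures reported by the delete call.
    """
    client.deregister_task_definition(taskDefinition=task_definition)
    resp = client.delete_task_definitions(taskDefinitions=[task_definition])
    return resp.get("failures", [])

# Real usage (revision name is hypothetical):
#   import boto3
#   failures = deregister_then_delete(boto3.client("ecs"), "sleep360:1")
```

Remember that the revision stays in "DELETE_IN_PROGRESS" until every task and service that references it has been terminated, so the deletion is asynchronous from the caller's point of view.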
When you delete all "INACTIVE" task definition revisions, the task definition name is not displayed in the console and not returned in the API. If a task definition revision is in the "DELETE_IN_PROGRESS" state, the task definition name is displayed in the console and returned in the API. The task definition name is retained by Amazon ECS and the revision is incremented the next time you create a task definition with that name. See also: AWS API Documentation **Request Syntax** response = client.delete_task_definitions( taskDefinitions=[ 'string', ] ) Parameters: **taskDefinitions** (*list*) -- **[REQUIRED]** The "family" and "revision" ( "family:revision") or full Amazon Resource Name (ARN) of the task definition to delete. You must specify a "revision". You can specify up to 10 task definitions as a comma separated list. * *(string) --* Return type: dict Returns: **Response Syntax** { 'taskDefinitions': [ { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 
'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 
'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **taskDefinitions** *(list) --* The list of deleted task definitions. 
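Because "delete_task_definitions" accepts at most 10 task definitions per call, larger cleanups have to be split into batches. A minimal sketch of that batching (the "my-app" family name and revision numbers are hypothetical, and no AWS call is made here):

```python
def batch_task_definitions(identifiers, batch_size=10):
    """Yield lists of at most batch_size identifiers, matching the
    10-task-definition-per-call limit of delete_task_definitions."""
    for i in range(0, len(identifiers), batch_size):
        yield identifiers[i:i + batch_size]

# 25 hypothetical INACTIVE revisions of the "my-app" family.
revisions = [f"my-app:{n}" for n in range(1, 26)]
batches = list(batch_task_definitions(revisions))
# Each batch could then be passed as
# client.delete_task_definitions(taskDefinitions=batch).
```

After each call, checking the "failures" list in the response (alongside "taskDefinitions") tells you which ARNs were not deleted and why.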
* *(dict) --* The details of a task definition which describes the container and volume definitions of an Amazon Elastic Container Service task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task. * **taskDefinitionArn** *(string) --* The full Amazon Resource Name (ARN) of the task definition. * **containerDefinitions** *(list) --* A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* Container definitions are used in task definitions to describe the different containers that are launched as part of a task. * **name** *(string) --* The name of a container. If you're linking multiple containers together in a task definition, the "name" of one container can be entered in the "links" of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "name" in the docker container create command and the "--name" option to docker run. * **image** *(string) --* The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either "repository-url/image:tag" or "repository-url/image@digest". For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). 
The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to "Image" in the docker container create command and the "IMAGE" parameter of docker run. * When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks. * Images in Amazon ECR repositories can be specified by either using the full "registry/repository:tag" or "registry/repository@digest". For example, "012345678910.dkr.ecr..amazonaws.com/:latest" or "012345678910.dkr.ecr..amazonaws.com/@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE". * Images in official repositories on Docker Hub use a single name (for example, "ubuntu" or "mongo"). * Images in other repositories on Docker Hub are qualified with an organization name (for example, "amazon/amazon-ecs-agent"). * Images in other online repositories are qualified further by a domain name (for example, "quay.io/assemblyline/ubuntu"). * **repositoryCredentials** *(dict) --* The private repository authentication credentials to use. * **credentialsParameter** *(string) --* The Amazon Resource Name (ARN) of the secret containing the private repository credentials. Note: When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret. * **cpu** *(integer) --* The number of "cpu" units reserved for the container. This parameter maps to "CpuShares" in the docker container create command and the "--cpu-shares" option to docker run. 
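The digest rule above (sha256 only, with a hash of exactly 64 hexadecimal characters) can be sketched as a quick client-side check; this is an illustration of the stated format, not an API-side validation, and the image names are hypothetical:

```python
import re

# sha256 digest: exactly 64 characters, letters A-F/a-f and digits 0-9 only.
DIGEST_RE = re.compile(r"@sha256:[0-9A-Fa-f]{64}$")

def has_valid_digest(image):
    """Return True if an image reference ends in a well-formed sha256 digest."""
    return bool(DIGEST_RE.search(image))

ok = has_valid_digest("myrepo/app@sha256:" + "a" * 64)   # well-formed digest
bad = has_valid_digest("myrepo/app@sha256:deadbeef")     # hash too short
```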
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level "cpu" value. Note: You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024. Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units. On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version: * **Agent versions less than or equal to 1.1.0:** Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares. 
* **Agent versions greater than or equal to 1.2.0:** Null, zero, and CPU values of 1 are passed to Docker as 2. * **Agent versions greater than or equal to 1.84.0:** CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares. On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as "0", which Windows interprets as 1% of one CPU. * **memory** *(integer) --* The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task "memory" value, if one is specified. This parameter maps to "Memory" in the docker container create command and the "--memory" option to docker run. If using the Fargate launch type, this parameter is optional. If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level "memory" and "memoryReservation" value, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container. 
When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the "memory" parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to "MemoryReservation" in the docker container create command and the "--memory-reservation" option to docker run. If a task-level memory value is not specified, you must specify a non-zero integer for one or both of "memory" or "memoryReservation" in a container definition. If you specify both, "memory" must be greater than "memoryReservation". If you specify "memoryReservation", then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of "memory" is used. For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a "memoryReservation" of 128 MiB, and a "memory" hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed. The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers. The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers. * **links** *(list) --* The "links" parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is "bridge". The "name:internalName" construct is analogous to "name:alias" in Docker links. 
Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to "Links" in the docker container create command and the "--link" option to docker run. Note: This parameter is not supported for Windows containers. Warning: Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings. * *(string) --* * **portMappings** *(list) --* The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic. For task definitions that use the "awsvpc" network mode, only specify the "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Port mappings on Windows use the "NetNAT" gateway address rather than "localhost". There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself. This parameter maps to "PortBindings" in the docker container create command and the "--publish" option to docker run. If the network mode of a task definition is set to "none", then you can't specify port mappings. If the network mode of a task definition is set to "host", then host ports must either be undefined or they must match the container port in the port mapping. Note: After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the **Network Bindings** section of a container description for a selected task in the Amazon ECS console. The assignments are also visible in the "networkBindings" section of DescribeTasks responses. * *(dict) --* Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. 
If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". The "hostPort" can be left blank or it must be the same value as the "containerPort". Most fields of this parameter ( "containerPort", "hostPort", "protocol") map to "PortBindings" in the docker container create command and the "--publish" option to "docker run". If the network mode of a task definition is set to "host", host ports must either be undefined or match the container port in the port mapping. Note: You can't expose the same container port for multiple protocols. If you attempt this, an error is returned. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **containerPort** *(integer) --* The port number on the container that's bound to the user-specified or automatically assigned host port. If you use containers in a task with the "awsvpc" or "host" network mode, specify the exposed ports using "containerPort". If you use containers in a task with the "bridge" network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see "hostPort". Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance. * **hostPort** *(integer) --* The port number on the container instance to reserve for your container. If you specify a "containerPortRange", leave this field empty and the value of the "hostPort" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPort" is set to the same value as the "containerPort". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. 
This is a dynamic mapping strategy. If you use containers in a task with the "awsvpc" or "host" network mode, the "hostPort" can either be left blank or set to the same value as the "containerPort". If you use containers in a task with the "bridge" network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the "hostPort" (or set it to "0") while specifying a "containerPort" and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version. The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under "/proc/sys/net/ipv4/ip_local_port_range". If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range. The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the "remainingResources" of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota. * **protocol** *(string) --* The protocol used for the port mapping. Valid values are "tcp" and "udp". The default is "tcp". "protocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. * **name** *(string) --* The name that's used for the port mapping. 
This parameter is the name that you use in the "serviceConnectConfiguration" and the "vpcLatticeConfigurations" of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. * **appProtocol** *(string) --* The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch. If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP. "appProtocol" is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. 
* The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind them to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which lists the host ports that are bound to the container ports. * **essential** *(boolean) --* If the "essential" parameter of a container is marked as "true", and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the "essential" parameter of a container is marked as "false", its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All tasks must have at least one essential container. 
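The "essential" rules just stated (an omitted value means essential, and every task needs at least one essential container) can be sketched as a simple client-side check; the container names here are hypothetical:

```python
def has_essential_container(container_definitions):
    """True if at least one container is essential (omitted defaults to True)."""
    return any(c.get("essential", True) for c in container_definitions)

valid = has_essential_container([
    {"name": "web"},                           # essential omitted -> treated as True
    {"name": "sidecar", "essential": False},
])
invalid = has_essential_container([
    {"name": "sidecar", "essential": False},   # no essential container at all
])
```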
If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the *Amazon Elastic Container Service Developer Guide*. * **restartPolicy** *(dict) --* The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether a restart policy is enabled for the container. * **ignoredExitCodes** *(list) --* A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes. * *(integer) --* * **restartAttemptPeriod** *(integer) --* A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every "restartAttemptPeriod" seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum "restartAttemptPeriod" of 60 seconds and a maximum "restartAttemptPeriod" of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted. * **entryPoint** *(list) --* Warning: Early versions of the Amazon ECS container agent don't properly handle "entryPoint" parameters. If you have problems using "entryPoint", update your container agent or enter your commands and arguments as "command" array items instead. The entry point that's passed to the container. This parameter maps to "Entrypoint" in the docker container create command and the "--entrypoint" option to docker run. 
* *(string) --* * **command** *(list) --* The command that's passed to the container. This parameter maps to "Cmd" in the docker container create command and the "COMMAND" parameter to docker run. If there are multiple arguments, each argument is a separate string in the array. * *(string) --* * **environment** *(list) --* The environment variables to pass to a container. This parameter maps to "Env" in the docker container create command and the "--env" option to docker run. Warning: We don't recommend that you use plaintext environment variables for sensitive information, such as credential data. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container. This parameter maps to the "--env-file" option to docker run. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file contains an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. 
Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **mountPoints** *(list) --* The mount points for data volumes in your container. This parameter maps to "Volumes" in the docker container create command and the "--volume" option to docker run. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't be across drives. * *(dict) --* The details for a volume mount point that's used in a container definition. * **sourceVolume** *(string) --* The name of the volume to mount. 
Must be a volume name referenced in the "name" parameter of task definition "volume". * **containerPath** *(string) --* The path on the container to mount the host volume at. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **volumesFrom** *(list) --* Data volumes to mount from another container. This parameter maps to "VolumesFrom" in the docker container create command and the "--volumes-from" option to docker run. * *(dict) --* Details on a data volume from another container in the same task definition. * **sourceContainer** *(string) --* The name of another container within the same task definition to mount volumes from. * **readOnly** *(boolean) --* If this value is "true", the container has read-only access to the volume. If this value is "false", then the container can write to the volume. The default value is "false". * **linuxParameters** *(dict) --* Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information, see KernelCapabilities. Note: This parameter is not supported for Windows containers. * **capabilities** *(dict) --* The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. Note: For tasks that use the Fargate launch type, "capabilities" is supported for all platform versions but the "add" parameter is only supported if using platform version 1.4.0 or later. * **add** *(list) --* The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to "CapAdd" in the docker container create command and the "--cap-add" option to docker run. Note: Tasks launched on Fargate only support adding the "SYS_PTRACE" kernel capability. 
Valid values: ""ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"" * *(string) --* * **drop** *(list) --* The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to "CapDrop" in the docker container create command and the "--cap-drop" option to docker run. Valid values: ""ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"" * *(string) --* * **devices** *(list) --* Any host devices to expose to the container. This parameter maps to "Devices" in the docker container create command and the "--device" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "devices" parameter isn't supported. * *(dict) --* An object representing a container instance host device. * **hostPath** *(string) --* The path for the device on the host container instance. * **containerPath** *(string) --* The path inside the container at which to expose the host device. 
* **permissions** *(list) --* The explicit permissions to provide to the container for the device. By default, the container has permissions for "read", "write", and "mknod" for the device. * *(string) --* * **initProcessEnabled** *(boolean) --* Run an "init" process inside the container that forwards signals and reaps processes. This parameter maps to the "--init" option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * **sharedMemorySize** *(integer) --* The value for the size (in MiB) of the "/dev/shm" volume. This parameter maps to the "--shm-size" option to docker run. Note: If you are using tasks that use the Fargate launch type, the "sharedMemorySize" parameter is not supported. * **tmpfs** *(list) --* The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the "--tmpfs" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "tmpfs" parameter isn't supported. * *(dict) --* The container path, mount options, and size of the tmpfs mount. * **containerPath** *(string) --* The absolute file path where the tmpfs volume is to be mounted. * **size** *(integer) --* The maximum size (in MiB) of the tmpfs volume. * **mountOptions** *(list) --* The list of tmpfs volume mount options. 
Valid values: ""defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"" * *(string) --* * **maxSwap** *(integer) --* The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the "--memory-swap" option to docker run where the value would be the sum of the container memory plus the "maxSwap" value. If a "maxSwap" value of "0" is specified, the container will not use swap. Accepted values are "0" or any positive integer. If the "maxSwap" parameter is omitted, the container will use the swap configuration for the container instance it is running on. A "maxSwap" value must be set for the "swappiness" parameter to be used. Note: If you're using tasks that use the Fargate launch type, the "maxSwap" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported. * **swappiness** *(integer) --* This allows you to tune a container's memory swappiness behavior. A "swappiness" value of "0" will cause swapping to not happen unless absolutely necessary. A "swappiness" value of "100" will cause pages to be swapped very aggressively. Accepted values are whole numbers between "0" and "100". If the "swappiness" parameter is not specified, a default value of "60" is used. If a value is not specified for "maxSwap" then this parameter is ignored. This parameter maps to the "--memory-swappiness" option to docker run. Note: If you're using tasks that use the Fargate launch type, the "swappiness" parameter isn't supported. If you're using tasks on Amazon Linux 2023, the "swappiness" parameter isn't supported.
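As a sketch, the "linuxParameters" options described above (capabilities, tmpfs mounts, and the swap settings) might be combined in a container definition like the following. The capability names, paths, and sizes are illustrative values, not recommendations:

```python
# Sketch of a "linuxParameters" block for a containerDefinition passed
# to register_task_definition. Values here are illustrative only.
linux_parameters = {
    "capabilities": {
        # Drop all default capabilities, then add back only what's needed.
        "drop": ["ALL"],
        "add": ["NET_BIND_SERVICE"],
    },
    "initProcessEnabled": True,   # maps to the docker run --init option
    "sharedMemorySize": 256,      # MiB for /dev/shm; not supported on Fargate
    "tmpfs": [
        {
            "containerPath": "/tmp/scratch",     # hypothetical mount point
            "size": 128,                         # maximum size in MiB
            "mountOptions": ["rw", "noexec", "nosuid"],
        }
    ],
    "maxSwap": 512,     # MiB; must be set for "swappiness" to take effect
    "swappiness": 10,   # 0-100; ignored when "maxSwap" is omitted
}
```

Note that "maxSwap", "swappiness", "sharedMemorySize", "tmpfs", and "devices" are all unsupported on Fargate, so a block like this only applies to tasks on EC2 or external instances.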
* **secrets** *(list) --* The secrets to pass to the container. For more information, see Specifying Sensitive Data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **dependsOn** *(list) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown. For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version.
For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. * *(dict) --* The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown. Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. Note: For tasks that use the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later.
For more information about how to create a container dependency, see Container dependency in the *Amazon Elastic Container Service Developer Guide*. * **containerName** *(string) --* The name of a container. * **condition** *(string) --* The dependency condition of the container. The following are the available conditions and their behavior: * "START" - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start. * "COMPLETE" - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container. * "SUCCESS" - This condition is the same as "COMPLETE", but it also requires that the container exits with a "zero" status. This condition can't be set on an essential container. * "HEALTHY" - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup. * **startTimeout** *(integer) --* Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a "COMPLETE", "SUCCESS", or "HEALTHY" status. If a "startTimeout" value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a "STOPPED" state. Note: When the "ECS_CONTAINER_START_TIMEOUT" container agent configuration variable is used, it's enforced independently from this start timeout value.
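The dependency conditions above can be sketched in a container definition as follows. The container and image names are hypothetical; the example shows an application container that waits for a one-shot migration container to exit with status zero before starting:

```python
# Sketch: "app" depends on "db-migrate" reaching SUCCESS (i.e. COMPLETE
# with a zero exit status) before it is started. Names are hypothetical.
depends_on = [
    {
        "containerName": "db-migrate",
        "condition": "SUCCESS",   # also valid: START, COMPLETE, HEALTHY
    }
]

app_container = {
    "name": "app",
    "image": "my-app:latest",     # hypothetical image
    "essential": True,
    "dependsOn": depends_on,
    "startTimeout": 120,          # seconds to wait for db-migrate (2-120 on Fargate)
}
```

Because "SUCCESS" and "COMPLETE" can't be set on an essential container, the "db-migrate" container in this sketch would be declared with "essential": False in the same task definition.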
For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks using the EC2 launch type, your container instances require at least version "1.26.0" of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version "1.26.0-1" of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **stopTimeout** *(integer) --* Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own. For tasks using the Fargate launch type, the task or service requires the following platforms: * Linux platform version "1.3.0" or later. * Windows platform version "1.0.0" or later. For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used. For tasks that use the EC2 launch type, if the "stopTimeout" parameter isn't specified, the value set for the Amazon ECS container agent configuration variable "ECS_CONTAINER_STOP_TIMEOUT" is used. If neither the "stopTimeout" parameter nor the "ECS_CONTAINER_STOP_TIMEOUT" agent configuration variable is set, then the default values of 30 seconds for Linux containers and 30 seconds on Windows containers are used.
Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the *Amazon Elastic Container Service Developer Guide*. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the "ecs-init" package. If your container instances are launched from version "20190301" or later, then they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. The valid values for Fargate are 2-120 seconds. * **versionConsistency** *(string) --* Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is "enabled". If you set the value for a container as "disabled", Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the *Amazon ECS Developer Guide*. * **hostname** *(string) --* The hostname to use for your container. This parameter maps to "Hostname" in the docker container create command and the "--hostname" option to docker run. Note: The "hostname" parameter is not supported if you're using the "awsvpc" network mode. * **user** *(string) --* The user to use inside the container. This parameter maps to "User" in the docker container create command and the "--user" option to docker run. Warning: When running tasks using the "host" network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. 
You can specify the "user" using the following formats. If specifying a UID or GID, you must specify it as a positive integer. * "user" * "user:group" * "uid" * "uid:gid" * "user:gid" * "uid:group" Note: This parameter is not supported for Windows containers. * **workingDirectory** *(string) --* The working directory in which to run commands inside the container. This parameter maps to "WorkingDir" in the docker container create command and the "--workdir" option to docker run. * **disableNetworking** *(boolean) --* When this parameter is true, networking is off within the container. This parameter maps to "NetworkDisabled" in the docker container create command. Note: This parameter is not supported for Windows containers. * **privileged** *(boolean) --* When this parameter is true, the container is given elevated privileges on the host container instance (similar to the "root" user). This parameter maps to "Privileged" in the docker container create command and the "--privileged" option to docker run. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **readonlyRootFilesystem** *(boolean) --* When this parameter is true, the container is given read-only access to its root file system. This parameter maps to "ReadonlyRootfs" in the docker container create command and the "--read-only" option to docker run. Note: This parameter is not supported for Windows containers. * **dnsServers** *(list) --* A list of DNS servers that are presented to the container. This parameter maps to "Dns" in the docker container create command and the "--dns" option to docker run. Note: This parameter is not supported for Windows containers. * *(string) --* * **dnsSearchDomains** *(list) --* A list of DNS search domains that are presented to the container. This parameter maps to "DnsSearch" in the docker container create command and the "--dns-search" option to docker run. Note: This parameter is not supported for Windows containers.
* *(string) --* * **extraHosts** *(list) --* A list of hostnames and IP address mappings to append to the "/etc/hosts" file on the container. This parameter maps to "ExtraHosts" in the docker container create command and the "--add-host" option to docker run. Note: This parameter isn't supported for Windows containers or tasks that use the "awsvpc" network mode. * *(dict) --* Hostnames and IP address entries that are added to the "/etc/hosts" file of a container via the "extraHosts" parameter of its ContainerDefinition. * **hostname** *(string) --* The hostname to use in the "/etc/hosts" entry. * **ipAddress** *(string) --* The IP address to use in the "/etc/hosts" entry. * **dockerSecurityOptions** *(list) --* A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type. For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems. For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the *Amazon Elastic Container Service Developer Guide*. This parameter maps to "SecurityOpt" in the docker container create command and the "--security-opt" option to docker run. Note: The Amazon ECS container agent running on a container instance must register with the "ECS_SELINUX_CAPABLE=true" or "ECS_APPARMOR_CAPABLE=true" environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. 
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath" * *(string) --* * **interactive** *(boolean) --* When this parameter is "true", you can deploy containerized applications that require "stdin" or a "tty" to be allocated. This parameter maps to "OpenStdin" in the docker container create command and the "--interactive" option to docker run. * **pseudoTerminal** *(boolean) --* When this parameter is "true", a TTY is allocated. This parameter maps to "Tty" in the docker container create command and the "--tty" option to docker run. * **dockerLabels** *(dict) --* A key/value map of labels to add to the container. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **ulimits** *(list) --* A list of "ulimits" to set in the container. If a "ulimit" value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to "Ulimits" in the docker container create command and the "--ulimit" option to docker run. Valid naming values are displayed in the Ulimit data type. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: This parameter is not supported for Windows containers. * *(dict) --* The "ulimit" settings to pass to the container. Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the "nofile" resource limit parameter which Fargate overrides. The "nofile" resource limit sets a restriction on the number of open files that a container can use. The default "nofile" soft limit is "65535" and the default hard limit is "65535". You can specify the "ulimit" settings for a container in a task definition. * **name** *(string) --* The "type" of the "ulimit". * **softLimit** *(integer) --* The soft limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **hardLimit** *(integer) --* The hard limit for the "ulimit" type. The value can be specified in bytes, seconds, or as a count, depending on the "type" of the "ulimit". * **logConfiguration** *(dict) --* The log configuration specification for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). Note: Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). 
Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" Note: The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the *Amazon Elastic Container Service Developer Guide*. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. 
Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container-name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
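Putting the "awslogs" options together, a "logConfiguration" block using non-blocking delivery might look like the following sketch. The log group name, Region, and prefix are placeholders:

```python
# Sketch of an "awslogs" logConfiguration with non-blocking delivery
# and an explicit buffer size. Group, Region, and prefix are placeholders.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",      # must exist unless auto-created
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",   # required on Fargate
        "awslogs-create-group": "true",          # needs logs:CreateLogGroup in IAM
        "mode": "non-blocking",                  # buffer logs instead of blocking
        "max-buffer-size": "4m",                 # in-memory buffer for log messages
    },
}
```

Note that every value in the "options" map is a string, including booleans like "awslogs-create-group" and sizes like "max-buffer-size"; the API rejects non-string values here.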
To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might cause the buffer inside Docker to run out of memory. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*.
* *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **healthCheck** *(dict) --* The container health check command and associated configuration parameters for the container. This parameter maps to "HealthCheck" in the docker container create command and the "HEALTHCHECK" parameter of docker run. * **command** *(list) --* A string array representing the command that the container runs to determine if it is healthy. The string array must start with "CMD" to run the command arguments directly, or "CMD-SHELL" to run the command with the container's default shell. When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
"[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]" You don't include the double quotes and brackets when you use the Amazon Web Services Management Console. "CMD-SHELL, curl -f http://localhost/ || exit 1" An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see "HealthCheck" in the docker container create command. * *(string) --* * **interval** *(integer) --* The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a "command". * **timeout** *(integer) --* The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a "command". * **retries** *(integer) --* The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a "command". * **startPeriod** *(integer) --* The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the "startPeriod" is off. This value applies only when you specify a "command". Note: If a health check succeeds within the "startPeriod", then the container is considered healthy and any subsequent failures count toward the maximum number of retries. * **systemControls** *(list) --* A list of namespaced kernel parameters to set in the container. This parameter maps to "Sysctls" in the docker container create command and the "--sysctl" option to docker run. For example, you can configure the "net.ipv4.tcp_keepalive_time" setting to maintain longer lived connections. * *(dict) --* A list of namespaced kernel parameters to set in the container.
This parameter maps to "Sysctls" in the docker container create command and the "-- sysctl" option to docker run. For example, you can configure "net.ipv4.tcp_keepalive_time" setting to maintain longer lived connections. We don't recommend that you specify network- related "systemControls" parameters for multiple containers in a single task that also uses either the "awsvpc" or "host" network mode. Doing this has the following disadvantages: * For tasks that use the "awsvpc" network mode including Fargate, if you set "systemControls" for any container, it applies to all containers in the task. If you set different "systemControls" for multiple containers in a single task, the container that's started last determines which "systemControls" take effect. * For tasks that use the "host" network mode, the network namespace "systemControls" aren't supported. If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode. * For tasks that use the "host" IPC mode, IPC namespace "systemControls" aren't supported. * For tasks that use the "task" IPC mode, IPC namespace "systemControls" values apply to all containers within a task. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **namespace** *(string) --* The namespaced kernel parameter to set a "value" for. * **value** *(string) --* The namespaced kernel parameter to set a "value" for. Valid IPC namespace values: ""kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced"", and "Sysctls" that start with ""fs.mqueue.*"" Valid network namespace values: "Sysctls" that start with ""net.*"". 
Only namespaced "Sysctls" that exist within the container starting with "net.* are accepted. All of these values are supported by Fargate. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide* * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* The type of resource to assign to a container. * **firelensConfiguration** *(dict) --* The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The log router to use. The valid values are "fluentd" or "fluentbit". * **options** *(dict) --* The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is ""options":{"enable-ecs-log- metadata":"true|false","config-file-type:"s3|file ","config-file-value":"arn:aws:s3:::mybucket/flue nt.conf|filepath"}". 
For more information, see Creating a task definition that uses a FireLens configuration in the *Amazon Elastic Container Service Developer Guide*. Note: Tasks hosted on Fargate only support the "file" configuration file type. * *(string) --* * *(string) --* * **credentialSpecs** *(list) --* A list of ARNs in SSM or Amazon S3 to a credential spec ( "CredSpec") file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the "dockerSecurityOptions". The maximum number of ARNs is 1. There are two formats for each ARN. credentialspecdomainless:MyARN You use "credentialspecdomainless:MyARN" to provide a "CredSpec" with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret. Each task that runs on any container instance can join different domains. You can use this format without joining the container instance to a domain. credentialspec:MyARN You use "credentialspec:MyARN" to provide a "CredSpec" for a single domain. You must join the container instance to the domain before you start any tasks that use this task definition. In both formats, replace "MyARN" with the ARN in SSM or Amazon S3. If you provide a "credentialspecdomainless:MyARN", the "credspec" must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers. * *(string) --* * **family** *(string) --* The name of a family that this task definition is registered to. Up to 255 characters are allowed.
Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed. A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add. * **taskRoleArn** *(string) --* The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **networkMode** *(string) --* The Docker networking mode to use for the containers in the task. The valid values are "none", "bridge", "awsvpc", and "host". If no network mode is specified, the default is "bridge". For Amazon ECS tasks on Fargate, the "awsvpc" network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, "<default>" or "awsvpc" can be used. If the network mode is set to "none", you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The "host" and "awsvpc" network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the "bridge" mode.
With the "host" and "awsvpc" network modes, exposed container ports are mapped directly to the corresponding host port (for the "host" network mode) or the attached elastic network interface port (for the "awsvpc" network mode), so you cannot take advantage of dynamic host port mappings. Warning: When using the "host" network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user. If the network mode is "awsvpc", the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the *Amazon Elastic Container Service Developer Guide*. If the network mode is "host", you cannot run multiple instantiations of the same task on a single container instance when port mappings are used. * **revision** *(integer) --* The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is "1". Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is even if you deregistered previous revisions in this family. * **volumes** *(list) --* The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the *Amazon Elastic Container Service Developer Guide*. Note: The "host" and "sourcePath" parameters aren't supported for tasks run on Fargate. * *(dict) --* The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. 
Each volume defined in the volume configuration may only specify a "name" and one of either "configuredAtLaunch", "dockerVolumeConfiguration", "efsVolumeConfiguration", "fsxWindowsFileServerVolumeConfiguration", or "host". If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks. * **name** *(string) --* The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the "name" is required and must also be specified as the volume name in the "ServiceVolumeConfiguration" or "TaskVolumeConfiguration" parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the "sourceVolume" parameter of the "mountPoints" object in the container definition. When a volume is using the "efsVolumeConfiguration", the name is required. * **host** *(dict) --* This parameter is specified when you use bind mount host volumes. The contents of the "host" parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the "host" parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as "$env:ProgramData". Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount "C:\my\path:C:\my\path" and "D:\:D:\", but not "D:\my\path:C:\my\path" or "D:\:C:\my\path". * **sourcePath** *(string) --* When the "host" parameter is used, specify a "sourcePath" to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you.
If the "host" parameter contains a "sourcePath" file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the "sourcePath" value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the "sourcePath" parameter is not supported. * **dockerVolumeConfiguration** *(dict) --* This parameter is specified when you use Docker volumes. Windows containers only support the use of the "local" driver. To use bind mounts, specify the "host" parameter instead. Note: Docker volumes aren't supported by tasks run on Fargate. * **scope** *(string) --* The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a "task" are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as "shared" persist after the task stops. * **autoprovision** *(boolean) --* If this value is "true", the Docker volume is created if it doesn't already exist. Note: This field is only used if the "scope" is "shared". * **driver** *(string) --* The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use "docker plugin ls" to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to "Driver" in the docker container create command and the "xxdriver" option to docker volume create. * **driverOpts** *(dict) --* A map of Docker driver-specific options passed through. This parameter maps to "DriverOpts" in the docker create-volume command and the "xxopt" option to docker volume create. 
* *(string) --* * *(string) --* * **labels** *(dict) --* Custom metadata to add to your Docker volume. This parameter maps to "Labels" in the docker container create command and the "--label" option to docker volume create. * *(string) --* * *(string) --* * **efsVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon Elastic File System file system for task storage. * **fileSystemId** *(string) --* The Amazon EFS file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying "/" will have the same effect as omitting this parameter. Warning: If an EFS access point is specified in the "authorizationConfig", the root directory parameter must either be omitted or set to "/" which will enforce the path set on the EFS access point. * **transitEncryption** *(string) --* Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Encrypting data in transit in the *Amazon Elastic File System User Guide*. * **transitEncryptionPort** *(integer) --* The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the *Amazon Elastic File System User Guide*. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon EFS file system. * **accessPointId** *(string) --* The Amazon EFS access point ID to use.
If an access point is specified, the root directory value specified in the "EFSVolumeConfiguration" must either be omitted or set to "/" which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the "EFSVolumeConfiguration". For more information, see Working with Amazon EFS access points in the *Amazon Elastic File System User Guide*. * **iam** *(string) --* Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the "EFSVolumeConfiguration". If this parameter is omitted, the default value of "DISABLED" is used. For more information, see Using Amazon EFS access points in the *Amazon Elastic Container Service Developer Guide*. * **fsxWindowsFileServerVolumeConfiguration** *(dict) --* This parameter is specified when you use an Amazon FSx for Windows File Server file system for task storage. * **fileSystemId** *(string) --* The Amazon FSx for Windows File Server file system ID to use. * **rootDirectory** *(string) --* The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host. * **authorizationConfig** *(dict) --* The authorization configuration details for the Amazon FSx for Windows File Server file system. * **credentialsParameter** *(string) --* The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or SSM Parameter Store parameter. The ARN refers to the stored credentials. * **domain** *(string) --* A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2. * **configuredAtLaunch** *(boolean) --* Indicates whether the volume should be configured at launch time.
This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration. To configure a volume at launch time, use this task definition revision and specify a "volumeConfigurations" object when calling the "CreateService", "UpdateService", "RunTask" or "StartTask" APIs. * **status** *(string) --* The status of the task definition. * **requiresAttributes** *(list) --* The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. Note: This parameter isn't supported for tasks run on Fargate. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
* **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **placementConstraints** *(list) --* An array of placement constraint objects to use for tasks. Note: This parameter isn't supported for tasks run on Fargate. * *(dict) --* The constraint on task placement in the task definition. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: Task placement constraints aren't supported for tasks run on Fargate. * **type** *(string) --* The type of constraint. The "MemberOf" constraint restricts selection to be from a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. * **compatibilities** *(list) --* Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * *(string) --* * **runtimePlatform** *(dict) --* The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type. When you specify a task in a service, this value must match the "runtimePlatform" value of the service. * **cpuArchitecture** *(string) --* The CPU architecture. You can run your Linux tasks on an ARM-based platform by setting the value to "ARM64". This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate. * **operatingSystemFamily** *(string) --* The operating system.
* **requiresCompatibilities** *(list) --* The task launch types the task definition was validated against. The valid values are "EC2", "FARGATE", and "EXTERNAL". For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * *(string) --* * **cpu** *(string) --* The number of "cpu" units used by the task. If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ( "0.125" vCPUs) and "196608" CPU units ( "192" vCPUs). If you're using the Fargate launch type, this field is required, and the value that you choose determines your range of valid values for the "memory" parameter. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount (in MiB) of memory used by the task. If your task runs on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition. If your task runs on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the "cpu" parameter.
* 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **pidMode** *(string) --* The process namespace to use for the containers in the task. The valid values are "host" or "task". On Fargate for Linux containers, the only valid value is "task". For example, monitoring sidecars might need "pidMode" to access information about other containers running in the same task. If "host" is specified, all containers within the tasks that specified the "host" PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same process namespace.
If no value is specified, the default is a private namespace for each container. If the "host" PID mode is used, there's a heightened risk of undesired process namespace exposure. Note: This parameter is not supported for Windows containers. Note: This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version "1.4.0" or later (Linux). This isn't supported for Windows containers on Fargate. * **ipcMode** *(string) --* The IPC resource namespace to use for the containers in the task. The valid values are "host", "task", or "none". If "host" is specified, then all containers within the tasks that specified the "host" IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If "task" is specified, all containers within the specified task share the same IPC resources. If "none" is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. If the "host" IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure. If you are setting namespaced kernel parameters using "systemControls" for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the *Amazon Elastic Container Service Developer Guide*. * For tasks that use the "host" IPC mode, IPC namespace related "systemControls" are not supported. * For tasks that use the "task" IPC mode, IPC namespace related "systemControls" will apply to all containers within a task. Note: This parameter is not supported for Windows containers or tasks run on Fargate. * **proxyConfiguration** *(dict) --* The configuration details for the App Mesh proxy.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the "ecs-init" package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version "20190301" or later, they contain the required versions of the container agent and "ecs-init". For more information, see Amazon ECS-optimized Linux AMI in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The proxy type. The only supported value is "APPMESH". * **containerName** *(string) --* The name of the container that will serve as the App Mesh proxy. * **properties** *(list) --* The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs. * "IgnoredUID" - (Required) The user ID (UID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredGID" is specified, this field can be empty. * "IgnoredGID" - (Required) The group ID (GID) of the proxy container as defined by the "user" parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If "IgnoredUID" is specified, this field can be empty. * "AppPorts" - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the "ProxyIngressPort" and "ProxyEgressPort". * "ProxyIngressPort" - (Required) Specifies the port that incoming traffic to the "AppPorts" is directed to. * "ProxyEgressPort" - (Required) Specifies the port that outgoing traffic from the "AppPorts" is directed to. * "EgressIgnoredPorts" - (Required) The egress traffic going to the specified ports is ignored and not redirected to the "ProxyEgressPort". It can be an empty list. * "EgressIgnoredIPs" - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the "ProxyEgressPort". 
It can be an empty list. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **registeredAt** *(datetime) --* The Unix timestamp for the time when the task definition was registered. * **deregisteredAt** *(datetime) --* The Unix timestamp for the time when the task definition was deregistered. * **registeredBy** *(string) --* The principal that registered the task definition. * **ephemeralStorage** *(dict) --* The ephemeral storage settings to use for tasks run with the task definition. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **enableFaultInjection** *(boolean) --* Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is "false". * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ServerException" ECS / Client / run_task run_task ******** ECS.Client.run_task(**kwargs) Starts a new task using the specified task definition. Note: On March 21, 2024, a change was made to resolve the task definition revision before authorization. 
When a task definition revision is not specified, authorization will occur using the latest revision of a task definition. Note: Amazon Elastic Inference (EI) is no longer available to customers. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies. For more information, see Scheduling Tasks in the *Amazon Elastic Container Service Developer Guide*. Alternatively, you can use "StartTask" to use your own scheduler or place tasks manually on specific container instances. You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. For more information, see Amazon EBS volumes in the *Amazon Elastic Container Service Developer Guide*. The Amazon ECS API follows an eventual consistency model. This is because of the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your Amazon ECS resources might not be immediately visible to all subsequent commands you run. Keep this in mind when you carry out an API command that immediately follows a previous API command. To manage eventual consistency, you can do the following: * Confirm the state of the resource before you run a command to modify it. Run the DescribeTasks command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the DescribeTasks command repeatedly, starting with a couple of seconds of wait time and increasing gradually up to five minutes of wait time. * Add wait time between subsequent commands, even if the DescribeTasks command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time. 
If you get a "ConflictException" error, the "RunTask" request could not be processed due to conflicts. The provided "clientToken" is already in use with a different "RunTask" request. The "resourceIds" are the existing task ARNs which are already associated with the "clientToken". To fix this issue: * Run "RunTask" with a unique "clientToken". * Run "RunTask" with the "clientToken" and the original set of parameters. If you get a "ClientException" error, the "RunTask" request could not be processed because you use managed scaling and there is a capacity error because the quota of tasks in the "PROVISIONING" state per cluster has been reached. For information about the service quotas, see Amazon ECS service quotas. See also: AWS API Documentation **Request Syntax** response = client.run_task( capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], cluster='string', count=123, enableECSManagedTags=True|False, enableExecuteCommand=True|False, group='string', launchType='EC2'|'FARGATE'|'EXTERNAL', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, overrides={ 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], platformVersion='string',
propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', referenceId='string', startedBy='string', tags=[ { 'key': 'string', 'value': 'string' }, ], taskDefinition='string', clientToken='string', volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'terminationPolicy': { 'deleteOnTermination': True|False }, 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ] ) type capacityProviderStrategy: list param capacityProviderStrategy: The capacity provider strategy to use for the task. If a "capacityProviderStrategy" is specified, the "launchType" parameter must be omitted. If no "capacityProviderStrategy" or "launchType" is specified, the "defaultCapacityProviderStrategy" for the cluster is used. When you use cluster auto scaling, you must specify "capacityProviderStrategy" and not "launchType". A capacity provider strategy can contain a maximum of 20 capacity providers. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* **[REQUIRED]** The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. 
Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster to run your task on. If you do not specify a cluster, the default cluster is assumed. Each account receives a default cluster the first time you use the service, but you may also create other clusters. type count: integer param count: The number of instantiations of the specified task to place on your cluster. You can specify up to 10 tasks for each call. type enableECSManagedTags: boolean param enableECSManagedTags: Specifies whether to use Amazon ECS managed tags for the task. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. type enableExecuteCommand: boolean param enableExecuteCommand: Determines whether to use the execute command functionality for the containers in this task. If "true", this enables execute command functionality on all containers in the task. If "true", then the task definition must have a task role, or you must provide one as an override. type group: string param group: The name of the task group to associate with the task. The default value is the family name of the task definition (for example, "family:my-family-name"). type launchType: string param launchType: The infrastructure to run your standalone task on. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. 
The "FARGATE" launch type runs your tasks on Fargate On-Demand infrastructure. Note: Fargate Spot infrastructure is available for use but a capacity provider strategy must be used. For more information, see Fargate capacity providers in the *Amazon ECS Developer Guide*. The "EC2" launch type runs your tasks on Amazon EC2 instances registered to your cluster. The "EXTERNAL" launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster. A task can use either a launch type or a capacity provider strategy. If a "launchType" is specified, the "capacityProviderStrategy" parameter must be omitted. When you use cluster auto scaling, you must specify "capacityProviderStrategy" and not "launchType". type networkConfiguration: dict param networkConfiguration: The network configuration for the task. This parameter is required for task definitions that use the "awsvpc" network mode to receive their own elastic network interface, and it isn't supported for other network modes. For more information, see Task networking in the *Amazon Elastic Container Service Developer Guide*. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* **[REQUIRED]** The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". type overrides: dict param overrides: A list of container overrides in JSON format that specify the name of a container in the specified task definition and the overrides it should receive. You can override the default command for a container (that's specified in the task definition or Docker image) with a "command" override. You can also override existing environment variables (that are specified in the task definition or Docker image) on a container or add new environment variables to it with an "environment" override. A total of 8192 characters are allowed for overrides. This limit includes the JSON formatting characters of the override structure. * **containerOverrides** *(list) --* One or more container overrides that are sent to a task. * *(dict) --* The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is "{"containerOverrides": [ ] }". If a non-empty container override is specified, the "name" parameter must be included. You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide. * **name** *(string) --* The name of the container that receives the override. This parameter is required if any override is specified. * **command** *(list) --* The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. * *(string) --* * **environment** *(list) --* The environment variables to send to the container. 
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container, instead of the value from the container definition. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. 
* The container entry point interprets the "VARIABLE" values. * **value** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* **[REQUIRED]** The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **cpu** *(integer) --* The number of "cpu" units reserved for the container, instead of the default value from the task definition. You must also specify a container name. * **memory** *(integer) --* The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* **[REQUIRED]** The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* **[REQUIRED]** The type of resource to assign to a container. * **cpu** *(string) --* The CPU override for the task. * **inferenceAcceleratorOverrides** *(list) --* The Elastic Inference accelerator override for the task. * *(dict) --* Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name to override for the task. This parameter must match a "deviceName" specified in the task definition. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The memory override for the task. * **taskRoleArn** *(string) --* The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **ephemeralStorage** *(dict) --* The ephemeral storage setting override for the task. Note: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. * **sizeInGiB** *(integer) --* **[REQUIRED]** The total amount, in GiB, of ephemeral storage to set for the task. 
The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. type placementConstraints: list param placementConstraints: An array of placement constraint objects to use for the task. You can specify up to 10 constraints for each task (including constraints in the task definition and those specified at runtime). * *(dict) --* An object representing a constraint on task placement. For more information, see Task placement constraints in the *Amazon Elastic Container Service Developer Guide*. Note: If you're using the Fargate launch type, task placement constraints aren't supported. * **type** *(string) --* The type of constraint. Use "distinctInstance" to ensure that each task in a particular group is running on a different container instance. Use "memberOf" to restrict the selection to a group of valid candidates. * **expression** *(string) --* A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is "distinctInstance". For more information, see Cluster query language in the *Amazon Elastic Container Service Developer Guide*. type placementStrategy: list param placementStrategy: The placement strategy objects to use for the task. You can specify a maximum of 5 strategy rules for each task. * *(dict) --* The task placement strategy for a task or service. For more information, see Task placement strategies in the *Amazon Elastic Container Service Developer Guide*. * **type** *(string) --* The type of placement strategy. The "random" placement strategy randomly places tasks on available candidates. The "spread" placement strategy spreads placement across available candidates evenly based on the "field" parameter. The "binpack" strategy places tasks on available candidates that have the least available amount of the resource that's specified with the "field" parameter. 
For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. * **field** *(string) --* The field to apply the placement strategy against. For the "spread" placement strategy, valid values are "instanceId" (or "host", which has the same effect), or any platform or custom attribute that's applied to a container instance, such as "attribute:ecs.availability-zone". For the "binpack" placement strategy, valid values are "cpu" and "memory". For the "random" placement strategy, this field is not used. type platformVersion: string param platformVersion: The platform version the task uses. A platform version is only specified for tasks hosted on Fargate. If one isn't specified, the "LATEST" platform version is used. For more information, see Fargate platform versions in the *Amazon Elastic Container Service Developer Guide*. type propagateTags: string param propagateTags: Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action. Note: An error will be received if you specify the "SERVICE" option when running a task. type referenceId: string param referenceId: This parameter is only used by Amazon ECS. It is not intended for use by customers. type startedBy: string param startedBy: An optional tag specified when a task is started. For example, if you automatically trigger a task to run a batch process job, you could apply a unique identifier for that job to your task with the "startedBy" parameter. You can then identify which tasks belong to that job by filtering the results of a ListTasks call with the "startedBy" value. Up to 128 letters (uppercase and lowercase), numbers, hyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the "startedBy" parameter contains the deployment ID of the service that starts it. type tags: list param tags: The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. 
* Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). type taskDefinition: string param taskDefinition: **[REQUIRED]** The "family" and "revision" ( "family:revision") or full ARN of the task definition to run. If a "revision" isn't specified, the latest "ACTIVE" revision is used. The full ARN value must match the value that you specified as the "Resource" of the principal's permissions policy. When you specify a task definition, you must either specify a specific revision, or all revisions in the ARN. To specify a specific revision, include the revision number in the ARN. For example, to specify revision 2, use "arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2". To specify all revisions, use the wildcard (*) in the ARN. For example, to specify all revisions, use "arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*". For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide. type clientToken: string param clientToken: An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 64 characters are allowed. The valid characters are characters in the range of 33-126, inclusive. For more information, see Ensuring idempotency. This field is autopopulated if not provided.
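The "clientToken" behavior described above can be sketched with a small retry helper. This is an illustrative pattern, not an official API: it generates a fresh UUID token for each attempt (so a reused token can never conflict with an earlier request's parameters) and inspects the error code the way a botocore "ClientError" exposes it. The keyword arguments are whatever the caller would pass to "run_task".

```python
import uuid

def run_task_idempotent(ecs, max_attempts=2, **kwargs):
    # Sketch only: call RunTask with a fresh clientToken each attempt.
    # `ecs` is a boto3 ECS client; kwargs (cluster, taskDefinition, ...)
    # are supplied by the caller.
    last_err = None
    for _ in range(max_attempts):
        token = str(uuid.uuid4())  # unique, case sensitive, under 64 characters
        try:
            return ecs.run_task(clientToken=token, **kwargs)
        except Exception as err:  # a botocore ClientError in practice
            code = getattr(err, 'response', {}).get('Error', {}).get('Code', '')
            if code != 'ConflictException':
                raise
            last_err = err  # token collision: retry with a new token
    raise last_err
```

Note that reusing the *same* token with the *same* parameters is the other documented remedy; this helper implements only the unique-token path.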
type volumeConfigurations: list param volumeConfigurations: The details of the volume that was "configuredAtLaunch". You can configure the size, volumeType, IOPS, throughput, snapshot and encryption in TaskManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. * *(dict) --* Configuration settings for the task volume that was "configuredAtLaunch" that weren't set during "RegisterTaskDefinition". * **name** *(string) --* **[REQUIRED]** The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. * **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to a task, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing Amazon EBS volume to create a new volume for attachment to the task. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS).
For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* **[REQUIRED]** The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **terminationPolicy** *(dict) --* The termination policy for the volume when the task exits. This provides a way to control whether Amazon ECS terminates the Amazon EBS volume when the task stops.
* **deleteOnTermination** *(boolean) --* **[REQUIRED]** Indicates whether the volume should be deleted when the task stops. If a value of "true" is specified, Amazon ECS deletes the Amazon EBS volume on your behalf when the task goes into the "STOPPED" state. If no value is specified, the default value "true" is used. When set to "false", Amazon ECS leaves the volume in your account. * **filesystemType** *(string) --* The Linux filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start. The available filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. rtype: dict returns: **Response Syntax** { 'tasks': [ { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 
'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **tasks** *(list) --* A full description of the tasks that were run. 
The tasks that were successfully placed on your cluster are described here. * *(dict) --* Details on a task in a cluster. * **attachments** *(list) --* The Elastic Network Adapter that's associated with the task if the task uses the "awsvpc" network mode. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attributes** *(list) --* The attributes of the task. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. 
The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **availabilityZone** *(string) --* The Availability Zone for the task. * **capacityProviderName** *(string) --* The capacity provider that's associated with the task. * **clusterArn** *(string) --* The ARN of the cluster that hosts the task. * **connectivity** *(string) --* The connectivity status of a task. * **connectivityAt** *(datetime) --* The Unix timestamp for the time when the task last went into "CONNECTED" status. * **containerInstanceArn** *(string) --* The ARN of the container instance that hosts the task. * **containers** *(list) --* The containers that are associated with the task. * *(dict) --* A Docker container that's part of a task. * **containerArn** *(string) --* The Amazon Resource Name (ARN) of the container. * **taskArn** *(string) --* The ARN of the task. * **name** *(string) --* The name of the container. * **image** *(string) --* The image used for the container. * **imageDigest** *(string) --* The container image manifest digest. * **runtimeId** *(string) --* The ID of the Docker container. * **lastStatus** *(string) --* The last known status of the container. * **exitCode** *(integer) --* The exit code returned from the container. 
* **reason** *(string) --* A short (1024 max characters) human-readable string to provide additional details about a running or stopped container. * **networkBindings** *(list) --* The network bindings associated with the container. * *(dict) --* Details on the network bindings between a container and its host container instance. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **bindIP** *(string) --* The IP address that the container is bound to on the container instance. * **containerPort** *(integer) --* The port number on the container that's used with the network binding. * **hostPort** *(integer) --* The port number on the host that's used with the network binding. * **protocol** *(string) --* The protocol used for the network binding. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports. 
* The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which are the host ports that are bound to the container ports. * **hostPortRange** *(string) --* The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent. * **networkInterfaces** *(list) --* The network interfaces associated with the container. * *(dict) --* An object representing the elastic network interface for tasks that use the "awsvpc" network mode. * **attachmentId** *(string) --* The attachment ID for the network interface. * **privateIpv4Address** *(string) --* The private IPv4 address for the network interface. * **ipv6Address** *(string) --* The private IPv6 address for the network interface. * **healthStatus** *(string) --* The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as "UNKNOWN". * **managedAgents** *(list) --* The details of any Amazon ECS managed agents associated with the container. * *(dict) --* Details about the managed agent status for the container. * **lastStartedAt** *(datetime) --* The Unix timestamp for the time when the managed agent was last started. * **name** *(string) --* The name of the managed agent. When the execute command feature is turned on, the managed agent name is "ExecuteCommandAgent". 
* **reason** *(string) --* The reason the managed agent is in its current state. * **lastStatus** *(string) --* The last known status of the managed agent. * **cpu** *(string) --* The number of CPU units set for the container. The value is "0" if no value was specified in the container definition when the task definition was registered. * **memory** *(string) --* The hard limit (in MiB) of memory set for the container. * **memoryReservation** *(string) --* The soft limit (in MiB) of memory set for the container. * **gpuIds** *(list) --* The IDs of each GPU assigned to the container. * *(string) --* * **cpu** *(string) --* The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, "1024"). It can also be expressed as a string using vCPUs (for example, "1 vCPU" or "1 vcpu"). String values are converted to an integer that indicates the CPU units when the task definition is registered. If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ("0.125" vCPUs) and "196608" CPU units ("192" vCPUs). If you do not specify a value, the parameter is ignored. This field is required for Fargate. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the "PENDING" state. * **desiredStatus** *(string) --* The desired status of the task. For more information, see Task Lifecycle. * **enableExecuteCommand** *(boolean) --* Determines whether execute command functionality is turned on for this task. If "true", execute command functionality is turned on for all the containers in the task. * **executionStoppedAt** *(datetime) --* The Unix timestamp for the time when the task execution stopped. 
* **group** *(string) --* The name of the task group that's associated with the task. * **healthStatus** *(string) --* The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as "HEALTHY", the task status also reports as "HEALTHY". If any essential containers in the task are reporting as "UNHEALTHY" or "UNKNOWN", the task status also reports as "UNHEALTHY" or "UNKNOWN". Note: The Amazon ECS container agent doesn't monitor or report on Docker health checks that are embedded in a container image and not specified in the container definition. For example, this includes those specified in a parent image or from the image's Dockerfile. Health check parameters that are specified in a container definition override any Docker health checks that are found in the container image. * **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **lastStatus** *(string) --* The last known status for the task. For more information, see Task Lifecycle. * **launchType** *(string) --* The infrastructure that your task runs on. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, "1024"). 
If it's expressed as a string using GB (for example, "1GB" or "1 GB"), it's converted to an integer indicating the MiB when the task definition is registered. If you use the EC2 launch type, this field is optional. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the "cpu" parameter. * 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **overrides** *(dict) --* One or more container overrides. * **containerOverrides** *(list) --* One or more container overrides that are sent to a task. * *(dict) --* The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is "{"containerOverrides": [ ] }". If a non-empty container override is specified, the "name" parameter must be included. You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the *Amazon ECS Developer Guide*. * **name** *(string) --* The name of the container that receives the override. This parameter is required if any override is specified. 
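The override rules above (an empty "containerOverrides" list is allowed, but any non-empty container override must include "name") can be checked client-side before calling RunTask. The following helper and sample payload are an illustrative sketch, not part of the boto3 API:

```python
def validate_overrides(overrides):
    """Check the documented rule: every non-empty container override
    must name the container it applies to."""
    for co in overrides.get("containerOverrides", []):
        # An empty override dict is fine; anything that sets command,
        # environment, cpu, etc. must also carry "name".
        if co and "name" not in co:
            raise ValueError("containerOverride without 'name': %r" % co)
    return overrides

# A valid override: changes only the command for the "sleep" container.
payload = validate_overrides(
    {"containerOverrides": [{"name": "sleep", "command": ["sleep", "60"]}]}
)
```

The validated dict can then be passed as the "overrides" argument of "run_task".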
* **command** *(list) --* The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. * *(string) --* * **environment** *(list) --* The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container, instead of the value from the container definition. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. 
You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **cpu** *(integer) --* The number of "cpu" units reserved for the container, instead of the default value from the task definition. You must also specify a container name. * **memory** *(integer) --* The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. 
The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* The type of resource to assign to a container. * **cpu** *(string) --* The CPU override for the task. * **inferenceAcceleratorOverrides** *(list) --* The Elastic Inference accelerator override for the task. * *(dict) --* Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name to override for the task. This parameter must match a "deviceName" specified in the task definition. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The memory override for the task. * **taskRoleArn** *(string) --* The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **ephemeralStorage** *(dict) --* The ephemeral storage setting override for the task. Note: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. 
* **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **platformVersion** *(string) --* The platform version that your task runs on. A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX"). * **pullStartedAt** *(datetime) --* The Unix timestamp for the time when the container image pull began. * **pullStoppedAt** *(datetime) --* The Unix timestamp for the time when the container image pull completed. * **startedAt** *(datetime) --* The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the "PENDING" state to the "RUNNING" state. * **startedBy** *(string) --* The tag specified when a task is started. If an Amazon ECS service started the task, the "startedBy" parameter contains the deployment ID of that service. * **stopCode** *(string) --* The stop code indicating why a task was stopped. The "stoppedReason" might contain additional details. For more information about stop codes, see Stopped tasks error codes in the *Amazon ECS Developer Guide*. * **stoppedAt** *(datetime) --* The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the "RUNNING" state to the "STOPPED" state. * **stoppedReason** *(string) --* The reason that the task was stopped. 
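The "stopCode", "stoppedReason", and "taskArn" fields described above are typically read together when diagnosing stopped tasks. A minimal sketch over a DescribeTasks-shaped task dict follows; the sample values are invented for illustration:

```python
def summarize_stopped(task):
    """Build a one-line diagnostic from the stop-related fields of a
    task dict shaped like one element of response['tasks']."""
    return "%s: %s (%s)" % (
        task.get("stopCode", "Unknown"),
        task.get("stoppedReason", "no reason recorded"),
        task.get("taskArn", "?"),
    )

# Invented sample; real values come from a DescribeTasks response.
sample = {
    "taskArn": "arn:aws:ecs:us-east-1:123456789012:task/demo",
    "lastStatus": "STOPPED",
    "stopCode": "EssentialContainerExited",
    "stoppedReason": "Essential container in task exited",
}
print(summarize_stopped(sample))
```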
* **stoppingAt** *(datetime) --* The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the "RUNNING" state to "STOPPING". * **tags** *(list) --* The metadata that you apply to the task to help you categorize and organize the task. Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **taskArn** *(string) --* The Amazon Resource Name (ARN) of the task. * **taskDefinitionArn** *(string) --* The ARN of the task definition that creates the task. * **version** *(integer) --* The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the "detail" object) to verify that the version in your event stream is current. * **ephemeralStorage** *(dict) --* The ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is "20" GiB and the maximum supported value is "200" GiB. 
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for the task. * **failures** *(list) --* Any failures associated with the call. For information about how to address failures, see Service event messages and API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" * "ECS.Client.exceptions.PlatformUnknownException" * "ECS.Client.exceptions.PlatformTaskDefinitionIncompatibilityException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.BlockedException" * "ECS.Client.exceptions.ConflictException" **Examples** This example runs the specified task definition on your default cluster. 
response = client.run_task( cluster='default', taskDefinition='sleep360:1', ) print(response) Expected Output: { 'tasks': [ { 'containerInstanceArn': 'arn:aws:ecs:us-east-1::container-instance/ffe3d344-77e2-476c-a4d0-bf560ad50acb', 'containers': [ { 'name': 'sleep', 'containerArn': 'arn:aws:ecs:us-east-1::container/58591c8e-be29-4ddf-95aa-ee459d4c59fd', 'lastStatus': 'PENDING', 'taskArn': 'arn:aws:ecs:us-east-1::task/a9f21ea7-c9f5-44b1-b8e6-b31f50ed33c0', }, ], 'desiredStatus': 'RUNNING', 'lastStatus': 'PENDING', 'overrides': { 'containerOverrides': [ { 'name': 'sleep', }, ], }, 'taskArn': 'arn:aws:ecs:us-east-1::task/a9f21ea7-c9f5-44b1-b8e6-b31f50ed33c0', 'taskDefinitionArn': 'arn:aws:ecs:us-east-1::task-definition/sleep360:1', }, ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / describe_service_revisions describe_service_revisions ************************** ECS.Client.describe_service_revisions(**kwargs) Describes one or more service revisions. A service revision is a version of the service that includes the values for the Amazon ECS resources (for example, task definition) and the environment resources (for example, load balancers, subnets, and security groups). For more information, see Amazon ECS service revisions. You can't describe a service revision that was created before October 25, 2024. See also: AWS API Documentation **Request Syntax** response = client.describe_service_revisions( serviceRevisionArns=[ 'string', ] ) type serviceRevisionArns: list param serviceRevisionArns: **[REQUIRED]** The ARN of the service revision. You can specify a maximum of 20 ARNs. You can call ListServiceDeployments to get the ARNs. 
* *(string) --* rtype: dict returns: **Response Syntax** { 'serviceRevisions': [ { 'serviceRevisionArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'taskDefinition': 'string', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'containerImages': [ { 'containerName': 'string', 'imageDigest': 'string', 'image': 'string' }, ], 'guardDutyEnabled': True|False, 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 
'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'createdAt': datetime(2015, 1, 1), 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ], 'resolvedConfiguration': { 'loadBalancers': [ { 'targetGroupArn': 'string', 'productionListenerRule': 'string' }, ] } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **serviceRevisions** *(list) --* The list of service revisions described. * *(dict) --* Information about the service revision. A service revision contains a record of the workload configuration Amazon ECS is attempting to deploy. Whenever you create or deploy a service, Amazon ECS automatically creates and captures the configuration that you're trying to deploy in the service revision. For information about service revisions, see Amazon ECS service revisions in the *Amazon Elastic Container Service Developer Guide*. * **serviceRevisionArn** *(string) --* The ARN of the service revision. * **serviceArn** *(string) --* The ARN of the service for the service revision. * **clusterArn** *(string) --* The ARN of the cluster that hosts the service. * **taskDefinition** *(string) --* The task definition the service revision uses. * **capacityProviderStrategy** *(list) --* The capacity provider strategy the service revision uses. * *(dict) --* The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. 
Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of "0" can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of "1", then when the "base" is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **launchType** *(string) --* The launch type the service revision uses. * **platformVersion** *(string) --* For the Fargate launch type, the platform version the service revision uses. * **platformFamily** *(string) --* The platform family the service revision uses. * **loadBalancers** *(list) --* The load balancers the service revision uses. * *(dict) --* The load balancer configuration to use with a service or task set. When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers. We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration. A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the *Amazon Elastic Container Service Developer Guide*. 
* **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set. A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. For services using the "ECS" deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the *Amazon Elastic Container Service Developer Guide*. For services using the "CODE_DEPLOY" deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the *Amazon Elastic Container Service Developer Guide*. Warning: If your service's task definition uses the "awsvpc" network mode, you must choose "ip" as the target type, not "instance". Do this when creating your target groups because tasks that use the "awsvpc" network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type. * **loadBalancerName** *(string) --* The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted. * **containerName** *(string) --* The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer. * **containerPort** *(integer) --* The port on the container to associate with the load balancer. This port must correspond to a "containerPort" in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the "hostPort" of the port mapping. 
* **advancedConfiguration** *(dict) --* The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments. * **alternateTargetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic. * **testListenerRule** *(string) --* The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you. * **serviceRegistries** *(list) --* The service registries (for Service Discovery) the service revision uses. * *(dict) --* The details for the service registry. Each service may be associated with one service registry. Multiple service registries for each service are not supported. When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration. * **registryArn** *(string) --* The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService. * **port** *(integer) --* The port value used if your service discovery service specified an SRV record. This field might be used if both the "awsvpc" network mode and SRV records are used. 
* **containerName** *(string) --* The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition that your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **containerPort** *(integer) --* The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the "bridge" or "host" network mode, you must specify a "containerName" and "containerPort" combination from the task definition. If the task definition your service task specifies uses the "awsvpc" network mode and a type SRV DNS record is used, you must specify either a "containerName" and "containerPort" combination or a "port" value. However, you can't specify both. * **networkConfiguration** *(dict) --* The network configuration for a task or service. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. 
* *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". * **containerImages** *(list) --* The container images the service revision uses. * *(dict) --* The details about the container image a service revision uses. To ensure that all tasks in a service use the same container image, Amazon ECS resolves container image names and any image tags specified in the task definition to container image digests. After the container image digest has been established, Amazon ECS uses the digest to start any other desired tasks, and for any future service and service revision updates. This leads to all tasks in a service always running identical container images, resulting in version consistency for your software. For more information, see Container image resolution in the Amazon ECS Developer Guide. * **containerName** *(string) --* The name of the container. * **imageDigest** *(string) --* The container image digest. * **image** *(string) --* The container image. * **guardDutyEnabled** *(boolean) --* Indicates whether Runtime Monitoring is turned on. * **serviceConnectConfiguration** *(dict) --* The Service Connect configuration of your Amazon ECS service. The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. 
For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **enabled** *(boolean) --* Specifies whether to use Service Connect with this service. * **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the *Cloud Map Developer Guide*. * **services** *(list) --* The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service. This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means. An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service. * *(dict) --* The Service Connect service object configuration. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **portName** *(string) --* The "portName" must match the name of one of the "portMappings" from all the containers in the task definition of this Amazon ECS service. * **discoveryName** *(string) --* The "discoveryName" is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen. 
If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". * **clientAliases** *(list) --* The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1. Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. For each "ServiceConnectService", you must provide at least one "clientAlias" with one "port". * *(dict) --* Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service. Each name and port mapping must be unique within the namespace. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **port** *(integer) --* The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace. To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **dnsName** *(string) --* The "dnsName" is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully- qualified. The name can include up to 127 characters. 
The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen. If this parameter isn't specified, the default value of "discoveryName.namespace" is used. If the "discoveryName" isn't specified, the port mapping name from the task definition is used in "portName.namespace". To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are "database", "db", or the lowercase name of a database, such as "mysql" or "redis". For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. * **testTrafficRules** *(dict) --* The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic. * **header** *(dict) --* The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers. * **name** *(string) --* The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like "X-Test-Version" or "X-Canary-Request" that can be used to identify test traffic. * **value** *(dict) --* The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions. * **exact** *(string) --* The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments. * **ingressPortOverride** *(integer) --* The port number for the Service Connect proxy to listen on. 
Use the value of this field to bypass the proxy for traffic on the port number specified in the named "portMapping" in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service. In "awsvpc" mode and Fargate, the default value is the container port number. The container port number is in the "portMapping" in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy. * **timeout** *(dict) --* A reference to an object that represents the configured timeouts for Service Connect. * **idleTimeoutSeconds** *(integer) --* The amount of time in seconds a connection will stay active while idle. A value of "0" can be set to disable "idleTimeout". The "idleTimeout" default for "HTTP"/ "HTTP2"/ "GRPC" is 5 minutes. The "idleTimeout" default for "TCP" is 1 hour. * **perRequestTimeoutSeconds** *(integer) --* The amount of time waiting for the upstream to respond with a complete response per request. A value of "0" can be set to disable "perRequestTimeout". "perRequestTimeout" can only be set if Service Connect "appProtocol" isn't "TCP". Only "idleTimeout" is allowed for "TCP" "appProtocol". * **tls** *(dict) --* A reference to an object that represents a Transport Layer Security (TLS) configuration. * **issuerCertificateAuthority** *(dict) --* The signer certificate authority. * **awsPcaAuthorityArn** *(string) --* The ARN of the Amazon Web Services Private Certificate Authority certificate. * **kmsKey** *(string) --* The Amazon Web Services Key Management Service key. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS. * **logConfiguration** *(dict) --* The log configuration for the container. This parameter maps to "LogConfig" in the docker container create command and the "--log-driver" option to docker run. 
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. * Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". * This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. * For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the "ECS_AVAILABLE_LOGGING_DRIVERS" environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the *Amazon Elastic Container Service Developer Guide*. * For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to. * **logDriver** *(string) --* The log driver to use for the container. For tasks on Fargate, the supported log drivers are "awslogs", "splunk", and "awsfirelens". For tasks hosted on Amazon EC2 instances, the supported log drivers are "awslogs", "fluentd", "gelf", "json-file", "journald", "syslog", "splunk", and "awsfirelens". 
For more information about using the "awslogs" log driver, see Send Amazon ECS logs to CloudWatch in the *Amazon Elastic Container Service Developer Guide*. For more information about using the "awsfirelens" log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner. Note: If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. * **options** *(dict) --* The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the "awslogs" log driver to route logs to Amazon CloudWatch include the following: awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to "false". Note: Your IAM policy must include the "logs:CreateLogGroup" permission before you attempt to use "awslogs-create-group". awslogs-region Required: Yes Specify the Amazon Web Services Region that the "awslogs" log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. awslogs-group Required: Yes Make sure to specify a log group that the "awslogs" log driver sends its log streams to. awslogs-stream-prefix Required: Yes, when using Fargate. Optional when using EC2. 
Use the "awslogs-stream-prefix" option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format "prefix-name/container- name/ecs-task-id". If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. awslogs-datetime-format Required: No This option defines a multiline start pattern in Python "strftime" format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime- format. You cannot configure both the "awslogs-datetime- format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. 
A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if "awslogs-datetime-format" is also configured. You cannot configure both the "awslogs-datetime-format" and "awslogs-multiline-pattern" options. Note: Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. The following options apply to all supported log drivers. mode Required: No Valid values: "non-blocking" | "blocking" This option defines the delivery mode of log messages from the container to the log driver specified using "logDriver". The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the "blocking" mode and the flow of logs is interrupted, calls from container code to write to the "stdout" and "stderr" streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the "non-blocking" mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the "max-buffer-size" option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default "mode" for all containers in a specific Amazon Web Services Region by using the "defaultLogDriverMode" account setting. If you don't specify the "mode" option or configure the account setting, Amazon ECS will default to the "non-blocking" mode. 
For more information about the account setting, see Default log driver mode in the *Amazon Elastic Container Service Developer Guide*. Note: On June 25, 2025, Amazon ECS changed the default log driver mode from "blocking" to "non-blocking" to prioritize task availability over logging. To continue using the "blocking" mode after this change, do one of the following: * Set the "mode" option in your container definition's "logConfiguration" as "blocking". * Set the "defaultLogDriverMode" account setting to "blocking". max-buffer-size Required: No Default value: "1m" When "non-blocking" mode is used, the "max-buffer-size" log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. To route logs using the "splunk" log router, you need to specify a "splunk-token" and a "splunk-url". When you use the "awsfirelens" log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the "log-driver-buffer-limit" option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using "awsfirelens" to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with "region" and a name for the log stream with "delivery_stream". When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with "region" and a data stream name with "stream". 
When you export logs to Amazon OpenSearch Service, you can specify options like "Name", "Host" (OpenSearch Service endpoint without protocol), "Port", "Index", "Type", "Aws_auth", "Aws_region", "Suppress_Type_Name", and "tls". For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the "bucket" option. You can also specify "region", "total_file_size", "upload_timeout", and "use_put_object" as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: "sudo docker version --format '{{.Server.APIVersion}}'" * *(string) --* * *(string) --* * **secretOptions** *(list) --* The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * *(dict) --* An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways: * To inject sensitive data into your containers as environment variables, use the "secrets" container definition parameter. * To reference sensitive information in the log configuration of a container, use the "secretOptions" container definition parameter. For more information, see Specifying sensitive data in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the secret. * **valueFrom** *(string) --* The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. 
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide*. Note: If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified. * **volumeConfigurations** *(list) --* The volumes that are configured at deployment that the service revision uses. * *(dict) --* The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume. * **name** *(string) --* The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. * **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption.
When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either "snapshotId" or "sizeInGiB" in your volume configuration.
This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. 
"AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. 
We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **filesystemType** *(string) --* The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start. The available Linux filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. The available Windows filesystem type is "NTFS". * **fargateEphemeralStorage** *(dict) --* The amount of ephemeral storage to allocate for the deployment. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment. * **createdAt** *(datetime) --* The time that the service revision was created. The format is yyyy-mm-dd HH:mm:ss.SSSSS. * **vpcLatticeConfigurations** *(list) --* The VPC Lattice configuration for the service revision. * *(dict) --* The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to. * **roleArn** *(string) --* The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure. * **targetGroupArn** *(string) --* The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to. * **portName** *(string) --* The name of the port mapping to register in the VPC Lattice target group. This is the name of the "portMapping" you defined in your task definition.
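The managed EBS volume fields documented above are the same ones supplied at deployment time. As a hedged sketch, a "volumeConfigurations" entry for a service might look like the following; the volume name, role ARN, and sizing values are placeholders, not values from this reference:

```python
# Hedged sketch: a service volume configuration asking Amazon ECS to
# create one gp3 EBS volume per task. The volume name must match a
# volume declared in the task definition; the role ARN is a placeholder.
volume_configuration = {
    'name': 'app-data',                        # must match a task definition volume name
    'managedEBSVolume': {
        'encrypted': True,
        'volumeType': 'gp3',
        'sizeInGiB': 100,                      # gp3 supports 1-16,384 GiB
        'iops': 3000,                          # the documented gp3 default
        'throughput': 125,                     # MiB/s; gp3-only option
        'filesystemType': 'xfs',               # the Linux default
        # Placeholder infrastructure role ARN:
        'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',
    },
}

# This entry would be passed in the volumeConfigurations list of
# create_service() or update_service(), for example:
#   client.update_service(cluster='default', service='example',
#                         volumeConfigurations=[volume_configuration])
```

Note that "sizeInGiB" and "snapshotId" are alternatives: this sketch sizes the volume directly rather than restoring from a snapshot.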
* **resolvedConfiguration** *(dict) --* The resolved configuration for the service revision which contains the actual resources your service revision uses, such as which target groups serve traffic. * **loadBalancers** *(list) --* The resolved load balancer configuration for the service revision. This includes information about which target groups serve traffic and which listener rules direct traffic to them. * *(dict) --* The resolved load balancer configuration for a service revision. This includes information about which target groups serve traffic and which listener rules direct traffic to them. * **targetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the target group associated with the service revision. * **productionListenerRule** *(string) --* The Amazon Resource Name (ARN) of the production listener rule or listener that directs traffic to the target group associated with the service revision. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ServiceNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / tag_resource tag_resource ************ ECS.Client.tag_resource(**kwargs) Associates the specified tags to a resource with the specified "resourceArn". If existing tags on a resource aren't specified in the request parameters, they aren't changed. 
When a resource is deleted, the tags that are associated with that resource are deleted as well. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( resourceArn='string', tags=[ { 'key': 'string', 'value': 'string' }, ] ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the resource to add tags to. Currently, the supported resources are Amazon ECS capacity providers, tasks, services, task definitions, clusters, and container instances. In order to tag a service that has the following ARN format, you need to migrate the service to the long ARN. For more information, see Migrate an Amazon ECS short service ARN to a long ARN in the *Amazon Elastic Container Service Developer Guide*. "arn:aws:ecs:region:aws_account_id:service/service-name" After the migration is complete, the service has the long ARN format, as shown below. Use this ARN to tag the service. "arn:aws:ecs:region:aws_account_id:service/cluster-name/service-name" If you try to tag a service with a short ARN, you receive an "InvalidParameterException" error. * **tags** (*list*) -- **[REQUIRED]** The tags to add to the resource. A tag is an array of key-value pairs. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use.
You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.ResourceNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example tags the 'dev' cluster with key 'team' and value 'dev'.
response = client.tag_resource( resourceArn='arn:aws:ecs:region:aws_account_id:cluster/dev', tags=[ { 'key': 'team', 'value': 'dev', }, ], ) print(response) Expected Output: { 'ResponseMetadata': { '...': '...', }, } ECS / Client / submit_attachment_state_changes submit_attachment_state_changes ******************************* ECS.Client.submit_attachment_state_changes(**kwargs) Note: This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent. Sent to acknowledge that an attachment changed states. See also: AWS API Documentation **Request Syntax** response = client.submit_attachment_state_changes( cluster='string', attachments=[ { 'attachmentArn': 'string', 'status': 'string' }, ] ) Parameters: * **cluster** (*string*) -- The short name or full ARN of the cluster that hosts the container instance the attachment belongs to. * **attachments** (*list*) -- **[REQUIRED]** Any attachments associated with the state change request. * *(dict) --* An object representing a change in state for a task attachment. * **attachmentArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the attachment. * **status** *(string) --* **[REQUIRED]** The status of the attachment. Return type: dict Returns: **Response Syntax** { 'acknowledgment': 'string' } **Response Structure** * *(dict) --* * **acknowledgment** *(string) --* Acknowledgement of the state change. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / delete_attributes delete_attributes ***************** ECS.Client.delete_attributes(**kwargs) Deletes one or more custom attributes from an Amazon ECS resource. 
See also: AWS API Documentation **Request Syntax** response = client.delete_attributes( cluster='string', attributes=[ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ] ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that contains the resource to delete attributes from. If you do not specify a cluster, the default cluster is assumed. * **attributes** (*list*) -- **[REQUIRED]** The attributes to delete from your resource. You can specify up to 10 attributes for each request. For custom attributes, specify the attribute name and target ID, but don't specify the value. If you specify the target ID using the short form, you must also specify the target type. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* **[REQUIRED]** The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
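Under the parameter rules above (name and target ID for a custom attribute, no value, and a target type when using a short-form ID), a minimal request can be sketched as follows. The cluster name, attribute name, and target ID are hypothetical placeholders:

```python
# Hedged sketch: deleting a custom attribute by name from a container
# instance identified by its short-form ID. All names and IDs below are
# illustrative placeholders.
request = {
    'cluster': 'default',
    'attributes': [
        {
            'name': 'stack',                                        # hypothetical attribute name
            'targetId': 'f2756532-8f13-4d53-87c9-aed50dc94cd7',     # placeholder short-form ID
            'targetType': 'container-instance',                     # required with a short-form ID
        },
    ],
}

# The call itself would be:
#   response = client.delete_attributes(**request)
# and the successfully deleted attributes come back in response['attributes'].
```

Note that "value" is intentionally omitted, as the parameter documentation above requires for custom attributes.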
Return type: dict Returns: **Response Syntax** { 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ] } **Response Structure** * *(dict) --* * **attributes** *(list) --* A list of attribute objects that were successfully deleted from your resource. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). **Exceptions** * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.TargetNotFoundException" * "ECS.Client.exceptions.InvalidParameterException" ECS / Client / describe_clusters describe_clusters ***************** ECS.Client.describe_clusters(**kwargs) Describes one or more of your clusters. For CLI examples, see describe-clusters.rst on GitHub.
See also: AWS API Documentation **Request Syntax** response = client.describe_clusters( clusters=[ 'string', ], include=[ 'ATTACHMENTS'|'CONFIGURATIONS'|'SETTINGS'|'STATISTICS'|'TAGS', ] ) Parameters: * **clusters** (*list*) -- A list of up to 100 cluster names or full cluster Amazon Resource Name (ARN) entries. If you do not specify a cluster, the default cluster is assumed. * *(string) --* * **include** (*list*) -- Determines whether to include additional information about the clusters in the response. If this field is omitted, this information isn't included. If "ATTACHMENTS" is specified, the attachments for the container instances or tasks within the cluster are included, for example the capacity providers. If "SETTINGS" is specified, the settings for the cluster are included. If "CONFIGURATIONS" is specified, the configuration for the cluster is included. If "STATISTICS" is specified, the task and service count is included, separated by launch type. If "TAGS" is specified, the metadata tags associated with the cluster are included. 
* *(string) --* Return type: dict Returns: **Response Syntax** { 'clusters': [ { 'clusterArn': 'string', 'clusterName': 'string', 'configuration': { 'executeCommandConfiguration': { 'kmsKeyId': 'string', 'logging': 'NONE'|'DEFAULT'|'OVERRIDE', 'logConfiguration': { 'cloudWatchLogGroupName': 'string', 'cloudWatchEncryptionEnabled': True|False, 's3BucketName': 'string', 's3EncryptionEnabled': True|False, 's3KeyPrefix': 'string' } }, 'managedStorageConfiguration': { 'kmsKeyId': 'string', 'fargateEphemeralStorageKmsKeyId': 'string' } }, 'status': 'string', 'registeredContainerInstancesCount': 123, 'runningTasksCount': 123, 'pendingTasksCount': 123, 'activeServicesCount': 123, 'statistics': [ { 'name': 'string', 'value': 'string' }, ], 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'settings': [ { 'name': 'containerInsights', 'value': 'string' }, ], 'capacityProviders': [ 'string', ], 'defaultCapacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attachmentsStatus': 'string', 'serviceConnectDefaults': { 'namespace': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **clusters** *(list) --* The list of clusters. * *(dict) --* A regional grouping of one or more container instances where you can run task requests. Each account receives a default cluster the first time you use the Amazon ECS service, but you may also create other clusters. Clusters may contain more than one instance type simultaneously. * **clusterArn** *(string) --* The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the *Amazon ECS Developer Guide*. * **clusterName** *(string) --* A user-generated string that you use to identify your cluster. 
* **configuration** *(dict) --* The execute command and managed storage configuration for the cluster. * **executeCommandConfiguration** *(dict) --* The details of the execute command configuration. * **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the data between the local client and the container. * **logging** *(string) --* The log setting to use for redirecting logs for your execute command results. The following log settings are available. * "NONE": The execute command session is not logged. * "DEFAULT": The "awslogs" configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no "awslogs" log driver is configured in the task definition, the output won't be logged. * "OVERRIDE": Specify the logging details as a part of "logConfiguration". If the "OVERRIDE" logging option is specified, the "logConfiguration" is required. * **logConfiguration** *(dict) --* The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When "logging=OVERRIDE" is specified, a "logConfiguration" must be provided. * **cloudWatchLogGroupName** *(string) --* The name of the CloudWatch log group to send logs to. Note: The CloudWatch log group must already be created. * **cloudWatchEncryptionEnabled** *(boolean) --* Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be off. * **s3BucketName** *(string) --* The name of the S3 bucket to send logs to. Note: The S3 bucket must already be created. * **s3EncryptionEnabled** *(boolean) --* Determines whether to use encryption on the S3 logs. If not specified, encryption is not used. * **s3KeyPrefix** *(string) --* An optional folder in the S3 bucket to place logs in. * **managedStorageConfiguration** *(dict) --* The details of the managed storage configuration.
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt Amazon ECS managed storage. When you specify a "kmsKeyId", Amazon ECS uses the key to encrypt data volumes managed by Amazon ECS that are attached to tasks in the cluster. The following data volumes are managed by Amazon ECS: Amazon EBS. For more information about encryption of Amazon EBS volumes attached to Amazon ECS tasks, see Encrypt data stored in Amazon EBS volumes for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **fargateEphemeralStorageKmsKeyId** *(string) --* Specify the Key Management Service key ID for Fargate ephemeral storage. When you specify a "fargateEphemeralStorageKmsKeyId", Amazon Web Services Fargate uses the key to encrypt data at rest in ephemeral storage. For more information about Fargate ephemeral storage encryption, see Customer managed keys for Amazon Web Services Fargate ephemeral storage for Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. The key must be a single Region key. * **status** *(string) --* The status of the cluster. The following are the possible states that are returned. ACTIVE The cluster is ready to accept tasks and if applicable you can register container instances with the cluster. PROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created. DEPROVISIONING The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an "INACTIVE" status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. 
We don't recommend that you rely on "INACTIVE" clusters persisting. * **registeredContainerInstancesCount** *(integer) --* The number of container instances registered into the cluster. This includes container instances in both "ACTIVE" and "DRAINING" status. * **runningTasksCount** *(integer) --* The number of tasks in the cluster that are in the "RUNNING" state. * **pendingTasksCount** *(integer) --* The number of tasks in the cluster that are in the "PENDING" state. * **activeServicesCount** *(integer) --* The number of services that are running on the cluster in an "ACTIVE" state. You can view these services with ListServices. * **statistics** *(list) --* Additional information about your clusters that are separated by launch type. They include the following: * runningEC2TasksCount * runningFargateTasksCount * pendingEC2TasksCount * pendingFargateTasksCount * activeEC2ServiceCount * activeFargateServiceCount * drainingEC2ServiceCount * drainingFargateServiceCount * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **tags** *(list) --* The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters.
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of these as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **settings** *(list) --* The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is on or off for a cluster.
* *(dict) --* The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights with enhanced observability or CloudWatch Container Insights for a cluster. Container Insights with enhanced observability provides all the Container Insights metrics, plus additional task and container metrics. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays this critical performance data in curated dashboards, removing the heavy lifting of observability setup. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the cluster setting. The value is "containerInsights". * **value** *(string) --* The value to set for the cluster setting. The supported values are "enhanced", "enabled", and "disabled". To use Container Insights with enhanced observability, set the "containerInsights" account setting to "enhanced". To use Container Insights, set the "containerInsights" account setting to "enabled". If a cluster value is specified, it will override the "containerInsights" value set with PutAccountSetting or PutAccountSettingDefault. * **capacityProviders** *(list) --* The capacity providers associated with the cluster. * *(string) --* * **defaultCapacityProviderStrategy** *(list) --* The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used. * *(dict) --* The details of a capacity provider strategy. 
A capacity provider strategy can be set when using the RunTask or CreateService APIs or as the default capacity provider strategy for a cluster with the "CreateCluster" API. Only capacity providers that are already associated with a cluster and have an "ACTIVE" or "UPDATING" status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use a Fargate capacity provider, specify either the "FARGATE" or "FARGATE_SPOT" capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy. With "FARGATE_SPOT", you can run interruption tolerant tasks at a rate that's discounted compared to the "FARGATE" price. "FARGATE_SPOT" runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. "FARGATE_SPOT" supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. "FARGATE_SPOT" supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later. A capacity provider strategy can contain a maximum of 20 capacity providers. * **capacityProvider** *(string) --* The short name of the capacity provider. * **weight** *(integer) --* The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The "weight" value is taken into consideration after the "base" value, if defined, is satisfied. If no "weight" value is specified, the default value of "0" is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of "0" can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of "0", any "RunTask" or "CreateService" actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of "1". When the "base" is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of "1" for *capacityProviderA* and a weight of "4" for *capacityProviderB*, then for every one task that's run using *capacityProviderA*, four tasks would use *capacityProviderB*. * **base** *(integer) --* The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of "0" is used. * **attachments** *(list) --* The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. 
For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attachmentsStatus** *(string) --* The status of the capacity providers associated with the cluster. The following are the states that are returned. UPDATE_IN_PROGRESS The available capacity providers for the cluster are updating. UPDATE_COMPLETE The capacity providers have successfully updated. UPDATE_FAILED The capacity provider updates failed. * **serviceConnectDefaults** *(dict) --* Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the "enabled" parameter to "true" in the "ServiceConnectConfiguration". You can set the namespace of each service individually in the "ServiceConnectConfiguration" to override this default parameter. Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the *Amazon Elastic Container Service Developer Guide*. 
* **namespace** *(string) --* The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example provides a description of the specified cluster in your default region. response = client.describe_clusters( clusters=[ 'default', ], ) print(response) Expected Output: { 'clusters': [ { 'clusterArn': 'arn:aws:ecs:us-east-1:aws_account_id:cluster/default', 'clusterName': 'default', 'status': 'ACTIVE', }, ], 'failures': [ ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / start_task start_task ********** ECS.Client.start_task(**kwargs) Starts a new task from the specified task definition on the specified container instance or instances. Note: On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition. Note: Amazon Elastic Inference (EI) is no longer available to customers. Alternatively, you can use "RunTask" to place tasks for you. For more information, see Scheduling Tasks in the *Amazon Elastic Container Service Developer Guide*. You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. 
For more information, see Amazon EBS volumes in the *Amazon Elastic Container Service Developer Guide*. See also: AWS API Documentation **Request Syntax** response = client.start_task( cluster='string', containerInstances=[ 'string', ], enableECSManagedTags=True|False, enableExecuteCommand=True|False, group='string', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, overrides={ 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', referenceId='string', startedBy='string', tags=[ { 'key': 'string', 'value': 'string' }, ], taskDefinition='string', volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'terminationPolicy': { 'deleteOnTermination': True|False }, 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ] ) type cluster: string param cluster: The short name or full Amazon Resource Name (ARN) of the cluster where to start your task. If you do not specify a cluster, the default cluster is assumed. 
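Before the individual parameters below, it may help to see how a minimal request is assembled. This is a hedged sketch: the cluster name, container instance ID, and task definition family are hypothetical placeholders, and only "containerInstances" and "taskDefinition" are required.

```python
def build_start_task_kwargs(task_definition, container_instances, cluster=None):
    """Assemble kwargs for client.start_task. Omitting 'cluster' targets the
    default cluster, mirroring the 'cluster' parameter semantics above."""
    kwargs = {
        "taskDefinition": task_definition,
        "containerInstances": list(container_instances),
    }
    if cluster is not None:
        kwargs["cluster"] = cluster
    return kwargs

kwargs = build_start_task_kwargs(
    "my-task-family:1",                # hypothetical family:revision
    ["3f8a24ab-example-instance-id"],  # up to 10 container instance IDs or ARNs
)
# client = boto3.client("ecs")
# response = client.start_task(**kwargs)
```

Keeping the kwargs in a helper like this makes it easy to leave "cluster" unset so the default cluster is assumed, as described above.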
type containerInstances: list param containerInstances: **[REQUIRED]** The container instance IDs or full ARN entries for the container instances where you would like to place your task. You can specify up to 10 container instances. * *(string) --* type enableECSManagedTags: boolean param enableECSManagedTags: Specifies whether to use Amazon ECS managed tags for the task. For more information, see Tagging Your Amazon ECS Resources in the *Amazon Elastic Container Service Developer Guide*. type enableExecuteCommand: boolean param enableExecuteCommand: Whether or not the execute command functionality is turned on for the task. If "true", this turns on the execute command functionality on all containers in the task. type group: string param group: The name of the task group to associate with the task. The default value is the family name of the task definition (for example, family:my-family-name). type networkConfiguration: dict param networkConfiguration: The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the "awsvpc" networking mode. * **awsvpcConfiguration** *(dict) --* The VPC subnets and security groups that are associated with a task. Note: All specified subnets and security groups must be from the same VPC. * **subnets** *(list) --* **[REQUIRED]** The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified. Note: All specified subnets must be from the same VPC. * *(string) --* * **securityGroups** *(list) --* The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified. Note: All specified security groups must be from the same VPC. * *(string) --* * **assignPublicIp** *(string) --* Whether the task's elastic network interface receives a public IP address. 
Consider the following when you set this value: * When you use "create-service" or "update-service", the default is "DISABLED". * When the service "deploymentController" is "ECS", the value must be "DISABLED". type overrides: dict param overrides: A list of container overrides in JSON format that specify the name of a container in the specified task definition and the overrides it receives. You can override the default command for a container (that's specified in the task definition or Docker image) with a "command" override. You can also override existing environment variables (that are specified in the task definition or Docker image) on a container or add new environment variables to it with an "environment" override. Note: A total of 8192 characters are allowed for overrides. This limit includes the JSON formatting characters of the override structure. * **containerOverrides** *(list) --* One or more container overrides that are sent to a task. * *(dict) --* The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is "{"containerOverrides": [ ] }". If a non-empty container override is specified, the "name" parameter must be included. You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide. * **name** *(string) --* The name of the container that receives the override. This parameter is required if any override is specified. * **command** *(list) --* The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. * *(string) --* * **environment** *(list) --* The environment variables to send to the container. 
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container, instead of the value from the container definition. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. 
* The container entry point interprets the "VARIABLE" values. * **value** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* **[REQUIRED]** The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **cpu** *(integer) --* The number of "cpu" units reserved for the container, instead of the default value from the task definition. You must also specify a container name. * **memory** *(integer) --* The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* **[REQUIRED]** The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. 
When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* **[REQUIRED]** The type of resource to assign to a container. * **cpu** *(string) --* The CPU override for the task. * **inferenceAcceleratorOverrides** *(list) --* The Elastic Inference accelerator override for the task. * *(dict) --* Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name to override for the task. This parameter must match a "deviceName" specified in the task definition. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The memory override for the task. * **taskRoleArn** *(string) --* The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **ephemeralStorage** *(dict) --* The ephemeral storage setting override for the task. Note: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. * **sizeInGiB** *(integer) --* **[REQUIRED]** The total amount, in GiB, of ephemeral storage to set for the task. 
The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. type propagateTags: string param propagateTags: Specifies whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated. type referenceId: string param referenceId: This parameter is only used by Amazon ECS. It is not intended for use by customers. type startedBy: string param startedBy: An optional tag specified when a task is started. For example, if you automatically trigger a task to run a batch process job, you could apply a unique identifier for that job to your task with the "startedBy" parameter. You can then identify which tasks belong to that job by filtering the results of a ListTasks call with the "startedBy" value. Up to 36 letters (uppercase and lowercase), numbers, hyphens (-), forward slash (/), and underscores (_) are allowed. If a task is started by an Amazon ECS service, the "startedBy" parameter contains the deployment ID of the service that starts it. type tags: list param tags: The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. 
* Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). type taskDefinition: string param taskDefinition: **[REQUIRED]** The "family" and "revision" ( "family:revision") or full ARN of the task definition to start. If a "revision" isn't specified, the latest "ACTIVE" revision is used. 
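The "overrides" parameter documented above has a few interacting rules: the "name" field is required whenever any other override is given, and the whole structure (including JSON formatting characters) must fit within 8192 characters. A small sketch, using a hypothetical container name and variables:

```python
import json

# Sketch of an 'overrides' payload for start_task. The container name "app"
# and the command/environment values are hypothetical; "name" must match a
# container defined in the task definition and is required here because other
# overrides are specified.
overrides = {
    "containerOverrides": [
        {
            "name": "app",
            "command": ["python", "worker.py"],
            "environment": [
                {"name": "JOB_ID", "value": "batch-42"},
            ],
        }
    ]
}

# The 8192-character limit noted above includes the JSON formatting characters.
assert len(json.dumps(overrides)) <= 8192
```

A payload like this would be passed as "overrides=overrides" alongside "taskDefinition" and "containerInstances" in the request.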
type volumeConfigurations: list param volumeConfigurations: The details of the volume that was "configuredAtLaunch". You can configure the size, volumeType, IOPS, throughput, snapshot, and encryption in TaskManagedEBSVolumeConfiguration. The "name" of the volume must match the "name" from the task definition. * *(dict) --* Configuration settings for the task volume that was "configuredAtLaunch" that weren't set during "RegisterTaskDefinition". * **name** *(string) --* **[REQUIRED]** The name of the volume. This value must match the volume name from the "Volume" object in the task definition. * **managedEBSVolume** *(dict) --* The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. * **encrypted** *(boolean) --* Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as "false", the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the "Encrypted" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the "KmsKeyId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information about encrypting Amazon EBS volumes attached to a task, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks. 
Warning: Amazon Web Services authenticates the Amazon Web Services Key Management Service key asynchronously. Therefore, if you specify an ID, alias, or ARN that is invalid, the action can appear to complete, but eventually fails. * **volumeType** *(string) --* The volume type. This parameter maps 1:1 with the "VolumeType" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. For more information, see Amazon EBS volume types in the *Amazon EC2 User Guide*. The following are the supported volume types. * General Purpose SSD: "gp2" | "gp3" * Provisioned IOPS SSD: "io1" | "io2" * Throughput Optimized HDD: "st1" * Cold HDD: "sc1" * Magnetic: "standard" Note: The magnetic volume type is not supported on Fargate. * **sizeInGiB** *(integer) --* The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the "Size" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. The following are the supported volume size values for each volume type. * "gp2" and "gp3": 1-16,384 * "io1" and "io2": 4-16,384 * "st1" and "sc1": 125-16,384 * "standard": 1-1,024 * **snapshotId** *(string) --* The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the "SnapshotId" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **volumeInitializationRate** *(integer) --* The rate, in MiB/s, at which data is fetched from a snapshot of an existing Amazon EBS volume to create a new volume for attachment to the task. This property can be specified only if you specify a "snapshotId". For more information, see Initialize Amazon EBS volumes in the *Amazon EBS User Guide*. * **iops** *(integer) --* The number of I/O operations per second (IOPS). 
For "gp3", "io1", and "io2" volumes, this represents the number of IOPS that are provisioned for the volume. For "gp2" volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. The following are the supported values for each volume type. * "gp3": 3,000 - 16,000 IOPS * "io1": 100 - 64,000 IOPS * "io2": 100 - 256,000 IOPS This parameter is required for "io1" and "io2" volume types. The default for "gp3" volumes is "3,000 IOPS". This parameter is not supported for "st1", "sc1", or "standard" volume types. This parameter maps 1:1 with the "Iops" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * **throughput** *(integer) --* The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the "Throughput" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. Warning: This parameter is only supported for the "gp3" volume type. * **tagSpecifications** *(list) --* The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the "TagSpecifications.N" parameter of the CreateVolume API in the *Amazon EC2 API Reference*. * *(dict) --* The tag specifications of an Amazon EBS volume. * **resourceType** *(string) --* **[REQUIRED]** The type of volume resource. * **tags** *(list) --* The tags applied to this Amazon EBS volume. "AmazonECSCreated" and "AmazonECSManaged" are reserved tags that can't be used. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. 
* Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that make up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that make up a tag. A "value" acts as a descriptor within a tag category (key). * **propagateTags** *(string) --* Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a "SERVICE" specified in "ServiceVolumeConfiguration". If no value is specified, the tags aren't propagated. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed "AmazonECSInfrastructureRolePolicyForVolumes" IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the *Amazon ECS Developer Guide*. * **terminationPolicy** *(dict) --* The termination policy for the volume when the task exits. This provides a way to control whether Amazon ECS terminates the Amazon EBS volume when the task stops. 
* **deleteOnTermination** *(boolean) --* **[REQUIRED]** Indicates whether the volume should be deleted when the task stops. If a value of "true" is specified, Amazon ECS deletes the Amazon EBS volume on your behalf when the task goes into the "STOPPED" state. If no value is specified, the default value of "true" is used. When set to "false", Amazon ECS leaves the volume in your account. * **filesystemType** *(string) --* The Linux filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start. The available filesystem types are "ext3", "ext4", and "xfs". If no value is specified, the "xfs" filesystem type is used by default. rtype: dict returns: **Response Syntax** { 'tasks': [ { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 
'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] } **Response Structure** * *(dict) --* * **tasks** *(list) --* A full description of the tasks that were started. 
Each task that was successfully placed on your container instances is described. * *(dict) --* Details on a task in a cluster. * **attachments** *(list) --* The Elastic Network Adapter that's associated with the task if the task uses the "awsvpc" network mode. * *(dict) --* An object representing a container instance or task attachment. * **id** *(string) --* The unique identifier for the attachment. * **type** *(string) --* The type of the attachment, such as "ElasticNetworkInterface", "Service Connect", and "AmazonElasticBlockStorage". * **status** *(string) --* The status of the attachment. Valid values are "PRECREATED", "CREATED", "ATTACHING", "ATTACHED", "DETACHING", "DETACHED", "DELETED", and "FAILED". * **details** *(list) --* Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address. For Service Connect services, this includes "portName", "clientAliases", "discoveryName", and "ingressPortOverride". For Elastic Block Storage, this includes "roleArn", "deleteOnTermination", "volumeName", "volumeId", and "statusReason" (only when the attachment fails to create or attach). * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **attributes** *(list) --* The attributes of the task. * *(dict) --* An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the *Amazon Elastic Container Service Developer Guide*. * **name** *(string) --* The name of the attribute. The "name" must contain between 1 and 128 characters. 
The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.). * **value** *(string) --* The value of the attribute. The "value" must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space. * **targetType** *(string) --* The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN. * **targetId** *(string) --* The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN). * **availabilityZone** *(string) --* The Availability Zone for the task. * **capacityProviderName** *(string) --* The capacity provider that's associated with the task. * **clusterArn** *(string) --* The ARN of the cluster that hosts the task. * **connectivity** *(string) --* The connectivity status of a task. * **connectivityAt** *(datetime) --* The Unix timestamp for the time when the task last went into "CONNECTED" status. * **containerInstanceArn** *(string) --* The ARN of the container instance that hosts the task. * **containers** *(list) --* The containers that are associated with the task. * *(dict) --* A Docker container that's part of a task. * **containerArn** *(string) --* The Amazon Resource Name (ARN) of the container. * **taskArn** *(string) --* The ARN of the task. * **name** *(string) --* The name of the container. * **image** *(string) --* The image used for the container. * **imageDigest** *(string) --* The container image manifest digest. * **runtimeId** *(string) --* The ID of the Docker container. * **lastStatus** *(string) --* The last known status of the container. * **exitCode** *(integer) --* The exit code returned from the container. 
* **reason** *(string) --* A short (1024 max characters) human-readable string to provide additional details about a running or stopped container. * **networkBindings** *(list) --* The network bindings associated with the container. * *(dict) --* Details on the network bindings between a container and its host container instance. After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **bindIP** *(string) --* The IP address that the container is bound to on the container instance. * **containerPort** *(integer) --* The port number on the container that's used with the network binding. * **hostPort** *(integer) --* The port number on the host that's used with the network binding. * **protocol** *(string) --* The protocol used for the network binding. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports. 
* The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. * Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which are the host ports that are bound to the container ports. * **hostPortRange** *(string) --* The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent. * **networkInterfaces** *(list) --* The network interfaces associated with the container. * *(dict) --* An object representing the elastic network interface for tasks that use the "awsvpc" network mode. * **attachmentId** *(string) --* The attachment ID for the network interface. * **privateIpv4Address** *(string) --* The private IPv4 address for the network interface. * **ipv6Address** *(string) --* The private IPv6 address for the network interface. * **healthStatus** *(string) --* The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as "UNKNOWN". * **managedAgents** *(list) --* The details of any Amazon ECS managed agents associated with the container. * *(dict) --* Details about the managed agent status for the container. * **lastStartedAt** *(datetime) --* The Unix timestamp for the time when the managed agent was last started. * **name** *(string) --* The name of the managed agent. When the execute command feature is turned on, the managed agent name is "ExecuteCommandAgent". 
* **reason** *(string) --* The reason why the managed agent is in the state it is in. * **lastStatus** *(string) --* The last known status of the managed agent. * **cpu** *(string) --* The number of CPU units set for the container. The value is "0" if no value was specified in the container definition when the task definition was registered. * **memory** *(string) --* The hard limit (in MiB) of memory set for the container. * **memoryReservation** *(string) --* The soft limit (in MiB) of memory set for the container. * **gpuIds** *(list) --* The IDs of each GPU assigned to the container. * *(string) --* * **cpu** *(string) --* The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, "1024"). It can also be expressed as a string using vCPUs (for example, "1 vCPU" or "1 vcpu"). String values are converted to an integer that indicates the CPU units when the task definition is registered. If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between "128" CPU units ("0.125" vCPUs) and "196608" CPU units ("192" vCPUs). If you do not specify a value, the parameter is ignored. This field is required for Fargate. For information about the valid values, see Task size in the *Amazon Elastic Container Service Developer Guide*. * **createdAt** *(datetime) --* The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the "PENDING" state. * **desiredStatus** *(string) --* The desired status of the task. For more information, see Task Lifecycle. * **enableExecuteCommand** *(boolean) --* Determines whether execute command functionality is turned on for this task. If "true", execute command functionality is turned on for all the containers in the task. * **executionStoppedAt** *(datetime) --* The Unix timestamp for the time when the task execution stopped. 
* **group** *(string) --* The name of the task group that's associated with the task. * **healthStatus** *(string) --* The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as "HEALTHY", the task status also reports as "HEALTHY". If any essential containers in the task are reporting as "UNHEALTHY" or "UNKNOWN", the task status also reports as "UNHEALTHY" or "UNKNOWN". Note: The Amazon ECS container agent doesn't monitor or report on Docker health checks that are embedded in a container image and not specified in the container definition. For example, this includes those specified in a parent image or from the image's Dockerfile. Health check parameters that are specified in a container definition override any Docker health checks that are found in the container image. * **inferenceAccelerators** *(list) --* The Elastic Inference accelerator that's associated with the task. * *(dict) --* Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name. The "deviceName" must also be referenced in a container definition as a ResourceRequirement. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **lastStatus** *(string) --* The last known status for the task. For more information, see Task Lifecycle. * **launchType** *(string) --* The infrastructure that your task runs on. For more information, see Amazon ECS launch types in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, "1024"). 
If it's expressed as a string using GB (for example, "1GB" or "1 GB"), it's converted to an integer indicating the MiB when the task definition is registered. If you use the EC2 launch type, this field is optional. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the "cpu" parameter. * 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available "cpu" values: 256 (.25 vCPU) * 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available "cpu" values: 512 (.5 vCPU) * 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available "cpu" values: 1024 (1 vCPU) * Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available "cpu" values: 2048 (2 vCPU) * Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available "cpu" values: 4096 (4 vCPU) * Between 16 GB and 60 GB in 4 GB increments - Available "cpu" values: 8192 (8 vCPU) This option requires Linux platform "1.4.0" or later. * Between 32 GB and 120 GB in 8 GB increments - Available "cpu" values: 16384 (16 vCPU) This option requires Linux platform "1.4.0" or later. * **overrides** *(dict) --* One or more container overrides. * **containerOverrides** *(list) --* One or more container overrides that are sent to a task. * *(dict) --* The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is "{"containerOverrides": [ ] }". If a non-empty container override is specified, the "name" parameter must be included. You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide. * **name** *(string) --* The name of the container that receives the override. This parameter is required if any override is specified. 
* **command** *(list) --* The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. * *(string) --* * **environment** *(list) --* The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. * *(dict) --* A key-value pair object. * **name** *(string) --* The name of the key-value pair. For environment variables, this is the name of the environment variable. * **value** *(string) --* The value of the key-value pair. For environment variables, this is the value of the environment variable. * **environmentFiles** *(list) --* A list of files containing the environment variables to pass to a container, instead of the value from the container definition. * *(dict) --* A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a ".env" file extension. Each line in an environment file should contain an environment variable in "VARIABLE=VALUE" format. Lines beginning with "#" are treated as comments and are ignored. If there are environment variables specified using the "environment" parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the *Amazon Elastic Container Service Developer Guide*. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. 
You must use the following platforms for the Fargate launch type: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. Consider the following when using the Fargate launch type: * The file is handled like a native Docker env-file. * There is no support for shell escape handling. * The container entry point interprets the "VARIABLE" values. * **value** *(string) --* The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. * **type** *(string) --* The file type to use. Environment files are objects in Amazon S3. The only supported value is "s3". * **cpu** *(integer) --* The number of "cpu" units reserved for the container, instead of the default value from the task definition. You must also specify a container name. * **memory** *(integer) --* The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. * **memoryReservation** *(integer) --* The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. * **resourceRequirements** *(list) --* The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. * *(dict) --* The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **value** *(string) --* The value for the specified resource type. When the type is "GPU", the value is the number of physical "GPUs" the Amazon ECS container agent reserves for the container. 
The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. When the type is "InferenceAccelerator", the "value" matches the "deviceName" for an InferenceAccelerator specified in a task definition. * **type** *(string) --* The type of resource to assign to a container. * **cpu** *(string) --* The CPU override for the task. * **inferenceAcceleratorOverrides** *(list) --* The Elastic Inference accelerator override for the task. * *(dict) --* Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the *Amazon Elastic Container Service Developer Guide*. * **deviceName** *(string) --* The Elastic Inference accelerator device name to override for the task. This parameter must match a "deviceName" specified in the task definition. * **deviceType** *(string) --* The Elastic Inference accelerator type to use. * **executionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the *Amazon Elastic Container Service Developer Guide*. * **memory** *(string) --* The memory override for the task. * **taskRoleArn** *(string) --* The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the *Amazon Elastic Container Service Developer Guide*. * **ephemeralStorage** *(dict) --* The ephemeral storage setting override for the task. Note: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: * Linux platform version "1.4.0" or later. * Windows platform version "1.0.0" or later. 
* **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **platformVersion** *(string) --* The platform version that your task runs on. A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the "LATEST" platform version is used. For more information, see Fargate Platform Versions in the *Amazon Elastic Container Service Developer Guide*. * **platformFamily** *(string) --* The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks that run as part of this service must use the same "platformFamily" value as the service (for example, "LINUX."). * **pullStartedAt** *(datetime) --* The Unix timestamp for the time when the container image pull began. * **pullStoppedAt** *(datetime) --* The Unix timestamp for the time when the container image pull completed. * **startedAt** *(datetime) --* The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the "PENDING" state to the "RUNNING" state. * **startedBy** *(string) --* The tag specified when a task is started. If an Amazon ECS service started the task, the "startedBy" parameter contains the deployment ID of that service. * **stopCode** *(string) --* The stop code indicating why a task was stopped. The "stoppedReason" might contain additional details. For more information about stop codes, see Stopped tasks error codes in the *Amazon ECS Developer Guide*. * **stoppedAt** *(datetime) --* The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the "RUNNING" state to the "STOPPED" state. * **stoppedReason** *(string) --* The reason that the task was stopped. 
* **stoppingAt** *(datetime) --* The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the "RUNNING" state to "STOPPING". * **tags** *(list) --* The metadata that you apply to the task to help you categorize and organize it. Each tag consists of a key and an optional value. You define both the key and value. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * *(dict) --* The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them. The following basic restrictions apply to tags: * Maximum number of tags per resource - 50 * For each resource, each tag key must be unique, and each tag key can have only one value. * Maximum key length - 128 Unicode characters in UTF-8 * Maximum value length - 256 Unicode characters in UTF-8 * If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. 
Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. * Tag keys and values are case-sensitive. * Do not use "aws:", "AWS:", or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. * **key** *(string) --* One part of a key-value pair that makes up a tag. A "key" is a general label that acts like a category for more specific tag values. * **value** *(string) --* The optional part of a key-value pair that makes up a tag. A "value" acts as a descriptor within a tag category (key). * **taskArn** *(string) --* The Amazon Resource Name (ARN) of the task. * **taskDefinitionArn** *(string) --* The ARN of the task definition that creates the task. * **version** *(integer) --* The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the "detail" object) to verify that the version in your event stream is current. * **ephemeralStorage** *(dict) --* The ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is "21" GiB and the maximum supported value is "200" GiB. * **fargateEphemeralStorage** *(dict) --* The Fargate ephemeral storage settings for the task. * **sizeInGiB** *(integer) --* The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is "20" GiB and the maximum supported value is "200" GiB. 
* **kmsKeyId** *(string) --* Specify a Key Management Service key ID to encrypt the ephemeral storage for the task. * **failures** *(list) --* Any failures associated with the call. * *(dict) --* A failed resource. For a list of common causes, see API failure reasons in the *Amazon Elastic Container Service Developer Guide*. * **arn** *(string) --* The Amazon Resource Name (ARN) of the failed resource. * **reason** *(string) --* The reason for the failure. * **detail** *(string) --* The details of the failure. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" * "ECS.Client.exceptions.ClusterNotFoundException" * "ECS.Client.exceptions.UnsupportedFeatureException" ECS / Client / list_clusters list_clusters ************* ECS.Client.list_clusters(**kwargs) Returns a list of existing clusters. See also: AWS API Documentation **Request Syntax** response = client.list_clusters( nextToken='string', maxResults=123 ) Parameters: * **nextToken** (*string*) -- The "nextToken" value returned from a "ListClusters" request indicating that more results are available to fulfill the request and further calls are needed. If "maxResults" was provided, it's possible for the number of results to be fewer than "maxResults". Note: This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes. * **maxResults** (*integer*) -- The maximum number of cluster results that "ListClusters" returns in paginated output. When this parameter is used, "ListClusters" only returns "maxResults" results in a single page along with a "nextToken" response element. The remaining results of the initial request can be seen by sending another "ListClusters" request with the returned "nextToken" value. This value can be between 1 and 100. 
If this parameter isn't used, then "ListClusters" returns up to 100 results and a "nextToken" value if applicable. Return type: dict Returns: **Response Syntax** { 'clusterArns': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **clusterArns** *(list) --* The list of full Amazon Resource Name (ARN) entries for each cluster that's associated with your account. * *(string) --* * **nextToken** *(string) --* The "nextToken" value to include in a future "ListClusters" request. When the results of a "ListClusters" request exceed "maxResults", this value can be used to retrieve the next page of results. This value is "null" when there are no more results to return. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.InvalidParameterException" **Examples** This example lists all of your available clusters in your default region. response = client.list_clusters( ) print(response) Expected Output: { 'clusterArns': [ 'arn:aws:ecs:us-east-1::cluster/test', 'arn:aws:ecs:us-east-1::cluster/default', ], 'ResponseMetadata': { '...': '...', }, } ECS / Client / submit_task_state_change submit_task_state_change ************************ ECS.Client.submit_task_state_change(**kwargs) Note: This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent. Sent to acknowledge that a task changed states. 
See also: AWS API Documentation **Request Syntax** response = client.submit_task_state_change( cluster='string', task='string', status='string', reason='string', containers=[ { 'containerName': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'exitCode': 123, 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'reason': 'string', 'status': 'string' }, ], attachments=[ { 'attachmentArn': 'string', 'status': 'string' }, ], managedAgents=[ { 'containerName': 'string', 'managedAgentName': 'ExecuteCommandAgent', 'status': 'string', 'reason': 'string' }, ], pullStartedAt=datetime(2015, 1, 1), pullStoppedAt=datetime(2015, 1, 1), executionStoppedAt=datetime(2015, 1, 1) ) Parameters: * **cluster** (*string*) -- The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task. * **task** (*string*) -- The task ID or full ARN of the task in the state change request. * **status** (*string*) -- The status of the state change request. * **reason** (*string*) -- The reason for the state change request. * **containers** (*list*) -- Any containers that are associated with the state change request. * *(dict) --* An object that represents a change in state for a container. * **containerName** *(string) --* The name of the container. * **imageDigest** *(string) --* The container image SHA-256 digest. * **runtimeId** *(string) --* The ID of the Docker container. * **exitCode** *(integer) --* The exit code for the container, if the state change is a result of the container exiting. * **networkBindings** *(list) --* Any network bindings that are associated with the container. * *(dict) --* Details on the network bindings between a container and its host container instance. 
After a task reaches the "RUNNING" status, manual and automatic host and container port assignments are visible in the "networkBindings" section of DescribeTasks API responses. * **bindIP** *(string) --* The IP address that the container is bound to on the container instance. * **containerPort** *(integer) --* The port number on the container that's used with the network binding. * **hostPort** *(integer) --* The port number on the host that's used with the network binding. * **protocol** *(string) --* The protocol used for the network binding. * **containerPortRange** *(string) --* The port number range on the container that's bound to the dynamically mapped host port range. The following rules apply when you specify a "containerPortRange": * You must use either the "bridge" network mode or the "awsvpc" network mode. * This parameter is available for both the EC2 and Fargate launch types. * This parameter is available for both the Linux and Windows operating systems. * The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the "ecs-init" package. * You can specify a maximum of 100 port ranges per container. * You do not specify a "hostPortRange". The value of the "hostPortRange" is set as follows: * For containers in a task with the "awsvpc" network mode, the "hostPortRange" is set to the same value as the "containerPortRange". This is a static mapping strategy. * For containers in a task with the "bridge" network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports. * The "containerPortRange" valid values are between 1 and 65535. * A port can only be included in one port mapping per container. * You cannot specify overlapping port ranges. * The first port in the range must be less than the last port in the range. 
* Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the *Amazon ECS Developer Guide*. You can call DescribeTasks to view the "hostPortRange", which lists the host ports that are bound to the container ports. * **hostPortRange** *(string) --* The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent. * **reason** *(string) --* The reason for the state change. * **status** *(string) --* The status of the container. * **attachments** (*list*) -- Any attachments associated with the state change request. * *(dict) --* An object representing a change in state for a task attachment. * **attachmentArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the attachment. * **status** *(string) --* **[REQUIRED]** The status of the attachment. * **managedAgents** (*list*) -- The details for the managed agent that's associated with the task. * *(dict) --* An object representing a change in state for a managed agent. * **containerName** *(string) --* **[REQUIRED]** The name of the container that's associated with the managed agent. * **managedAgentName** *(string) --* **[REQUIRED]** The name of the managed agent. * **status** *(string) --* **[REQUIRED]** The status of the managed agent. * **reason** *(string) --* The reason for the status of the managed agent. * **pullStartedAt** (*datetime*) -- The Unix timestamp for the time when the container image pull started. * **pullStoppedAt** (*datetime*) -- The Unix timestamp for the time when the container image pull completed. * **executionStoppedAt** (*datetime*) -- The Unix timestamp for the time when the task execution stopped. 
Return type: dict Returns: **Response Syntax** { 'acknowledgment': 'string' } **Response Structure** * *(dict) --* * **acknowledgment** *(string) --* Acknowledgement of the state change. **Exceptions** * "ECS.Client.exceptions.ServerException" * "ECS.Client.exceptions.ClientException" * "ECS.Client.exceptions.AccessDeniedException" * "ECS.Client.exceptions.InvalidParameterException"
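The "nextToken"/"maxResults" contract described for "list_clusters" above can be sketched as a small helper that accumulates every page. This is an illustrative sketch, not part of the boto3 API: the "list_all_clusters" helper and the "FakeECS" stub are hypothetical names, and "FakeECS" only mimics the paging behavior of a real ECS client so the loop can be exercised offline. With a real client, the equivalent is "client.get_paginator('list_clusters')".

```python
class FakeECS:
    """Hypothetical stand-in for a boto3 ECS client, used only to
    exercise the nextToken pagination loop without AWS access."""

    def __init__(self, arns, page_size):
        self._arns = arns
        self._page_size = page_size

    def list_clusters(self, maxResults=100, nextToken=None):
        # Mimic the documented contract: return at most maxResults
        # ARNs, plus a nextToken only when more results remain.
        start = int(nextToken) if nextToken else 0
        end = start + min(maxResults, self._page_size)
        page = {'clusterArns': self._arns[start:end]}
        if end < len(self._arns):
            page['nextToken'] = str(end)
        return page


def list_all_clusters(client, page_size=100):
    """Follow the nextToken contract: keep calling list_clusters with
    the previous page's nextToken until the response omits it."""
    arns = []
    kwargs = {'maxResults': page_size}
    while True:
        page = client.list_clusters(**kwargs)
        arns.extend(page.get('clusterArns', []))
        token = page.get('nextToken')
        if token is None:
            break
        kwargs['nextToken'] = token
    return arns


fake = FakeECS(
    ['arn:aws:ecs:us-east-1:123456789012:cluster/c%d' % i for i in range(5)],
    page_size=2,
)
# Prints all five cluster ARNs in order, collected across three pages.
print(list_all_clusters(fake, page_size=2))
```

The same loop works for any paginated ECS operation with this token shape; in practice, preferring "get_paginator" avoids hand-writing it.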