CloudWatchLogs ************** Client ====== class CloudWatchLogs.Client A low-level client representing Amazon CloudWatch Logs You can use Amazon CloudWatch Logs to monitor, store, and access your log files from EC2 instances, CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console. Alternatively, you can use CloudWatch Logs commands in the Amazon Web Services CLI, CloudWatch Logs API, or CloudWatch Logs SDK. You can use CloudWatch Logs to: * **Monitor logs from EC2 instances in real time**: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs. Then, it can send you a notification whenever the rate of errors exceeds a threshold that you specify. CloudWatch Logs uses your log data for monitoring so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException"). You can also count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. * **Monitor CloudTrail logged events**: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail. You can use the notification to perform troubleshooting. * **Archive log data**: You can use CloudWatch Logs to store your log data in highly durable storage. You can change the log retention setting so that any log events earlier than this setting are automatically deleted. The CloudWatch Logs agent helps to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it. 
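As a quick orientation before the reference material below, here is a minimal sketch of how a paginator from this client can implement the literal-term monitoring described above. It is an illustrative example, not part of the API reference: the log group name is a hypothetical placeholder, and the "filter_log_events" paginator it uses is documented later on this page.

   import boto3

   client = boto3.client('logs')

   # Count occurrences of the literal term "404" in an Apache access log group.
   # The paginator follows NextToken automatically across response pages.
   paginator = client.get_paginator('filter_log_events')

   count = 0
   for page in paginator.paginate(
       logGroupName='apache/access',  # hypothetical log group name
       filterPattern='"404"',         # match the literal term 404
   ):
       count += len(page['events'])

   print(f'Matched {count} events')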
import boto3

client = boto3.client('logs')

These are the available methods: * associate_kms_key * can_paginate * cancel_export_task * close * create_delivery * create_export_task * create_log_anomaly_detector * create_log_group * create_log_stream * delete_account_policy * delete_data_protection_policy * delete_delivery * delete_delivery_destination * delete_delivery_destination_policy * delete_delivery_source * delete_destination * delete_index_policy * delete_integration * delete_log_anomaly_detector * delete_log_group * delete_log_stream * delete_metric_filter * delete_query_definition * delete_resource_policy * delete_retention_policy * delete_subscription_filter * delete_transformer * describe_account_policies * describe_configuration_templates * describe_deliveries * describe_delivery_destinations * describe_delivery_sources * describe_destinations * describe_export_tasks * describe_field_indexes * describe_index_policies * describe_log_groups * describe_log_streams * describe_metric_filters * describe_queries * describe_query_definitions * describe_resource_policies * describe_subscription_filters * disassociate_kms_key * filter_log_events * get_data_protection_policy * get_delivery * get_delivery_destination * get_delivery_destination_policy * get_delivery_source * get_integration * get_log_anomaly_detector * get_log_events * get_log_group_fields * get_log_object * get_log_record * get_paginator * get_query_results * get_transformer * get_waiter * list_anomalies * list_integrations * list_log_anomaly_detectors * list_log_groups * list_log_groups_for_query * list_tags_for_resource * list_tags_log_group * put_account_policy * put_data_protection_policy * put_delivery_destination * put_delivery_destination_policy * put_delivery_source * put_destination * put_destination_policy * put_index_policy * put_integration * put_log_events * put_metric_filter * put_query_definition * put_resource_policy * put_retention_policy * put_subscription_filter * put_transformer * start_live_tail * start_query * stop_query * tag_log_group * tag_resource * test_metric_filter * test_transformer * untag_log_group * untag_resource * update_anomaly * update_delivery_configuration * update_log_anomaly_detector Paginators ========== Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide. The available paginators are: * DescribeConfigurationTemplates * DescribeDeliveries * DescribeDeliveryDestinations * DescribeDeliverySources * DescribeDestinations * DescribeExportTasks * DescribeLogGroups * DescribeLogStreams * DescribeMetricFilters * DescribeQueries * DescribeResourcePolicies * DescribeSubscriptionFilters * FilterLogEvents * ListAnomalies * ListLogAnomalyDetectors * ListLogGroupsForQuery CloudWatchLogs / Paginator / ListLogGroupsForQuery ListLogGroupsForQuery ********************* class CloudWatchLogs.Paginator.ListLogGroupsForQuery paginator = client.get_paginator('list_log_groups_for_query') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.list_log_groups_for_query()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( queryId='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **queryId** (*string*) -- **[REQUIRED]** The ID of the query to use. This query ID is from the response to your StartQuery operation.
* **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'logGroupIdentifiers': [ 'string', ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **logGroupIdentifiers** *(list) --* An array of the names and ARNs of the log groups that were processed in the query. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeExportTasks DescribeExportTasks ******************* class CloudWatchLogs.Paginator.DescribeExportTasks paginator = client.get_paginator('describe_export_tasks') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_export_tasks()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( taskId='string', statusCode='CANCELLED'|'COMPLETED'|'FAILED'|'PENDING'|'PENDING_CANCEL'|'RUNNING', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **taskId** (*string*) -- The ID of the export task. Specifying a task ID filters the results to one or zero export tasks. * **statusCode** (*string*) -- The status code of the export task. Specifying a status code filters the results to zero or more export tasks. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'exportTasks': [ { 'taskId': 'string', 'taskName': 'string', 'logGroupName': 'string', 'from': 123, 'to': 123, 'destination': 'string', 'destinationPrefix': 'string', 'status': { 'code': 'CANCELLED'|'COMPLETED'|'FAILED'|'PENDING'|'PENDING_CANCEL'|'RUNNING', 'message': 'string' }, 'executionInfo': { 'creationTime': 123, 'completionTime': 123 } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **exportTasks** *(list) --* The export tasks. * *(dict) --* Represents an export task. * **taskId** *(string) --* The ID of the export task. * **taskName** *(string) --* The name of the export task. * **logGroupName** *(string) --* The name of the log group from which logs data was exported. * **from** *(integer) --* The start time, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp before this time are not exported. * **to** *(integer) --* The end time, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp later than this time are not exported. * **destination** *(string) --* The name of the S3 bucket to which the log data was exported. 
* **destinationPrefix** *(string) --* The prefix that was used as the start of the Amazon S3 key for every object exported. * **status** *(dict) --* The status of the export task. * **code** *(string) --* The status code of the export task. * **message** *(string) --* The status message related to the status code. * **executionInfo** *(dict) --* Execution information about the export task. * **creationTime** *(integer) --* The creation time of the export task, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **completionTime** *(integer) --* The completion time of the export task, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / ListLogAnomalyDetectors ListLogAnomalyDetectors *********************** class CloudWatchLogs.Paginator.ListLogAnomalyDetectors paginator = client.get_paginator('list_log_anomaly_detectors') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.list_log_anomaly_detectors()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( filterLogGroupArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **filterLogGroupArn** (*string*) -- Use this to optionally filter the results to only include anomaly detectors that are associated with the specified log group. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'anomalyDetectors': [ { 'anomalyDetectorArn': 'string', 'detectorName': 'string', 'logGroupArnList': [ 'string', ], 'evaluationFrequency': 'ONE_MIN'|'FIVE_MIN'|'TEN_MIN'|'FIFTEEN_MIN'|'THIRTY_MIN'|'ONE_HOUR', 'filterPattern': 'string', 'anomalyDetectorStatus': 'INITIALIZING'|'TRAINING'|'ANALYZING'|'FAILED'|'DELETED'|'PAUSED', 'kmsKeyId': 'string', 'creationTimeStamp': 123, 'lastModifiedTimeStamp': 123, 'anomalyVisibilityTime': 123 }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **anomalyDetectors** *(list) --* An array of structures, where each structure in the array contains information about one anomaly detector. * *(dict) --* Contains information about one anomaly detector in the account. * **anomalyDetectorArn** *(string) --* The ARN of the anomaly detector. * **detectorName** *(string) --* The name of the anomaly detector. * **logGroupArnList** *(list) --* A list of the ARNs of the log groups that this anomaly detector watches. * *(string) --* * **evaluationFrequency** *(string) --* Specifies how often the anomaly detector runs and looks for anomalies. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **anomalyDetectorStatus** *(string) --* Specifies the current status of the anomaly detector.
To pause an anomaly detector, use the "enabled" parameter in the UpdateLogAnomalyDetector operation. * **kmsKeyId** *(string) --* The ARN of the KMS key assigned to this anomaly detector, if any. * **creationTimeStamp** *(integer) --* The date and time when this anomaly detector was created. * **lastModifiedTimeStamp** *(integer) --* The date and time when this anomaly detector was most recently modified. * **anomalyVisibilityTime** *(integer) --* The number of days used as the life cycle of anomalies. After this time, anomalies are automatically baselined and the anomaly detector model will treat new occurrences of similar events as normal. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeConfigurationTemplates DescribeConfigurationTemplates ****************************** class CloudWatchLogs.Paginator.DescribeConfigurationTemplates paginator = client.get_paginator('describe_configuration_templates') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_configuration_templates()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( service='string', logTypes=[ 'string', ], resourceTypes=[ 'string', ], deliveryDestinationTypes=[ 'S3'|'CWL'|'FH'|'XRAY', ], PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **service** (*string*) -- Use this parameter to filter the response to include only the configuration templates that apply to the Amazon Web Services service that you specify here. * **logTypes** (*list*) -- Use this parameter to filter the response to include only the configuration templates that apply to the log types that you specify here. * *(string) --* * **resourceTypes** (*list*) -- Use this parameter to filter the response to include only the configuration templates that apply to the resource types that you specify here. * *(string) --* * **deliveryDestinationTypes** (*list*) -- Use this parameter to filter the response to include only the configuration templates that apply to the delivery destination types that you specify here. * *(string) --* * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.
Return type: dict Returns: **Response Syntax** { 'configurationTemplates': [ { 'service': 'string', 'logType': 'string', 'resourceType': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'defaultDeliveryConfigValues': { 'recordFields': [ 'string', ], 'fieldDelimiter': 'string', 's3DeliveryConfiguration': { 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False } }, 'allowedFields': [ { 'name': 'string', 'mandatory': True|False }, ], 'allowedOutputFormats': [ 'json'|'plain'|'w3c'|'raw'|'parquet', ], 'allowedActionForAllowVendedLogsDeliveryForResource': 'string', 'allowedFieldDelimiters': [ 'string', ], 'allowedSuffixPathFields': [ 'string', ] }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **configurationTemplates** *(list) --* An array of objects, where each object describes one configuration template that matches the filters that you specified in the request. * *(dict) --* A structure containing information about the default settings and available settings that you can use to configure a delivery or a delivery destination. * **service** *(string) --* A string specifying which service this configuration template applies to. For more information about supported services, see Enable logging from Amazon Web Services services. * **logType** *(string) --* A string specifying which log type this configuration template applies to. * **resourceType** *(string) --* A string specifying which resource type this configuration template applies to. * **deliveryDestinationType** *(string) --* A string specifying which destination type this configuration template applies to. * **defaultDeliveryConfigValues** *(dict) --* A mapping that displays the default value of each property within a delivery's configuration, if it is not specified in the request. * **recordFields** *(list) --* The default record fields that will be delivered when a list of record fields is not provided in a CreateDelivery operation. * *(string) --* * **fieldDelimiter** *(string) --* The default field delimiter that is used in a CreateDelivery operation when the field delimiter is not specified in that operation. The field delimiter is used only when the final output delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** *(dict) --* The delivery parameters that are used when you create a delivery to a delivery destination that is an S3 Bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **allowedFields** *(list) --* The allowed fields that a caller can use in the "recordFields" parameter of a CreateDelivery or UpdateDeliveryConfiguration operation. * *(dict) --* A structure that represents a valid record field header and whether it is mandatory. * **name** *(string) --* The name to use when specifying this record field in a CreateDelivery or UpdateDeliveryConfiguration operation.
* **mandatory** *(boolean) --* If this is "true", the record field must be present in the "recordFields" parameter provided to a CreateDelivery or UpdateDeliveryConfiguration operation. * **allowedOutputFormats** *(list) --* The list of delivery destination output formats that are supported by this log source. * *(string) --* * **allowedActionForAllowVendedLogsDeliveryForResource** *(string) --* The action permissions that a caller needs to have to be able to successfully create a delivery source on the desired resource type when calling PutDeliverySource. * **allowedFieldDelimiters** *(list) --* The valid values that a caller can use as field delimiters when calling CreateDelivery or UpdateDeliveryConfiguration on a delivery that delivers in "Plain", "W3C", or "Raw" format. * *(string) --* * **allowedSuffixPathFields** *(list) --* The list of variable fields that can be used in the suffix path of a delivery that delivers to an S3 bucket. * *(string) --* * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeQueries DescribeQueries *************** class CloudWatchLogs.Paginator.DescribeQueries paginator = client.get_paginator('describe_queries') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_queries()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( logGroupName='string', status='Scheduled'|'Running'|'Complete'|'Failed'|'Cancelled'|'Timeout'|'Unknown', queryLanguage='CWLI'|'SQL'|'PPL', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **logGroupName** (*string*) -- Limits the returned queries to only those for the specified log group. * **status** (*string*) -- Limits the returned queries to only those that have the specified status. Valid values are "Cancelled", "Complete", "Failed", "Running", and "Scheduled". * **queryLanguage** (*string*) -- Limits the returned queries to only the queries that use the specified query language. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'queries': [ { 'queryLanguage': 'CWLI'|'SQL'|'PPL', 'queryId': 'string', 'queryString': 'string', 'status': 'Scheduled'|'Running'|'Complete'|'Failed'|'Cancelled'|'Timeout'|'Unknown', 'createTime': 123, 'logGroupName': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **queries** *(list) --* The list of queries that match the request. * *(dict) --* Information about one CloudWatch Logs Insights query that matches the request in a "DescribeQueries" operation. * **queryLanguage** *(string) --* The query language used for this query. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **queryId** *(string) --* The unique ID number of this query. * **queryString** *(string) --* The query string used in this query. * **status** *(string) --* The status of this query.
Possible values are "Cancelled", "Complete", "Failed", "Running", "Scheduled", and "Unknown". * **createTime** *(integer) --* The date and time that this query was created. * **logGroupName** *(string) --* The name of the log group scanned by this query. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeDeliveries DescribeDeliveries ****************** class CloudWatchLogs.Paginator.DescribeDeliveries paginator = client.get_paginator('describe_deliveries') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_deliveries()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'deliveries': [ { 'id': 'string', 'arn': 'string', 'deliverySourceName': 'string', 'deliveryDestinationArn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'recordFields': [ 'string', ], 'fieldDelimiter': 'string', 's3DeliveryConfiguration': { 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False }, 'tags': { 'string': 'string' } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **deliveries** *(list) --* An array of structures. Each structure contains information about one delivery in the account. * *(dict) --* This structure contains information about one *delivery* in your account. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*. For more information, see CreateDelivery. To update an existing delivery configuration, use UpdateDeliveryConfiguration. * **id** *(string) --* The unique ID that identifies this delivery in your account. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery. * **deliverySourceName** *(string) --* The name of the delivery source that is associated with this delivery. * **deliveryDestinationArn** *(string) --* The ARN of the delivery destination that is associated with this delivery. * **deliveryDestinationType** *(string) --* Displays whether the delivery destination associated with this delivery is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **recordFields** *(list) --* The record fields used in this delivery. * *(string) --* * **fieldDelimiter** *(string) --* The field delimiter that is used between record fields when the final output format of a delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** *(dict) --* This structure contains delivery configurations that apply only when the delivery destination resource is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source.
To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **tags** *(dict) --* The tags that have been assigned to this delivery. * *(string) --* * *(string) --* * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeResourcePolicies DescribeResourcePolicies ************************ class CloudWatchLogs.Paginator.DescribeResourcePolicies paginator = client.get_paginator('describe_resource_policies') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_resource_policies()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( resourceArn='string', policyScope='ACCOUNT'|'RESOURCE', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **resourceArn** (*string*) -- The ARN of the CloudWatch Logs resource for which to query the resource policy. * **policyScope** (*string*) -- Specifies the scope of the resource policy. Valid values are "ACCOUNT" or "RESOURCE". When not specified, defaults to "ACCOUNT". * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'resourcePolicies': [ { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyScope': 'ACCOUNT'|'RESOURCE', 'resourceArn': 'string', 'revisionId': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **resourcePolicies** *(list) --* The resource policies that exist in this account. * *(dict) --* A policy enabling one or more entities to put logs to a log group in this account. * **policyName** *(string) --* The name of the resource policy. * **policyDocument** *(string) --* The details of the policy. * **lastUpdatedTime** *(integer) --* Timestamp showing when this policy was last updated, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **policyScope** *(string) --* Specifies the scope of the resource policy. Valid values are ACCOUNT or RESOURCE. * **resourceArn** *(string) --* The ARN of the CloudWatch Logs resource to which the resource policy is attached. Only populated for resource-scoped policies. * **revisionId** *(string) --* The revision ID of the resource policy. Only populated for resource-scoped policies. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeLogGroups DescribeLogGroups ***************** class CloudWatchLogs.Paginator.DescribeLogGroups paginator = client.get_paginator('describe_log_groups') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_log_groups()".
See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( accountIdentifiers=[ 'string', ], logGroupNamePrefix='string', logGroupNamePattern='string', includeLinkedAccounts=True|False, logGroupClass='STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY', logGroupIdentifiers=[ 'string', ], PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **accountIdentifiers** (*list*) -- When "includeLinkedAccounts" is set to "true", use this parameter to specify the list of accounts to search. You can specify as many as 20 account IDs in the array. * *(string) --* * **logGroupNamePrefix** (*string*) -- The prefix to match. Note: "logGroupNamePrefix" and "logGroupNamePattern" are mutually exclusive. Only one of these parameters can be passed. * **logGroupNamePattern** (*string*) -- If you specify a string for this parameter, the operation returns only log groups that have names that match the string based on a case-sensitive substring search. For example, if you specify "DataLogs", log groups named "DataLogs", "aws/DataLogs", and "GroupDataLogs" would match, but "datalogs", "Data/log/s" and "Groupdata" would not match. If you specify "logGroupNamePattern" in your request, then only "arn", "creationTime", and "logGroupName" are included in the response. Note: "logGroupNamePattern" and "logGroupNamePrefix" are mutually exclusive. Only one of these parameters can be passed. * **includeLinkedAccounts** (*boolean*) -- If you are using a monitoring account, set this to "true" to have the operation return log groups in the accounts listed in "accountIdentifiers". If this parameter is set to "true" and "accountIdentifiers" contains a null value, the operation returns all log groups in the monitoring account and all log groups in all source accounts that are linked to the monitoring account. The default for this parameter is "false". * **logGroupClass** (*string*) -- Use this parameter to limit the results to only those log groups in the specified log group class. If you omit this parameter, log groups of all classes can be returned. Specifies the log group class for this log group. There are three classes: * The "Standard" log class supports all CloudWatch Logs features. * The "Infrequent Access" log class supports a subset of CloudWatch Logs features and incurs lower costs. * Use the "Delivery" log class only for delivering Lambda logs to store in Amazon S3 or Amazon Data Firehose. Log events in log groups in the Delivery class are kept in CloudWatch Logs for only one day. This log class doesn't offer rich CloudWatch Logs capabilities such as CloudWatch Logs Insights queries. For details about the features supported by each class, see Log classes * **logGroupIdentifiers** (*list*) -- Use this array to filter the list of log groups returned. If you specify this parameter, the only other filter that you can choose to specify is "includeLinkedAccounts". If you are using this operation in a monitoring account, you can specify the ARNs of log groups in source accounts and in the monitoring account itself. If you are using this operation in an account that is not a cross-account monitoring account, you can specify only log group names in the same account as the operation. * *(string) --* * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. 
If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'logGroups': [ { 'logGroupName': 'string', 'creationTime': 123, 'retentionInDays': 123, 'metricFilterCount': 123, 'arn': 'string', 'storedBytes': 123, 'kmsKeyId': 'string', 'dataProtectionStatus': 'ACTIVATED'|'DELETED'|'ARCHIVED'|'DISABLED', 'inheritedProperties': [ 'ACCOUNT_DATA_PROTECTION', ], 'logGroupClass': 'STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY', 'logGroupArn': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **logGroups** *(list) --* An array of structures, where each structure contains the information about one log group. * *(dict) --* Represents a log group. * **logGroupName** *(string) --* The name of the log group. * **creationTime** *(integer) --* The creation time of the log group, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. * **retentionInDays** *(integer) --* The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, and 3653. To set a log group so that its log events do not expire, use DeleteRetentionPolicy. * **metricFilterCount** *(integer) --* The number of metric filters. * **arn** *(string) --* The Amazon Resource Name (ARN) of the log group. This version of the ARN includes a trailing ":*" after the log group name. Use this version to refer to the ARN in IAM policies when specifying permissions for most API actions. The exception is when specifying permissions for TagResource, UntagResource, and ListTagsForResource. The permissions for those three actions require the ARN version that doesn't include a trailing ":*". * **storedBytes** *(integer) --* The number of bytes stored. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) of the KMS key to use when encrypting log data. * **dataProtectionStatus** *(string) --* Displays whether this log group has a protection policy, or whether it had one in the past. For more information, see PutDataProtectionPolicy. * **inheritedProperties** *(list) --* Displays all the properties that this log group has inherited from account-level settings. * *(string) --* * **logGroupClass** *(string) --* This specifies the log group class for this log group. There are three classes: * The "Standard" log class supports all CloudWatch Logs features. * The "Infrequent Access" log class supports a subset of CloudWatch Logs features and incurs lower costs. * Use the "Delivery" log class only for delivering Lambda logs to store in Amazon S3 or Amazon Data Firehose. Log events in log groups in the Delivery class are kept in CloudWatch Logs for only one day. This log class doesn't offer rich CloudWatch Logs capabilities such as CloudWatch Logs Insights queries. For details about the features supported by the Standard and Infrequent Access classes, see Log classes * **logGroupArn** *(string) --* The Amazon Resource Name (ARN) of the log group. This version of the ARN doesn't include a trailing ":*" after the log group name. 
Use this version to refer to the ARN in the following situations: * In the "logGroupIdentifier" input field in many CloudWatch Logs APIs. * In the "resourceArn" field in tagging APIs. * In IAM policies, when specifying permissions for TagResource, UntagResource, and ListTagsForResource. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeSubscriptionFilters DescribeSubscriptionFilters *************************** class CloudWatchLogs.Paginator.DescribeSubscriptionFilters paginator = client.get_paginator('describe_subscription_filters') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_subscription_filters()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( logGroupName='string', filterNamePrefix='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **filterNamePrefix** (*string*) -- The prefix to match. If you don't specify a value, no prefix filter is applied. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'subscriptionFilters': [ { 'filterName': 'string', 'logGroupName': 'string', 'filterPattern': 'string', 'destinationArn': 'string', 'roleArn': 'string', 'distribution': 'Random'|'ByLogStream', 'applyOnTransformedLogs': True|False, 'creationTime': 123 }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **subscriptionFilters** *(list) --* The subscription filters. * *(dict) --* Represents a subscription filter. * **filterName** *(string) --* The name of the subscription filter. * **logGroupName** *(string) --* The name of the log group. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **destinationArn** *(string) --* The Amazon Resource Name (ARN) of the destination. * **roleArn** *(string) --* The ARN of the IAM role used when delivering ingested log events to the destination. * **distribution** *(string) --* The method used to distribute log data to the destination, which can be either random or grouped by log stream. * **applyOnTransformedLogs** *(boolean) --* This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer. If this value is "true", the subscription filter is applied on the transformed version of the log events instead of the original ingested log events. * **creationTime** *(integer) --* The creation time of the subscription filter, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **NextToken** *(string) --* A token to resume pagination.
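For illustration, here is a minimal sketch of the paginator just documented. The log group name is a hypothetical placeholder:

   import boto3

   client = boto3.client('logs')

   # List every subscription filter on one log group; the paginator handles
   # NextToken continuation across pages automatically.
   paginator = client.get_paginator('describe_subscription_filters')
   for page in paginator.paginate(logGroupName='my-log-group'):
       for sub_filter in page['subscriptionFilters']:
           print(sub_filter['filterName'], sub_filter.get('destinationArn'))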
CloudWatchLogs / Paginator / FilterLogEvents FilterLogEvents *************** class CloudWatchLogs.Paginator.FilterLogEvents paginator = client.get_paginator('filter_log_events') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.filter_log_events()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( logGroupName='string', logGroupIdentifier='string', logStreamNames=[ 'string', ], logStreamNamePrefix='string', startTime=123, endTime=123, filterPattern='string', interleaved=True|False, unmask=True|False, PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **logGroupName** (*string*) -- The name of the log group to search. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logGroupIdentifier** (*string*) -- Specify either the name or ARN of the log group to view log events from. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logStreamNames** (*list*) -- Filters the results to only logs from the log streams in this list. If you specify a value for both "logStreamNames" and "logStreamNamePrefix", the action returns an "InvalidParameterException" error. * *(string) --* * **logStreamNamePrefix** (*string*) -- Filters the results to include only events from log streams that have names starting with this prefix. If you specify a value for both "logStreamNamePrefix" and "logStreamNames", the action returns an "InvalidParameterException" error. * **startTime** (*integer*) -- The start of the time range, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp before this time are not returned. * **endTime** (*integer*) -- The end of the time range, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp later than this time are not returned. * **filterPattern** (*string*) -- The filter pattern to use. For more information, see Filter and Pattern Syntax. If not provided, all the events are matched. * **interleaved** (*boolean*) -- If the value is true, the operation attempts to provide responses that contain events from multiple log streams within the log group, interleaved in a single response. If the value is false, all the matched log events in the first log stream are searched first, then those in the next log stream, and so on. **Important** As of June 17, 2019, this parameter is ignored and the value is assumed to be true. The response from this operation always interleaves events from multiple log streams within a log group. * **unmask** (*boolean*) -- Specify "true" to display the log event fields with all sensitive data unmasked and visible. The default is "false". To use this operation with this parameter, you must be signed into an account with the "logs:Unmask" permission. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. 
Return type: dict Returns: **Response Syntax** { 'events': [ { 'logStreamName': 'string', 'timestamp': 123, 'message': 'string', 'ingestionTime': 123, 'eventId': 'string' }, ], 'searchedLogStreams': [ { 'logStreamName': 'string', 'searchedCompletely': True|False }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **events** *(list) --* The matched events. * *(dict) --* Represents a matched event. * **logStreamName** *(string) --* The name of the log stream to which this event belongs. * **timestamp** *(integer) --* The time the event occurred, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **message** *(string) --* The data contained in the log event. * **ingestionTime** *(integer) --* The time the event was ingested, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **eventId** *(string) --* The ID of the event. * **searchedLogStreams** *(list) --* **Important** As of May 15, 2020, this parameter is no longer supported. This parameter returns an empty list. Indicates which log streams have been searched and whether each has been searched completely. * *(dict) --* Represents the search status of a log stream. * **logStreamName** *(string) --* The name of the log stream. * **searchedCompletely** *(boolean) --* Indicates whether all the events in this log stream were searched. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeDeliverySources DescribeDeliverySources *********************** class CloudWatchLogs.Paginator.DescribeDeliverySources paginator = client.get_paginator('describe_delivery_sources') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_delivery_sources()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'deliverySources': [ { 'name': 'string', 'arn': 'string', 'resourceArns': [ 'string', ], 'service': 'string', 'logType': 'string', 'tags': { 'string': 'string' } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **deliverySources** *(list) --* An array of structures. Each structure contains information about one delivery source in the account. * *(dict) --* This structure contains information about one *delivery source* in your account. A delivery source is an Amazon Web Services resource that sends logs to an Amazon Web Services destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at Enabling logging from Amazon Web Services services.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination. * If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. * **name** *(string) --* The unique name of the delivery source. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery source. * **resourceArns** *(list) --* This array contains the ARN of the Amazon Web Services resource that sends logs and is represented by this delivery source. Currently, only one ARN can be in the array. * *(string) --* * **service** *(string) --* The Amazon Web Services service that is sending logs. * **logType** *(string) --* The type of log that the source is sending. For valid values for this parameter, see the documentation for the source service. * **tags** *(dict) --* The tags that have been assigned to this delivery source. * *(string) --* * *(string) --* * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeDeliveryDestinations DescribeDeliveryDestinations **************************** class CloudWatchLogs.Paginator.DescribeDeliveryDestinations paginator = client.get_paginator('describe_delivery_destinations') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_delivery_destinations()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'deliveryDestinations': [ { 'name': 'string', 'arn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'outputFormat': 'json'|'plain'|'w3c'|'raw'|'parquet', 'deliveryDestinationConfiguration': { 'destinationResourceArn': 'string' }, 'tags': { 'string': 'string' } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **deliveryDestinations** *(list) --* An array of structures. Each structure contains information about one delivery destination in the account. * *(dict) --* This structure contains information about one *delivery destination* in your account.
A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, Firehose, and X-Ray are supported as delivery destinations. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Create a *delivery destination*, which is a logical object that represents the actual delivery destination. * If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. * **name** *(string) --* The name of this delivery destination. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery destination. * **deliveryDestinationType** *(string) --* Displays whether this delivery destination is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **outputFormat** *(string) --* The format of the logs that are sent to this delivery destination. * **deliveryDestinationConfiguration** *(dict) --* A structure that contains the ARN of the Amazon Web Services resource that will receive the logs. * **destinationResourceArn** *(string) --* The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. * **tags** *(dict) --* The tags that have been assigned to this delivery destination. * *(string) --* * *(string) --* * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / ListAnomalies ListAnomalies ************* class CloudWatchLogs.Paginator.ListAnomalies paginator = client.get_paginator('list_anomalies') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.list_anomalies()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( anomalyDetectorArn='string', suppressionState='SUPPRESSED'|'UNSUPPRESSED', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **anomalyDetectorArn** (*string*) -- Use this to optionally limit the results to only the anomalies found by a certain anomaly detector. * **suppressionState** (*string*) -- You can specify this parameter if you want the operation to return only anomalies that are currently either suppressed or unsuppressed. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating.
This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'anomalies': [ { 'anomalyId': 'string', 'patternId': 'string', 'anomalyDetectorArn': 'string', 'patternString': 'string', 'patternRegex': 'string', 'priority': 'string', 'firstSeen': 123, 'lastSeen': 123, 'description': 'string', 'active': True|False, 'state': 'Active'|'Suppressed'|'Baseline', 'histogram': { 'string': 123 }, 'logSamples': [ { 'timestamp': 123, 'message': 'string' }, ], 'patternTokens': [ { 'dynamicTokenPosition': 123, 'isDynamic': True|False, 'tokenString': 'string', 'enumerations': { 'string': 123 }, 'inferredTokenName': 'string' }, ], 'logGroupArnList': [ 'string', ], 'suppressed': True|False, 'suppressedDate': 123, 'suppressedUntil': 123, 'isPatternLevelSuppression': True|False }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **anomalies** *(list) --* An array of structures, where each structure contains information about one anomaly that a log anomaly detector has found. * *(dict) --* This structure represents one anomaly that has been found by a logs anomaly detector. For more information about patterns and anomalies, see CreateLogAnomalyDetector. * **anomalyId** *(string) --* The unique ID that CloudWatch Logs assigned to this anomaly. * **patternId** *(string) --* The ID of the pattern used to help identify this anomaly. * **anomalyDetectorArn** *(string) --* The ARN of the anomaly detector that identified this anomaly. * **patternString** *(string) --* The pattern used to help identify this anomaly, in string format. * **patternRegex** *(string) --* The pattern used to help identify this anomaly, in regular expression format. * **priority** *(string) --* The priority level of this anomaly, as determined by CloudWatch Logs. Priority is computed based on log severity labels such as "FATAL" and "ERROR" and the amount of deviation from the baseline. Possible values are "HIGH", "MEDIUM", and "LOW". * **firstSeen** *(integer) --* The date and time when the anomaly detector first saw this anomaly. It is specified as epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * **lastSeen** *(integer) --* The date and time when the anomaly detector most recently saw this anomaly. It is specified as epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * **description** *(string) --* A human-readable description of the anomaly. This description is generated by CloudWatch Logs. * **active** *(boolean) --* Specifies whether this anomaly is still ongoing. * **state** *(string) --* Indicates the current state of this anomaly. If it is still being treated as an anomaly, the value is "Active". If you have suppressed this anomaly by using the UpdateAnomaly operation, the value is "Suppressed". If this behavior is now considered to be normal, the value is "Baseline". * **histogram** *(dict) --* A map showing times when the anomaly detector ran, and the number of occurrences of this anomaly that were detected at each of those runs. The times are specified in epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * *(string) --* * *(integer) --* * **logSamples** *(list) --* An array of sample log event messages that are considered to be part of this anomaly. * *(dict) --* This structure contains the information for one sample log event that is associated with an anomaly found by a log anomaly detector. * **timestamp** *(integer) --* The time stamp of the log event. 
* **message** *(string) --* The message content of the log event. * **patternTokens** *(list) --* An array of structures where each structure contains information about one token that makes up the pattern. * *(dict) --* A structure that contains information about one pattern token related to an anomaly. For more information about patterns and tokens, see CreateLogAnomalyDetector. * **dynamicTokenPosition** *(integer) --* For a dynamic token, this indicates where in the pattern this token appears, relative to other dynamic tokens. The dynamic token that appears first has a value of "1", the one that appears second is "2", and so on. * **isDynamic** *(boolean) --* Specifies whether this is a dynamic token. * **tokenString** *(string) --* The string represented by this token. If this is a dynamic token, the value will be "<*>". * **enumerations** *(dict) --* Contains the values found for a dynamic token, and the number of times each value was found. * *(string) --* * *(integer) --* * **inferredTokenName** *(string) --* A name that CloudWatch Logs assigned to this dynamic token to make the pattern more readable. The string part of the "inferredTokenName" gives you a clearer idea of the content of this token. The number part of the "inferredTokenName" shows where in the pattern this token appears, compared to other dynamic tokens. CloudWatch Logs assigns the string part of the name based on analyzing the content of the log events that contain it. For example, an inferred token name of "IPAddress-3" means that the token represents an IP address, and this token is the third dynamic token in the pattern. * **logGroupArnList** *(list) --* An array of ARNs of the log groups that contained log events considered to be part of this anomaly. * *(string) --* * **suppressed** *(boolean) --* Indicates whether this anomaly is currently suppressed. To suppress an anomaly, use UpdateAnomaly. * **suppressedDate** *(integer) --* If the anomaly is suppressed, this indicates when it was suppressed. * **suppressedUntil** *(integer) --* If the anomaly is suppressed, this indicates when the suppression will end. If this value is "0", the anomaly was suppressed with no expiration, with the "INFINITE" value. * **isPatternLevelSuppression** *(boolean) --* If this anomaly is suppressed, this field is "true" if the suppression is because the pattern is suppressed. If "false", then only this particular anomaly is suppressed. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeMetricFilters DescribeMetricFilters ********************* class CloudWatchLogs.Paginator.DescribeMetricFilters paginator = client.get_paginator('describe_metric_filters') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_metric_filters()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( logGroupName='string', filterNamePrefix='string', metricName='string', metricNamespace='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **logGroupName** (*string*) -- The name of the log group. * **filterNamePrefix** (*string*) -- The prefix to match. CloudWatch Logs uses the value that you set here only if you also include the "logGroupName" parameter in your request. * **metricName** (*string*) -- Filters results to include only those with the specified metric name.
If you include this parameter in your request, you must also include the "metricNamespace" parameter. * **metricNamespace** (*string*) -- Filters results to include only those in the specified namespace. If you include this parameter in your request, you must also include the "metricName" parameter. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'metricFilters': [ { 'filterName': 'string', 'filterPattern': 'string', 'metricTransformations': [ { 'metricName': 'string', 'metricNamespace': 'string', 'metricValue': 'string', 'defaultValue': 123.0, 'dimensions': { 'string': 'string' }, 'unit': 'Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None' }, ], 'creationTime': 123, 'logGroupName': 'string', 'applyOnTransformedLogs': True|False }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **metricFilters** *(list) --* The metric filters. * *(dict) --* Metric filters express how CloudWatch Logs would extract metric observations from ingested log events and transform them into metric data in a CloudWatch metric. * **filterName** *(string) --* The name of the metric filter. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **metricTransformations** *(list) --* The metric transformations. * *(dict) --* Indicates how to transform ingested log events to metric data in a CloudWatch metric. * **metricName** *(string) --* The name of the CloudWatch metric. * **metricNamespace** *(string) --* A custom namespace to contain your metric in CloudWatch. Use namespaces to group together metrics that are similar. For more information, see Namespaces. * **metricValue** *(string) --* The value to publish to the CloudWatch metric when a filter pattern matches a log event. * **defaultValue** *(float) --* (Optional) The value to emit when a filter pattern does not match a log event. This value can be null. * **dimensions** *(dict) --* The fields to use as dimensions for the metric. One metric filter can include as many as three dimensions. Warning: Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as "IPAddress" or "requestID" as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric. CloudWatch Logs disables a metric filter if it generates 1000 different name/value pairs for your specified dimensions within a certain amount of time.
This helps to prevent accidental high charges. You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges. * *(string) --* * *(string) --* * **unit** *(string) --* The unit to assign to the metric. If you omit this, the unit is set as "None". * **creationTime** *(integer) --* The creation time of the metric filter, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **logGroupName** *(string) --* The name of the log group. * **applyOnTransformedLogs** *(boolean) --* This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer. If this value is "true", the metric filter is applied on the transformed version of the log events instead of the original ingested log events. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Paginator / DescribeDestinations DescribeDestinations ******************** class CloudWatchLogs.Paginator.DescribeDestinations paginator = client.get_paginator('describe_destinations') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_destinations()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( DestinationNamePrefix='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **DestinationNamePrefix** (*string*) -- The prefix to match. If you don't specify a value, no prefix filter is applied. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'destinations': [ { 'destinationName': 'string', 'targetArn': 'string', 'roleArn': 'string', 'accessPolicy': 'string', 'arn': 'string', 'creationTime': 123 }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **destinations** *(list) --* The destinations. * *(dict) --* Represents a cross-account destination that receives subscription log events. * **destinationName** *(string) --* The name of the destination. * **targetArn** *(string) --* The Amazon Resource Name (ARN) of the physical target where the log events are delivered (for example, a Kinesis stream). * **roleArn** *(string) --* A role for impersonation, used when delivering log events to the target. * **accessPolicy** *(string) --* An IAM policy document that governs which Amazon Web Services accounts can create subscription filters against this destination. * **arn** *(string) --* The ARN of this destination. * **creationTime** *(integer) --* The creation time of the destination, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. * **NextToken** *(string) --* A token to resume pagination.
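As a usage sketch for the paginators above (the destination name prefix is a hypothetical placeholder), the paginator threads "NextToken" through successive requests for you:

import boto3

client = boto3.client('logs')
paginator = client.get_paginator('describe_destinations')

# Walk every page of destinations matching a hypothetical prefix.
for page in paginator.paginate(
    DestinationNamePrefix='my-destination-',  # placeholder prefix
    PaginationConfig={'PageSize': 25},
):
    for destination in page['destinations']:
        print(destination['destinationName'], destination['targetArn'])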
CloudWatchLogs / Paginator / DescribeLogStreams DescribeLogStreams ****************** class CloudWatchLogs.Paginator.DescribeLogStreams paginator = client.get_paginator('describe_log_streams') paginate(**kwargs) Creates an iterator that will paginate through responses from "CloudWatchLogs.Client.describe_log_streams()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( logGroupName='string', logGroupIdentifier='string', logStreamNamePrefix='string', orderBy='LogStreamName'|'LastEventTime', descending=True|False, PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **logGroupName** (*string*) -- The name of the log group. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logGroupIdentifier** (*string*) -- Specify either the name or ARN of the log group to view. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logStreamNamePrefix** (*string*) -- The prefix to match. If "orderBy" is "LastEventTime", you cannot specify this parameter. * **orderBy** (*string*) -- If the value is "LogStreamName", the results are ordered by log stream name. If the value is "LastEventTime", the results are ordered by the event time. The default value is "LogStreamName". If you order the results by event time, you cannot specify the "logStreamNamePrefix" parameter. "lastEventTimestamp" represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". "lastEventTimestamp" updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer. * **descending** (*boolean*) -- If the value is true, results are returned in descending order. If the value is false, results are returned in ascending order. The default value is false. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'logStreams': [ { 'logStreamName': 'string', 'creationTime': 123, 'firstEventTimestamp': 123, 'lastEventTimestamp': 123, 'lastIngestionTime': 123, 'uploadSequenceToken': 'string', 'arn': 'string', 'storedBytes': 123 }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **logStreams** *(list) --* The log streams. * *(dict) --* Represents a log stream, which is a sequence of log events from a single emitter of logs. * **logStreamName** *(string) --* The name of the log stream. * **creationTime** *(integer) --* The creation time of the stream, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **firstEventTimestamp** *(integer) --* The time of the first event, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **lastEventTimestamp** *(integer) --* The time of the most recent log event in the log stream in CloudWatch Logs.
This number is expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". The "lastEventTime" value updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer. * **lastIngestionTime** *(integer) --* The ingestion time, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". The "lastIngestionTime" value updates on an eventual consistency basis. It typically updates in less than an hour after ingestion, but in rare situations might take longer. * **uploadSequenceToken** *(string) --* The sequence token. Warning: The sequence token is now ignored in "PutLogEvents" actions. "PutLogEvents" actions are always accepted regardless of receiving an invalid sequence token. You don't need to obtain "uploadSequenceToken" to use a "PutLogEvents" action. * **arn** *(string) --* The Amazon Resource Name (ARN) of the log stream. * **storedBytes** *(integer) --* The number of bytes stored. **Important:** As of June 17, 2019, this parameter is no longer supported for log streams, and is always reported as zero. This change applies only to log streams. The "storedBytes" parameter for log groups is not affected. * **NextToken** *(string) --* A token to resume pagination. CloudWatchLogs / Client / delete_delivery_destination delete_delivery_destination *************************** CloudWatchLogs.Client.delete_delivery_destination(**kwargs) Deletes a *delivery destination*. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*. You can't delete a delivery destination if any current deliveries are associated with it. To find whether any deliveries are associated with this delivery destination, use the DescribeDeliveries operation and check the "deliveryDestinationArn" field in the results. See also: AWS API Documentation **Request Syntax** response = client.delete_delivery_destination( name='string' ) Parameters: **name** (*string*) -- **[REQUIRED]** The name of the delivery destination that you want to delete. You can find a list of delivery destination names by using the DescribeDeliveryDestinations operation. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / put_delivery_destination put_delivery_destination ************************ CloudWatchLogs.Client.put_delivery_destination(**kwargs) Creates or updates a logical *delivery destination*. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations and X-Ray as the trace delivery destination. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Use "PutDeliveryDestination" to create a *delivery destination* in the same account as the actual delivery destination.
The delivery destination that you create is a logical object that represents the actual delivery destination. * If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Use "CreateDelivery" to create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at Enabling logging from Amazon Web Services services. If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify. See also: AWS API Documentation **Request Syntax** response = client.put_delivery_destination( name='string', outputFormat='json'|'plain'|'w3c'|'raw'|'parquet', deliveryDestinationConfiguration={ 'destinationResourceArn': 'string' }, deliveryDestinationType='S3'|'CWL'|'FH'|'XRAY', tags={ 'string': 'string' } ) Parameters: * **name** (*string*) -- **[REQUIRED]** A name for this delivery destination. This name must be unique for all delivery destinations in your account. * **outputFormat** (*string*) -- The format for the logs that this delivery destination will receive. * **deliveryDestinationConfiguration** (*dict*) -- A structure that contains the ARN of the Amazon Web Services resource that will receive the logs. Note: "deliveryDestinationConfiguration" is required for CloudWatch Logs, Amazon S3, Firehose log delivery destinations and not required for X-Ray trace delivery destinations. "deliveryDestinationType" is needed for X-Ray trace delivery destinations but not required for other logs delivery destinations. * **destinationResourceArn** *(string) --* **[REQUIRED]** The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. * **deliveryDestinationType** (*string*) -- The type of delivery destination. This parameter specifies the target service where log data will be delivered. Valid values include: * "S3" - Amazon S3 for long-term storage and analytics * "CWL" - CloudWatch Logs for centralized log management * "FH" - Amazon Kinesis Data Firehose for real-time data streaming * "XRAY" - Amazon Web Services X-Ray for distributed tracing and application monitoring The delivery destination type determines the format and configuration options available for log delivery. * **tags** (*dict*) -- An optional list of key-value pairs to associate with the resource. 
For more information about tagging, see Tagging Amazon Web Services resources * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'deliveryDestination': { 'name': 'string', 'arn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'outputFormat': 'json'|'plain'|'w3c'|'raw'|'parquet', 'deliveryDestinationConfiguration': { 'destinationResourceArn': 'string' }, 'tags': { 'string': 'string' } } } **Response Structure** * *(dict) --* * **deliveryDestination** *(dict) --* A structure containing information about the delivery destination that you just created or updated. * **name** *(string) --* The name of this delivery destination. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery destination. * **deliveryDestinationType** *(string) --* Displays whether this delivery destination is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **outputFormat** *(string) --* The format of the logs that are sent to this delivery destination. * **deliveryDestinationConfiguration** *(dict) --* A structure that contains the ARN of the Amazon Web Services resource that will receive the logs. * **destinationResourceArn** *(string) --* The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. * **tags** *(dict) --* The tags that have been assigned to this delivery destination. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" CloudWatchLogs / Client / list_integrations list_integrations ***************** CloudWatchLogs.Client.list_integrations(**kwargs) Returns a list of integrations between CloudWatch Logs and other services in this account. Currently, only one integration can be created in an account, and this integration must be with OpenSearch Service. See also: AWS API Documentation **Request Syntax** response = client.list_integrations( integrationNamePrefix='string', integrationType='OPENSEARCH', integrationStatus='PROVISIONING'|'ACTIVE'|'FAILED' ) Parameters: * **integrationNamePrefix** (*string*) -- To limit the results to integrations that start with a certain name prefix, specify that name prefix here. * **integrationType** (*string*) -- To limit the results to integrations of a certain type, specify that type here. * **integrationStatus** (*string*) -- To limit the results to integrations with a certain status, specify that status here. Return type: dict Returns: **Response Syntax** { 'integrationSummaries': [ { 'integrationName': 'string', 'integrationType': 'OPENSEARCH', 'integrationStatus': 'PROVISIONING'|'ACTIVE'|'FAILED' }, ] } **Response Structure** * *(dict) --* * **integrationSummaries** *(list) --* An array, where each object in the array contains information about one CloudWatch Logs integration in this account. * *(dict) --* This structure contains information about one CloudWatch Logs integration. This structure is returned by a ListIntegrations operation. * **integrationName** *(string) --* The name of this integration. * **integrationType** *(string) --* The type of integration. 
Integrations with OpenSearch Service have the type "OPENSEARCH". * **integrationStatus** *(string) --* The current status of this integration. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / create_delivery create_delivery *************** CloudWatchLogs.Client.create_delivery(**kwargs) Creates a *delivery*. A delivery is a connection between a logical *delivery source* and a logical *delivery destination* that you have already created. Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as **Supported [V2 Permissions]** in the table at Enabling logging from Amazon Web Services services. A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, a delivery stream in Firehose, or X-Ray. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination. * If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Use "CreateDelivery" to create a *delivery* by pairing exactly one delivery source and one delivery destination. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. To update an existing delivery configuration, use UpdateDeliveryConfiguration. See also: AWS API Documentation **Request Syntax** response = client.create_delivery( deliverySourceName='string', deliveryDestinationArn='string', recordFields=[ 'string', ], fieldDelimiter='string', s3DeliveryConfiguration={ 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False }, tags={ 'string': 'string' } ) Parameters: * **deliverySourceName** (*string*) -- **[REQUIRED]** The name of the delivery source to use for this delivery. * **deliveryDestinationArn** (*string*) -- **[REQUIRED]** The ARN of the delivery destination to use for this delivery. * **recordFields** (*list*) -- The list of record fields to be delivered to the destination, in order. If the delivery's log source has mandatory fields, they must be included in this list. * *(string) --* * **fieldDelimiter** (*string*) -- The field delimiter to use between record fields when the final output format of a delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** (*dict*) -- This structure contains parameters that are valid only when the delivery's delivery destination is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. 
* **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **tags** (*dict*) -- An optional list of key-value pairs to associate with the resource. For more information about tagging, see Tagging Amazon Web Services resources * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'delivery': { 'id': 'string', 'arn': 'string', 'deliverySourceName': 'string', 'deliveryDestinationArn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'recordFields': [ 'string', ], 'fieldDelimiter': 'string', 's3DeliveryConfiguration': { 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False }, 'tags': { 'string': 'string' } } } **Response Structure** * *(dict) --* * **delivery** *(dict) --* A structure that contains information about the delivery that you just created. * **id** *(string) --* The unique ID that identifies this delivery in your account. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery. * **deliverySourceName** *(string) --* The name of the delivery source that is associated with this delivery. * **deliveryDestinationArn** *(string) --* The ARN of the delivery destination that is associated with this delivery. * **deliveryDestinationType** *(string) --* Displays whether the delivery destination associated with this delivery is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **recordFields** *(list) --* The record fields used in this delivery. * *(string) --* * **fieldDelimiter** *(string) --* The field delimiter that is used between record fields when the final output format of a delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** *(dict) --* This structure contains delivery configurations that apply only when the delivery destination resource is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **tags** *(dict) --* The tags that have been assigned to this delivery. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.AccessDeniedException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / update_log_anomaly_detector update_log_anomaly_detector *************************** CloudWatchLogs.Client.update_log_anomaly_detector(**kwargs) Updates an existing log anomaly detector. 
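For example, a minimal sketch of a call (the detector ARN is a hypothetical placeholder, not a real resource) that slows the evaluation frequency to every 15 minutes while keeping the detector running:

response = client.update_log_anomaly_detector(
    anomalyDetectorArn='arn:aws:logs:us-east-1:123456789012:anomaly-detector/EXAMPLE',  # placeholder ARN
    evaluationFrequency='FIFTEEN_MIN',
    enabled=True,  # keep the detector running rather than pausing it
)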
See also: AWS API Documentation **Request Syntax** response = client.update_log_anomaly_detector( anomalyDetectorArn='string', evaluationFrequency='ONE_MIN'|'FIVE_MIN'|'TEN_MIN'|'FIFTEEN_MIN'|'THIRTY_MIN'|'ONE_HOUR', filterPattern='string', anomalyVisibilityTime=123, enabled=True|False ) Parameters: * **anomalyDetectorArn** (*string*) -- **[REQUIRED]** The ARN of the anomaly detector that you want to update. * **evaluationFrequency** (*string*) -- Specifies how often the anomaly detector runs and looks for anomalies. Set this value according to the frequency that the log group receives new logs. For example, if the log group receives new log events every 10 minutes, then setting "evaluationFrequency" to "FIFTEEN_MIN" might be appropriate. * **filterPattern** (*string*) -- A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **anomalyVisibilityTime** (*integer*) -- The number of days to use as the life cycle of anomalies. After this time, anomalies are automatically baselined and the anomaly detector model will treat new occurrences of a similar event as normal. Therefore, if you do not correct the cause of an anomaly during this time, it will be considered normal going forward and will not be detected. * **enabled** (*boolean*) -- **[REQUIRED]** Use this parameter to pause or restart the anomaly detector. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / stop_query stop_query ********** CloudWatchLogs.Client.stop_query(**kwargs) Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running. See also: AWS API Documentation **Request Syntax** response = client.stop_query( queryId='string' ) Parameters: **queryId** (*string*) -- **[REQUIRED]** The ID number of the query to stop. To find this ID number, use "DescribeQueries". Return type: dict Returns: **Response Syntax** { 'success': True|False } **Response Structure** * *(dict) --* * **success** *(boolean) --* This is true if the query was stopped by the "StopQuery" operation. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_subscription_filter put_subscription_filter *********************** CloudWatchLogs.Client.put_subscription_filter(**kwargs) Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. The following destinations are supported for subscription filters: * An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery. * A logical destination created with PutDestination that belongs to a different account, for cross-account delivery.
We currently support Kinesis Data Streams and Firehose as logical destinations. * An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery. * A Lambda function that belongs to the same account as the subscription filter, for same-account delivery. Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in "filterName". Using regular expressions in filter patterns is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in filter patterns, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail. To perform a "PutSubscriptionFilter" operation for any destination except a Lambda function, you must also have the "iam:PassRole" permission. See also: AWS API Documentation **Request Syntax** response = client.put_subscription_filter( logGroupName='string', filterName='string', filterPattern='string', destinationArn='string', roleArn='string', distribution='Random'|'ByLogStream', applyOnTransformedLogs=True|False ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **filterName** (*string*) -- **[REQUIRED]** A name for the subscription filter. If you are updating an existing filter, you must specify the correct name in "filterName". To find the name of the filter currently associated with a log group, use DescribeSubscriptionFilters. * **filterPattern** (*string*) -- **[REQUIRED]** A filter pattern for subscribing to a filtered stream of log events. * **destinationArn** (*string*) -- **[REQUIRED]** The ARN of the destination to deliver matching log events to. Currently, the supported destinations are: * An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery. * A logical destination (specified using an ARN) belonging to a different account, for cross-account delivery. If you're setting up a cross-account subscription, the destination must have an IAM policy associated with it. The IAM policy must allow the sender to send logs to the destination. For more information, see PutDestinationPolicy. * A Kinesis Data Firehose delivery stream belonging to the same account as the subscription filter, for same-account delivery. * A Lambda function belonging to the same account as the subscription filter, for same-account delivery. * **roleArn** (*string*) -- The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery. * **distribution** (*string*) -- The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to random for a more even distribution. This property is only applicable when the destination is an Amazon Kinesis data stream. * **applyOnTransformedLogs** (*boolean*) -- This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.
If the log group uses either a log-group level or account-level transformer, and you specify "true", the subscription filter will be applied on the transformed version of the log events instead of the original ingested log events. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / get_paginator get_paginator ************* CloudWatchLogs.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. CloudWatchLogs / Client / delete_delivery_source delete_delivery_source ********************** CloudWatchLogs.Client.delete_delivery_source(**kwargs) Deletes a *delivery source*. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*. You can't delete a delivery source if any current deliveries are associated with it. To find whether any deliveries are associated with this delivery source, use the DescribeDeliveries operation and check the "deliverySourceName" field in the results. See also: AWS API Documentation **Request Syntax** response = client.delete_delivery_source( name='string' ) Parameters: **name** (*string*) -- **[REQUIRED]** The name of the delivery source that you want to delete. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / put_delivery_destination_policy put_delivery_destination_policy ******************************* CloudWatchLogs.Client.put_delivery_destination_policy(**kwargs) Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination. * Use this operation in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
* Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at Enabling logging from Amazon Web Services services. The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies. See also: AWS API Documentation **Request Syntax** response = client.put_delivery_destination_policy( deliveryDestinationName='string', deliveryDestinationPolicy='string' ) Parameters: * **deliveryDestinationName** (*string*) -- **[REQUIRED]** The name of the delivery destination to assign this policy to. * **deliveryDestinationPolicy** (*string*) -- **[REQUIRED]** The contents of the policy. Return type: dict Returns: **Response Syntax** { 'policy': { 'deliveryDestinationPolicy': 'string' } } **Response Structure** * *(dict) --* * **policy** *(dict) --* The contents of the policy that you just created. * **deliveryDestinationPolicy** *(string) --* The contents of the delivery destination policy. **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ConflictException" CloudWatchLogs / Client / describe_destinations describe_destinations ********************* CloudWatchLogs.Client.describe_destinations(**kwargs) Lists all your destinations. The results are ASCII-sorted by destination name. See also: AWS API Documentation **Request Syntax** response = client.describe_destinations( DestinationNamePrefix='string', nextToken='string', limit=123 ) Parameters: * **DestinationNamePrefix** (*string*) -- The prefix to match. If you don't specify a value, no prefix filter is applied. * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of items returned. If you don't specify a value, the default maximum value of 50 items is used. Return type: dict Returns: **Response Syntax** { 'destinations': [ { 'destinationName': 'string', 'targetArn': 'string', 'roleArn': 'string', 'accessPolicy': 'string', 'arn': 'string', 'creationTime': 123 }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **destinations** *(list) --* The destinations. * *(dict) --* Represents a cross-account destination that receives subscription log events. * **destinationName** *(string) --* The name of the destination. * **targetArn** *(string) --* The Amazon Resource Name (ARN) of the physical target where the log events are delivered (for example, a Kinesis stream). * **roleArn** *(string) --* A role for impersonation, used when delivering log events to the target. * **accessPolicy** *(string) --* An IAM policy document that governs which Amazon Web Services accounts can create subscription filters against this destination. * **arn** *(string) --* The ARN of this destination. * **creationTime** *(integer) --* The creation time of the destination, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. 
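As a sketch, you can also page through these results manually with "nextToken" (the DescribeDestinations paginator shown earlier avoids this loop):

kwargs = {'limit': 50}
while True:
    page = client.describe_destinations(**kwargs)
    for destination in page['destinations']:
        print(destination['destinationName'])
    # Stop once the service no longer returns a continuation token.
    token = page.get('nextToken')
    if not token:
        break
    kwargs['nextToken'] = token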
**Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_log_anomaly_detector delete_log_anomaly_detector *************************** CloudWatchLogs.Client.delete_log_anomaly_detector(**kwargs) Deletes the specified CloudWatch Logs anomaly detector. See also: AWS API Documentation **Request Syntax** response = client.delete_log_anomaly_detector( anomalyDetectorArn='string' ) Parameters: **anomalyDetectorArn** (*string*) -- **[REQUIRED]** The ARN of the anomaly detector to delete. You can find the ARNs of log anomaly detectors in your account by using the ListLogAnomalyDetectors operation. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / delete_index_policy delete_index_policy ******************* CloudWatchLogs.Client.delete_index_policy(**kwargs) Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries. You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy. If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events. See also: AWS API Documentation **Request Syntax** response = client.delete_index_policy( logGroupIdentifier='string' ) Parameters: **logGroupIdentifier** (*string*) -- **[REQUIRED]** The log group to delete the index policy for. You can specify either the name or the ARN of the log group. Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / update_delivery_configuration update_delivery_configuration ***************************** CloudWatchLogs.Client.update_delivery_configuration(**kwargs) Use this operation to update the configuration of a delivery to change either the S3 path pattern or the format of the delivered logs. You can't use this operation to change the source or destination of the delivery. See also: AWS API Documentation **Request Syntax** response = client.update_delivery_configuration( id='string', recordFields=[ 'string', ], fieldDelimiter='string', s3DeliveryConfiguration={ 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False } ) Parameters: * **id** (*string*) -- **[REQUIRED]** The ID of the delivery to be updated by this request. * **recordFields** (*list*) -- The list of record fields to be delivered to the destination, in order. If the delivery's log source has mandatory fields, they must be included in this list. * *(string) --* * **fieldDelimiter** (*string*) -- The field delimiter to use between record fields when the final output format of a delivery is in "Plain", "W3C", or "Raw" format.
* **s3DeliveryConfiguration** (*dict*) -- This structure contains parameters that are valid only when the delivery's delivery destination is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.AccessDeniedException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / describe_account_policies describe_account_policies ************************* CloudWatchLogs.Client.describe_account_policies(**kwargs) Returns a list of all CloudWatch Logs account policies in the account. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are retrieving information for. * To see data protection policies, you must have the "logs:GetDataProtectionPolicy" and "logs:DescribeAccountPolicies" permissions. * To see subscription filter policies, you must have the "logs:DescribeSubscriptionFilters" and "logs:DescribeAccountPolicies" permissions. * To see transformer policies, you must have the "logs:GetTransformer" and "logs:DescribeAccountPolicies" permissions. * To see field index policies, you must have the "logs:DescribeIndexPolicies" and "logs:DescribeAccountPolicies" permissions. See also: AWS API Documentation **Request Syntax** response = client.describe_account_policies( policyType='DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY'|'METRIC_EXTRACTION_POLICY', policyName='string', accountIdentifiers=[ 'string', ], nextToken='string' ) Parameters: * **policyType** (*string*) -- **[REQUIRED]** Use this parameter to limit the returned policies to only the policies that match the policy type that you specify. * **policyName** (*string*) -- Use this parameter to limit the returned policies to only the policy with the name that you specify. * **accountIdentifiers** (*list*) -- If you are using an account that is set up as a monitoring account for CloudWatch unified cross-account observability, you can use this to specify the account ID of a source account. If you do, the operation returns the account policy for the specified account. Currently, you can specify only one account ID in this parameter. If you omit this parameter, only the policy in the current account is returned. * *(string) --* * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) 
Return type: dict Returns: **Response Syntax** { 'accountPolicies': [ { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyType': 'DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY'|'METRIC_EXTRACTION_POLICY', 'scope': 'ALL', 'selectionCriteria': 'string', 'accountId': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **accountPolicies** *(list) --* An array of structures that contain information about the CloudWatch Logs account policies that match the specified filters. * *(dict) --* A structure that contains information about one CloudWatch Logs account policy. * **policyName** *(string) --* The name of the account policy. * **policyDocument** *(string) --* The policy document for this account policy. The JSON specified in "policyDocument" can be up to 30,720 characters. * **lastUpdatedTime** *(integer) --* The date and time that this policy was most recently updated. * **policyType** *(string) --* The type of policy for this account policy. * **scope** *(string) --* The scope of the account policy. * **selectionCriteria** *(string) --* The log group selection criteria that is used for this policy. * **accountId** *(string) --* The Amazon Web Services account ID that the policy applies to. * **nextToken** *(string) --* The token to use when requesting the next set of items. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / can_paginate can_paginate ************ CloudWatchLogs.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. CloudWatchLogs / Client / create_log_anomaly_detector create_log_anomaly_detector *************************** CloudWatchLogs.Client.create_log_anomaly_detector(**kwargs) Creates an *anomaly detector* that regularly scans one or more log groups and looks for patterns and anomalies in the logs. An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find *patterns*. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns. The anomaly detector uses pattern recognition to find "anomalies", which are unusual log events. It uses the "evaluationFrequency" to compare current log events and patterns with trained baselines. Fields within a pattern are called *tokens*. Fields that vary within a pattern, such as a request ID or timestamp, are referred to as *dynamic tokens* and represented by "<*>".
The following is an example of a pattern: "[INFO] Request time: <*> ms" This pattern represents log events like "[INFO] Request time: 327 ms" and other similar log events that differ only by the number, in this case, 327. When the pattern is displayed, the different numbers are replaced by "<*>". Note: Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see Help protect sensitive log data with masking. See also: AWS API Documentation **Request Syntax** response = client.create_log_anomaly_detector( logGroupArnList=[ 'string', ], detectorName='string', evaluationFrequency='ONE_MIN'|'FIVE_MIN'|'TEN_MIN'|'FIFTEEN_MIN'|'THIRTY_MIN'|'ONE_HOUR', filterPattern='string', kmsKeyId='string', anomalyVisibilityTime=123, tags={ 'string': 'string' } ) Parameters: * **logGroupArnList** (*list*) -- **[REQUIRED]** An array containing the ARN of the log group that this anomaly detector will watch. You can specify only one log group ARN. * *(string) --* * **detectorName** (*string*) -- A name for this anomaly detector. * **evaluationFrequency** (*string*) -- Specifies how often the anomaly detector is to run and look for anomalies. Set this value according to the frequency that the log group receives new logs. For example, if the log group receives new log events every 10 minutes, then 15 minutes might be a good setting for "evaluationFrequency". * **filterPattern** (*string*) -- You can use this parameter to limit the anomaly detection model to examine only log events that match the pattern you specify here. For more information, see Filter and Pattern Syntax. * **kmsKeyId** (*string*) -- Optionally assigns a KMS key to secure this anomaly detector and its findings. If a key is assigned, the anomalies found and the model used by this detector are encrypted at rest with the key. If a key is assigned to an anomaly detector, a user must have permissions for both this key and for the anomaly detector to retrieve information about the anomalies that it finds. Make sure the value provided is a valid KMS key ARN. For more information about using a KMS key and to see the required IAM policy, see Use a KMS key with an anomaly detector. * **anomalyVisibilityTime** (*integer*) -- The number of days to have visibility on an anomaly. After this time period has elapsed for an anomaly, it will be automatically baselined and the anomaly detector will treat new occurrences of a similar anomaly as normal. Therefore, if you do not correct the cause of an anomaly during the time period specified in "anomalyVisibilityTime", it will be considered normal going forward and will not be detected as an anomaly. * **tags** (*dict*) -- An optional list of key-value pairs to associate with the resource. For more information about tagging, see Tagging Amazon Web Services resources * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'anomalyDetectorArn': 'string' } **Response Structure** * *(dict) --* * **anomalyDetectorArn** *(string) --* The ARN of the log anomaly detector that you just created.
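As a minimal sketch (the log group ARN and detector name are hypothetical placeholders), creating a detector that runs every 15 minutes and keeps anomalies visible for 14 days:

response = client.create_log_anomaly_detector(
    logGroupArnList=[
        'arn:aws:logs:us-east-1:123456789012:log-group:my-log-group',  # placeholder ARN
    ],
    detectorName='my-detector',  # placeholder name
    evaluationFrequency='FIFTEEN_MIN',
    anomalyVisibilityTime=14,  # days before anomalies are baselined
)
print(response['anomalyDetectorArn'])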
**Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" CloudWatchLogs / Client / get_query_results get_query_results ***************** CloudWatchLogs.Client.get_query_results(**kwargs) Returns the results from the specified query. Only the fields requested in the query are returned, along with a "@ptr" field, which is the identifier for the log record. You can use the value of "@ptr" in a GetLogRecord operation to get the full log record. "GetQueryResults" does not start running a query. To run a query, use StartQuery. For more information about how long results of previous queries are available, see CloudWatch Logs quotas. If the value of the "Status" field in the output is "Running", this operation returns only partial results. If you see a value of "Scheduled" or "Running" for the status, you can retry the operation later to see the final results. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to retrieve the results of queries that ran in linked source accounts. For more information, see CloudWatch cross-account observability. See also: AWS API Documentation **Request Syntax** response = client.get_query_results( queryId='string' ) Parameters: **queryId** (*string*) -- **[REQUIRED]** The ID number of the query. Return type: dict Returns: **Response Syntax** { 'queryLanguage': 'CWLI'|'SQL'|'PPL', 'results': [ [ { 'field': 'string', 'value': 'string' }, ], ], 'statistics': { 'recordsMatched': 123.0, 'recordsScanned': 123.0, 'estimatedRecordsSkipped': 123.0, 'bytesScanned': 123.0, 'estimatedBytesSkipped': 123.0, 'logGroupsScanned': 123.0 }, 'status': 'Scheduled'|'Running'|'Complete'|'Failed'|'Cancelled'|'Timeout'|'Unknown', 'encryptionKey': 'string' } **Response Structure** * *(dict) --* * **queryLanguage** *(string) --* The query language used for this query. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **results** *(list) --* The log events that matched the query criteria during the most recent time it ran. The "results" value is an array of arrays. Each log event is one object in the top-level array. Each of these log event objects is an array of "field"/"value" pairs. * *(list) --* * *(dict) --* Contains one field from one log event returned by a CloudWatch Logs Insights query, along with the value of that field. For more information about the fields that are generated by CloudWatch logs, see Supported Logs and Discovered Fields. * **field** *(string) --* The log event field. * **value** *(string) --* The value of this field. * **statistics** *(dict) --* Includes the number of log events scanned by the query, the number of log events that matched the query criteria, and the total number of bytes in the scanned log events. These values reflect the full raw results of the query. * **recordsMatched** *(float) --* The number of log events that matched the query string. * **recordsScanned** *(float) --* The total number of log events scanned during the query. * **estimatedRecordsSkipped** *(float) --* An estimate of the number of log events that were skipped when processing this query, because the query contained an indexed field. Skipping these entries lowers query costs and improves the query performance time.
For more information about field indexes, see PutIndexPolicy. * **bytesScanned** *(float) --* The total number of bytes in the log events scanned during the query. * **estimatedBytesSkipped** *(float) --* An estimate of the number of bytes in the log events that were skipped when processing this query, because the query contained an indexed field. Skipping these entries lowers query costs and improves the query performance time. For more information about field indexes, see PutIndexPolicy. * **logGroupsScanned** *(float) --* The number of log groups that were scanned by this query. * **status** *(string) --* The status of the most recent running of the query. Possible values are "Cancelled", "Complete", "Failed", "Running", "Scheduled", "Timeout", and "Unknown". Queries time out after 60 minutes of runtime. To avoid having your queries time out, reduce the time range being searched or partition your query into a number of queries. * **encryptionKey** *(string) --* If you associated a KMS key with the CloudWatch Logs Insights query results in this account, this field displays the ARN of the key that's used to encrypt the query results when StartQuery stores them. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / create_export_task create_export_task ****************** CloudWatchLogs.Client.create_export_task(**kwargs) Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket. When you perform a "CreateExportTask" operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination. Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported. Exporting to S3 buckets that are encrypted with AES-256 is supported. This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active ("RUNNING" or "PENDING") export task at a time. To cancel an export task, use CancelExportTask. You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects. Note: We recommend that you don't regularly export to Amazon S3 as a way to continuously archive your logs. For that use case, we instead recommend that you use subscriptions. For more information about subscriptions, see Real-time processing of log data with subscriptions. Note: Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities. See also: AWS API Documentation **Request Syntax** response = client.create_export_task( taskName='string', logGroupName='string', logStreamNamePrefix='string', fromTime=123, to=123, destination='string', destinationPrefix='string' ) Parameters: * **taskName** (*string*) -- The name of the export task. * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **logStreamNamePrefix** (*string*) -- Export only log streams that match the provided prefix.
If you don't specify a value, no prefix filter is applied. * **fromTime** (*integer*) -- **[REQUIRED]** The start time of the range for the request, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp earlier than this time are not exported. * **to** (*integer*) -- **[REQUIRED]** The end time of the range for the request, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp later than this time are not exported. You must specify a time that is not earlier than when this log group was created. * **destination** (*string*) -- **[REQUIRED]** The name of the S3 bucket for the exported log data. The bucket must be in the same Amazon Web Services Region. * **destinationPrefix** (*string*) -- The prefix used as the start of the key for every object exported. If you don't specify a value, the default is "exportedlogs". The length of this parameter must comply with the S3 object key name length limits. The object key name is a sequence of Unicode characters with UTF-8 encoding, and can be up to 1,024 bytes. Return type: dict Returns: **Response Syntax** { 'taskId': 'string' } **Response Structure** * *(dict) --* * **taskId** *(string) --* The ID of the export task. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ResourceAlreadyExistsException" CloudWatchLogs / Client / associate_kms_key associate_kms_key ***************** CloudWatchLogs.Client.associate_kms_key(**kwargs) Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account. When you use "AssociateKmsKey", you specify either the "logGroupName" parameter or the "resourceIdentifier" parameter. You can't specify both of those parameters in the same operation. * Specify the "logGroupName" parameter to cause log events ingested into that log group to be encrypted with that key. Only the log events ingested after the key is associated are encrypted with that key. Associating a KMS key with a log group overrides any existing associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested. Associating a key with a log group does not cause the results of queries of that log group to be encrypted with that key. To have query results encrypted with a KMS key, you must use an "AssociateKmsKey" operation with the "resourceIdentifier" parameter that specifies a "query-result" resource. * Specify the "resourceIdentifier" parameter with a "query-result" resource, to use that key to encrypt the stored results of all future StartQuery operations in the account. The response from a GetQueryResults operation will still return the query results in plain text. Even if you have not associated a key with your query results, the query results are encrypted when stored, using the default CloudWatch Logs method.
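The two forms described above differ only in which target parameter is passed. A minimal sketch of each (the key ARN, account ID, and log group name are hypothetical):

import boto3

client = boto3.client('logs')
key_arn = 'arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555'

# Form 1: encrypt log events ingested into one log group.
client.associate_kms_key(logGroupName='my-app-logs', kmsKeyId=key_arn)

# Form 2: encrypt the stored results of all future StartQuery operations.
client.associate_kms_key(
    resourceIdentifier='arn:aws:logs:us-east-1:123456789012:query-result:*',
    kmsKeyId=key_arn,
)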
If you run a query from a monitoring account that queries logs in a source account, the query results key from the monitoring account, if any, is used. Warning: If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable. Note: CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group or query results. For more information, see Using Symmetric and Asymmetric Keys. It can take up to 5 minutes for this operation to take effect. If you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an "InvalidParameterException" error. See also: AWS API Documentation **Request Syntax** response = client.associate_kms_key( logGroupName='string', kmsKeyId='string', resourceIdentifier='string' ) Parameters: * **logGroupName** (*string*) -- The name of the log group. In your "AssociateKmsKey" operation, you must specify either the "resourceIdentifier" parameter or the "logGroupName" parameter, but you can't specify both. * **kmsKeyId** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the KMS key to use when encrypting log data. This must be a symmetric KMS key. For more information, see Amazon Resource Names and Using Symmetric and Asymmetric Keys. * **resourceIdentifier** (*string*) -- Specifies the target for this operation. You must specify one of the following: * Specify the following ARN to have future GetQueryResults operations in this account encrypt the results with the specified KMS key. Replace *REGION* and *ACCOUNT_ID* with your Region and account ID. "arn:aws:logs:REGION:ACCOUNT_ID:query-result:*" * Specify the ARN of a log group to have CloudWatch Logs use the KMS key to encrypt log events that are ingested and stored by that log group. The log group ARN must be in the following format. Replace *REGION* and *ACCOUNT_ID* with your Region and account ID. "arn:aws:logs:REGION:ACCOUNT_ID:log-group:LOG_GROUP_NAME" In your "AssociateKmsKey" operation, you must specify either the "resourceIdentifier" parameter or the "logGroupName" parameter, but you can't specify both. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / cancel_export_task cancel_export_task ****************** CloudWatchLogs.Client.cancel_export_task(**kwargs) Cancels the specified export task. The task must be in the "PENDING" or "RUNNING" state. See also: AWS API Documentation **Request Syntax** response = client.cancel_export_task( taskId='string' ) Parameters: **taskId** (*string*) -- **[REQUIRED]** The ID of the export task. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_data_protection_policy put_data_protection_policy ************************** CloudWatchLogs.Client.put_data_protection_policy(**kwargs) Creates a data protection policy for the specified log group.
A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data. Warning: Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked. By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the "logs:Unmask" permission can use a GetLogEvents or FilterLogEvents operation with the "unmask" parameter set to "true" to view the unmasked log events. Users with the "logs:Unmask" permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the "unmask" query command. For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking. The "PutDataProtectionPolicy" operation applies to only the specified log group. You can also use PutAccountPolicy to create an account-level data protection policy that applies to all log groups in the account, including both existing log groups and log groups that are created later. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked. See also: AWS API Documentation **Request Syntax** response = client.put_data_protection_policy( logGroupIdentifier='string', policyDocument='string' ) Parameters: * **logGroupIdentifier** (*string*) -- **[REQUIRED]** Specify either the log group name or log group ARN. * **policyDocument** (*string*) -- **[REQUIRED]** Specify the data protection policy, in JSON. This policy must include two JSON blocks: * The first block must include both a "DataIdentifier" array and an "Operation" property with an "Audit" action. The "DataIdentifier" array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask. The "Operation" property with an "Audit" action is required to find the sensitive data terms. This "Audit" action must contain a "FindingsDestination" object. You can optionally use that "FindingsDestination" object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist. * The second block must include both a "DataIdentifier" array and an "Operation" property with a "Deidentify" action. The "DataIdentifier" array must exactly match the "DataIdentifier" array in the first block of the policy. The "Operation" property with the "Deidentify" action is what actually masks the data, and it must contain the ""MaskConfig": {}" object. The ""MaskConfig": {}" object must be empty. For an example data protection policy, see the **Examples** section on this page. Warning: The contents of the two "DataIdentifier" arrays must match exactly. In addition to the two JSON blocks, the "policyDocument" can also include "Name", "Description", and "Version" fields. The "Name" is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch. The JSON specified in "policyDocument" can be up to 30,720 characters.
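For illustration, a minimal "policyDocument" that satisfies both required blocks might look like the following sketch; the policy name and log group are hypothetical, the email-address data identifier is one documented option, and the "FindingsDestination" object is left empty:

import json

import boto3

client = boto3.client('logs')

# Both blocks must carry identical "DataIdentifier" arrays; the Deidentify
# block's "MaskConfig" must be an empty object.
data_identifiers = ['arn:aws:dataprotection::aws:data-identifier/EmailAddress']
policy = {
    'Name': 'example-data-protection-policy',
    'Version': '2021-06-01',
    'Statement': [
        {
            'DataIdentifier': data_identifiers,
            'Operation': {'Audit': {'FindingsDestination': {}}},
        },
        {
            'DataIdentifier': data_identifiers,
            'Operation': {'Deidentify': {'MaskConfig': {}}},
        },
    ],
}

response = client.put_data_protection_policy(
    logGroupIdentifier='my-app-logs',
    policyDocument=json.dumps(policy),
)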
Return type: dict Returns: **Response Syntax** { 'logGroupIdentifier': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123 } **Response Structure** * *(dict) --* * **logGroupIdentifier** *(string) --* The log group name or ARN that you specified in your request. * **policyDocument** *(string) --* The data protection policy used for this log group. * **lastUpdatedTime** *(integer) --* The date and time that this policy was most recently updated. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / list_anomalies list_anomalies ************** CloudWatchLogs.Client.list_anomalies(**kwargs) Returns a list of anomalies that log anomaly detectors have found. For details about the structure and format of each anomaly object that is returned, see the example in this section. See also: AWS API Documentation **Request Syntax** response = client.list_anomalies( anomalyDetectorArn='string', suppressionState='SUPPRESSED'|'UNSUPPRESSED', limit=123, nextToken='string' ) Parameters: * **anomalyDetectorArn** (*string*) -- Use this to optionally limit the results to only the anomalies found by a certain anomaly detector. * **suppressionState** (*string*) -- You can specify this parameter if you want the operation to return only anomalies that are currently either suppressed or unsuppressed. * **limit** (*integer*) -- The maximum number of items to return. If you don't specify a value, the default maximum value of 50 items is used. * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. Return type: dict Returns: **Response Syntax** { 'anomalies': [ { 'anomalyId': 'string', 'patternId': 'string', 'anomalyDetectorArn': 'string', 'patternString': 'string', 'patternRegex': 'string', 'priority': 'string', 'firstSeen': 123, 'lastSeen': 123, 'description': 'string', 'active': True|False, 'state': 'Active'|'Suppressed'|'Baseline', 'histogram': { 'string': 123 }, 'logSamples': [ { 'timestamp': 123, 'message': 'string' }, ], 'patternTokens': [ { 'dynamicTokenPosition': 123, 'isDynamic': True|False, 'tokenString': 'string', 'enumerations': { 'string': 123 }, 'inferredTokenName': 'string' }, ], 'logGroupArnList': [ 'string', ], 'suppressed': True|False, 'suppressedDate': 123, 'suppressedUntil': 123, 'isPatternLevelSuppression': True|False }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **anomalies** *(list) --* An array of structures, where each structure contains information about one anomaly that a log anomaly detector has found. * *(dict) --* This structure represents one anomaly that has been found by a logs anomaly detector. For more information about patterns and anomalies, see CreateLogAnomalyDetector. * **anomalyId** *(string) --* The unique ID that CloudWatch Logs assigned to this anomaly. * **patternId** *(string) --* The ID of the pattern used to help identify this anomaly. * **anomalyDetectorArn** *(string) --* The ARN of the anomaly detector that identified this anomaly. * **patternString** *(string) --* The pattern used to help identify this anomaly, in string format. * **patternRegex** *(string) --* The pattern used to help identify this anomaly, in regular expression format.
* **priority** *(string) --* The priority level of this anomaly, as determined by CloudWatch Logs. Priority is computed based on log severity labels such as "FATAL" and "ERROR" and the amount of deviation from the baseline. Possible values are "HIGH", "MEDIUM", and "LOW". * **firstSeen** *(integer) --* The date and time when the anomaly detector first saw this anomaly. It is specified as epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * **lastSeen** *(integer) --* The date and time when the anomaly detector most recently saw this anomaly. It is specified as epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * **description** *(string) --* A human-readable description of the anomaly. This description is generated by CloudWatch Logs. * **active** *(boolean) --* Specifies whether this anomaly is still ongoing. * **state** *(string) --* Indicates the current state of this anomaly. If it is still being treated as an anomaly, the value is "Active". If you have suppressed this anomaly by using the UpdateAnomaly operation, the value is "Suppressed". If this behavior is now considered to be normal, the value is "Baseline". * **histogram** *(dict) --* A map showing times when the anomaly detector ran, and the number of occurrences of this anomaly that were detected at each of those runs. The times are specified in epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * *(string) --* * *(integer) --* * **logSamples** *(list) --* An array of sample log event messages that are considered to be part of this anomaly. * *(dict) --* This structure contains the information for one sample log event that is associated with an anomaly found by a log anomaly detector. * **timestamp** *(integer) --* The time stamp of the log event. * **message** *(string) --* The message content of the log event. * **patternTokens** *(list) --* An array of structures where each structure contains information about one token that makes up the pattern. * *(dict) --* A structure that contains information about one pattern token related to an anomaly. For more information about patterns and tokens, see CreateLogAnomalyDetector. * **dynamicTokenPosition** *(integer) --* For a dynamic token, this indicates where in the pattern that this token appears, related to other dynamic tokens. The dynamic token that appears first has a value of "1", the one that appears second is "2", and so on. * **isDynamic** *(boolean) --* Specifies whether this is a dynamic token. * **tokenString** *(string) --* The string represented by this token. If this is a dynamic token, the value will be "<*>" * **enumerations** *(dict) --* Contains the values found for a dynamic token, and the number of times each value was found. * *(string) --* * *(integer) --* * **inferredTokenName** *(string) --* A name that CloudWatch Logs assigned to this dynamic token to make the pattern more readable. The string part of the "inferredTokenName" gives you a clearer idea of the content of this token. The number part of the "inferredTokenName" shows where in the pattern this token appears, compared to other dynamic tokens. CloudWatch Logs assigns the string part of the name based on analyzing the content of the log events that contain it. For example, an inferred token name of "IPAddress-3" means that the token represents an IP address, and this token is the third dynamic token in the pattern. 
* **logGroupArnList** *(list) --* An array of ARNs of the log groups that contained log events considered to be part of this anomaly. * *(string) --* * **suppressed** *(boolean) --* Indicates whether this anomaly is currently suppressed. To suppress an anomaly, use UpdateAnomaly. * **suppressedDate** *(integer) --* If the anomaly is suppressed, this indicates when it was suppressed. * **suppressedUntil** *(integer) --* If the anomaly is suppressed, this indicates when the suppression will end. If this value is "0", the anomaly was suppressed with no expiration, using the "INFINITE" suppression type. * **isPatternLevelSuppression** *(boolean) --* If this anomaly is suppressed, this field is "true" if the suppression is because the pattern is suppressed. If "false", then only this particular anomaly is suppressed. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / update_anomaly update_anomaly ************** CloudWatchLogs.Client.update_anomaly(**kwargs) Use this operation to *suppress* anomaly detection for a specified anomaly or pattern. If you suppress an anomaly, CloudWatch Logs won't report new occurrences of that anomaly and won't update that anomaly with new data. If you suppress a pattern, CloudWatch Logs won't report any anomalies related to that pattern. You must specify either "anomalyId" or "patternId", but you can't specify both parameters in the same operation. If you have previously used this operation to suppress detection of a pattern or anomaly, you can use it again to cause CloudWatch Logs to end the suppression. To do this, use this operation and specify the anomaly or pattern to stop suppressing, and omit the "suppressionType" and "suppressionPeriod" parameters. See also: AWS API Documentation **Request Syntax** response = client.update_anomaly( anomalyId='string', patternId='string', anomalyDetectorArn='string', suppressionType='LIMITED'|'INFINITE', suppressionPeriod={ 'value': 123, 'suppressionUnit': 'SECONDS'|'MINUTES'|'HOURS' }, baseline=True|False ) Parameters: * **anomalyId** (*string*) -- If you are suppressing or unsuppressing an anomaly, specify its unique ID here. You can find anomaly IDs by using the ListAnomalies operation. * **patternId** (*string*) -- If you are suppressing or unsuppressing a pattern, specify its unique ID here. You can find pattern IDs by using the ListAnomalies operation. * **anomalyDetectorArn** (*string*) -- **[REQUIRED]** The ARN of the anomaly detector that this operation is to act on. * **suppressionType** (*string*) -- Use this to specify whether the suppression is to be temporary or infinite. If you specify "LIMITED", you must also specify a "suppressionPeriod". If you specify "INFINITE", any value for "suppressionPeriod" is ignored. * **suppressionPeriod** (*dict*) -- If you are temporarily suppressing an anomaly or pattern, use this structure to specify how long the suppression is to last. * **value** *(integer) --* Specifies the number of seconds, minutes, or hours to suppress this anomaly. There is no maximum. * **suppressionUnit** *(string) --* Specifies whether the value of "value" is in seconds, minutes, or hours.
* **baseline** (*boolean*) -- Set this to "true" to prevent CloudWatch Logs from displaying this behavior as an anomaly in the future. The behavior is then treated as baseline behavior. However, if similar but more severe occurrences of this behavior occur in the future, those will still be reported as anomalies. The default is "false". Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / delete_log_stream delete_log_stream ***************** CloudWatchLogs.Client.delete_log_stream(**kwargs) Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream. See also: AWS API Documentation **Request Syntax** response = client.delete_log_stream( logGroupName='string', logStreamName='string' ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **logStreamName** (*string*) -- **[REQUIRED]** The name of the log stream. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / untag_log_group untag_log_group *************** CloudWatchLogs.Client.untag_log_group(**kwargs) Warning: The UntagLogGroup operation is on the path to deprecation. We recommend that you use UntagResource instead. Removes the specified tags from the specified log group. To list the tags for a log group, use ListTagsForResource. To add tags, use TagResource. When using IAM policies to control tag management for CloudWatch Logs log groups, the condition keys "aws:Resource/key-name" and "aws:TagKeys" cannot be used to restrict which tags users can assign. Danger: This operation is deprecated and may not function as expected. This operation should not be used going forward and is only kept for the purpose of backwards compatibility. See also: AWS API Documentation **Request Syntax** response = client.untag_log_group( logGroupName='string', tags=[ 'string', ] ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **tags** (*list*) -- **[REQUIRED]** The tag keys. The corresponding tags are removed from the log group. * *(string) --* Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" CloudWatchLogs / Client / describe_delivery_destinations describe_delivery_destinations ****************************** CloudWatchLogs.Client.describe_delivery_destinations(**kwargs) Retrieves a list of the delivery destinations that have been created in the account. See also: AWS API Documentation **Request Syntax** response = client.describe_delivery_destinations( nextToken='string', limit=123 ) Parameters: * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **limit** (*integer*) -- Optionally specify the maximum number of delivery destinations to return in the response.
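A minimal sketch of paging through every delivery destination by passing "nextToken" back in (the DescribeDeliveryDestinations paginator offers the same loop ready-made):

import boto3

client = boto3.client('logs')

destinations = []
kwargs = {'limit': 10}
while True:
    response = client.describe_delivery_destinations(**kwargs)
    destinations.extend(response.get('deliveryDestinations', []))
    token = response.get('nextToken')
    if not token:
        break  # no more pages
    kwargs['nextToken'] = token  # tokens expire after 24 hours

for dest in destinations:
    print(dest['name'], dest['deliveryDestinationType'])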
Return type: dict Returns: **Response Syntax** { 'deliveryDestinations': [ { 'name': 'string', 'arn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'outputFormat': 'json'|'plain'|'w3c'|'raw'|'parquet', 'deliveryDestinationConfiguration': { 'destinationResourceArn': 'string' }, 'tags': { 'string': 'string' } }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **deliveryDestinations** *(list) --* An array of structures. Each structure contains information about one delivery destination in the account. * *(dict) --* This structure contains information about one *delivery destination* in your account. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, Firehose, and X-Ray are supported as delivery destinations. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Create a *delivery destination*, which is a logical object that represents the actual delivery destination. * If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. * **name** *(string) --* The name of this delivery destination. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery destination. * **deliveryDestinationType** *(string) --* Displays whether this delivery destination is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **outputFormat** *(string) --* The format of the logs that are sent to this delivery destination. * **deliveryDestinationConfiguration** *(dict) --* A structure that contains the ARN of the Amazon Web Services resource that will receive the logs. * **destinationResourceArn** *(string) --* The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. * **tags** *(dict) --* The tags that have been assigned to this delivery destination. * *(string) --* * *(string) --* * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / get_delivery_source get_delivery_source ******************* CloudWatchLogs.Client.get_delivery_source(**kwargs) Retrieves complete information about one delivery source. See also: AWS API Documentation **Request Syntax** response = client.get_delivery_source( name='string' ) Parameters: **name** (*string*) -- **[REQUIRED]** The name of the delivery source that you want to retrieve. 
Return type: dict Returns: **Response Syntax** { 'deliverySource': { 'name': 'string', 'arn': 'string', 'resourceArns': [ 'string', ], 'service': 'string', 'logType': 'string', 'tags': { 'string': 'string' } } } **Response Structure** * *(dict) --* * **deliverySource** *(dict) --* A structure containing information about the delivery source. * **name** *(string) --* The unique name of the delivery source. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery source. * **resourceArns** *(list) --* This array contains the ARN of the Amazon Web Services resource that sends logs and is represented by this delivery source. Currently, only one ARN can be in the array. * *(string) --* * **service** *(string) --* The Amazon Web Services service that is sending logs. * **logType** *(string) --* The type of log that the source is sending. For valid values for this parameter, see the documentation for the source service. * **tags** *(dict) --* The tags that have been assigned to this delivery source. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / get_integration get_integration *************** CloudWatchLogs.Client.get_integration(**kwargs) Returns information about one integration between CloudWatch Logs and OpenSearch Service. See also: AWS API Documentation **Request Syntax** response = client.get_integration( integrationName='string' ) Parameters: **integrationName** (*string*) -- **[REQUIRED]** The name of the integration that you want to find information about. To find the name of your integration, use ListIntegrations. Return type: dict Returns: **Response Syntax** { 'integrationName': 'string', 'integrationType': 'OPENSEARCH', 'integrationStatus': 'PROVISIONING'|'ACTIVE'|'FAILED', 'integrationDetails': { 'openSearchIntegrationDetails': { 'dataSource': { 'dataSourceName': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'application': { 'applicationEndpoint': 'string', 'applicationArn': 'string', 'applicationId': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'collection': { 'collectionEndpoint': 'string', 'collectionArn': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'workspace': { 'workspaceId': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'encryptionPolicy': { 'policyName': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'networkPolicy': { 'policyName': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'accessPolicy': { 'policyName': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } }, 'lifecyclePolicy': { 'policyName': 'string', 'status': { 'status': 'ACTIVE'|'NOT_FOUND'|'ERROR', 'statusMessage': 'string' } } } } } **Response Structure** * *(dict) --* * **integrationName** *(string) --* The name of the integration. * **integrationType** *(string) --* The type of integration. Integrations with OpenSearch Service have the type "OPENSEARCH". * **integrationStatus** *(string) --* The current status of this integration.
* **integrationDetails** *(dict) --* A structure that contains information about the integration configuration. For an integration with OpenSearch Service, this includes information about OpenSearch Service resources such as the collection, the workspace, and policies. Note: This is a Tagged Union structure. Only one of the following top level keys will be set: "openSearchIntegrationDetails". If a client receives an unknown member it will set "SDK_UNKNOWN_MEMBER" as the top level key, which maps to the name or tag of the unknown member. The structure of "SDK_UNKNOWN_MEMBER" is as follows: 'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'} * **openSearchIntegrationDetails** *(dict) --* This structure contains complete information about one integration between CloudWatch Logs and OpenSearch Service. * **dataSource** *(dict) --* This structure contains information about the OpenSearch Service data source used for this integration. This data source was created as part of the integration setup. An OpenSearch Service data source defines the source and destination for OpenSearch Service queries. It includes the role required to execute queries and write to collections. For more information about OpenSearch Service data sources, see Creating OpenSearch Service data source integrations with Amazon S3. * **dataSourceName** *(string) --* The name of the OpenSearch Service data source. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **application** *(dict) --* This structure contains information about the OpenSearch Service application used for this integration. An OpenSearch Service application is the web application that was created by the integration with CloudWatch Logs. It hosts the vended logs dashboards. * **applicationEndpoint** *(string) --* The endpoint of the application. * **applicationArn** *(string) --* The Amazon Resource Name (ARN) of the application. * **applicationId** *(string) --* The ID of the application. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **collection** *(dict) --* This structure contains information about the OpenSearch Service collection used for this integration. This collection was created as part of the integration setup. An OpenSearch Service collection is a logical grouping of one or more indexes that represent an analytics workload. For more information, see Creating and managing OpenSearch Service Serverless collections. * **collectionEndpoint** *(string) --* The endpoint of the collection. * **collectionArn** *(string) --* The ARN of the collection. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **workspace** *(dict) --* This structure contains information about the OpenSearch Service workspace used for this integration. An OpenSearch Service workspace is the collection of dashboards along with other OpenSearch Service tools.
This workspace was created automatically as part of the integration setup. For more information, see Centralized OpenSearch user interface (Dashboards) with OpenSearch Service. * **workspaceId** *(string) --* The ID of this workspace. * **status** *(dict) --* This structure contains information about the status of an OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **encryptionPolicy** *(dict) --* This structure contains information about the OpenSearch Service encryption policy used for this integration. The encryption policy was created automatically when you created the integration. For more information, see Encryption policies in the OpenSearch Service Developer Guide. * **policyName** *(string) --* The name of the encryption policy. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **networkPolicy** *(dict) --* This structure contains information about the OpenSearch Service network policy used for this integration. The network policy assigns network access settings to collections. For more information, see Network policies in the OpenSearch Service Developer Guide. * **policyName** *(string) --* The name of the network policy. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **accessPolicy** *(dict) --* This structure contains information about the OpenSearch Service data access policy used for this integration. The access policy defines the access controls for the collection. This data access policy was automatically created as part of the integration setup. For more information about OpenSearch Service data access policies, see Data access control for Amazon OpenSearch Serverless in the OpenSearch Service Developer Guide. * **policyName** *(string) --* The name of the data access policy. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. * **lifecyclePolicy** *(dict) --* This structure contains information about the OpenSearch Service data lifecycle policy used for this integration. The lifecycle policy determines the lifespan of the data in the collection. It was automatically created as part of the integration setup. For more information, see Using data lifecycle policies with OpenSearch Service Serverless in the OpenSearch Service Developer Guide. * **policyName** *(string) --* The name of the lifecycle policy. * **status** *(dict) --* This structure contains information about the status of this OpenSearch Service resource. * **status** *(string) --* The current status of this resource. * **statusMessage** *(string) --* A message with additional information about the status of this resource. 
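Because "integrationDetails" is a tagged union, a cautious caller checks which member is present before reading it. A minimal sketch (the integration name is hypothetical):

import boto3

client = boto3.client('logs')

response = client.get_integration(integrationName='my-opensearch-integration')
details = response.get('integrationDetails', {})

if 'openSearchIntegrationDetails' in details:
    os_details = details['openSearchIntegrationDetails']
    print('Dashboard status:', os_details['application']['status']['status'])
elif 'SDK_UNKNOWN_MEMBER' in details:
    # The service returned a member newer than this SDK version knows about.
    print('Unknown member:', details['SDK_UNKNOWN_MEMBER']['name'])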
**Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" CloudWatchLogs / Client / put_delivery_source put_delivery_source ******************* CloudWatchLogs.Client.put_delivery_source(**kwargs) Creates or updates a logical *delivery source*. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, Firehose, or X-Ray for sending traces. To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, you must do the following: * Use "PutDeliverySource" to create a delivery source, which is a logical object that represents the resource that is actually sending the logs. * Use "PutDeliveryDestination" to create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination. * If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Use "CreateDelivery" to create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at Enabling logging from Amazon Web Services services. If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify. See also: AWS API Documentation **Request Syntax** response = client.put_delivery_source( name='string', resourceArn='string', logType='string', tags={ 'string': 'string' } ) Parameters: * **name** (*string*) -- **[REQUIRED]** A name for this delivery source. This name must be unique for all delivery sources in your account. * **resourceArn** (*string*) -- **[REQUIRED]** The ARN of the Amazon Web Services resource that is generating and sending logs. For example, "arn:aws:workmail:us-east-1:123456789012:organization/m-1234EXAMPLEabcd1234abcd1234abcd1234" * **logType** (*string*) -- **[REQUIRED]** Defines the type of log that the source is sending. * For Amazon Bedrock, the valid values are "APPLICATION_LOGS" and "TRACES". * For CloudFront, the valid value is "ACCESS_LOGS". * For Amazon CodeWhisperer, the valid value is "EVENT_LOGS". * For Elemental MediaPackage, the valid values are "EGRESS_ACCESS_LOGS" and "INGRESS_ACCESS_LOGS". * For Elemental MediaTailor, the valid values are "AD_DECISION_SERVER_LOGS", "MANIFEST_SERVICE_LOGS", and "TRANSCODE_LOGS". * For Entity Resolution, the valid value is "WORKFLOW_LOGS". * For IAM Identity Center, the valid value is "ERROR_LOGS". * For PCS, the valid values are "PCS_SCHEDULER_LOGS" and "PCS_JOBCOMP_LOGS". * For Amazon Q, the valid value is "EVENT_LOGS". * For Amazon SES mail manager, the valid values are "APPLICATION_LOG" and "TRAFFIC_POLICY_DEBUG_LOGS".
* For Amazon WorkMail, the valid values are "ACCESS_CONTROL_LOGS", "AUTHENTICATION_LOGS", "WORKMAIL_AVAILABILITY_PROVIDER_LOGS", "WORKMAIL_MAILBOX_ACCESS_LOGS", and "WORKMAIL_PERSONAL_ACCESS_TOKEN_LOGS". * For Amazon VPC Route Server, the valid value is "EVENT_LOGS". * **tags** (*dict*) -- An optional list of key-value pairs to associate with the resource. For more information about tagging, see Tagging Amazon Web Services resources. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'deliverySource': { 'name': 'string', 'arn': 'string', 'resourceArns': [ 'string', ], 'service': 'string', 'logType': 'string', 'tags': { 'string': 'string' } } } **Response Structure** * *(dict) --* * **deliverySource** *(dict) --* A structure containing information about the delivery source that was just created or updated. * **name** *(string) --* The unique name of the delivery source. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery source. * **resourceArns** *(list) --* This array contains the ARN of the Amazon Web Services resource that sends logs and is represented by this delivery source. Currently, only one ARN can be in the array. * *(string) --* * **service** *(string) --* The Amazon Web Services service that is sending logs. * **logType** *(string) --* The type of log that the source is sending. For valid values for this parameter, see the documentation for the source service. * **tags** *(dict) --* The tags that have been assigned to this delivery source. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / list_tags_log_group list_tags_log_group ******************* CloudWatchLogs.Client.list_tags_log_group(**kwargs) Warning: The ListTagsLogGroup operation is on the path to deprecation. We recommend that you use ListTagsForResource instead. Lists the tags for the specified log group. Danger: This operation is deprecated and may not function as expected. This operation should not be used going forward and is only kept for the purpose of backwards compatibility. See also: AWS API Documentation **Request Syntax** response = client.list_tags_log_group( logGroupName='string' ) Parameters: **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. Return type: dict Returns: **Response Syntax** { 'tags': { 'string': 'string' } } **Response Structure** * *(dict) --* * **tags** *(dict) --* The tags for the log group. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_data_protection_policy delete_data_protection_policy ***************************** CloudWatchLogs.Client.delete_data_protection_policy(**kwargs) Deletes the data protection policy from the specified log group. For more information about data protection policies, see PutDataProtectionPolicy.
See also: AWS API Documentation **Request Syntax** response = client.delete_data_protection_policy( logGroupIdentifier='string' ) Parameters: **logGroupIdentifier** (*string*) -- **[REQUIRED]** The name or ARN of the log group that you want to delete the data protection policy for. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / list_tags_for_resource list_tags_for_resource ********************** CloudWatchLogs.Client.list_tags_for_resource(**kwargs) Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( resourceArn='string' ) Parameters: **resourceArn** (*string*) -- **[REQUIRED]** The ARN of the resource that you want to view tags for. The ARN format of a log group is "arn:aws:logs:Region:account-id :log-group:log-group-name" The ARN format of a destination is "arn:aws:logs:Region:account- id:destination:destination-name" For more information about ARN format, see CloudWatch Logs resources and operations. Return type: dict Returns: **Response Syntax** { 'tags': { 'string': 'string' } } **Response Structure** * *(dict) --* * **tags** *(dict) --* The list of tags associated with the requested resource. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_transformer put_transformer *************** CloudWatchLogs.Client.put_transformer(**kwargs) Creates or updates a *log transformer* for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID, and Region. A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class.
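As a minimal sketch of the pipeline idea (the log group name is hypothetical, and the example assumes the log events are JSON objects that contain a "level" field):

import boto3

client = boto3.client('logs')

# Processors run in order: first parse the whole message as JSON,
# then rename one of the resulting keys.
client.put_transformer(
    logGroupIdentifier='my-app-logs',
    transformerConfig=[
        {'parseJSON': {}},  # omitting 'source' parses the whole log message
        {'renameKeys': {'entries': [
            {'key': 'level', 'renameTo': 'severity', 'overwriteIfExists': False},
        ]}},
    ],
)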
You can also set up a transformer at the account level. For more information, see PutAccountPolicy. If there is both a log-group level transformer created with "PutTransformer" and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer. See also: AWS API Documentation **Request Syntax** response = client.put_transformer( logGroupIdentifier='string', transformerConfig=[ { 'addKeys': { 'entries': [ { 'key': 'string', 'value': 'string', 'overwriteIfExists': True|False }, ] }, 'copyValue': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'csv': { 'quoteCharacter': 'string', 'delimiter': 'string', 'columns': [ 'string', ], 'source': 'string' }, 'dateTimeConverter': { 'source': 'string', 'target': 'string', 'targetFormat': 'string', 'matchPatterns': [ 'string', ], 'sourceTimezone': 'string', 'targetTimezone': 'string', 'locale': 'string' }, 'deleteKeys': { 'withKeys': [ 'string', ] }, 'grok': { 'source': 'string', 'match': 'string' }, 'listToMap': { 'source': 'string', 'key': 'string', 'valueKey': 'string', 'target': 'string', 'flatten': True|False, 'flattenedElement': 'first'|'last' }, 'lowerCaseString': { 'withKeys': [ 'string', ] }, 'moveKeys': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'parseCloudfront': { 'source': 'string' }, 'parseJSON': { 'source': 'string', 'destination': 'string' }, 'parseKeyValue': { 'source': 'string', 'destination': 'string', 'fieldDelimiter': 'string', 'keyValueDelimiter': 'string', 'keyPrefix': 'string', 'nonMatchValue': 'string', 'overwriteIfExists': True|False }, 'parseRoute53': { 'source': 'string' }, 'parseToOCSF': { 'source': 'string', 'eventSource': 'CloudTrail'|'Route53Resolver'|'VPCFlow'|'EKSAudit'|'AWSWAF', 'ocsfVersion': 'V1.1' }, 'parsePostgres': { 'source': 'string' }, 'parseVPC': { 'source': 'string' }, 'parseWAF': { 'source': 'string' }, 'renameKeys': { 'entries': [ { 'key': 'string', 'renameTo': 'string', 'overwriteIfExists': True|False }, ] }, 'splitString': { 'entries': [ { 'source': 'string', 'delimiter': 'string' }, ] }, 'substituteString': { 'entries': [ { 'source': 'string', 'from': 'string', 'to': 'string' }, ] }, 'trimString': { 'withKeys': [ 'string', ] }, 'typeConverter': { 'entries': [ { 'key': 'string', 'type': 'boolean'|'integer'|'double'|'string' }, ] }, 'upperCaseString': { 'withKeys': [ 'string', ] } }, ] ) Parameters: * **logGroupIdentifier** (*string*) -- **[REQUIRED]** Specify either the name or ARN of the log group to create the transformer for. * **transformerConfig** (*list*) -- **[REQUIRED]** This structure contains the configuration of this log transformer. A log transformer is an array of processors, where each processor applies one type of transformation to the log events that are ingested. * *(dict) --* This structure contains the information about one processor in a log transformer. * **addKeys** *(dict) --* Use this parameter to include the addKeys processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of objects, where each object contains the information about one key to add to the log event. * *(dict) --* This object defines one key that will be added with the addKeys processor.
* **key** *(string) --* **[REQUIRED]** The key of the new entry to be added to the log event. * **value** *(string) --* **[REQUIRED]** The value of the new entry to be added to the log event. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the key already exists in the log event. If you omit this, the default is "false". * **copyValue** *(dict) --* Use this parameter to include the copyValue processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "CopyValueEntry" objects, where each object contains the information about one field value to copy. * *(dict) --* This object defines one value to be copied with the copyValue processor. * **source** *(string) --* **[REQUIRED]** The key to copy. * **target** *(string) --* **[REQUIRED]** The key of the field to copy the value to. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **csv** *(dict) --* Use this parameter to include the CSV processor in your transformer. * **quoteCharacter** *(string) --* The character used as a text qualifier for a single column of data. If you omit this, the double quotation mark (") character is used. * **delimiter** *(string) --* The character used to separate each column in the original comma-separated value log event. If you omit this, the processor looks for the comma "," character as the delimiter. * **columns** *(list) --* An array of names to use for the columns in the transformed log event. If you omit this, default column names ("[column_1, column_2 ...]") are used. * *(string) --* * **source** *(string) --* The path to the field in the log event that has the comma separated values to be parsed. If you omit this value, the whole log message is processed. * **dateTimeConverter** *(dict) --* Use this parameter to include the datetimeConverter processor in your transformer. * **source** *(string) --* **[REQUIRED]** The key to apply the date conversion to. * **target** *(string) --* **[REQUIRED]** The JSON field to store the result in. * **targetFormat** *(string) --* The datetime format to use for the converted data in the target field. If you omit this, the default of "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" is used. * **matchPatterns** *(list) --* **[REQUIRED]** A list of patterns to match against the "source" field. * *(string) --* * **sourceTimezone** *(string) --* The time zone of the source field. If you omit this, the default used is the UTC zone. * **targetTimezone** *(string) --* The time zone of the target field. If you omit this, the default used is the UTC zone. * **locale** *(string) --* The locale of the source field. If you omit this, the default of "locale.ROOT" is used. * **deleteKeys** *(dict) --* Use this parameter to include the deleteKeys processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The list of keys to delete. * *(string) --* * **grok** *(dict) --* Use this parameter to include the grok processor in your transformer. * **source** *(string) --* The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed. * **match** *(string) --* **[REQUIRED]** The grok pattern to match against the log event. For a list of supported grok patterns, see Supported grok patterns. * **listToMap** *(dict) --* Use this parameter to include the listToMap processor in your transformer.
* **source** *(string) --* **[REQUIRED]** The key in the log event that has a list of objects that will be converted to a map. * **key** *(string) --* **[REQUIRED]** The key of the field to be extracted as keys in the generated map * **valueKey** *(string) --* If this is specified, the values that you specify in this parameter will be extracted from the "source" objects and put into the values of the generated map. Otherwise, original objects in the source list will be put into the values of the generated map. * **target** *(string) --* The key of the field that will hold the generated map * **flatten** *(boolean) --* A Boolean value to indicate whether the list will be flattened into single items. Specify "true" to flatten the list. The default is "false" * **flattenedElement** *(string) --* If you set "flatten" to "true", use "flattenedElement" to specify which element, "first" or "last", to keep. You must specify this parameter if "flatten" is "true" * **lowerCaseString** *(dict) --* Use this parameter to include the lowerCaseString processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The array containing the keys of the fields to convert to lowercase. * *(string) --* * **moveKeys** *(dict) --* Use this parameter to include the moveKeys processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of objects, where each object contains the information about one key to move. * *(dict) --* This object defines one key that will be moved with the moveKeys processor. * **source** *(string) --* **[REQUIRED]** The key to move. * **target** *(string) --* **[REQUIRED]** The key to move to. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **parseCloudfront** *(dict) --* Use this parameter to include the parseCloudfront processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. The only value allowed for "source" is "@message". * **parseJSON** *(dict) --* Use this parameter to include the parseJSON processor in your transformer. * **source** *(string) --* Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, "store.book" * **destination** *(string) --* The location to put the parsed key value pair into. If you omit this parameter, it is placed under the root node. * **parseKeyValue** *(dict) --* Use this parameter to include the parseKeyValue processor in your transformer. * **source** *(string) --* Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, "store.book" * **destination** *(string) --* The destination field to put the extracted key-value pairs into * **fieldDelimiter** *(string) --* The field delimiter string that is used between key-value pairs in the original log events. If you omit this, the ampersand "&" character is used. * **keyValueDelimiter** *(string) --* The delimiter string to use between the key and value in each pair in the transformed log event. If you omit this, the equal "=" character is used. * **keyPrefix** *(string) --* If you want to add a prefix to all transformed keys, specify it here. * **nonMatchValue** *(string) --* A value to insert into the value field in the result, when a key-value pair is not successfully split.
* **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **parseRoute53** *(dict) --* Use this parameter to include the parseRoute53 processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. The only value allowed for "source" is "@message". * **parseToOCSF** *(dict) --* Use this parameter to convert logs into Open Cybersecurity Schema Framework (OCSF) format. * **source** *(string) --* The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed. * **eventSource** *(string) --* **[REQUIRED]** Specify the service or process that produces the log events that will be converted with this processor. * **ocsfVersion** *(string) --* **[REQUIRED]** Specify which version of the OCSF schema to use for the transformed log events. * **parsePostgres** *(dict) --* Use this parameter to include the parsePostgres processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. The only value allowed for "source" is "@message". * **parseVPC** *(dict) --* Use this parameter to include the parseVPC processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. The only value allowed for "source" is "@message". * **parseWAF** *(dict) --* Use this parameter to include the parseWAF processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. The only value allowed for "source" is "@message". * **renameKeys** *(dict) --* Use this parameter to include the renameKeys processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "RenameKeyEntry" objects, where each object contains the information about a single key to rename. * *(dict) --* This object defines one key that will be renamed with the renameKeys processor. * **key** *(string) --* **[REQUIRED]** The key to rename * **renameTo** *(string) --* **[REQUIRED]** The string to use for the new key name * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the existing value if the destination key already exists. The default is "false" * **splitString** *(dict) --* Use this parameter to include the splitString processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "SplitStringEntry" objects, where each object contains the information about one field to split. * *(dict) --* This object defines one log field that will be split with the splitString processor. * **source** *(string) --* **[REQUIRED]** The key of the field to split. * **delimiter** *(string) --* **[REQUIRED]** The separator characters to split the string entry on. * **substituteString** *(dict) --* Use this parameter to include the substituteString processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of objects, where each object contains the information about one key to match and replace.
* *(dict) --* This object defines one log field key that will be replaced using the substituteString processor. * **source** *(string) --* **[REQUIRED]** The key to modify * **from** *(string) --* **[REQUIRED]** The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped using \\ when using double quotes and with \ when using single quotes. For more information, see Class Pattern on the Oracle web site. * **to** *(string) --* **[REQUIRED]** The string to be substituted for each match of "from" * **trimString** *(dict) --* Use this parameter to include the trimString processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The array containing the keys of the fields to trim. * *(string) --* * **typeConverter** *(dict) --* Use this parameter to include the typeConverter processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "TypeConverterEntry" objects, where each object contains the information about one field to change the type of. * *(dict) --* This object defines one value type that will be converted using the typeConverter processor. * **key** *(string) --* **[REQUIRED]** The key with the value that is to be converted to a different type. * **type** *(string) --* **[REQUIRED]** The type to convert the field value to. Valid values are "integer", "double", "string" and "boolean". * **upperCaseString** *(dict) --* Use this parameter to include the upperCaseString processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The array containing the keys of the fields to convert to uppercase. * *(string) --* Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / list_log_anomaly_detectors list_log_anomaly_detectors ************************** CloudWatchLogs.Client.list_log_anomaly_detectors(**kwargs) Retrieves a list of the log anomaly detectors in the account. See also: AWS API Documentation **Request Syntax** response = client.list_log_anomaly_detectors( filterLogGroupArn='string', limit=123, nextToken='string' ) Parameters: * **filterLogGroupArn** (*string*) -- Use this to optionally filter the results to only include anomaly detectors that are associated with the specified log group. * **limit** (*integer*) -- The maximum number of items to return. If you don't specify a value, the default maximum value of 50 items is used. * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. Return type: dict Returns: **Response Syntax** { 'anomalyDetectors': [ { 'anomalyDetectorArn': 'string', 'detectorName': 'string', 'logGroupArnList': [ 'string', ], 'evaluationFrequency': 'ONE_MIN'|'FIVE_MIN'|'TEN_MIN'|'FIFTEEN_MIN'|'THIRTY_MIN'|'ONE_HOUR', 'filterPattern': 'string', 'anomalyDetectorStatus': 'INITIALIZING'|'TRAINING'|'ANALYZING'|'FAILED'|'DELETED'|'PAUSED', 'kmsKeyId': 'string', 'creationTimeStamp': 123, 'lastModifiedTimeStamp': 123, 'anomalyVisibilityTime': 123 }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **anomalyDetectors** *(list) --* An array of structures, where each structure in the array contains information about one anomaly detector.
* *(dict) --* Contains information about one anomaly detector in the account. * **anomalyDetectorArn** *(string) --* The ARN of the anomaly detector. * **detectorName** *(string) --* The name of the anomaly detector. * **logGroupArnList** *(list) --* A list of the ARNs of the log groups that this anomaly detector watches. * *(string) --* * **evaluationFrequency** *(string) --* Specifies how often the anomaly detector runs and looks for anomalies. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **anomalyDetectorStatus** *(string) --* Specifies the current status of the anomaly detector. To pause an anomaly detector, use the "enabled" parameter in the UpdateLogAnomalyDetector operation. * **kmsKeyId** *(string) --* The ARN of the KMS key assigned to this anomaly detector, if any. * **creationTimeStamp** *(integer) --* The date and time when this anomaly detector was created. * **lastModifiedTimeStamp** *(integer) --* The date and time when this anomaly detector was most recently modified. * **anomalyVisibilityTime** *(integer) --* The number of days used as the life cycle of anomalies. After this time, anomalies are automatically baselined and the anomaly detector model will treat new occurrences of a similar event as normal. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / delete_account_policy delete_account_policy ********************* CloudWatchLogs.Client.delete_account_policy(**kwargs) Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting. * To delete a data protection policy, you must have the "logs:DeleteDataProtectionPolicy" and "logs:DeleteAccountPolicy" permissions. * To delete a subscription filter policy, you must have the "logs:DeleteSubscriptionFilter" and "logs:DeleteAccountPolicy" permissions. * To delete a transformer policy, you must have the "logs:DeleteTransformer" and "logs:DeleteAccountPolicy" permissions. * To delete a field index policy, you must have the "logs:DeleteIndexPolicy" and "logs:DeleteAccountPolicy" permissions. If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries. See also: AWS API Documentation **Request Syntax** response = client.delete_account_policy( policyName='string', policyType='DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY'|'METRIC_EXTRACTION_POLICY' ) Parameters: * **policyName** (*string*) -- **[REQUIRED]** The name of the policy to delete. * **policyType** (*string*) -- **[REQUIRED]** The type of policy to delete.
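For example, a minimal sketch of deleting an account-level transformer policy; the policy name here is hypothetical, and you can look up the real names for your account with describe_account_policies:

import boto3

client = boto3.client('logs')

# A minimal sketch; 'my-transformer-policy' is a hypothetical name.
response = client.delete_account_policy(
    policyName='my-transformer-policy',
    policyType='TRANSFORMER_POLICY'
)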
Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / delete_metric_filter delete_metric_filter ******************** CloudWatchLogs.Client.delete_metric_filter(**kwargs) Deletes the specified metric filter. See also: AWS API Documentation **Request Syntax** response = client.delete_metric_filter( logGroupName='string', filterName='string' ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **filterName** (*string*) -- **[REQUIRED]** The name of the metric filter. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / untag_resource untag_resource ************** CloudWatchLogs.Client.untag_resource(**kwargs) Removes one or more tags from the specified resource. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( resourceArn='string', tagKeys=[ 'string', ] ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The ARN of the CloudWatch Logs resource that you're removing tags from. The ARN format of a log group is "arn:aws:logs:Region:account-id:log-group:log-group-name" The ARN format of a destination is "arn:aws:logs:Region:account-id:destination:destination-name" For more information about ARN format, see CloudWatch Logs resources and operations. * **tagKeys** (*list*) -- **[REQUIRED]** The list of tag keys to remove from the resource. * *(string) --* Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / start_live_tail start_live_tail *************** CloudWatchLogs.Client.start_live_tail(**kwargs) Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see Use Live Tail to view logs in near real time. The response to this operation is a response stream, over which the server sends live log events and the client receives them. The following objects are sent over the stream: * A single LiveTailSessionStart object is sent at the start of the session. * Every second, a LiveTailSessionUpdate object is sent. Each of these objects contains an array of the actual log events. If no new log events were ingested in the past second, the "LiveTailSessionUpdate" object will contain an empty array. The array of log events contained in a "LiveTailSessionUpdate" can include as many as 500 log events. If the number of log events matching the request exceeds 500 per second, the log events are sampled down to 500 log events to be included in each "LiveTailSessionUpdate" object. If your client consumes the log events slower than the server produces them, CloudWatch Logs buffers up to 10 "LiveTailSessionUpdate" events or 5000 log events, after which it starts dropping the oldest events. * A SessionStreamingException object is returned if an unknown error occurs on the server side.
* A SessionTimeoutException object is returned when the session times out, after it has been kept open for three hours. Note: The "StartLiveTail" API routes requests to "streaming-logs.Region.amazonaws.com" using SDK host prefix injection. VPC endpoint support is not available for this API. Warning: You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks. For examples of using an SDK to start a Live Tail session, see Start a Live Tail session using an Amazon Web Services SDK. See also: AWS API Documentation **Request Syntax** response = client.start_live_tail( logGroupIdentifiers=[ 'string', ], logStreamNames=[ 'string', ], logStreamNamePrefixes=[ 'string', ], logEventFilterPattern='string' ) Parameters: * **logGroupIdentifiers** (*list*) -- **[REQUIRED]** An array where each item in the array is a log group to include in the Live Tail session. Specify each log group by its ARN. If you specify an ARN, the ARN can't end with an asterisk (*). Note: You can include up to 10 log groups. * *(string) --* * **logStreamNames** (*list*) -- If you specify this parameter, then only log events in the log streams that you specify here are included in the Live Tail session. If you specify this field, you can't also specify the "logStreamNamePrefixes" field. Note: You can specify this parameter only if you specify only one log group in "logGroupIdentifiers". * *(string) --* * **logStreamNamePrefixes** (*list*) -- If you specify this parameter, then only log events in the log streams that have names that start with the prefixes that you specify here are included in the Live Tail session. If you specify this field, you can't also specify the "logStreamNames" field. Note: You can specify this parameter only if you specify only one log group in "logGroupIdentifiers". * *(string) --* * **logEventFilterPattern** (*string*) -- An optional pattern to use to filter the results to include only log events that match the pattern. For example, a filter pattern of "error 404" causes only log events that include both "error" and "404" to be included in the Live Tail stream. Regular expression filter patterns are supported. For more information about filter pattern syntax, see Filter and Pattern Syntax. Return type: dict Returns: The response of this operation contains an "EventStream" member. When iterated, the "EventStream" will yield events based on the structure below, where only one of the top level keys will be present for any given event. **Response Syntax** { 'responseStream': EventStream({ 'sessionStart': { 'requestId': 'string', 'sessionId': 'string', 'logGroupIdentifiers': [ 'string', ], 'logStreamNames': [ 'string', ], 'logStreamNamePrefixes': [ 'string', ], 'logEventFilterPattern': 'string' }, 'sessionUpdate': { 'sessionMetadata': { 'sampled': True|False }, 'sessionResults': [ { 'logStreamName': 'string', 'logGroupIdentifier': 'string', 'message': 'string', 'timestamp': 123, 'ingestionTime': 123 }, ] }, 'SessionTimeoutException': { 'message': 'string' }, 'SessionStreamingException': { 'message': 'string' } }) } **Response Structure** * *(dict) --* * **responseStream** ("EventStream") -- An object that includes the stream returned by your request. It can include both log events and exceptions.
* **sessionStart** *(dict) --* This object contains information about this Live Tail session, including the log groups included and the log stream filters, if any. * **requestId** *(string) --* The unique ID generated by CloudWatch Logs to identify this Live Tail session request. * **sessionId** *(string) --* The unique ID generated by CloudWatch Logs to identify this Live Tail session. * **logGroupIdentifiers** *(list) --* An array of the names and ARNs of the log groups included in this Live Tail session. * *(string) --* * **logStreamNames** *(list) --* If your StartLiveTail operation request included a "logStreamNames" parameter that filtered the session to only include certain log streams, these streams are listed here. * *(string) --* * **logStreamNamePrefixes** *(list) --* If your StartLiveTail operation request included a "logStreamNamePrefixes" parameter that filtered the session to only include log streams that have names that start with certain prefixes, these prefixes are listed here. * *(string) --* * **logEventFilterPattern** *(string) --* An optional pattern to filter the results to include only log events that match the pattern. For example, a filter pattern of "error 404" displays only log events that include both "error" and "404". For more information about filter pattern syntax, see Filter and Pattern Syntax. * **sessionUpdate** *(dict) --* This object contains the log events and session metadata. * **sessionMetadata** *(dict) --* This object contains the session metadata for a Live Tail session. * **sampled** *(boolean) --* If this is "true", then more than 500 log events matched the request for this update, and the "sessionResults" includes a sample of 500 of those events. If this is "false", then 500 or fewer log events matched the request for this update, so no sampling was necessary. In this case, the "sessionResults" array includes all log events that matched your request during this time. * **sessionResults** *(list) --* An array, where each member of the array includes the information for one log event in the Live Tail session. A "sessionResults" array can include as many as 500 log events. If the number of log events matching the request exceeds 500 per second, the log events are sampled down to 500 log events to be included in each "sessionUpdate" structure. * *(dict) --* This object contains the information for one log event returned in a Live Tail stream. * **logStreamName** *(string) --* The name of the log stream that ingested this log event. * **logGroupIdentifier** *(string) --* The name or ARN of the log group that ingested this log event. * **message** *(string) --* The log event message text. * **timestamp** *(integer) --* The timestamp specifying when this log event was created. * **ingestionTime** *(integer) --* The timestamp specifying when this log event was ingested into the log group. * **SessionTimeoutException** *(dict) --* This exception is returned in the stream when the Live Tail session times out. Live Tail sessions time out after three hours. * **message** *(string) --* * **SessionStreamingException** *(dict) --* This exception is returned if an unknown error occurs. 
* **message** *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.AccessDeniedException" * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / put_destination put_destination *************** CloudWatchLogs.Client.put_destination(**kwargs) Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions. A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents. Through an access policy, a destination controls what is written to it. By default, "PutDestination" does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after "PutDestination". To perform a "PutDestination" operation, you must also have the "iam:PassRole" permission. See also: AWS API Documentation **Request Syntax** response = client.put_destination( destinationName='string', targetArn='string', roleArn='string', tags={ 'string': 'string' } ) Parameters: * **destinationName** (*string*) -- **[REQUIRED]** A name for the destination. * **targetArn** (*string*) -- **[REQUIRED]** The ARN of an Amazon Kinesis stream to which to deliver matching log events. * **roleArn** (*string*) -- **[REQUIRED]** The ARN of an IAM role that grants CloudWatch Logs permissions to call the Amazon Kinesis "PutRecord" operation on the destination stream. * **tags** (*dict*) -- An optional list of key-value pairs to associate with the resource. For more information about tagging, see Tagging Amazon Web Services resources. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'destination': { 'destinationName': 'string', 'targetArn': 'string', 'roleArn': 'string', 'accessPolicy': 'string', 'arn': 'string', 'creationTime': 123 } } **Response Structure** * *(dict) --* * **destination** *(dict) --* The destination. * **destinationName** *(string) --* The name of the destination. * **targetArn** *(string) --* The Amazon Resource Name (ARN) of the physical target where the log events are delivered (for example, a Kinesis stream). * **roleArn** *(string) --* A role for impersonation, used when delivering log events to the target. * **accessPolicy** *(string) --* An IAM policy document that governs which Amazon Web Services accounts can create subscription filters against this destination. * **arn** *(string) --* The ARN of this destination. * **creationTime** *(integer) --* The creation time of the destination, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / get_waiter get_waiter ********** CloudWatchLogs.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object.
Return type: "botocore.waiter.Waiter" CloudWatchLogs / Client / start_query start_query *********** CloudWatchLogs.Client.start_query(**kwargs) Starts a query of one or more log groups using CloudWatch Logs Insights. You specify the log groups and time range to query and the query string to use. For more information, see CloudWatch Logs Insights Query Syntax. After you run a query using "StartQuery", the query results are stored by CloudWatch Logs. You can use GetQueryResults to retrieve the results of a query, using the "queryId" that "StartQuery" returns. Note: To specify the log groups to query, a "StartQuery" operation must include one of the following: * Either exactly one of the following parameters: "logGroupName", "logGroupNames", or "logGroupIdentifiers" * Or the "queryString" must include a "SOURCE" command to select log groups for the query. The "SOURCE" command can select log groups based on log group name prefix, account ID, and log class. For more information about the "SOURCE" command, see SOURCE. If you have associated a KMS key with the query results in this account, then StartQuery uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method. Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start a query in a linked source account. For more information, see CloudWatch cross-account observability. For a cross-account "StartQuery" operation, the query definition must be defined in the monitoring account. You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards. See also: AWS API Documentation **Request Syntax** response = client.start_query( queryLanguage='CWLI'|'SQL'|'PPL', logGroupName='string', logGroupNames=[ 'string', ], logGroupIdentifiers=[ 'string', ], startTime=123, endTime=123, queryString='string', limit=123 ) Parameters: * **queryLanguage** (*string*) -- Specify the query language to use for this query. The options are Logs Insights QL, OpenSearch PPL, and OpenSearch SQL. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **logGroupName** (*string*) -- The log group on which to perform the query. Note: A "StartQuery" operation must include exactly one of the following parameters: "logGroupName", "logGroupNames", or "logGroupIdentifiers". The exception is queries using the OpenSearch Service SQL query language, where you specify the log group names inside the "queryString" instead of here. * **logGroupNames** (*list*) -- The list of log groups to be queried. You can include up to 50 log groups. Note: A "StartQuery" operation must include exactly one of the following parameters: "logGroupName", "logGroupNames", or "logGroupIdentifiers". The exception is queries using the OpenSearch Service SQL query language, where you specify the log group names inside the "queryString" instead of here. * *(string) --* * **logGroupIdentifiers** (*list*) -- The list of log groups to query. You can include up to 50 log groups. You can specify them by the log group name or ARN.
If a log group that you're querying is in a source account and you're using a monitoring account, you must specify the ARN of the log group here. The query definition must also be defined in the monitoring account. If you specify an ARN, use the format arn:aws:logs:*region*:*account-id*:log-group:*log_group_name* Don't include an * at the end. A "StartQuery" operation must include exactly one of the following parameters: "logGroupName", "logGroupNames", or "logGroupIdentifiers". The exception is queries using the OpenSearch Service SQL query language, where you specify the log group names inside the "queryString" instead of here. * *(string) --* * **startTime** (*integer*) -- **[REQUIRED]** The beginning of the time range to query. The range is inclusive, so the specified start time is included in the query. Specified as epoch time, the number of seconds since "January 1, 1970, 00:00:00 UTC". * **endTime** (*integer*) -- **[REQUIRED]** The end of the time range to query. The range is inclusive, so the specified end time is included in the query. Specified as epoch time, the number of seconds since "January 1, 1970, 00:00:00 UTC". * **queryString** (*string*) -- **[REQUIRED]** The query string to use. For more information, see CloudWatch Logs Insights Query Syntax. * **limit** (*integer*) -- The maximum number of log events to return in the query. If the query string uses the "fields" command, only the specified fields and their values are returned. The default is 10,000. Return type: dict Returns: **Response Syntax** { 'queryId': 'string' } **Response Structure** * *(dict) --* * **queryId** *(string) --* The unique ID of the query. **Exceptions** * "CloudWatchLogs.Client.exceptions.MalformedQueryException" * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_integration delete_integration ****************** CloudWatchLogs.Client.delete_integration(**kwargs) Deletes the integration between CloudWatch Logs and OpenSearch Service. If your integration has active vended logs dashboards, you must specify "true" for the "force" parameter; otherwise, the operation will fail. If you delete the integration by setting "force" to "true", all your vended logs dashboards powered by OpenSearch Service will be deleted and the data that was on them will no longer be accessible. See also: AWS API Documentation **Request Syntax** response = client.delete_integration( integrationName='string', force=True|False ) Parameters: * **integrationName** (*string*) -- **[REQUIRED]** The name of the integration to delete. To find the name of your integration, use ListIntegrations. * **force** (*boolean*) -- Specify "true" to force the deletion of the integration even if vended logs dashboards currently exist. The default is "false". Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" CloudWatchLogs / Client / get_log_object get_log_object ************** CloudWatchLogs.Client.get_log_object(**kwargs) Retrieves a large logging object (LLO) and streams it back.
This API is used to fetch the content of large portions of log events that have been ingested through the PutOpenTelemetryLogs API. When log events contain fields that would cause the total event size to exceed 1MB, CloudWatch Logs automatically processes up to 10 fields, starting with the largest fields. Each field is truncated as needed to keep the total event size as close to 1MB as possible. The excess portions are stored as Large Log Objects (LLOs); these fields are processed separately, and LLO reference system fields (in the format "@ptr.$[path.to.field]") are added. The path in the reference field reflects the original JSON structure where the large field was located. For example, this could be "@ptr.$['input']['message']", "@ptr.$['AAA']['BBB']['CCC']['DDD']", "@ptr.$['AAA']", or any other path matching your log structure. See also: AWS API Documentation **Request Syntax** response = client.get_log_object( unmask=True|False, logObjectPointer='string' ) Parameters: * **unmask** (*boolean*) -- A boolean flag that indicates whether to unmask sensitive log data. When set to true, any masked or redacted data in the log object will be displayed in its original form. Default is false. * **logObjectPointer** (*string*) -- **[REQUIRED]** A pointer to the specific log object to retrieve. This is a required parameter that uniquely identifies the log object within CloudWatch Logs. The pointer is typically obtained from a previous query or filter operation. Return type: dict Returns: The response of this operation contains an "EventStream" member. When iterated, the "EventStream" will yield events based on the structure below, where only one of the top level keys will be present for any given event. **Response Syntax** { 'fieldStream': EventStream({ 'fields': { 'data': b'bytes' }, 'InternalStreamingException': { 'message': 'string' } }) } **Response Structure** * *(dict) --* The response from the GetLogObject operation. * **fieldStream** ("EventStream") -- A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields. * **fields** *(dict) --* A structure containing the extracted fields from a log event. These fields are extracted based on the log format and can be used for structured querying and analysis. * **data** *(bytes) --* The actual log data content returned in the streaming response. This contains the fields and values of the log event in a structured format that can be parsed and processed by the client. * **InternalStreamingException** *(dict) --* An internal error occurred during the streaming of log data. This exception is thrown when there's an issue with the internal streaming mechanism used by the GetLogObject operation. * **message** *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.AccessDeniedException" * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / describe_metric_filters describe_metric_filters *********************** CloudWatchLogs.Client.describe_metric_filters(**kwargs) Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.
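As an illustration, a minimal sketch that lists every metric filter for one log group; the log group name is hypothetical, and the "DescribeMetricFilters" paginator handles the "nextToken" bookkeeping:

import boto3

client = boto3.client('logs')

# A minimal sketch; '/my-app/access-logs' is a hypothetical log group name.
paginator = client.get_paginator('describe_metric_filters')
for page in paginator.paginate(logGroupName='/my-app/access-logs'):
    for metric_filter in page['metricFilters']:
        print(metric_filter['filterName'], metric_filter.get('filterPattern', ''))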
See also: AWS API Documentation **Request Syntax** response = client.describe_metric_filters( logGroupName='string', filterNamePrefix='string', nextToken='string', limit=123, metricName='string', metricNamespace='string' ) Parameters: * **logGroupName** (*string*) -- The name of the log group. * **filterNamePrefix** (*string*) -- The prefix to match. CloudWatch Logs uses the value that you set here only if you also include the "logGroupName" parameter in your request. * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of items returned. If you don't specify a value, the default is up to 50 items. * **metricName** (*string*) -- Filters results to include only those with the specified metric name. If you include this parameter in your request, you must also include the "metricNamespace" parameter. * **metricNamespace** (*string*) -- Filters results to include only those in the specified namespace. If you include this parameter in your request, you must also include the "metricName" parameter. Return type: dict Returns: **Response Syntax** { 'metricFilters': [ { 'filterName': 'string', 'filterPattern': 'string', 'metricTransformations': [ { 'metricName': 'string', 'metricNamespace': 'string', 'metricValue': 'string', 'defaultValue': 123.0, 'dimensions': { 'string': 'string' }, 'unit': 'Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None' }, ], 'creationTime': 123, 'logGroupName': 'string', 'applyOnTransformedLogs': True|False }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **metricFilters** *(list) --* The metric filters. * *(dict) --* Metric filters express how CloudWatch Logs would extract metric observations from ingested log events and transform them into metric data in a CloudWatch metric. * **filterName** *(string) --* The name of the metric filter. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **metricTransformations** *(list) --* The metric transformations. * *(dict) --* Indicates how to transform ingested log events to metric data in a CloudWatch metric. * **metricName** *(string) --* The name of the CloudWatch metric. * **metricNamespace** *(string) --* A custom namespace to contain your metric in CloudWatch. Use namespaces to group together metrics that are similar. For more information, see Namespaces. * **metricValue** *(string) --* The value to publish to the CloudWatch metric when a filter pattern matches a log event. * **defaultValue** *(float) --* (Optional) The value to emit when a filter pattern does not match a log event. This value can be null. * **dimensions** *(dict) --* The fields to use as dimensions for the metric. One metric filter can include as many as three dimensions. Warning: Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as "IPAddress" or "requestID" as dimensions. 
Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric. CloudWatch Logs disables a metric filter if it generates 1000 different name/value pairs for your specified dimensions within a certain amount of time. This helps to prevent accidental high charges. You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges. * *(string) --* * *(string) --* * **unit** *(string) --* The unit to assign to the metric. If you omit this, the unit is set as "None". * **creationTime** *(integer) --* The creation time of the metric filter, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **logGroupName** *(string) --* The name of the log group. * **applyOnTransformedLogs** *(boolean) --* This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer. If this value is "true", the metric filter is applied on the transformed version of the log events instead of the original ingested log events. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_delivery_sources describe_delivery_sources ************************* CloudWatchLogs.Client.describe_delivery_sources(**kwargs) Retrieves a list of the delivery sources that have been created in the account. See also: AWS API Documentation **Request Syntax** response = client.describe_delivery_sources( nextToken='string', limit=123 ) Parameters: * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **limit** (*integer*) -- Optionally specify the maximum number of delivery sources to return in the response. Return type: dict Returns: **Response Syntax** { 'deliverySources': [ { 'name': 'string', 'arn': 'string', 'resourceArns': [ 'string', ], 'service': 'string', 'logType': 'string', 'tags': { 'string': 'string' } }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **deliverySources** *(list) --* An array of structures. Each structure contains information about one delivery source in the account. * *(dict) --* This structure contains information about one *delivery source* in your account. A delivery source is an Amazon Web Services resource that sends logs to an Amazon Web Services destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as **Supported [V2 Permissions]** in the table at Enabling logging from Amazon Web Services services. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: * Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource. * Create a *delivery destination*, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.
* If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination. * Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery. You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. * **name** *(string) --* The unique name of the delivery source. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery source. * **resourceArns** *(list) --* This array contains the ARN of the Amazon Web Services resource that sends logs and is represented by this delivery source. Currently, only one ARN can be in the array. * *(string) --* * **service** *(string) --* The Amazon Web Services service that is sending logs. * **logType** *(string) --* The type of log that the source is sending. For valid values for this parameter, see the documentation for the source service. * **tags** *(dict) --* The tags that have been assigned to this delivery source. * *(string) --* * *(string) --* * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / put_index_policy put_index_policy **************** CloudWatchLogs.Client.put_index_policy(**kwargs) Creates or updates a *field index policy* for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see Log classes. You can use field index policies to create *field indexes* on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation. For example, suppose you have created a field index for "requestId". Then, any CloudWatch Logs Insights query on that log group that includes "requestId = value" or "requestId IN [value, value, ...]" will process fewer log events to reduce costs, and have improved performance. Each index policy has the following quotas and restrictions: * As many as 20 fields can be included in the policy. * Each field name can include as many as 100 characters. Matches of log events to the names of indexed fields are case-sensitive. For example, a field index of "RequestId" won't match a log event containing "requestId". Log group-level field index policies created with "PutIndexPolicy" override account-level field index policies created with PutAccountPolicy.
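For instance, a minimal sketch of creating a log-group-level field index policy with this operation; the log group name and the indexed fields here are hypothetical:

import json

import boto3

client = boto3.client('logs')

# A minimal sketch; the log group name and fields are hypothetical.
response = client.put_index_policy(
    logGroupIdentifier='my-log-group',
    policyDocument=json.dumps({'Fields': ['RequestId', 'TransactionId']})
)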
If you use "PutIndexPolicy" to create a field index policy for a log group, that log group uses only that policy. The log group ignores any account-wide field index policy that you might have created. See also: AWS API Documentation **Request Syntax** response = client.put_index_policy( logGroupIdentifier='string', policyDocument='string' ) Parameters: * **logGroupIdentifier** (*string*) -- **[REQUIRED]** Specify either the log group name or log group ARN to apply this field index policy to. If you specify an ARN, use the format arn:aws:logs:*region*:*account-id*:log-group:*log_group_name* Don't include an * at the end. * **policyDocument** (*string*) -- **[REQUIRED]** The index policy document, in JSON format. The following is an example of an index policy document that creates two indexes, "RequestId" and "TransactionId". ""policyDocument": "{ "Fields": [ "RequestId", "TransactionId" ] }"" The policy document must include at least one field index. For more information about the fields that can be included and other restrictions, see Field index syntax and quotas. Return type: dict Returns: **Response Syntax** { 'indexPolicy': { 'logGroupIdentifier': 'string', 'lastUpdateTime': 123, 'policyDocument': 'string', 'policyName': 'string', 'source': 'ACCOUNT'|'LOG_GROUP' } } **Response Structure** * *(dict) --* * **indexPolicy** *(dict) --* The index policy that you just created or updated. * **logGroupIdentifier** *(string) --* The ARN of the log group that this index policy applies to. * **lastUpdateTime** *(integer) --* The date and time that this index policy was most recently updated. * **policyDocument** *(string) --* The policy document for this index policy, in JSON format. * **policyName** *(string) --* The name of this policy. Responses about log group-level field index policies don't have this field, because those policies don't have names. * **source** *(string) --* This field indicates whether this is an account-level index policy or an index policy that applies only to a single log group. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_destination_policy put_destination_policy ********************** CloudWatchLogs.Client.put_destination_policy(**kwargs) Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination. See also: AWS API Documentation **Request Syntax** response = client.put_destination_policy( destinationName='string', accessPolicy='string', forceUpdate=True|False ) Parameters: * **destinationName** (*string*) -- **[REQUIRED]** A name for an existing destination. * **accessPolicy** (*string*) -- **[REQUIRED]** An IAM policy document that authorizes cross-account users to deliver their log events to the associated destination. This can be up to 5120 bytes. * **forceUpdate** (*boolean*) -- Specify true if you are updating an existing destination policy to grant permission to an organization ID instead of granting permission to individual Amazon Web Services accounts. Before you update a destination policy this way, you must first update the subscription filters in the accounts that send logs to this destination.
If you do not, the subscription filters might stop working. By specifying "true" for "forceUpdate", you are affirming that you have already updated the subscription filters. For more information, see Updating an existing cross-account subscription. If you omit this parameter, the default of "false" is used. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_account_policy put_account_policy ****************** CloudWatchLogs.Client.put_account_policy(**kwargs) Creates an account-level data protection policy, subscription filter policy, field index policy, transformer policy, or metric extraction policy that applies to all log groups or a subset of log groups in the account. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating. * To create a data protection policy, you must have the "logs:PutDataProtectionPolicy" and "logs:PutAccountPolicy" permissions. * To create a subscription filter policy, you must have the "logs:PutSubscriptionFilter" and "logs:PutAccountPolicy" permissions. * To create a transformer policy, you must have the "logs:PutTransformer" and "logs:PutAccountPolicy" permissions. * To create a field index policy, you must have the "logs:PutIndexPolicy" and "logs:PutAccountPolicy" permissions. * To create a metric extraction policy, you must have the "logs:PutMetricExtractionPolicy" and "logs:PutAccountPolicy" permissions. **Data protection policy** A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. Warning: Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked. If you use "PutAccountPolicy" to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked. By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the "logs:Unmask" permission can use a GetLogEvents or FilterLogEvents operation with the "unmask" parameter set to "true" to view the unmasked log events. Users with the "logs:Unmask" permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the "unmask" query command. For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking. To use the "PutAccountPolicy" operation for a data protection policy, you must be signed on with the "logs:PutDataProtectionPolicy" and "logs:PutAccountPolicy" permissions. The "PutAccountPolicy" operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative.
Any sensitive term specified in either policy is masked. **Subscription filter policy** A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. The following destinations are supported for subscription filters: * A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. * A Firehose data stream in the same account as the subscription policy, for same-account delivery. * A Lambda function in the same account as the subscription policy, for same-account delivery. * A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations. Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in "PolicyName". To perform a "PutAccountPolicy" subscription filter operation for any destination except a Lambda function, you must also have the "iam:PassRole" permission. **Transformer policy** Creates or updates a *log transformer policy* for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region. A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class. You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the "selectionCriteria" parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes.
**Transformer policy** Creates or updates a *log transformer policy* for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID, and Region. A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class. You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the "selectionCriteria" parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with "my-log", you can't have another transformer policy filtered to "my-logprod" or "my-logging". You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with "PutTransformer" and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer. **Field index policy** You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request IDs, session IDs, user IDs, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation. For example, suppose you have created a field index for "requestId". Then, any CloudWatch Logs Insights query on that log group that includes "requestId = value" or "requestId in [value, value, ...]" will attempt to process only the log events where the indexed field matches the specified value. Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of "RequestId" won't match a log event containing "requestId". You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the "selectionCriteria" parameter. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with "my-log", you can't have another field index policy filtered to "my-logprod" or "my-logging". If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts. If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of "PutAccountPolicy". If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with PutAccountPolicy.
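To make this concrete, the following sketch creates an account-level field index policy for a "requestId" field, scoped to log groups under a hypothetical prefix. The policy name, field name, and selection criteria string are all illustrative.

import json

import boto3

client = boto3.client('logs')

response = client.put_account_policy(
    policyName='prod-field-index',  # hypothetical policy name
    # The Fields array follows the field index policyDocument format
    # described later in this section.
    policyDocument=json.dumps({"Fields": ["requestId"]}),
    policyType='FIELD_INDEX_POLICY',
    # Scope the policy to log groups under a placeholder prefix.
    selectionCriteria='LogGroupNamePrefix IN ["/my-app/prod"]'
)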
**Metric extraction policy** A metric extraction policy controls whether CloudWatch Metrics can be created through the Embedded Metrics Format (EMF) for log groups in your account. By default, EMF metric creation is enabled for all log groups. You can use metric extraction policies to disable EMF metric creation for your entire account or specific log groups. When a policy disables EMF metric creation for a log group, log events in the EMF format are still ingested, but no CloudWatch Metrics are created from them. Warning: Creating a policy disables metrics for AWS features that use EMF to create metrics, such as CloudWatch Container Insights and CloudWatch Application Signals. To prevent turning off those features by accident, we recommend that you exclude the underlying log groups through selection criteria such as "LogGroupNamePrefix NOT IN ["/aws/containerinsights", "/aws/ecs/containerinsights", "/aws/application-signals/data"]". Each account can have either one account-level metric extraction policy that applies to all log groups, or up to 5 policies that are each scoped to a subset of log groups with the "selectionCriteria" parameter. The selection criteria supports filtering by "LogGroupName" and "LogGroupNamePrefix" using the operators "IN" and "NOT IN". You can specify up to 50 values in each "IN" or "NOT IN" list. The selection criteria can be specified in these formats: "LogGroupName IN ["log-group-1", "log-group-2"]" "LogGroupNamePrefix NOT IN ["/aws/prefix1", "/aws/prefix2"]" If you have multiple account-level metric extraction policies with selection criteria, no two of them can have overlapping criteria. For example, if you have one policy with selection criteria "LogGroupNamePrefix IN ["my-log"]", you can't have another metric extraction policy with selection criteria "LogGroupNamePrefix IN ["my-log-prod"]" or "LogGroupNamePrefix IN ["my-logging"]", as the set of log groups matching these prefixes would be a subset of the log groups matching the first policy's prefix, creating an overlap. When using "NOT IN", only one policy with this operator is allowed per account. When combining policies with "IN" and "NOT IN" operators, the overlap check ensures that policies don't have conflicting effects. Two policies with "IN" and "NOT IN" operators do not overlap if and only if every value in the "IN" policy is completely contained within some value in the "NOT IN" policy. For example: * If you have a "NOT IN" policy for the prefix "/aws/lambda", you can create an "IN" policy for the exact log group name "/aws/lambda/function1", because the set of log groups matching "/aws/lambda/function1" is a subset of the log groups matching "/aws/lambda". * If you have a "NOT IN" policy for the prefix "/aws/lambda", you cannot create an "IN" policy for the prefix "/aws", because the set of log groups matching "/aws" is not a subset of the log groups matching "/aws/lambda". See also: AWS API Documentation **Request Syntax** response = client.put_account_policy( policyName='string', policyDocument='string', policyType='DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY'|'METRIC_EXTRACTION_POLICY', scope='ALL', selectionCriteria='string' ) Parameters: * **policyName** (*string*) -- **[REQUIRED]** A name for the policy. This must be unique within the account. * **policyDocument** (*string*) -- **[REQUIRED]** Specify the policy, in JSON. **Data protection policy** A data protection policy must include two JSON blocks: * The first block must include both a "DataIdentifier" array and an "Operation" property with an "Audit" action. The "DataIdentifier" array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask. The "Operation" property with an "Audit" action is required to find the sensitive data terms. This "Audit" action must contain a "FindingsDestination" object. You can optionally use that "FindingsDestination" object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.
* The second block must include both a "DataIdentifier" array and an "Operation" property with a "Deidentify" action. The "DataIdentifier" array must exactly match the "DataIdentifier" array in the first block of the policy. The "Operation" property with the "Deidentify" action is what actually masks the data, and it must contain the ""MaskConfig": {}" object. The ""MaskConfig": {}" object must be empty. For an example data protection policy, see the **Examples** section on this page. Warning: The contents of the two "DataIdentifier" arrays must match exactly. In addition to the two JSON blocks, the "policyDocument" can also include "Name", "Description", and "Version" fields. The "Name" is different from the operation's "policyName" parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch. The JSON specified in "policyDocument" can be up to 30,720 characters long. **Subscription filter policy** A subscription filter policy can include the following attributes in a JSON block: * **DestinationArn** The ARN of the destination to deliver log events to. Supported destinations are: * A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. * A Firehose data stream in the same account as the subscription policy, for same-account delivery. * A Lambda function in the same account as the subscription policy, for same-account delivery. * A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations. * **RoleArn** The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery. * **FilterPattern** A filter pattern for subscribing to a filtered stream of log events. * **Distribution** The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to "Random" for a more even distribution. This property is only applicable when the destination is a Kinesis Data Streams data stream. **Transformer policy** A transformer policy must include one JSON block with the array of processors and their configurations. For more information about available processors, see Processors that you can use. **Field index policy** A field index policy can include the following attribute in a JSON block: * **Fields** The array of field indexes to create. It must contain at least one field index. The following is an example of an index policy document that creates two indexes, "RequestId" and "TransactionId". ""policyDocument": "{ \"Fields\": [ \"RequestId\", \"TransactionId\" ] }"" * **policyType** (*string*) -- **[REQUIRED]** The type of policy that you're creating or updating. * **scope** (*string*) -- Currently the only valid value for this parameter is "ALL", which specifies that the data protection policy applies to all log groups in the account. If you omit this parameter, the default of "ALL" is used. * **selectionCriteria** (*string*) -- Use this parameter to apply the new policy to a subset of log groups in the account. Specifying "selectionCriteria" is valid only when you specify "SUBSCRIPTION_FILTER_POLICY", "FIELD_INDEX_POLICY", or "TRANSFORMER_POLICY" for "policyType".
If "policyType" is "SUBSCRIPTION_FILTER_POLICY", the only supported "selectionCriteria" filter is "LogGroupName NOT IN []" If "policyType" is "FIELD_INDEX_POLICY" or "TRANSFORMER_POLICY", the only supported "selectionCriteria" filter is "LogGroupNamePrefix" The "selectionCriteria" string can be up to 25KB in length. The length is determined by using its UTF-8 bytes. Using the "selectionCriteria" parameter with "SUBSCRIPTION_FILTER_POLICY" is useful to help prevent infinite loops. For more information, see Log recursion prevention. Return type: dict Returns: **Response Syntax** { 'accountPolicy': { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyType': 'DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY'|'METRIC_EXTRACTION_POLICY', 'scope': 'ALL', 'selectionCriteria': 'string', 'accountId': 'string' } } **Response Structure** * *(dict) --* * **accountPolicy** *(dict) --* The account policy that you created. * **policyName** *(string) --* The name of the account policy. * **policyDocument** *(string) --* The policy document for this account policy. The JSON specified in "policyDocument" can be up to 30,720 characters. * **lastUpdatedTime** *(integer) --* The date and time that this policy was most recently updated. * **policyType** *(string) --* The type of policy for this account policy. * **scope** *(string) --* The scope of the account policy. * **selectionCriteria** *(string) --* The log group selection criteria that is used for this policy. * **accountId** *(string) --* The Amazon Web Services account ID that the policy applies to. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" CloudWatchLogs / Client / list_log_groups_for_query list_log_groups_for_query ************************* CloudWatchLogs.Client.list_log_groups_for_query(**kwargs) Returns a list of the log groups that were analyzed during a single CloudWatch Logs Insights query. This can be useful for queries that use log group name prefixes or the "filterIndex" command, because the log groups are dynamically selected in these cases. For more information about field indexes, see Create field indexes to improve query performance and reduce costs. See also: AWS API Documentation **Request Syntax** response = client.list_log_groups_for_query( queryId='string', nextToken='string', maxResults=123 ) Parameters: * **queryId** (*string*) -- **[REQUIRED]** The ID of the query to use. This query ID is from the response to your StartQuery operation. * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **maxResults** (*integer*) -- Limits the number of returned log groups to the specified number. Return type: dict Returns: **Response Syntax** { 'logGroupIdentifiers': [ 'string', ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **logGroupIdentifiers** *(list) --* An array of the names and ARNs of the log groups that were processed in the query. * *(string) --* * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. 
CloudWatchLogs / Client / get_log_events get_log_events ************** CloudWatchLogs.Client.get_log_events(**kwargs) Lists log events from the specified log stream. You can list all of the log events or filter using a time range. "GetLogEvents" is a paginated operation. Each page returned can contain up to 1 MB of log events or up to 10,000 log events. A returned page might only be partially full, or even empty. For example, if the result of a query would return 15,000 log events, the first page isn't guaranteed to have 10,000 log events even if they all fit into 1 MB. Partially full or empty pages don't necessarily mean that pagination is finished. As long as the "nextBackwardToken" or "nextForwardToken" returned is NOT equal to the "nextToken" that you passed into the API call, there might be more log events available. The token that you use depends on the direction you want to move in along the log stream. The returned tokens are never null. Note: If you set "startFromHead" to "true" and you don't include "endTime" in your request, you can end up in a situation where the pagination doesn't terminate. This can happen when new log events are being added to the target log streams faster than they are being read. This situation is a good use case for the CloudWatch Logs Live Tail feature. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. You can specify the log group to search by using either "logGroupIdentifier" or "logGroupName". You must include one of these two parameters, but you can't include both. Note: If you are using log transformation, the "GetLogEvents" operation returns only the original versions of log events, before they were transformed. To view the transformed versions, you must use a CloudWatch Logs query. See also: AWS API Documentation **Request Syntax** response = client.get_log_events( logGroupName='string', logGroupIdentifier='string', logStreamName='string', startTime=123, endTime=123, nextToken='string', limit=123, startFromHead=True|False, unmask=True|False ) Parameters: * **logGroupName** (*string*) -- The name of the log group. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logGroupIdentifier** (*string*) -- Specify either the name or ARN of the log group to view events from. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logStreamName** (*string*) -- **[REQUIRED]** The name of the log stream. * **startTime** (*integer*) -- The start of the time range, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp equal to this time or later than this time are included. Events with a timestamp earlier than this time are not included. * **endTime** (*integer*) -- The end of the time range, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp equal to or later than this time are not included.
* **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of log events returned. If you don't specify a limit, the default is as many log events as can fit in a response size of 1 MB (up to 10,000 log events). * **startFromHead** (*boolean*) -- If the value is true, the earliest log events are returned first. If the value is false, the latest log events are returned first. The default value is false. If you are using a previous "nextForwardToken" value as the "nextToken" in this operation, you must specify "true" for "startFromHead". * **unmask** (*boolean*) -- Specify "true" to display the log event fields with all sensitive data unmasked and visible. The default is "false". To use this operation with this parameter, you must be signed into an account with the "logs:Unmask" permission. Return type: dict Returns: **Response Syntax** { 'events': [ { 'timestamp': 123, 'message': 'string', 'ingestionTime': 123 }, ], 'nextForwardToken': 'string', 'nextBackwardToken': 'string' } **Response Structure** * *(dict) --* * **events** *(list) --* The events. * *(dict) --* Represents a log event. * **timestamp** *(integer) --* The time the event occurred, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **message** *(string) --* The data contained in the log event. * **ingestionTime** *(integer) --* The time the event was ingested, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **nextForwardToken** *(string) --* The token for the next set of items in the forward direction. The token expires after 24 hours. If you have reached the end of the stream, it returns the same token you passed in. * **nextBackwardToken** *(string) --* The token for the next set of items in the backward direction. The token expires after 24 hours. This token is not null. If you have reached the end of the stream, it returns the same token you passed in. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException"
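For example, here is a minimal sketch that reads a stream from its head and stops when the forward token stops changing; the log group and stream names are placeholders.

import boto3

client = boto3.client('logs')

token = None
while True:
    kwargs = {
        'logGroupName': '/my-app/prod',  # placeholder log group
        'logStreamName': 'instance-1',   # placeholder log stream
        'startFromHead': True,
    }
    if token is not None:
        kwargs['nextToken'] = token
    response = client.get_log_events(**kwargs)
    for event in response['events']:
        print(event['timestamp'], event['message'])
    # Pagination is finished when the same token is returned.
    if response['nextForwardToken'] == token:
        break
    token = response['nextForwardToken']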
CloudWatchLogs / Client / put_integration put_integration *************** CloudWatchLogs.Client.put_integration(**kwargs) Creates an integration between CloudWatch Logs and another service in this account. Currently, only integrations with OpenSearch Service are supported, and you can have only one integration in your account. Integrating with OpenSearch Service makes it possible for you to create curated vended logs dashboards, powered by OpenSearch Service analytics. For more information, see Vended log dashboards powered by Amazon OpenSearch Service. You can use this operation only to create a new integration. You can't modify an existing integration. See also: AWS API Documentation **Request Syntax** response = client.put_integration( integrationName='string', resourceConfig={ 'openSearchResourceConfig': { 'kmsKeyArn': 'string', 'dataSourceRoleArn': 'string', 'dashboardViewerPrincipals': [ 'string', ], 'applicationArn': 'string', 'retentionDays': 123 } }, integrationType='OPENSEARCH' ) Parameters: * **integrationName** (*string*) -- **[REQUIRED]** A name for the integration. * **resourceConfig** (*dict*) -- **[REQUIRED]** A structure that contains configuration information for the integration that you are creating. Note: This is a Tagged Union structure. Only one of the following top level keys can be set: "openSearchResourceConfig". * **openSearchResourceConfig** *(dict) --* This structure contains configuration details about an integration between CloudWatch Logs and OpenSearch Service. * **kmsKeyArn** *(string) --* To have the vended dashboard data encrypted with KMS instead of the CloudWatch Logs default encryption method, specify the ARN of the KMS key that you want to use. * **dataSourceRoleArn** *(string) --* **[REQUIRED]** Specify the ARN of an IAM role that CloudWatch Logs will use to create the integration. This role must have the permissions necessary to access the OpenSearch Service collection so that it can create the dashboards. For more information about the permissions needed, see Permissions that the integration needs in the CloudWatch Logs User Guide. * **dashboardViewerPrincipals** *(list) --* **[REQUIRED]** Specify the ARNs of the IAM roles and IAM users that you want to grant permission to view the dashboards. Warning: In addition to specifying these users here, you must also grant them the **CloudWatchOpenSearchDashboardAccess** IAM policy. For more information, see IAM policies for users. * *(string) --* * **applicationArn** *(string) --* If you want to use an existing OpenSearch Service application for your integration with OpenSearch Service, specify it here. If you omit this, a new application will be created. * **retentionDays** *(integer) --* **[REQUIRED]** Specify how many days you want the data derived by OpenSearch Service to be retained in the index that the dashboard refers to. This also sets the maximum time period that you can choose when viewing data in the dashboard. Choosing a longer time frame will incur additional costs. * **integrationType** (*string*) -- **[REQUIRED]** The type of integration. Currently, the only supported type is "OPENSEARCH". Return type: dict Returns: **Response Syntax** { 'integrationName': 'string', 'integrationStatus': 'PROVISIONING'|'ACTIVE'|'FAILED' } **Response Structure** * *(dict) --* * **integrationName** *(string) --* The name of the integration that you just created. * **integrationStatus** *(string) --* The status of the integration that you just created. After you create an integration, it takes a few minutes to complete. During this time, you'll see the status as "PROVISIONING". **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" CloudWatchLogs / Client / filter_log_events filter_log_events ***************** CloudWatchLogs.Client.filter_log_events(**kwargs) Lists log events from the specified log group. You can list all the log events or filter the results using one or more of the following: * A filter pattern * A time range * The log stream name, or a log stream name prefix that matches multiple log streams You must have the "logs:FilterLogEvents" permission to perform this operation. You can specify the log group to search by using either "logGroupIdentifier" or "logGroupName". You must include one of these two parameters, but you can't include both. "FilterLogEvents" is a paginated operation. Each page returned can contain up to 1 MB of log events or up to 10,000 log events. A returned page might only be partially full, or even empty. For example, if the result of a query would return 15,000 log events, the first page isn't guaranteed to have 10,000 log events even if they all fit into 1 MB. Partially full or empty pages don't necessarily mean that pagination is finished. If the results include a "nextToken", there might be more log events available. You can return these additional log events by providing the "nextToken" in a subsequent "FilterLogEvents" operation. If the results don't include a "nextToken", then pagination is finished.
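As a concrete sketch, the following loop pages through matching events until no "nextToken" is returned; the log group name and filter pattern are placeholders.

import boto3

client = boto3.client('logs')

kwargs = {
    'logGroupName': '/my-app/prod',  # placeholder log group
    'filterPattern': 'ERROR',        # match events containing ERROR
}

while True:
    response = client.filter_log_events(**kwargs)
    for event in response['events']:
        print(event['timestamp'], event['message'])
    # Pagination is finished when no nextToken is returned.
    if 'nextToken' not in response:
        break
    kwargs['nextToken'] = response['nextToken']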
Specifying the "limit" parameter only guarantees that a single page doesn't return more log events than the specified limit, but it might return fewer events than the limit. This is the expected API behavior. The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the "PutLogEvents" request. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. Note: If you are using log transformation, the "FilterLogEvents" operation returns only the original versions of log events, before they were transformed. To view the transformed versions, you must use a CloudWatch Logs query. See also: AWS API Documentation **Request Syntax** response = client.filter_log_events( logGroupName='string', logGroupIdentifier='string', logStreamNames=[ 'string', ], logStreamNamePrefix='string', startTime=123, endTime=123, filterPattern='string', nextToken='string', limit=123, interleaved=True|False, unmask=True|False ) Parameters: * **logGroupName** (*string*) -- The name of the log group to search. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logGroupIdentifier** (*string*) -- Specify either the name or ARN of the log group to view log events from. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logStreamNames** (*list*) -- Filters the results to only logs from the log streams in this list. If you specify a value for both "logStreamNames" and "logStreamNamePrefix", the action returns an "InvalidParameterException" error. * *(string) --* * **logStreamNamePrefix** (*string*) -- Filters the results to include only events from log streams that have names starting with this prefix. If you specify a value for both "logStreamNamePrefix" and "logStreamNames", the action returns an "InvalidParameterException" error. * **startTime** (*integer*) -- The start of the time range, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp before this time are not returned. * **endTime** (*integer*) -- The end of the time range, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp later than this time are not returned. * **filterPattern** (*string*) -- The filter pattern to use. For more information, see Filter and Pattern Syntax. If not provided, all the events are matched. * **nextToken** (*string*) -- The token for the next set of events to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of events to return. The default is 10,000 events.
* **interleaved** (*boolean*) -- If the value is true, the operation attempts to provide responses that contain events from multiple log streams within the log group, interleaved in a single response. If the value is false, all the matched log events in the first log stream are searched first, then those in the next log stream, and so on. **Important** As of June 17, 2019, this parameter is ignored and the value is assumed to be true. The response from this operation always interleaves events from multiple log streams within a log group. * **unmask** (*boolean*) -- Specify "true" to display the log event fields with all sensitive data unmasked and visible. The default is "false". To use this operation with this parameter, you must be signed into an account with the "logs:Unmask" permission. Return type: dict Returns: **Response Syntax** { 'events': [ { 'logStreamName': 'string', 'timestamp': 123, 'message': 'string', 'ingestionTime': 123, 'eventId': 'string' }, ], 'searchedLogStreams': [ { 'logStreamName': 'string', 'searchedCompletely': True|False }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **events** *(list) --* The matched events. * *(dict) --* Represents a matched event. * **logStreamName** *(string) --* The name of the log stream to which this event belongs. * **timestamp** *(integer) --* The time the event occurred, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **message** *(string) --* The data contained in the log event. * **ingestionTime** *(integer) --* The time the event was ingested, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **eventId** *(string) --* The ID of the event. * **searchedLogStreams** *(list) --* **Important** As of May 15, 2020, this parameter is no longer supported. This parameter returns an empty list. Indicates which log streams have been searched and whether each has been searched completely. * *(dict) --* Represents the search status of a log stream. * **logStreamName** *(string) --* The name of the log stream. * **searchedCompletely** *(boolean) --* Indicates whether all the events in this log stream were searched. * **nextToken** *(string) --* The token to use when requesting the next set of items. The token expires after 24 hours. If the results don't include a "nextToken", then pagination is finished. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_destination delete_destination ****************** CloudWatchLogs.Client.delete_destination(**kwargs) Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination. See also: AWS API Documentation **Request Syntax** response = client.delete_destination( destinationName='string' ) Parameters: **destinationName** (*string*) -- **[REQUIRED]** The name of the destination. 
Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / get_transformer get_transformer *************** CloudWatchLogs.Client.get_transformer(**kwargs) Returns the information about the log transformer associated with this log group. This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use DescribeAccountPolicies. See also: AWS API Documentation **Request Syntax** response = client.get_transformer( logGroupIdentifier='string' ) Parameters: **logGroupIdentifier** (*string*) -- **[REQUIRED]** Specify either the name or ARN of the log group to return transformer information for. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Return type: dict Returns: **Response Syntax** { 'logGroupIdentifier': 'string', 'creationTime': 123, 'lastModifiedTime': 123, 'transformerConfig': [ { 'addKeys': { 'entries': [ { 'key': 'string', 'value': 'string', 'overwriteIfExists': True|False }, ] }, 'copyValue': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'csv': { 'quoteCharacter': 'string', 'delimiter': 'string', 'columns': [ 'string', ], 'source': 'string' }, 'dateTimeConverter': { 'source': 'string', 'target': 'string', 'targetFormat': 'string', 'matchPatterns': [ 'string', ], 'sourceTimezone': 'string', 'targetTimezone': 'string', 'locale': 'string' }, 'deleteKeys': { 'withKeys': [ 'string', ] }, 'grok': { 'source': 'string', 'match': 'string' }, 'listToMap': { 'source': 'string', 'key': 'string', 'valueKey': 'string', 'target': 'string', 'flatten': True|False, 'flattenedElement': 'first'|'last' }, 'lowerCaseString': { 'withKeys': [ 'string', ] }, 'moveKeys': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'parseCloudfront': { 'source': 'string' }, 'parseJSON': { 'source': 'string', 'destination': 'string' }, 'parseKeyValue': { 'source': 'string', 'destination': 'string', 'fieldDelimiter': 'string', 'keyValueDelimiter': 'string', 'keyPrefix': 'string', 'nonMatchValue': 'string', 'overwriteIfExists': True|False }, 'parseRoute53': { 'source': 'string' }, 'parseToOCSF': { 'source': 'string', 'eventSource': 'CloudTrail'|'Route53Resolver'|'VPCFlow'|'EKSAudit'|'AWSWAF', 'ocsfVersion': 'V1.1' }, 'parsePostgres': { 'source': 'string' }, 'parseVPC': { 'source': 'string' }, 'parseWAF': { 'source': 'string' }, 'renameKeys': { 'entries': [ { 'key': 'string', 'renameTo': 'string', 'overwriteIfExists': True|False }, ] }, 'splitString': { 'entries': [ { 'source': 'string', 'delimiter': 'string' }, ] }, 'substituteString': { 'entries': [ { 'source': 'string', 'from': 'string', 'to': 'string' }, ] }, 'trimString': { 'withKeys': [ 'string', ] }, 'typeConverter': { 'entries': [ { 'key': 'string', 'type': 'boolean'|'integer'|'double'|'string' }, ] }, 'upperCaseString': { 'withKeys': [ 'string', ] } }, ] } **Response Structure** * *(dict) --* * **logGroupIdentifier** *(string) --* The ARN of the log group that you specified in your request. * **creationTime** *(integer) --* The creation time of the transformer, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. 
* **lastModifiedTime** *(integer) --* The date and time when this transformer was most recently modified, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. * **transformerConfig** *(list) --* This structure contains the configuration of the requested transformer. * *(dict) --* This structure contains the information about one processor in a log transformer. * **addKeys** *(dict) --* Use this parameter to include the addKeys processor in your transformer. * **entries** *(list) --* An array of objects, where each object contains the information about one key to add to the log event. * *(dict) --* This object defines one key that will be added with the addKeys processor. * **key** *(string) --* The key of the new entry to be added to the log event. * **value** *(string) --* The value of the new entry to be added to the log event. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the key already exists in the log event. If you omit this, the default is "false". * **copyValue** *(dict) --* Use this parameter to include the copyValue processor in your transformer. * **entries** *(list) --* An array of "CopyValueEntry" objects, where each object contains the information about one field value to copy. * *(dict) --* This object defines one value to be copied with the copyValue processor. * **source** *(string) --* The key to copy. * **target** *(string) --* The key of the field to copy the value to. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **csv** *(dict) --* Use this parameter to include the CSV processor in your transformer. * **quoteCharacter** *(string) --* The character used as a text qualifier for a single column of data. If you omit this, the double quotation mark """ character is used. * **delimiter** *(string) --* The character used to separate each column in the original comma-separated value log event. If you omit this, the processor looks for the comma "," character as the delimiter. * **columns** *(list) --* An array of names to use for the columns in the transformed log event. If you omit this, default column names ( "[column_1, column_2 ...]") are used. * *(string) --* * **source** *(string) --* The path to the field in the log event that has the comma-separated values to be parsed. If you omit this value, the whole log message is processed. * **dateTimeConverter** *(dict) --* Use this parameter to include the datetimeConverter processor in your transformer. * **source** *(string) --* The key to apply the date conversion to. * **target** *(string) --* The JSON field to store the result in. * **targetFormat** *(string) --* The datetime format to use for the converted data in the target field. If you omit this, the default of "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" is used. * **matchPatterns** *(list) --* A list of patterns to match against the "source" field. * *(string) --* * **sourceTimezone** *(string) --* The time zone of the source field. If you omit this, the default used is the UTC zone. * **targetTimezone** *(string) --* The time zone of the target field. If you omit this, the default used is the UTC zone. * **locale** *(string) --* The locale of the source field. If you omit this, the default of "locale.ROOT" is used. * **deleteKeys** *(dict) --* Use this parameter to include the deleteKeys processor in your transformer. * **withKeys** *(list) --* The list of keys to delete.
* *(string) --* * **grok** *(dict) --* Use this parameter to include the grok processor in your transformer. * **source** *(string) --* The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed. * **match** *(string) --* The grok pattern to match against the log event. For a list of supported grok patterns, see Supported grok patterns. * **listToMap** *(dict) --* Use this parameter to include the listToMap processor in your transformer. * **source** *(string) --* The key in the log event that has a list of objects that will be converted to a map. * **key** *(string) --* The key of the field to be extracted as keys in the generated map. * **valueKey** *(string) --* If this is specified, the values that you specify in this parameter will be extracted from the "source" objects and put into the values of the generated map. Otherwise, original objects in the source list will be put into the values of the generated map. * **target** *(string) --* The key of the field that will hold the generated map. * **flatten** *(boolean) --* A Boolean value to indicate whether the list will be flattened into single items. Specify "true" to flatten the list. The default is "false". * **flattenedElement** *(string) --* If you set "flatten" to "true", use "flattenedElement" to specify which element, "first" or "last", to keep. You must specify this parameter if "flatten" is "true". * **lowerCaseString** *(dict) --* Use this parameter to include the lowerCaseString processor in your transformer. * **withKeys** *(list) --* The array containing the keys of the fields to convert to lowercase. * *(string) --* * **moveKeys** *(dict) --* Use this parameter to include the moveKeys processor in your transformer. * **entries** *(list) --* An array of objects, where each object contains the information about one key to move. * *(dict) --* This object defines one key that will be moved with the moveKeys processor. * **source** *(string) --* The key to move. * **target** *(string) --* The key to move to. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **parseCloudfront** *(dict) --* Use this parameter to include the parseCloudfront processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No value other than "@message" is allowed for "source". * **parseJSON** *(dict) --* Use this parameter to include the parseJSON processor in your transformer. * **source** *(string) --* Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, "store.book". * **destination** *(string) --* The location to put the parsed key-value pair into. If you omit this parameter, it is placed under the root node. * **parseKeyValue** *(dict) --* Use this parameter to include the parseKeyValue processor in your transformer. * **source** *(string) --* Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, "store.book". * **destination** *(string) --* The destination field to put the extracted key-value pairs into. * **fieldDelimiter** *(string) --* The field delimiter string that is used between key-value pairs in the original log events. If you omit this, the ampersand "&" character is used.
* **keyValueDelimiter** *(string) --* The delimiter string to use between the key and value in each pair in the transformed log event. If you omit this, the equal "=" character is used. * **keyPrefix** *(string) --* If you want to add a prefix to all transformed keys, specify it here. * **nonMatchValue** *(string) --* A value to insert into the value field in the result, when a key-value pair is not successfully split. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **parseRoute53** *(dict) --* Use this parameter to include the parseRoute53 processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No value other than "@message" is allowed for "source". * **parseToOCSF** *(dict) --* Use this parameter to convert logs into Open Cybersecurity Schema Framework (OCSF) format. * **source** *(string) --* The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed. * **eventSource** *(string) --* Specify the service or process that produces the log events that will be converted with this processor. * **ocsfVersion** *(string) --* Specify which version of the OCSF schema to use for the transformed log events. * **parsePostgres** *(dict) --* Use this parameter to include the parsePostgres processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No value other than "@message" is allowed for "source". * **parseVPC** *(dict) --* Use this parameter to include the parseVPC processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No value other than "@message" is allowed for "source". * **parseWAF** *(dict) --* Use this parameter to include the parseWAF processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No value other than "@message" is allowed for "source". * **renameKeys** *(dict) --* Use this parameter to include the renameKeys processor in your transformer. * **entries** *(list) --* An array of "RenameKeyEntry" objects, where each object contains the information about a single key to rename. * *(dict) --* This object defines one key that will be renamed with the renameKeys processor. * **key** *(string) --* The key to rename. * **renameTo** *(string) --* The string to use for the new key name. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the existing value if the destination key already exists. The default is "false". * **splitString** *(dict) --* Use this parameter to include the splitString processor in your transformer. * **entries** *(list) --* An array of "SplitStringEntry" objects, where each object contains the information about one field to split. * *(dict) --* This object defines one log field that will be split with the splitString processor. * **source** *(string) --* The key of the field to split.
* **delimiter** *(string) --* The separator characters to split the string entry on. * **substituteString** *(dict) --* Use this parameter to include the substituteString processor in your transformer. * **entries** *(list) --* An array of objects, where each object contains the information about one key to match and replace. * *(dict) --* This object defines one log field key that will be replaced using the substituteString processor. * **source** *(string) --* The key to modify. * **from** *(string) --* The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped with \\ when using double quotes and with \ when using single quotes. For more information, see Class Pattern on the Oracle web site. * **to** *(string) --* The string to be substituted for each match of "from". * **trimString** *(dict) --* Use this parameter to include the trimString processor in your transformer. * **withKeys** *(list) --* The array containing the keys of the fields to trim. * *(string) --* * **typeConverter** *(dict) --* Use this parameter to include the typeConverter processor in your transformer. * **entries** *(list) --* An array of "TypeConverterEntry" objects, where each object contains the information about one field to change the type of. * *(dict) --* This object defines one value type that will be converted using the typeConverter processor. * **key** *(string) --* The key with the value that is to be converted to a different type. * **type** *(string) --* The type to convert the field value to. Valid values are "integer", "double", "string", and "boolean". * **upperCaseString** *(dict) --* Use this parameter to include the upperCaseString processor in your transformer. * **withKeys** *(list) --* The array containing the keys of the fields to convert to uppercase. * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / put_retention_policy put_retention_policy ******************** CloudWatchLogs.Client.put_retention_policy(**kwargs) Sets the retention of the specified log group. With a retention policy, you can configure the number of days for which to retain log events in the specified log group. Note: CloudWatch Logs doesn't immediately delete log events when they reach their retention setting. It typically takes up to 72 hours after that before log events are deleted, but in rare situations might take longer. To illustrate, imagine that you change a log group to have a longer retention setting when it contains log events that are past the expiration date, but haven't been deleted. Those log events will take up to 72 hours to be deleted after the new retention date is reached. To make sure that log data is deleted permanently, keep a log group at its lower retention setting until 72 hours after the previous retention period ends. Alternatively, wait to change the retention setting until you confirm that the earlier log events are deleted. When log events reach their retention setting they are marked for deletion. After they are marked for deletion, they do not add to your archival storage costs anymore, even if they are not actually deleted until later. These log events marked for deletion are also not included when you use an API to retrieve the "storedBytes" value to see how many bytes a log group is storing.
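For example, a minimal sketch that retains events in a hypothetical log group for 90 days (90 is one of the allowed "retentionInDays" values listed below):

import boto3

client = boto3.client('logs')

# '/my-app/prod' is a placeholder log group name.
client.put_retention_policy(
    logGroupName='/my-app/prod',
    retentionInDays=90,
)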
See also: AWS API Documentation **Request Syntax** response = client.put_retention_policy( logGroupName='string', retentionInDays=123 ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **retentionInDays** (*integer*) -- **[REQUIRED]** The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, and 3653. To set a log group so that its log events do not expire, use DeleteRetentionPolicy. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_delivery delete_delivery *************** CloudWatchLogs.Client.delete_delivery(**kwargs) Deletes a *delivery*. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does not delete the delivery destination or the delivery source. See also: AWS API Documentation **Request Syntax** response = client.delete_delivery( id='string' ) Parameters: **id** (*string*) -- **[REQUIRED]** The unique ID of the delivery to delete. You can find the ID of a delivery with the DescribeDeliveries operation. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ConflictException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / describe_queries describe_queries **************** CloudWatchLogs.Client.describe_queries(**kwargs) Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status. See also: AWS API Documentation **Request Syntax** response = client.describe_queries( logGroupName='string', status='Scheduled'|'Running'|'Complete'|'Failed'|'Cancelled'|'Timeout'|'Unknown', maxResults=123, nextToken='string', queryLanguage='CWLI'|'SQL'|'PPL' ) Parameters: * **logGroupName** (*string*) -- Limits the returned queries to only those for the specified log group. * **status** (*string*) -- Limits the returned queries to only those that have the specified status. Valid values are "Cancelled", "Complete", "Failed", "Running", and "Scheduled". * **maxResults** (*integer*) -- Limits the number of returned queries to the specified number. * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **queryLanguage** (*string*) -- Limits the returned queries to only the queries that use the specified query language. 
Return type: dict Returns: **Response Syntax** { 'queries': [ { 'queryLanguage': 'CWLI'|'SQL'|'PPL', 'queryId': 'string', 'queryString': 'string', 'status': 'Scheduled'|'Running'|'Complete'|'Failed'|'Cancelled'|'Timeout'|'Unknown', 'createTime': 123, 'logGroupName': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **queries** *(list) --* The list of queries that match the request. * *(dict) --* Information about one CloudWatch Logs Insights query that matches the request in a "DescribeQueries" operation. * **queryLanguage** *(string) --* The query language used for this query. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **queryId** *(string) --* The unique ID number of this query. * **queryString** *(string) --* The query string used in this query. * **status** *(string) --* The status of this query. Possible values are "Cancelled", "Complete", "Failed", "Running", "Scheduled", and "Unknown". * **createTime** *(integer) --* The date and time that this query was created. * **logGroupName** *(string) --* The name of the log group scanned by this query. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_retention_policy delete_retention_policy *********************** CloudWatchLogs.Client.delete_retention_policy(**kwargs) Deletes the specified retention policy. Log events do not expire if they belong to log groups without a retention policy. See also: AWS API Documentation **Request Syntax** response = client.delete_retention_policy( logGroupName='string' ) Parameters: **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_subscription_filter delete_subscription_filter ************************** CloudWatchLogs.Client.delete_subscription_filter(**kwargs) Deletes the specified subscription filter. See also: AWS API Documentation **Request Syntax** response = client.delete_subscription_filter( logGroupName='string', filterName='string' ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **filterName** (*string*) -- **[REQUIRED]** The name of the subscription filter. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / test_metric_filter test_metric_filter ****************** CloudWatchLogs.Client.test_metric_filter(**kwargs) Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern. 
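As an illustration, here is a quick sketch that tests a literal term against two sample messages; all values are made up, and only the first message should match.

import boto3

client = boto3.client('logs')

response = client.test_metric_filter(
    filterPattern='"ERROR"',  # match events that contain the literal term ERROR
    logEventMessages=[
        '2025-01-01T00:00:00Z ERROR something failed',
        '2025-01-01T00:00:01Z INFO all good',
    ],
)

for match in response['matches']:
    print(match['eventNumber'], match['eventMessage'])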
See also: AWS API Documentation **Request Syntax** response = client.test_metric_filter( filterPattern='string', logEventMessages=[ 'string', ] ) Parameters: * **filterPattern** (*string*) -- **[REQUIRED]** A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **logEventMessages** (*list*) -- **[REQUIRED]** The log event messages to test. * *(string) --* Return type: dict Returns: **Response Syntax** { 'matches': [ { 'eventNumber': 123, 'eventMessage': 'string', 'extractedValues': { 'string': 'string' } }, ] } **Response Structure** * *(dict) --* * **matches** *(list) --* The matched events. * *(dict) --* Represents a matched event. * **eventNumber** *(integer) --* The event number. * **eventMessage** *(string) --* The raw event data. * **extractedValues** *(dict) --* The values extracted from the event data by the filter. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / disassociate_kms_key disassociate_kms_key ******************** CloudWatchLogs.Client.disassociate_kms_key(**kwargs) Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account. When you use "DisassociateKmsKey", you specify either the "logGroupName" parameter or the "resourceIdentifier" parameter. You can't specify both of those parameters in the same operation. * Specify the "logGroupName" parameter to stop using the KMS key to encrypt future log events ingested and stored in the log group. Instead, they will be encrypted with the default CloudWatch Logs method. The log events that were ingested while the key was associated with the log group are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed. * Specify the "resourceIdentifier" parameter with the "query-result" resource to stop using the KMS key to encrypt the results of all future StartQuery operations in the account. They will instead be encrypted with the default CloudWatch Logs method. The results from queries that ran while the key was associated with the account are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed. It can take up to 5 minutes for this operation to take effect. See also: AWS API Documentation **Request Syntax** response = client.disassociate_kms_key( logGroupName='string', resourceIdentifier='string' ) Parameters: * **logGroupName** (*string*) -- The name of the log group. In your "DisassociateKmsKey" operation, you must specify either the "resourceIdentifier" parameter or the "logGroupName" parameter, but you can't specify both. * **resourceIdentifier** (*string*) -- Specifies the target for this operation. You must specify one of the following: * Specify the ARN of a log group to stop having CloudWatch Logs use the KMS key to encrypt log events that are ingested and stored by that log group. After you run this operation, CloudWatch Logs encrypts ingested log events with the default CloudWatch Logs method. The log group ARN must be in the following format. Replace *REGION* and *ACCOUNT_ID* with your Region and account ID.
"arn:aws:logs:REGION:ACCOUNT_ID :log-group:LOG_GROUP_NAME" * Specify the following ARN to stop using this key to encrypt the results of future StartQuery operations in this account. Replace *REGION* and *ACCOUNT_ID* with your Region and account ID. "arn:aws:logs:REGION:ACCOUNT_ID:query-result:*" In your "DisssociateKmsKey" operation, you must specify either the "resourceIdentifier" parameter or the "logGroup" parameter, but you can't specify both. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / create_log_group create_log_group **************** CloudWatchLogs.Client.create_log_group(**kwargs) Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account. You must use the following guidelines when naming a log group: * Log group names must be unique within a Region for an Amazon Web Services account. * Log group names can be between 1 and 512 characters long. * Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign) * Log group names can't start with the string "aws/" When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy. If you associate an KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested. If you attempt to associate a KMS key with the log group but the KMS key does not exist or the KMS key is disabled, you receive an "InvalidParameterException" error. Warning: CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys. See also: AWS API Documentation **Request Syntax** response = client.create_log_group( logGroupName='string', kmsKeyId='string', tags={ 'string': 'string' }, logGroupClass='STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY' ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** A name for the log group. * **kmsKeyId** (*string*) -- The Amazon Resource Name (ARN) of the KMS key to use when encrypting log data. For more information, see Amazon Resource Names. * **tags** (*dict*) -- The key-value pairs to use for the tags. You can grant users access to certain log groups while preventing them from accessing other log groups. To do so, tag your groups and use IAM policies that refer to those tags. To assign tags when you create a log group, you must have either the "logs:TagResource" or "logs:TagLogGroup" permission. For more information about tagging, see Tagging Amazon Web Services resources. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags. * *(string) --* * *(string) --* * **logGroupClass** (*string*) -- Use this parameter to specify the log group class for this log group. There are three classes: * The "Standard" log class supports all CloudWatch Logs features. 
* The "Infrequent Access" log class supports a subset of CloudWatch Logs features and incurs lower costs. * Use the "Delivery" log class only for delivering Lambda logs to store in Amazon S3 or Amazon Data Firehose. Log events in log groups in the Delivery class are kept in CloudWatch Logs for only one day. This log class doesn't offer rich CloudWatch Logs capabilities such as CloudWatch Logs Insights queries. If you omit this parameter, the default of "STANDARD" is used. Warning: The value of "logGroupClass" can't be changed after a log group is created. For details about the features supported by each class, see Log classes Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceAlreadyExistsException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_query_definitions describe_query_definitions ************************** CloudWatchLogs.Client.describe_query_definitions(**kwargs) This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions. You can retrieve query definitions from the current account or from a source account that is linked to the current account. You can use the "queryDefinitionNamePrefix" parameter to limit the results to only the query definitions that have names that start with a certain string. See also: AWS API Documentation **Request Syntax** response = client.describe_query_definitions( queryLanguage='CWLI'|'SQL'|'PPL', queryDefinitionNamePrefix='string', maxResults=123, nextToken='string' ) Parameters: * **queryLanguage** (*string*) -- The query language used for this query. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **queryDefinitionNamePrefix** (*string*) -- Use this parameter to filter your results to only the query definitions that have names that start with the prefix you specify. * **maxResults** (*integer*) -- Limits the number of returned query definitions to the specified number. * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. Return type: dict Returns: **Response Syntax** { 'queryDefinitions': [ { 'queryLanguage': 'CWLI'|'SQL'|'PPL', 'queryDefinitionId': 'string', 'name': 'string', 'queryString': 'string', 'lastModified': 123, 'logGroupNames': [ 'string', ] }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **queryDefinitions** *(list) --* The list of query definitions that match your request. * *(dict) --* This structure contains details about a saved CloudWatch Logs Insights query definition. * **queryLanguage** *(string) --* The query language used for this query. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **queryDefinitionId** *(string) --* The unique ID of the query definition. * **name** *(string) --* The name of the query definition. * **queryString** *(string) --* The query string to use for this definition. For more information, see CloudWatch Logs Insights Query Syntax. * **lastModified** *(integer) --* The date that the query definition was most recently modified. * **logGroupNames** *(list) --* If this query definition contains a list of log groups that it is limited to, that list appears here. 
* *(string) --* * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_log_events put_log_events ************** CloudWatchLogs.Client.put_log_events(**kwargs) Uploads a batch of log events to the specified log stream. Warning: The sequence token is now ignored in "PutLogEvents" actions. "PutLogEvents" actions are always accepted and never return "InvalidSequenceTokenException" or "DataAlreadyAcceptedException" even if the sequence token is not valid. You can use parallel "PutLogEvents" actions on the same log stream. The batch of events must satisfy the following constraints: * The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event. * Events more than 2 hours in the future are rejected while processing remaining valid events. * Events older than 14 days or preceding the log group's retention period are rejected while processing remaining valid events. * The log events in the batch must be in chronological order by their timestamp. The timestamp is the time that the event occurred, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". (In Amazon Web Services Tools for PowerShell and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format: "yyyy-mm-ddThh:mm:ss". For example, "2017-09-15T13:45:30".) * A batch of log events in a single request must be in chronological order. Otherwise, the operation fails. * Each log event can be no larger than 1 MB. * The maximum number of log events in a batch is 10,000. * For valid events (within 14 days in the past to 2 hours in the future), the time span in a single batch cannot exceed 24 hours. Otherwise, the operation fails. Warning: The quota of five requests per second per log stream has been removed. Instead, "PutLogEvents" actions are throttled based on a per-second per-account quota. You can request an increase to the per-second throttling quota by using the Service Quotas service. If a call to "PutLogEvents" returns "UnrecognizedClientException", the most likely cause is a non-valid Amazon Web Services access key ID or secret key. See also: AWS API Documentation **Request Syntax** response = client.put_log_events( logGroupName='string', logStreamName='string', logEvents=[ { 'timestamp': 123, 'message': 'string' }, ], sequenceToken='string', entity={ 'keyAttributes': { 'string': 'string' }, 'attributes': { 'string': 'string' } } ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **logStreamName** (*string*) -- **[REQUIRED]** The name of the log stream. * **logEvents** (*list*) -- **[REQUIRED]** The log events. * *(dict) --* Represents a log event, which is a record of activity that was recorded by the application or resource being monitored. * **timestamp** *(integer) --* **[REQUIRED]** The time the event occurred, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **message** *(string) --* **[REQUIRED]** The raw event message. Each log event can be no larger than 1 MB. * **sequenceToken** (*string*) -- The sequence token obtained from the response of the previous "PutLogEvents" call. Warning: The "sequenceToken" parameter is now ignored in "PutLogEvents" actions.
"PutLogEvents" actions are now accepted and never return "InvalidSequenceTokenException" or "DataAlreadyAcceptedException" even if the sequence token is not valid. * **entity** (*dict*) -- The entity associated with the log events. * **keyAttributes** *(dict) --* The attributes of the entity which identify the specific entity, as a list of key-value pairs. Entities with the same "keyAttributes" are considered to be the same entity. There are five allowed attributes (key names): "Type", "ResourceType", "Identifier" "Name", and "Environment". For details about how to use the key attributes, see How to add related information to telemetry in the *CloudWatch User Guide*. * *(string) --* * *(string) --* * **attributes** *(dict) --* Additional attributes of the entity that are not used to specify the identity of the entity. A list of key-value pairs. For details about how to use the attributes, see How to add related information to telemetry in the *CloudWatch User Guide*. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'nextSequenceToken': 'string', 'rejectedLogEventsInfo': { 'tooNewLogEventStartIndex': 123, 'tooOldLogEventEndIndex': 123, 'expiredLogEventEndIndex': 123 }, 'rejectedEntityInfo': { 'errorType': 'InvalidEntity'|'InvalidTypeValue'|'InvalidKeyAttributes'|'InvalidAttributes'|'EntitySizeTooLarge'|'UnsupportedLogGroupType'|'MissingRequiredFields' } } **Response Structure** * *(dict) --* * **nextSequenceToken** *(string) --* The next sequence token. Warning: This field has been deprecated.The sequence token is now ignored in "PutLogEvents" actions. "PutLogEvents" actions are always accepted even if the sequence token is not valid. You can use parallel "PutLogEvents" actions on the same log stream and you do not need to wait for the response of a previous "PutLogEvents" action to obtain the "nextSequenceToken" value. * **rejectedLogEventsInfo** *(dict) --* The rejected events. * **tooNewLogEventStartIndex** *(integer) --* The index of the first log event that is too new. This field is inclusive. * **tooOldLogEventEndIndex** *(integer) --* The index of the last log event that is too old. This field is exclusive. * **expiredLogEventEndIndex** *(integer) --* The expired log events. * **rejectedEntityInfo** *(dict) --* Information about why the entity is rejected when calling "PutLogEvents". Only returned when the entity is rejected. Note: When the entity is rejected, the events may still be accepted. * **errorType** *(string) --* The type of error that caused the rejection of the entity when calling "PutLogEvents". **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.InvalidSequenceTokenException" * "CloudWatchLogs.Client.exceptions.DataAlreadyAcceptedException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.UnrecognizedClientException" CloudWatchLogs / Client / get_log_anomaly_detector get_log_anomaly_detector ************************ CloudWatchLogs.Client.get_log_anomaly_detector(**kwargs) Retrieves information about the log anomaly detector that you specify. The KMS key ARN detected is valid. See also: AWS API Documentation **Request Syntax** response = client.get_log_anomaly_detector( anomalyDetectorArn='string' ) Parameters: **anomalyDetectorArn** (*string*) -- **[REQUIRED]** The ARN of the anomaly detector to retrieve information about. 
You can find the ARNs of log anomaly detectors in your account by using the ListLogAnomalyDetectors operation. Return type: dict Returns: **Response Syntax** { 'detectorName': 'string', 'logGroupArnList': [ 'string', ], 'evaluationFrequency': 'ONE_MIN'|'FIVE_MIN'|'TEN_MIN'|'FIFTEEN_MIN'|'THIRTY_MIN'|'ONE_HOUR', 'filterPattern': 'string', 'anomalyDetectorStatus': 'INITIALIZING'|'TRAINING'|'ANALYZING'|'FAILED'|'DELETED'|'PAUSED', 'kmsKeyId': 'string', 'creationTimeStamp': 123, 'lastModifiedTimeStamp': 123, 'anomalyVisibilityTime': 123 } **Response Structure** * *(dict) --* * **detectorName** *(string) --* The name of the log anomaly detector. * **logGroupArnList** *(list) --* An array containing the ARN of each log group associated with this anomaly detector. * *(string) --* * **evaluationFrequency** *(string) --* Specifies how often the anomaly detector runs and looks for anomalies. Set this value according to the frequency that the log group receives new logs. For example, if the log group receives new log events every 10 minutes, then setting "evaluationFrequency" to "FIFTEEN_MIN" might be appropriate. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **anomalyDetectorStatus** *(string) --* Specifies whether the anomaly detector is currently active. To change its status, use the "enabled" parameter in the UpdateLogAnomalyDetector operation. * **kmsKeyId** *(string) --* The ARN of the KMS key assigned to this anomaly detector, if any. * **creationTimeStamp** *(integer) --* The date and time when this anomaly detector was created. * **lastModifiedTimeStamp** *(integer) --* The date and time when this anomaly detector was most recently modified. * **anomalyVisibilityTime** *(integer) --* The number of days used as the life cycle of anomalies. After this time, anomalies are automatically baselined and the anomaly detector model will treat new occurrences of similar events as normal. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" CloudWatchLogs / Client / describe_field_indexes describe_field_indexes ********************** CloudWatchLogs.Client.describe_field_indexes(**kwargs) Returns a list of field indexes listed in the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy. See also: AWS API Documentation **Request Syntax** response = client.describe_field_indexes( logGroupIdentifiers=[ 'string', ], nextToken='string' ) Parameters: * **logGroupIdentifiers** (*list*) -- **[REQUIRED]** An array containing the names or ARNs of the log groups that you want to retrieve field indexes for. * *(string) --* * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. Return type: dict Returns: **Response Syntax** { 'fieldIndexes': [ { 'logGroupIdentifier': 'string', 'fieldIndexName': 'string', 'lastScanTime': 123, 'firstEventTime': 123, 'lastEventTime': 123 }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **fieldIndexes** *(list) --* An array containing the field index information.
* *(dict) --* This structure describes one log event field that is used as an index in at least one index policy in this account. * **logGroupIdentifier** *(string) --* If this field index appears in an index policy that applies only to a single log group, the ARN of that log group is displayed here. * **fieldIndexName** *(string) --* The string that this field index matches. * **lastScanTime** *(integer) --* The most recent time that CloudWatch Logs scanned ingested log events to search for this field index to improve the speed of future CloudWatch Logs Insights queries that search for this field index. * **firstEventTime** *(integer) --* The time and date of the earliest log event that matches this field index, after the index policy that contains it was created. * **lastEventTime** *(integer) --* The time and date of the most recent log event that matches this field index. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_transformer delete_transformer ****************** CloudWatchLogs.Client.delete_transformer(**kwargs) Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted. After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events. See also: AWS API Documentation **Request Syntax** response = client.delete_transformer( logGroupIdentifier='string' ) Parameters: **logGroupIdentifier** (*string*) -- **[REQUIRED]** Specify either the name or ARN of the log group to delete the transformer for. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / delete_delivery_destination_policy delete_delivery_destination_policy ********************************** CloudWatchLogs.Client.delete_delivery_destination_policy(**kwargs) Deletes a delivery destination policy. For more information about these policies, see PutDeliveryDestinationPolicy. See also: AWS API Documentation **Request Syntax** response = client.delete_delivery_destination_policy( deliveryDestinationName='string' ) Parameters: **deliveryDestinationName** (*string*) -- **[REQUIRED]** The name of the delivery destination that you want to delete the policy for. 
Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ConflictException" CloudWatchLogs / Client / close close ***** CloudWatchLogs.Client.close() Closes underlying endpoint connections. CloudWatchLogs / Client / put_resource_policy put_resource_policy ******************* CloudWatchLogs.Client.put_resource_policy(**kwargs) Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per Amazon Web Services Region. See also: AWS API Documentation **Request Syntax** response = client.put_resource_policy( policyName='string', policyDocument='string', resourceArn='string', expectedRevisionId='string' ) Parameters: * **policyName** (*string*) -- Name of the new policy. This parameter is required. * **policyDocument** (*string*) -- Details of the new policy, including the identity of the principal that is enabled to put logs to this account. This is formatted as a JSON string. This parameter is required. The following example creates a resource policy enabling the Route 53 service to put DNS query logs into the specified log group. Replace "logArn" with the ARN of your CloudWatch Logs resource, such as a log group or log stream. CloudWatch Logs also supports aws:SourceArn and aws:SourceAccount condition context keys. In the example resource policy, you would replace the value of "SourceArn" with the resource making the call from Route 53 to CloudWatch Logs. You would also replace the value of "SourceAccount" with the Amazon Web Services account ID making that call. "{ "Version": "2012-10-17", "Statement": [ { "Sid": "Route53LogsToCloudWatchLogs", "Effect": "Allow", "Principal": { "Service": [ "route53.amazonaws.com" ] }, "Action": "logs:PutLogEvents", "Resource": "logArn", "Condition": { "ArnLike": { "aws:SourceArn": "myRoute53ResourceArn" }, "StringEquals": { "aws:SourceAccount": "myAwsAccountId" } } } ] }" * **resourceArn** (*string*) -- The ARN of the CloudWatch Logs resource to which the resource policy needs to be added or attached. Currently only supports LogGroup ARN. * **expectedRevisionId** (*string*) -- The expected revision ID of the resource policy. Required when "resourceArn" is provided to prevent concurrent modifications. Use "null" when creating a resource policy for the first time. Return type: dict Returns: **Response Syntax** { 'resourcePolicy': { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyScope': 'ACCOUNT'|'RESOURCE', 'resourceArn': 'string', 'revisionId': 'string' }, 'revisionId': 'string' } **Response Structure** * *(dict) --* * **resourcePolicy** *(dict) --* The new policy. * **policyName** *(string) --* The name of the resource policy. * **policyDocument** *(string) --* The details of the policy. * **lastUpdatedTime** *(integer) --* Timestamp showing when this policy was last updated, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **policyScope** *(string) --* Specifies scope of the resource policy. Valid values are ACCOUNT or RESOURCE. * **resourceArn** *(string) --* The ARN of the CloudWatch Logs resource to which the resource policy is attached. Only populated for resource-scoped policies. * **revisionId** *(string) --* The revision ID of the resource policy.
Only populated for resource-scoped policies. * **revisionId** *(string) --* The revision ID of the created or updated resource policy. Only returned for resource-scoped policies. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_metric_filter put_metric_filter ***************** CloudWatchLogs.Client.put_metric_filter(**kwargs) Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through PutLogEvents. The maximum number of metric filters that can be associated with a log group is 100. Using regular expressions in filter patterns is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in filter patterns, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail. When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created. Warning: Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as "IPAddress" or "requestID" as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric. CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within one hour. You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges. See also: AWS API Documentation **Request Syntax** response = client.put_metric_filter( logGroupName='string', filterName='string', filterPattern='string', metricTransformations=[ { 'metricName': 'string', 'metricNamespace': 'string', 'metricValue': 'string', 'defaultValue': 123.0, 'dimensions': { 'string': 'string' }, 'unit': 'Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None' }, ], applyOnTransformedLogs=True|False ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **filterName** (*string*) -- **[REQUIRED]** A name for the metric filter. * **filterPattern** (*string*) -- **[REQUIRED]** A filter pattern for extracting metric data out of ingested log events. * **metricTransformations** (*list*) -- **[REQUIRED]** A collection of information that defines how metric data gets emitted. * *(dict) --* Indicates how to transform ingested log events to metric data in a CloudWatch metric. * **metricName** *(string) --* **[REQUIRED]** The name of the CloudWatch metric. * **metricNamespace** *(string) --* **[REQUIRED]** A custom namespace to contain your metric in CloudWatch.
Use namespaces to group together metrics that are similar. For more information, see Namespaces. * **metricValue** *(string) --* **[REQUIRED]** The value to publish to the CloudWatch metric when a filter pattern matches a log event. * **defaultValue** *(float) --* (Optional) The value to emit when a filter pattern does not match a log event. This value can be null. * **dimensions** *(dict) --* The fields to use as dimensions for the metric. One metric filter can include as many as three dimensions. Warning: Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as "IPAddress" or "requestID" as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric. CloudWatch Logs disables a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within a certain amount of time. This helps to prevent accidental high charges. You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges. * *(string) --* * *(string) --* * **unit** *(string) --* The unit to assign to the metric. If you omit this, the unit is set as "None". * **applyOnTransformedLogs** (*boolean*) -- This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer. If the log group uses either a log-group level or account-level transformer, and you specify "true", the metric filter will be applied on the transformed version of the log events instead of the original ingested log events. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / tag_log_group tag_log_group ************* CloudWatchLogs.Client.tag_log_group(**kwargs) Warning: The TagLogGroup operation is on the path to deprecation. We recommend that you use TagResource instead. Adds or updates the specified tags for the specified log group. To list the tags for a log group, use ListTagsForResource. To remove tags, use UntagResource. For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the *Amazon CloudWatch Logs User Guide*. CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the "aws:Resource/key-name" or "aws:TagKeys" condition keys. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags. Danger: This operation is deprecated and may not function as expected. This operation should not be used going forward and is only kept for the purpose of backward compatibility. See also: AWS API Documentation **Request Syntax** response = client.tag_log_group( logGroupName='string', tags={ 'string': 'string' } ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **tags** (*dict*) -- **[REQUIRED]** The key-value pairs to use for the tags.
* *(string) --* * *(string) --* Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.InvalidParameterException" CloudWatchLogs / Client / get_delivery_destination get_delivery_destination ************************ CloudWatchLogs.Client.get_delivery_destination(**kwargs) Retrieves complete information about one delivery destination. See also: AWS API Documentation **Request Syntax** response = client.get_delivery_destination( name='string' ) Parameters: **name** (*string*) -- **[REQUIRED]** The name of the delivery destination that you want to retrieve. Return type: dict Returns: **Response Syntax** { 'deliveryDestination': { 'name': 'string', 'arn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'outputFormat': 'json'|'plain'|'w3c'|'raw'|'parquet', 'deliveryDestinationConfiguration': { 'destinationResourceArn': 'string' }, 'tags': { 'string': 'string' } } } **Response Structure** * *(dict) --* * **deliveryDestination** *(dict) --* A structure containing information about the delivery destination. * **name** *(string) --* The name of this delivery destination. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery destination. * **deliveryDestinationType** *(string) --* Displays whether this delivery destination is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **outputFormat** *(string) --* The format of the logs that are sent to this delivery destination. * **deliveryDestinationConfiguration** *(dict) --* A structure that contains the ARN of the Amazon Web Services resource that will receive the logs. * **destinationResourceArn** *(string) --* The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. * **tags** *(dict) --* The tags that have been assigned to this delivery destination. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / describe_export_tasks describe_export_tasks ********************* CloudWatchLogs.Client.describe_export_tasks(**kwargs) Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status. See also: AWS API Documentation **Request Syntax** response = client.describe_export_tasks( taskId='string', statusCode='CANCELLED'|'COMPLETED'|'FAILED'|'PENDING'|'PENDING_CANCEL'|'RUNNING', nextToken='string', limit=123 ) Parameters: * **taskId** (*string*) -- The ID of the export task. Specifying a task ID filters the results to one or zero export tasks. * **statusCode** (*string*) -- The status code of the export task. Specifying a status code filters the results to zero or more export tasks. * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of items returned. If you don't specify a value, the default is up to 50 items. 
Return type: dict Returns: **Response Syntax** { 'exportTasks': [ { 'taskId': 'string', 'taskName': 'string', 'logGroupName': 'string', 'from': 123, 'to': 123, 'destination': 'string', 'destinationPrefix': 'string', 'status': { 'code': 'CANCELLED'|'COMPLETED'|'FAILED'|'PENDING'|'PENDING_CANCEL'|'RUNNING', 'message': 'string' }, 'executionInfo': { 'creationTime': 123, 'completionTime': 123 } }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **exportTasks** *(list) --* The export tasks. * *(dict) --* Represents an export task. * **taskId** *(string) --* The ID of the export task. * **taskName** *(string) --* The name of the export task. * **logGroupName** *(string) --* The name of the log group from which logs data was exported. * **from** *(integer) --* The start time, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp before this time are not exported. * **to** *(integer) --* The end time, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". Events with a timestamp later than this time are not exported. * **destination** *(string) --* The name of the S3 bucket to which the log data was exported. * **destinationPrefix** *(string) --* The prefix that was used as the start of the Amazon S3 key for every object exported. * **status** *(dict) --* The status of the export task. * **code** *(string) --* The status code of the export task. * **message** *(string) --* The status message related to the status code. * **executionInfo** *(dict) --* Execution information about the export task. * **creationTime** *(integer) --* The creation time of the export task, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **completionTime** *(integer) --* The completion time of the export task, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_subscription_filters describe_subscription_filters ***************************** CloudWatchLogs.Client.describe_subscription_filters(**kwargs) Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name. See also: AWS API Documentation **Request Syntax** response = client.describe_subscription_filters( logGroupName='string', filterNamePrefix='string', nextToken='string', limit=123 ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **filterNamePrefix** (*string*) -- The prefix to match. If you don't specify a value, no prefix filter is applied. * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of items returned. If you don't specify a value, the default is up to 50 items.
Return type: dict Returns: **Response Syntax** { 'subscriptionFilters': [ { 'filterName': 'string', 'logGroupName': 'string', 'filterPattern': 'string', 'destinationArn': 'string', 'roleArn': 'string', 'distribution': 'Random'|'ByLogStream', 'applyOnTransformedLogs': True|False, 'creationTime': 123 }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **subscriptionFilters** *(list) --* The subscription filters. * *(dict) --* Represents a subscription filter. * **filterName** *(string) --* The name of the subscription filter. * **logGroupName** *(string) --* The name of the log group. * **filterPattern** *(string) --* A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message. * **destinationArn** *(string) --* The Amazon Resource Name (ARN) of the destination. * **roleArn** *(string) --* * **distribution** *(string) --* The method used to distribute log data to the destination, which can be either random or grouped by log stream. * **applyOnTransformedLogs** *(boolean) --* This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer. If this value is "true", the subscription filter is applied on the transformed version of the log events instead of the original ingested log events. * **creationTime** *(integer) --* The creation time of the subscription filter, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / put_query_definition put_query_definition ******************** CloudWatchLogs.Client.put_query_definition(**kwargs) Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights. To update a query definition, specify its "queryDefinitionId" in your request. The values of "name", "queryString", and "logGroupNames" are changed to the values that you specify in your update operation. No current values are retained from the current query definition. For example, imagine updating a current query definition that includes log groups. If you don't specify the "logGroupNames" parameter in your update operation, the query definition changes to contain no log groups. You must have the "logs:PutQueryDefinition" permission to be able to perform this operation. See also: AWS API Documentation **Request Syntax** response = client.put_query_definition( queryLanguage='CWLI'|'SQL'|'PPL', name='string', queryDefinitionId='string', logGroupNames=[ 'string', ], queryString='string', clientToken='string' ) Parameters: * **queryLanguage** (*string*) -- Specify the query language to use for this query. The options are Logs Insights QL, OpenSearch PPL, and OpenSearch SQL. For more information about the query languages that CloudWatch Logs supports, see Supported query languages. * **name** (*string*) -- **[REQUIRED]** A name for the query definition. If you are saving numerous query definitions, we recommend that you name them. 
This way, you can find the ones you want by using the first part of the name as a filter in the "queryDefinitionNamePrefix" parameter of DescribeQueryDefinitions. * **queryDefinitionId** (*string*) -- If you are updating a query definition, use this parameter to specify the ID of the query definition that you want to update. You can use DescribeQueryDefinitions to retrieve the IDs of your saved query definitions. If you are creating a query definition, do not specify this parameter. CloudWatch generates a unique ID for the new query definition and includes it in the response to this operation. * **logGroupNames** (*list*) -- Use this parameter to include specific log groups as part of your query definition. If your query uses the OpenSearch Service query language, you specify the log group names inside the "querystring" instead of here. If you are updating an existing query definition for the Logs Insights QL or OpenSearch Service PPL and you omit this parameter, then the updated definition will contain no log groups. * *(string) --* * **queryString** (*string*) -- **[REQUIRED]** The query string to use for this definition. For more information, see CloudWatch Logs Insights Query Syntax. * **clientToken** (*string*) -- Used as an idempotency token, to avoid returning an exception if the service receives the same request twice because of a network error. This field is autopopulated if not provided. Return type: dict Returns: **Response Syntax** { 'queryDefinitionId': 'string' } **Response Structure** * *(dict) --* * **queryDefinitionId** *(string) --* The ID of the query definition. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / create_log_stream create_log_stream ***************** CloudWatchLogs.Client.create_log_stream(**kwargs) Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored. There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on "CreateLogStream" operations, after which transactions are throttled. You must use the following guidelines when naming a log stream: * Log stream names must be unique within the log group. * Log stream names can be between 1 and 512 characters long. * Don't use ':' (colon) or '*' (asterisk) characters. See also: AWS API Documentation **Request Syntax** response = client.create_log_stream( logGroupName='string', logStreamName='string' ) Parameters: * **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. * **logStreamName** (*string*) -- **[REQUIRED]** The name of the log stream. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceAlreadyExistsException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / delete_query_definition delete_query_definition *********************** CloudWatchLogs.Client.delete_query_definition(**kwargs) Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.
Each "DeleteQueryDefinition" operation can delete one query definition. You must have the "logs:DeleteQueryDefinition" permission to be able to perform this operation. See also: AWS API Documentation **Request Syntax** response = client.delete_query_definition( queryDefinitionId='string' ) Parameters: **queryDefinitionId** (*string*) -- **[REQUIRED]** The ID of the query definition that you want to delete. You can use DescribeQueryDefinitions to retrieve the IDs of your saved query definitions. Return type: dict Returns: **Response Syntax** { 'success': True|False } **Response Structure** * *(dict) --* * **success** *(boolean) --* A value of TRUE indicates that the operation succeeded. FALSE indicates that the operation failed. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / get_data_protection_policy get_data_protection_policy ************************** CloudWatchLogs.Client.get_data_protection_policy(**kwargs) Returns information about a log group data protection policy. See also: AWS API Documentation **Request Syntax** response = client.get_data_protection_policy( logGroupIdentifier='string' ) Parameters: **logGroupIdentifier** (*string*) -- **[REQUIRED]** The name or ARN of the log group that contains the data protection policy that you want to see. Return type: dict Returns: **Response Syntax** { 'logGroupIdentifier': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123 } **Response Structure** * *(dict) --* * **logGroupIdentifier** *(string) --* The log group name or ARN that you specified in your request. * **policyDocument** *(string) --* The data protection policy document for this log group. * **lastUpdatedTime** *(integer) --* The date and time that this policy was most recently updated. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / list_log_groups list_log_groups *************** CloudWatchLogs.Client.list_log_groups(**kwargs) Returns a list of log groups in the Region in your account. If you are performing this action in a monitoring account, you can choose to also return log groups from source accounts that are linked to the monitoring account. For more information about using cross- account observability to set up monitoring accounts and source accounts, see CloudWatch cross-account observability. You can optionally filter the list by log group class and by using regular expressions in your request to match strings in the log group names. This operation is paginated. By default, your first use of this operation returns 50 results, and includes a token to use in a subsequent operation to return more results. See also: AWS API Documentation **Request Syntax** response = client.list_log_groups( logGroupNamePattern='string', logGroupClass='STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY', includeLinkedAccounts=True|False, accountIdentifiers=[ 'string', ], nextToken='string', limit=123 ) Parameters: * **logGroupNamePattern** (*string*) -- Use this parameter to limit the returned log groups to only those with names that match the pattern that you specify. 
This parameter is a regular expression that can match prefixes and substrings, and supports wildcard matching and matching multiple patterns, as in the following examples. * Use "^" to match log group names by prefix. * For a substring match, specify the string to match. All matches are case sensitive. * To match multiple patterns, separate them with a "|" as in the example "^/aws/lambda|discovery" You can specify as many as five different regular expression patterns in this field, each of which must be between 3 and 24 characters. You can include the "^" symbol as many as five times, and include the "|" symbol as many as four times. * **logGroupClass** (*string*) -- Use this parameter to limit the results to only those log groups in the specified log group class. If you omit this parameter, log groups of all classes can be returned. * **includeLinkedAccounts** (*boolean*) -- If you are using a monitoring account, set this to "true" to have the operation return log groups in the accounts listed in "accountIdentifiers". If this parameter is set to "true" and "accountIdentifiers" contains a null value, the operation returns all log groups in the monitoring account and all log groups in all source accounts that are linked to the monitoring account. The default for this parameter is "false". * **accountIdentifiers** (*list*) -- When "includeLinkedAccounts" is set to "true", use this parameter to specify the list of accounts to search. You can specify as many as 20 account IDs in the array. * *(string) --* * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **limit** (*integer*) -- The maximum number of log groups to return. If you omit this parameter, the default is up to 50 log groups. Return type: dict Returns: **Response Syntax** { 'logGroups': [ { 'logGroupName': 'string', 'logGroupArn': 'string', 'logGroupClass': 'STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **logGroups** *(list) --* An array of structures, where each structure contains the information about one log group. * *(dict) --* This structure contains information about one log group in your account. * **logGroupName** *(string) --* The name of the log group. * **logGroupArn** *(string) --* The Amazon Resource Name (ARN) of the log group. * **logGroupClass** *(string) --* The log group class for this log group. For details about the features supported by each log group class, see Log classes. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_deliveries describe_deliveries ******************* CloudWatchLogs.Client.describe_deliveries(**kwargs) Retrieves a list of the deliveries that have been created in the account. A *delivery* is a connection between a delivery source and a delivery destination. A delivery source represents an Amazon Web Services resource that sends logs to a delivery destination. The destination can be CloudWatch Logs, Amazon S3, Firehose, or X-Ray. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.
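For example, you might collect every delivery in the account by following "nextToken" until it is absent (a minimal sketch; the "limit" value is arbitrary, and the parameter names match the request syntax that follows):

deliveries = []
kwargs = {'limit': 10}
while True:
    response = client.describe_deliveries(**kwargs)
    deliveries.extend(response.get('deliveries', []))
    if 'nextToken' not in response:
        break
    kwargs['nextToken'] = response['nextToken']

A "DescribeDeliveries" paginator is also available through "get_paginator", which handles this token loop for you.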
See also: AWS API Documentation **Request Syntax** response = client.describe_deliveries( nextToken='string', limit=123 ) Parameters: * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **limit** (*integer*) -- Optionally specify the maximum number of deliveries to return in the response. Return type: dict Returns: **Response Syntax** { 'deliveries': [ { 'id': 'string', 'arn': 'string', 'deliverySourceName': 'string', 'deliveryDestinationArn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'recordFields': [ 'string', ], 'fieldDelimiter': 'string', 's3DeliveryConfiguration': { 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False }, 'tags': { 'string': 'string' } }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **deliveries** *(list) --* An array of structures. Each structure contains information about one delivery in the account. * *(dict) --* This structure contains information about one *delivery* in your account. A delivery is a connection between a logical *delivery source* and a logical *delivery destination*. For more information, see CreateDelivery. To update an existing delivery configuration, use UpdateDeliveryConfiguration. * **id** *(string) --* The unique ID that identifies this delivery in your account. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery. * **deliverySourceName** *(string) --* The name of the delivery source that is associated with this delivery. * **deliveryDestinationArn** *(string) --* The ARN of the delivery destination that is associated with this delivery. * **deliveryDestinationType** *(string) --* Displays whether the delivery destination associated with this delivery is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **recordFields** *(list) --* The record fields used in this delivery. * *(string) --* * **fieldDelimiter** *(string) --* The field delimiter that is used between record fields when the final output format of a delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** *(dict) --* This structure contains delivery configurations that apply only when the delivery destination resource is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **tags** *(dict) --* The tags that have been assigned to this delivery. * *(string) --* * *(string) --* * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / delete_log_group delete_log_group **************** CloudWatchLogs.Client.delete_log_group(**kwargs) Deletes the specified log group and permanently deletes all the archived log events associated with the log group. 
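Because deletion is permanent, you might treat an already-missing log group as success when cleaning up (a minimal sketch; the log group name is hypothetical):

try:
    client.delete_log_group(logGroupName='/my-app/staging')
except client.exceptions.ResourceNotFoundException:
    # The log group doesn't exist, so there is nothing left to delete.
    pass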
See also: AWS API Documentation **Request Syntax** response = client.delete_log_group( logGroupName='string' ) Parameters: **logGroupName** (*string*) -- **[REQUIRED]** The name of the log group. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_index_policies describe_index_policies *********************** CloudWatchLogs.Client.describe_index_policies(**kwargs) Returns the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy. If a specified log group has a log-group level index policy, that policy is returned by this operation. If a specified log group doesn't have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation. To find information about only account-level policies, use DescribeAccountPolicies instead. See also: AWS API Documentation **Request Syntax** response = client.describe_index_policies( logGroupIdentifiers=[ 'string', ], nextToken='string' ) Parameters: * **logGroupIdentifiers** (*list*) -- **[REQUIRED]** An array containing the name or ARN of the log group that you want to retrieve field index policies for. * *(string) --* * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. Return type: dict Returns: **Response Syntax** { 'indexPolicies': [ { 'logGroupIdentifier': 'string', 'lastUpdateTime': 123, 'policyDocument': 'string', 'policyName': 'string', 'source': 'ACCOUNT'|'LOG_GROUP' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **indexPolicies** *(list) --* An array containing the field index policies. * *(dict) --* This structure contains information about one field index policy in this account. * **logGroupIdentifier** *(string) --* The ARN of the log group that this index policy applies to. * **lastUpdateTime** *(integer) --* The date and time that this index policy was most recently updated. * **policyDocument** *(string) --* The policy document for this index policy, in JSON format. * **policyName** *(string) --* The name of this policy. Responses about log group-level field index policies don't have this field, because those policies don't have names. * **source** *(string) --* This field indicates whether this is an account-level index policy or an index policy that applies only to a single log group. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / tag_resource tag_resource ************ CloudWatchLogs.Client.tag_resource(**kwargs) Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource. Currently, the only CloudWatch Logs resources that can be tagged are log groups and destinations. Tags can help you organize and categorize your resources. 
You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values. Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters. You can use the "TagResource" action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag. You can associate as many as 50 tags with a CloudWatch Logs resource. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( resourceArn='string', tags={ 'string': 'string' } ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The ARN of the resource that you're adding tags to. The ARN format of a log group is "arn:aws:logs:Region:account-id:log-group:log-group-name" The ARN format of a destination is "arn:aws:logs:Region:account-id:destination:destination-name" For more information about ARN format, see CloudWatch Logs resources and operations. * **tags** (*dict*) -- **[REQUIRED]** The list of key-value pairs to associate with the resource. * *(string) --* * *(string) --* Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.TooManyTagsException" CloudWatchLogs / Client / get_log_group_fields get_log_group_fields ******************** CloudWatchLogs.Client.get_log_group_fields(**kwargs) Returns a list of the fields that are included in log events in the specified log group. Includes the percentage of log events that contain each field. The search is limited to a time period that you specify. You can specify the log group to search by using either "logGroupIdentifier" or "logGroupName". You must specify one of these parameters, but you can't specify both. In the results, fields that start with "@" are fields generated by CloudWatch Logs. For example, "@timestamp" is the timestamp of each log event. For more information about the fields that are generated by CloudWatch Logs, see Supported Logs and Discovered Fields. The response results are sorted by the frequency percentage, starting with the highest percentage. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. See also: AWS API Documentation **Request Syntax** response = client.get_log_group_fields( logGroupName='string', time=123, logGroupIdentifier='string' ) Parameters: * **logGroupName** (*string*) -- The name of the log group to search. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **time** (*integer*) -- The time to set as the center of the query. If you specify "time", the 8 minutes before and 8 minutes after this time are searched. If you omit "time", the most recent 15 minutes up to the current time are searched. The "time" value is specified as epoch time, which is the number of seconds since "January 1, 1970, 00:00:00 UTC". * **logGroupIdentifier** (*string*) -- Specify either the name or ARN of the log group to view.
If the log group is in a source account and you are using a monitoring account, you must specify the ARN. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. Return type: dict Returns: **Response Syntax** { 'logGroupFields': [ { 'name': 'string', 'percent': 123 }, ] } **Response Structure** * *(dict) --* * **logGroupFields** *(list) --* The array of fields found in the query. Each object in the array contains the name of the field, along with the percentage of time it appeared in the log events that were queried. * *(dict) --* The fields contained in log events found by a "GetLogGroupFields" operation, along with the percentage of queried log events in which each field appears. * **name** *(string) --* The name of a log field. * **percent** *(integer) --* The percentage of log events queried that contained the field. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_configuration_templates describe_configuration_templates ******************************** CloudWatchLogs.Client.describe_configuration_templates(**kwargs) Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries. For more information about deliveries, see CreateDelivery. See also: AWS API Documentation **Request Syntax** response = client.describe_configuration_templates( service='string', logTypes=[ 'string', ], resourceTypes=[ 'string', ], deliveryDestinationTypes=[ 'S3'|'CWL'|'FH'|'XRAY', ], nextToken='string', limit=123 ) Parameters: * **service** (*string*) -- Use this parameter to filter the response to include only the configuration templates that apply to the Amazon Web Services service that you specify here. * **logTypes** (*list*) -- Use this parameter to filter the response to include only the configuration templates that apply to the log types that you specify here. * *(string) --* * **resourceTypes** (*list*) -- Use this parameter to filter the response to include only the configuration templates that apply to the resource types that you specify here. * *(string) --* * **deliveryDestinationTypes** (*list*) -- Use this parameter to filter the response to include only the configuration templates that apply to the delivery destination types that you specify here. * *(string) --* * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **limit** (*integer*) -- Use this parameter to limit the number of configuration templates that are returned in the response. 
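Because the response is paginated, a common pattern is to use the DescribeConfigurationTemplates paginator rather than managing "nextToken" yourself. A minimal sketch (the filter value is an illustrative assumption):

import boto3

logs = boto3.client('logs')

# Walk every configuration template that targets S3 delivery destinations
# and print the suffix-path variables each log source supports.
paginator = logs.get_paginator('describe_configuration_templates')
for page in paginator.paginate(deliveryDestinationTypes=['S3']):
    for template in page['configurationTemplates']:
        print(template['service'], template['logType'],
              template.get('allowedSuffixPathFields', []))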
Return type: dict Returns: **Response Syntax** { 'configurationTemplates': [ { 'service': 'string', 'logType': 'string', 'resourceType': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'defaultDeliveryConfigValues': { 'recordFields': [ 'string', ], 'fieldDelimiter': 'string', 's3DeliveryConfiguration': { 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False } }, 'allowedFields': [ { 'name': 'string', 'mandatory': True|False }, ], 'allowedOutputFormats': [ 'json'|'plain'|'w3c'|'raw'|'parquet', ], 'allowedActionForAllowVendedLogsDeliveryForResource': 'string', 'allowedFieldDelimiters': [ 'string', ], 'allowedSuffixPathFields': [ 'string', ] }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **configurationTemplates** *(list) --* An array of objects, where each object describes one configuration template that matches the filters that you specified in the request. * *(dict) --* A structure containing information about the default settings and available settings that you can use to configure a delivery or a delivery destination. * **service** *(string) --* A string specifying which service this configuration template applies to. For more information about supported services, see Enable logging from Amazon Web Services services. * **logType** *(string) --* A string specifying which log type this configuration template applies to. * **resourceType** *(string) --* A string specifying which resource type this configuration template applies to. * **deliveryDestinationType** *(string) --* A string specifying which destination type this configuration template applies to. * **defaultDeliveryConfigValues** *(dict) --* A mapping that displays the default value of each property within a delivery's configuration, if it is not specified in the request. * **recordFields** *(list) --* The default record fields that will be delivered when a list of record fields is not provided in a CreateDelivery operation. * *(string) --* * **fieldDelimiter** *(string) --* The default field delimiter that is used in a CreateDelivery operation when the field delimiter is not specified in that operation. The field delimiter is used only when the final output delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** *(dict) --* The delivery parameters that are used when you create a delivery to a delivery destination that is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **allowedFields** *(list) --* The allowed fields that a caller can use in the "recordFields" parameter of a CreateDelivery or UpdateDeliveryConfiguration operation. * *(dict) --* A structure that represents a valid record field header and whether it is mandatory. * **name** *(string) --* The name to use when specifying this record field in a CreateDelivery or UpdateDeliveryConfiguration operation.
* **mandatory** *(boolean) --* If this is "true", the record field must be present in the "recordFields" parameter provided to a CreateDelivery or UpdateDeliveryConfiguration operation. * **allowedOutputFormats** *(list) --* The list of delivery destination output formats that are supported by this log source. * *(string) --* * **allowedActionForAllowVendedLogsDeliveryForResource** *(string) --* The action permissions that a caller needs to have to be able to successfully create a delivery source on the desired resource type when calling PutDeliverySource. * **allowedFieldDelimiters** *(list) --* The valid values that a caller can use as field delimiters when calling CreateDelivery or UpdateDeliveryConfiguration on a delivery that delivers in "Plain", "W3C", or "Raw" format. * *(string) --* * **allowedSuffixPathFields** *(list) --* The list of variable fields that can be used in the suffix path of a delivery that delivers to an S3 bucket. * *(string) --* * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / describe_log_groups describe_log_groups ******************* CloudWatchLogs.Client.describe_log_groups(**kwargs) Returns information about log groups. You can return all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name. CloudWatch Logs doesn't support IAM policies that control access to the "DescribeLogGroups" action by using the "aws:ResourceTag/key-name" condition key. Other CloudWatch Logs actions do support the use of the "aws:ResourceTag/key-name" condition key to control access. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. See also: AWS API Documentation **Request Syntax** response = client.describe_log_groups( accountIdentifiers=[ 'string', ], logGroupNamePrefix='string', logGroupNamePattern='string', nextToken='string', limit=123, includeLinkedAccounts=True|False, logGroupClass='STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY', logGroupIdentifiers=[ 'string', ] ) Parameters: * **accountIdentifiers** (*list*) -- When "includeLinkedAccounts" is set to "true", use this parameter to specify the list of accounts to search. You can specify as many as 20 account IDs in the array. * *(string) --* * **logGroupNamePrefix** (*string*) -- The prefix to match. Note: "logGroupNamePrefix" and "logGroupNamePattern" are mutually exclusive. Only one of these parameters can be passed. * **logGroupNamePattern** (*string*) -- If you specify a string for this parameter, the operation returns only log groups that have names that match the string based on a case-sensitive substring search. For example, if you specify "DataLogs", log groups named "DataLogs", "aws/DataLogs", and "GroupDataLogs" would match, but "datalogs", "Data/log/s" and "Groupdata" would not match. If you specify "logGroupNamePattern" in your request, then only "arn", "creationTime", and "logGroupName" are included in the response.
Note: "logGroupNamePattern" and "logGroupNamePrefix" are mutually exclusive. Only one of these parameters can be passed. * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of items returned. If you don't specify a value, the default is up to 50 items. * **includeLinkedAccounts** (*boolean*) -- If you are using a monitoring account, set this to "true" to have the operation return log groups in the accounts listed in "accountIdentifiers". If this parameter is set to "true" and "accountIdentifiers" contains a null value, the operation returns all log groups in the monitoring account and all log groups in all source accounts that are linked to the monitoring account. The default for this parameter is "false". * **logGroupClass** (*string*) -- Use this parameter to limit the results to only those log groups in the specified log group class. If you omit this parameter, log groups of all classes can be returned. Specifies the log group class for this log group. There are three classes: * The "Standard" log class supports all CloudWatch Logs features. * The "Infrequent Access" log class supports a subset of CloudWatch Logs features and incurs lower costs. * Use the "Delivery" log class only for delivering Lambda logs to store in Amazon S3 or Amazon Data Firehose. Log events in log groups in the Delivery class are kept in CloudWatch Logs for only one day. This log class doesn't offer rich CloudWatch Logs capabilities such as CloudWatch Logs Insights queries. For details about the features supported by each class, see Log classes * **logGroupIdentifiers** (*list*) -- Use this array to filter the list of log groups returned. If you specify this parameter, the only other filter that you can choose to specify is "includeLinkedAccounts". If you are using this operation in a monitoring account, you can specify the ARNs of log groups in source accounts and in the monitoring account itself. If you are using this operation in an account that is not a cross-account monitoring account, you can specify only log group names in the same account as the operation. * *(string) --* Return type: dict Returns: **Response Syntax** { 'logGroups': [ { 'logGroupName': 'string', 'creationTime': 123, 'retentionInDays': 123, 'metricFilterCount': 123, 'arn': 'string', 'storedBytes': 123, 'kmsKeyId': 'string', 'dataProtectionStatus': 'ACTIVATED'|'DELETED'|'ARCHIVED'|'DISABLED', 'inheritedProperties': [ 'ACCOUNT_DATA_PROTECTION', ], 'logGroupClass': 'STANDARD'|'INFREQUENT_ACCESS'|'DELIVERY', 'logGroupArn': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **logGroups** *(list) --* An array of structures, where each structure contains the information about one log group. * *(dict) --* Represents a log group. * **logGroupName** *(string) --* The name of the log group. * **creationTime** *(integer) --* The creation time of the log group, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. * **retentionInDays** *(integer) --* The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, and 3653. To set a log group so that its log events do not expire, use DeleteRetentionPolicy. * **metricFilterCount** *(integer) --* The number of metric filters. * **arn** *(string) --* The Amazon Resource Name (ARN) of the log group. 
This version of the ARN includes a trailing ":*" after the log group name. Use this version to refer to the ARN in IAM policies when specifying permissions for most API actions. The exception is when specifying permissions for TagResource, UntagResource, and ListTagsForResource. The permissions for those three actions require the ARN version that doesn't include a trailing ":*". * **storedBytes** *(integer) --* The number of bytes stored. * **kmsKeyId** *(string) --* The Amazon Resource Name (ARN) of the KMS key to use when encrypting log data. * **dataProtectionStatus** *(string) --* Displays whether this log group has a protection policy, or whether it had one in the past. For more information, see PutDataProtectionPolicy. * **inheritedProperties** *(list) --* Displays all the properties that this log group has inherited from account-level settings. * *(string) --* * **logGroupClass** *(string) --* This specifies the log group class for this log group. There are three classes: * The "Standard" log class supports all CloudWatch Logs features. * The "Infrequent Access" log class supports a subset of CloudWatch Logs features and incurs lower costs. * Use the "Delivery" log class only for delivering Lambda logs to store in Amazon S3 or Amazon Data Firehose. Log events in log groups in the Delivery class are kept in CloudWatch Logs for only one day. This log class doesn't offer rich CloudWatch Logs capabilities such as CloudWatch Logs Insights queries. For details about the features supported by the Standard and Infrequent Access classes, see Log classes. * **logGroupArn** *(string) --* The Amazon Resource Name (ARN) of the log group. This version of the ARN doesn't include a trailing ":*" after the log group name. Use this version to refer to the ARN in the following situations: * In the "logGroupIdentifier" input field in many CloudWatch Logs APIs. * In the "resourceArn" field in tagging APIs. * In IAM policies, when specifying permissions for TagResource, UntagResource, and ListTagsForResource. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / describe_resource_policies describe_resource_policies ************************** CloudWatchLogs.Client.describe_resource_policies(**kwargs) Lists the resource policies in this account. See also: AWS API Documentation **Request Syntax** response = client.describe_resource_policies( nextToken='string', limit=123, resourceArn='string', policyScope='ACCOUNT'|'RESOURCE' ) Parameters: * **nextToken** (*string*) -- The token for the next set of items to return. The token expires after 24 hours. * **limit** (*integer*) -- The maximum number of resource policies to be displayed with one call of this API. * **resourceArn** (*string*) -- The ARN of the CloudWatch Logs resource for which to query the resource policy. * **policyScope** (*string*) -- Specifies the scope of the resource policy. Valid values are "ACCOUNT" or "RESOURCE". When not specified, defaults to "ACCOUNT".
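As an illustration, here is a minimal sketch that collects every account-scoped policy by following "nextToken" manually (a DescribeResourcePolicies paginator is also available via "get_paginator"; the loop structure itself is an assumption, not prescribed by the API):

import boto3

logs = boto3.client('logs')

# Accumulate all account-scoped resource policies, one page at a time.
policies = []
kwargs = {'policyScope': 'ACCOUNT'}
while True:
    response = logs.describe_resource_policies(**kwargs)
    policies.extend(response.get('resourcePolicies', []))
    if 'nextToken' not in response:
        break
    kwargs['nextToken'] = response['nextToken']

print(f"found {len(policies)} account-scoped policies")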
Return type: dict Returns: **Response Syntax** { 'resourcePolicies': [ { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyScope': 'ACCOUNT'|'RESOURCE', 'resourceArn': 'string', 'revisionId': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **resourcePolicies** *(list) --* The resource policies that exist in this account. * *(dict) --* A policy enabling one or more entities to put logs to a log group in this account. * **policyName** *(string) --* The name of the resource policy. * **policyDocument** *(string) --* The details of the policy. * **lastUpdatedTime** *(integer) --* Timestamp showing when this policy was last updated, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **policyScope** *(string) --* Specifies the scope of the resource policy. Valid values are ACCOUNT or RESOURCE. * **resourceArn** *(string) --* The ARN of the CloudWatch Logs resource to which the resource policy is attached. Only populated for resource-scoped policies. * **revisionId** *(string) --* The revision ID of the resource policy. Only populated for resource-scoped policies. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / get_log_record get_log_record ************** CloudWatchLogs.Client.get_log_record(**kwargs) Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the "logRecordPointer" retrieved only a subset of fields. Fields are returned as field name/field value pairs. The full unparsed log event is returned within "@message". See also: AWS API Documentation **Request Syntax** response = client.get_log_record( logRecordPointer='string', unmask=True|False ) Parameters: * **logRecordPointer** (*string*) -- **[REQUIRED]** The pointer corresponding to the log event record you want to retrieve. You get this from the response of a "GetQueryResults" operation. In that response, the value of the "@ptr" field for a log event is the value to use as "logRecordPointer" to retrieve that complete log event record. * **unmask** (*boolean*) -- Specify "true" to display the log event fields with all sensitive data unmasked and visible. The default is "false". To use this operation with this parameter, you must be signed into an account with the "logs:Unmask" permission. Return type: dict Returns: **Response Syntax** { 'logRecord': { 'string': 'string' } } **Response Structure** * *(dict) --* * **logRecord** *(dict) --* The requested log event, as a JSON string. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.LimitExceededException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / get_delivery get_delivery ************ CloudWatchLogs.Client.get_delivery(**kwargs) Returns complete information about one logical *delivery*. A delivery is a connection between a delivery source and a delivery destination. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services. You need to specify the delivery "id" in this operation. You can find the IDs of the deliveries in your account with the DescribeDeliveries operation. See also: AWS API Documentation **Request Syntax** response = client.get_delivery( id='string' ) Parameters: **id** (*string*) -- **[REQUIRED]** The ID of the delivery that you want to retrieve. Return type: dict Returns: **Response Syntax** { 'delivery': { 'id': 'string', 'arn': 'string', 'deliverySourceName': 'string', 'deliveryDestinationArn': 'string', 'deliveryDestinationType': 'S3'|'CWL'|'FH'|'XRAY', 'recordFields': [ 'string', ], 'fieldDelimiter': 'string', 's3DeliveryConfiguration': { 'suffixPath': 'string', 'enableHiveCompatiblePath': True|False }, 'tags': { 'string': 'string' } } } **Response Structure** * *(dict) --* * **delivery** *(dict) --* A structure that contains information about the delivery. * **id** *(string) --* The unique ID that identifies this delivery in your account. * **arn** *(string) --* The Amazon Resource Name (ARN) that uniquely identifies this delivery. * **deliverySourceName** *(string) --* The name of the delivery source that is associated with this delivery. * **deliveryDestinationArn** *(string) --* The ARN of the delivery destination that is associated with this delivery. * **deliveryDestinationType** *(string) --* Displays whether the delivery destination associated with this delivery is CloudWatch Logs, Amazon S3, Firehose, or X-Ray. * **recordFields** *(list) --* The record fields used in this delivery. * *(string) --* * **fieldDelimiter** *(string) --* The field delimiter that is used between record fields when the final output format of a delivery is in "Plain", "W3C", or "Raw" format. * **s3DeliveryConfiguration** *(dict) --* This structure contains delivery configurations that apply only when the delivery destination resource is an S3 bucket. * **suffixPath** *(string) --* This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, use the DescribeConfigurationTemplates operation and check the "allowedSuffixPathFields" field in the response. * **enableHiveCompatiblePath** *(boolean) --* This parameter causes the S3 objects that contain delivered logs to use a prefix structure that allows for integration with Apache Hive. * **tags** *(dict) --* The tags that have been assigned to this delivery. * *(string) --* * *(string) --* **Exceptions** * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ServiceQuotaExceededException" * "CloudWatchLogs.Client.exceptions.ThrottlingException" CloudWatchLogs / Client / delete_resource_policy delete_resource_policy ********************** CloudWatchLogs.Client.delete_resource_policy(**kwargs) Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account. 
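For example, a minimal sketch that revokes one account-level policy by name (the policy name is a placeholder):

import boto3

logs = boto3.client('logs')

# Revoking the policy removes the permissions it granted; subsequent
# PutLogEvents calls that relied on it will be denied.
logs.delete_resource_policy(policyName='my-ingestion-policy')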
See also: AWS API Documentation **Request Syntax** response = client.delete_resource_policy( policyName='string', resourceArn='string', expectedRevisionId='string' ) Parameters: * **policyName** (*string*) -- The name of the policy to be revoked. This parameter is required. * **resourceArn** (*string*) -- The ARN of the CloudWatch Logs resource for which the resource policy needs to be deleted. * **expectedRevisionId** (*string*) -- The expected revision ID of the resource policy. Required when deleting a resource-scoped policy to prevent concurrent modifications. Returns: None **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.OperationAbortedException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / test_transformer test_transformer **************** CloudWatchLogs.Client.test_transformer(**kwargs) Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions. See also: AWS API Documentation **Request Syntax** response = client.test_transformer( transformerConfig=[ { 'addKeys': { 'entries': [ { 'key': 'string', 'value': 'string', 'overwriteIfExists': True|False }, ] }, 'copyValue': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'csv': { 'quoteCharacter': 'string', 'delimiter': 'string', 'columns': [ 'string', ], 'source': 'string' }, 'dateTimeConverter': { 'source': 'string', 'target': 'string', 'targetFormat': 'string', 'matchPatterns': [ 'string', ], 'sourceTimezone': 'string', 'targetTimezone': 'string', 'locale': 'string' }, 'deleteKeys': { 'withKeys': [ 'string', ] }, 'grok': { 'source': 'string', 'match': 'string' }, 'listToMap': { 'source': 'string', 'key': 'string', 'valueKey': 'string', 'target': 'string', 'flatten': True|False, 'flattenedElement': 'first'|'last' }, 'lowerCaseString': { 'withKeys': [ 'string', ] }, 'moveKeys': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'parseCloudfront': { 'source': 'string' }, 'parseJSON': { 'source': 'string', 'destination': 'string' }, 'parseKeyValue': { 'source': 'string', 'destination': 'string', 'fieldDelimiter': 'string', 'keyValueDelimiter': 'string', 'keyPrefix': 'string', 'nonMatchValue': 'string', 'overwriteIfExists': True|False }, 'parseRoute53': { 'source': 'string' }, 'parseToOCSF': { 'source': 'string', 'eventSource': 'CloudTrail'|'Route53Resolver'|'VPCFlow'|'EKSAudit'|'AWSWAF', 'ocsfVersion': 'V1.1' }, 'parsePostgres': { 'source': 'string' }, 'parseVPC': { 'source': 'string' }, 'parseWAF': { 'source': 'string' }, 'renameKeys': { 'entries': [ { 'key': 'string', 'renameTo': 'string', 'overwriteIfExists': True|False }, ] }, 'splitString': { 'entries': [ { 'source': 'string', 'delimiter': 'string' }, ] }, 'substituteString': { 'entries': [ { 'source': 'string', 'from': 'string', 'to': 'string' }, ] }, 'trimString': { 'withKeys': [ 'string', ] }, 'typeConverter': { 'entries': [ { 'key': 'string', 'type': 'boolean'|'integer'|'double'|'string' }, ] }, 'upperCaseString': { 'withKeys': [ 'string', ] } }, ], logEventMessages=[ 'string', ] ) Parameters: * **transformerConfig** (*list*) -- **[REQUIRED]** This structure contains the configuration of this log transformer that you want to test.
A log transformer is an array of processors, where each processor applies one type of transformation to the log events that are ingested. * *(dict) --* This structure contains the information about one processor in a log transformer. * **addKeys** *(dict) --* Use this parameter to include the addKeys processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of objects, where each object contains the information about one key to add to the log event. * *(dict) --* This object defines one key that will be added with the addKeys processor. * **key** *(string) --* **[REQUIRED]** The key of the new entry to be added to the log event. * **value** *(string) --* **[REQUIRED]** The value of the new entry to be added to the log event. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the key already exists in the log event. If you omit this, the default is "false". * **copyValue** *(dict) --* Use this parameter to include the copyValue processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "CopyValueEntry" objects, where each object contains the information about one field value to copy. * *(dict) --* This object defines one value to be copied with the copyValue processor. * **source** *(string) --* **[REQUIRED]** The key to copy. * **target** *(string) --* **[REQUIRED]** The key of the field to copy the value to. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **csv** *(dict) --* Use this parameter to include the CSV processor in your transformer. * **quoteCharacter** *(string) --* The character used as a text qualifier for a single column of data. If you omit this, the double quotation mark """ character is used. * **delimiter** *(string) --* The character used to separate each column in the original comma-separated value log event. If you omit this, the processor looks for the comma "," character as the delimiter. * **columns** *(list) --* An array of names to use for the columns in the transformed log event. If you omit this, default column names ( "[column_1, column_2 ...]") are used. * *(string) --* * **source** *(string) --* The path to the field in the log event that has the comma-separated values to be parsed. If you omit this value, the whole log message is processed. * **dateTimeConverter** *(dict) --* Use this parameter to include the datetimeConverter processor in your transformer. * **source** *(string) --* **[REQUIRED]** The key to apply the date conversion to. * **target** *(string) --* **[REQUIRED]** The JSON field to store the result in. * **targetFormat** *(string) --* The datetime format to use for the converted data in the target field. If you omit this, the default of "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" is used. * **matchPatterns** *(list) --* **[REQUIRED]** A list of patterns to match against the "source" field. * *(string) --* * **sourceTimezone** *(string) --* The time zone of the source field. If you omit this, the default used is the UTC zone. * **targetTimezone** *(string) --* The time zone of the target field. If you omit this, the default used is the UTC zone. * **locale** *(string) --* The locale of the source field. If you omit this, the default of "locale.ROOT" is used. * **deleteKeys** *(dict) --* Use this parameter to include the deleteKeys processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The list of keys to delete.
* *(string) --* * **grok** *(dict) --* Use this parameter to include the grok processor in your transformer. * **source** *(string) --* The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed. * **match** *(string) --* **[REQUIRED]** The grok pattern to match against the log event. For a list of supported grok patterns, see Supported grok patterns. * **listToMap** *(dict) --* Use this parameter to include the listToMap processor in your transformer. * **source** *(string) --* **[REQUIRED]** The key in the log event that has a list of objects that will be converted to a map. * **key** *(string) --* **[REQUIRED]** The key of the field to be extracted as keys in the generated map. * **valueKey** *(string) --* If this is specified, the values that you specify in this parameter will be extracted from the "source" objects and put into the values of the generated map. Otherwise, original objects in the source list will be put into the values of the generated map. * **target** *(string) --* The key of the field that will hold the generated map. * **flatten** *(boolean) --* A Boolean value to indicate whether the list will be flattened into single items. Specify "true" to flatten the list. The default is "false". * **flattenedElement** *(string) --* If you set "flatten" to "true", use "flattenedElement" to specify which element, "first" or "last", to keep. You must specify this parameter if "flatten" is "true". * **lowerCaseString** *(dict) --* Use this parameter to include the lowerCaseString processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The array containing the keys of the fields to convert to lowercase. * *(string) --* * **moveKeys** *(dict) --* Use this parameter to include the moveKeys processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of objects, where each object contains the information about one key to move. * *(dict) --* This object defines one key that will be moved with the moveKeys processor. * **source** *(string) --* **[REQUIRED]** The key to move. * **target** *(string) --* **[REQUIRED]** The key to move to. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **parseCloudfront** *(dict) --* Use this parameter to include the parseCloudfront processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No other value than "@message" is allowed for "source". * **parseJSON** *(dict) --* Use this parameter to include the parseJSON processor in your transformer. * **source** *(string) --* Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, "store.book" * **destination** *(string) --* The location to put the parsed key value pair into. If you omit this parameter, it is placed under the root node. * **parseKeyValue** *(dict) --* Use this parameter to include the parseKeyValue processor in your transformer. * **source** *(string) --* Path to the field in the log event that will be parsed. Use dot notation to access child fields.
For example, "store.book" * **destination** *(string) --* The destination field to put the extracted key-value pairs into * **fieldDelimiter** *(string) --* The field delimiter string that is used between key- value pairs in the original log events. If you omit this, the ampersand "&" character is used. * **keyValueDelimiter** *(string) --* The delimiter string to use between the key and value in each pair in the transformed log event. If you omit this, the equal "=" character is used. * **keyPrefix** *(string) --* If you want to add a prefix to all transformed keys, specify it here. * **nonMatchValue** *(string) --* A value to insert into the value field in the result, when a key-value pair is not successfully split. * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is "false". * **parseRoute53** *(dict) --* Use this parameter to include the parseRoute53 processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No other value than "@message" is allowed for "source". * **parseToOCSF** *(dict) --* Use this parameter to convert logs into Open Cybersecurity Schema (OCSF) format. * **source** *(string) --* The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed. * **eventSource** *(string) --* **[REQUIRED]** Specify the service or process that produces the log events that will be converted with this processor. * **ocsfVersion** *(string) --* **[REQUIRED]** Specify which version of the OCSF schema to use for the transformed log events. * **parsePostgres** *(dict) --* Use this parameter to include the parsePostGres processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No other value than "@message" is allowed for "source". * **parseVPC** *(dict) --* Use this parameter to include the parseVPC processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No other value than "@message" is allowed for "source". * **parseWAF** *(dict) --* Use this parameter to include the parseWAF processor in your transformer. If you use this processor, it must be the first processor in your transformer. * **source** *(string) --* Omit this parameter and the whole log message will be processed by this processor. No other value than "@message" is allowed for "source". * **renameKeys** *(dict) --* Use this parameter to include the renameKeys processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "RenameKeyEntry" objects, where each object contains the information about a single key to rename. * *(dict) --* This object defines one key that will be renamed with the renameKey processor. * **key** *(string) --* **[REQUIRED]** The key to rename * **renameTo** *(string) --* **[REQUIRED]** The string to use for the new key name * **overwriteIfExists** *(boolean) --* Specifies whether to overwrite the existing value if the destination key already exists. 
The default is "false" * **splitString** *(dict) --* Use this parameter to include the splitString processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "SplitStringEntry" objects, where each object contains the information about one field to split. * *(dict) --* This object defines one log field that will be split with the splitString processor. * **source** *(string) --* **[REQUIRED]** The key of the field to split. * **delimiter** *(string) --* **[REQUIRED]** The separator characters to split the string entry on. * **substituteString** *(dict) --* Use this parameter to include the substituteString processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of objects, where each object contains the information about one key to match and replace. * *(dict) --* This object defines one log field key that will be replaced using the substituteString processor. * **source** *(string) --* **[REQUIRED]** The key to modify * **from** *(string) --* **[REQUIRED]** The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped using \ when using double quotes and with when using single quotes. For more information, see Class Pattern on the Oracle web site. * **to** *(string) --* **[REQUIRED]** The string to be substituted for each match of "from" * **trimString** *(dict) --* Use this parameter to include the trimString processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The array containing the keys of the fields to trim. * *(string) --* * **typeConverter** *(dict) --* Use this parameter to include the typeConverter processor in your transformer. * **entries** *(list) --* **[REQUIRED]** An array of "TypeConverterEntry" objects, where each object contains the information about one field to change the type of. * *(dict) --* This object defines one value type that will be converted using the typeConverter processor. * **key** *(string) --* **[REQUIRED]** The key with the value that is to be converted to a different type. * **type** *(string) --* **[REQUIRED]** The type to convert the field value to. Valid values are "integer", "double", "string" and "boolean". * **upperCaseString** *(dict) --* Use this parameter to include the upperCaseString processor in your transformer. * **withKeys** *(list) --* **[REQUIRED]** The array of containing the keys of the field to convert to uppercase. * *(string) --* * **logEventMessages** (*list*) -- **[REQUIRED]** An array of the raw log events that you want to use to test this transformer. * *(string) --* Return type: dict Returns: **Response Syntax** { 'transformedLogs': [ { 'eventNumber': 123, 'eventMessage': 'string', 'transformedEventMessage': 'string' }, ] } **Response Structure** * *(dict) --* * **transformedLogs** *(list) --* An array where each member of the array includes both the original version and the transformed version of one of the log events that you input. * *(dict) --* This structure contains information for one log event that has been processed by a log transformer. * **eventNumber** *(integer) --* The event number. * **eventMessage** *(string) --* The original log event message before it was transformed. * **transformedEventMessage** *(string) --* The log event message after being transformed. 
**Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.InvalidOperationException" CloudWatchLogs / Client / describe_log_streams describe_log_streams ******************** CloudWatchLogs.Client.describe_log_streams(**kwargs) Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered. You can specify the log group to search by using either "logGroupIdentifier" or "logGroupName". You must include one of these two parameters, but you can't include both. This operation has a limit of 25 transactions per second, after which transactions are throttled. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. See also: AWS API Documentation **Request Syntax** response = client.describe_log_streams( logGroupName='string', logGroupIdentifier='string', logStreamNamePrefix='string', orderBy='LogStreamName'|'LastEventTime', descending=True|False, nextToken='string', limit=123 ) Parameters: * **logGroupName** (*string*) -- The name of the log group. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logGroupIdentifier** (*string*) -- Specify either the name or ARN of the log group to view. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN. Note: You must include either "logGroupIdentifier" or "logGroupName", but not both. * **logStreamNamePrefix** (*string*) -- The prefix to match. If "orderBy" is "LastEventTime", you cannot specify this parameter. * **orderBy** (*string*) -- If the value is "LogStreamName", the results are ordered by log stream name. If the value is "LastEventTime", the results are ordered by the event time. The default value is "LogStreamName". If you order the results by event time, you cannot specify the "logStreamNamePrefix" parameter. "lastEventTimestamp" represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". "lastEventTimestamp" updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer. * **descending** (*boolean*) -- If the value is true, results are returned in descending order. If the value is false, results are returned in ascending order. The default value is false. * **nextToken** (*string*) -- The token for the next set of items to return. (You received this token from a previous call.) * **limit** (*integer*) -- The maximum number of items returned. If you don't specify a value, the default is up to 50 items. Return type: dict Returns: **Response Syntax** { 'logStreams': [ { 'logStreamName': 'string', 'creationTime': 123, 'firstEventTimestamp': 123, 'lastEventTimestamp': 123, 'lastIngestionTime': 123, 'uploadSequenceToken': 'string', 'arn': 'string', 'storedBytes': 123 }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **logStreams** *(list) --* The log streams. * *(dict) --* Represents a log stream, which is a sequence of log events from a single emitter of logs. * **logStreamName** *(string) --* The name of the log stream.
* **creationTime** *(integer) --* The creation time of the stream, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **firstEventTimestamp** *(integer) --* The time of the first event, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". * **lastEventTimestamp** *(integer) --* The time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". The "lastEventTime" value updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer. * **lastIngestionTime** *(integer) --* The ingestion time, expressed as the number of milliseconds after "Jan 1, 1970 00:00:00 UTC". The "lastIngestionTime" value updates on an eventual consistency basis. It typically updates in less than an hour after ingestion, but in rare situations might take longer. * **uploadSequenceToken** *(string) --* The sequence token. Warning: The sequence token is now ignored in "PutLogEvents" actions. "PutLogEvents" actions are always accepted regardless of receiving an invalid sequence token. You don't need to obtain "uploadSequenceToken" to use a "PutLogEvents" action. * **arn** *(string) --* The Amazon Resource Name (ARN) of the log stream. * **storedBytes** *(integer) --* The number of bytes stored. **Important:** As of June 17, 2019, this parameter is no longer supported for log streams, and is always reported as zero. This change applies only to log streams. The "storedBytes" parameter for log groups is not affected. * **nextToken** *(string) --* The token for the next set of items to return. The token expires after 24 hours. **Exceptions** * "CloudWatchLogs.Client.exceptions.InvalidParameterException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException" * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" CloudWatchLogs / Client / get_delivery_destination_policy get_delivery_destination_policy ******************************* CloudWatchLogs.Client.get_delivery_destination_policy(**kwargs) Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see PutDeliveryDestinationPolicy. See also: AWS API Documentation **Request Syntax** response = client.get_delivery_destination_policy( deliveryDestinationName='string' ) Parameters: **deliveryDestinationName** (*string*) -- **[REQUIRED]** The name of the delivery destination that you want to retrieve the policy of. Return type: dict Returns: **Response Syntax** { 'policy': { 'deliveryDestinationPolicy': 'string' } } **Response Structure** * *(dict) --* * **policy** *(dict) --* The IAM policy for this delivery destination. * **deliveryDestinationPolicy** *(string) --* The contents of the delivery destination policy. **Exceptions** * "CloudWatchLogs.Client.exceptions.ServiceUnavailableException" * "CloudWatchLogs.Client.exceptions.ValidationException" * "CloudWatchLogs.Client.exceptions.ResourceNotFoundException"
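As a closing illustration, here is a minimal sketch that fetches and pretty-prints the policy attached to one delivery destination (the destination name is a placeholder, and the sketch assumes the policy document is a JSON string, as the response syntax above suggests):

import boto3
import json

logs = boto3.client('logs')

response = logs.get_delivery_destination_policy(
    deliveryDestinationName='my-s3-destination'
)
# The policy arrives as a raw string; parse it to re-indent for readability.
document = json.loads(response['policy']['deliveryDestinationPolicy'])
print(json.dumps(document, indent=2))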