TimestreamQuery *************** Client ====== class TimestreamQuery.Client A low-level client representing Amazon Timestream Query import boto3 client = boto3.client('timestream-query') These are the available methods: * can_paginate * cancel_query * close * create_scheduled_query * delete_scheduled_query * describe_account_settings * describe_endpoints * describe_scheduled_query * execute_scheduled_query * get_paginator * get_waiter * list_scheduled_queries * list_tags_for_resource * prepare_query * query * tag_resource * untag_resource * update_account_settings * update_scheduled_query Paginators ========== Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide. The available paginators are: * ListScheduledQueries * ListTagsForResource * Query TimestreamQuery / Paginator / ListTagsForResource ListTagsForResource ******************* class TimestreamQuery.Paginator.ListTagsForResource paginator = client.get_paginator('list_tags_for_resource') paginate(**kwargs) Creates an iterator that will paginate through responses from "TimestreamQuery.Client.list_tags_for_resource()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ResourceARN='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The Timestream resource with tags to be listed. This value is an Amazon Resource Name (ARN). * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ], } **Response Structure** * *(dict) --* * **Tags** *(list) --* The tags currently associated with the Timestream resource. * *(dict) --* A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize databases and/or tables, for example, by purpose, owner, or environment. * **Key** *(string) --* The key of the tag. Tag keys are case sensitive. * **Value** *(string) --* The value of the tag. Tag values are case sensitive and can be null. TimestreamQuery / Paginator / Query Query ***** class TimestreamQuery.Paginator.Query paginator = client.get_paginator('query') paginate(**kwargs) Creates an iterator that will paginate through responses from "TimestreamQuery.Client.query()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( QueryString='string', ClientToken='string', QueryInsights={ 'Mode': 'ENABLED_WITH_RATE_CONTROL'|'DISABLED' }, PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **QueryString** (*string*) -- **[REQUIRED]** The query to be run by Timestream. * **ClientToken** (*string*) -- Unique, case-sensitive string of up to 64 ASCII characters specified when a "Query" request is made. 
Providing a "ClientToken" makes the call to "Query" *idempotent*. This means that running the same query repeatedly will produce the same result. In other words, making multiple identical "Query" requests has the same effect as making a single request. When using "ClientToken" in a query, note the following: * If the Query API is instantiated without a "ClientToken", the Query SDK generates a "ClientToken" on your behalf. * If the "Query" invocation only contains the "ClientToken" but does not include a "NextToken", that invocation of "Query" is assumed to be a new query run. * If the invocation contains "NextToken", that particular invocation is assumed to be a subsequent invocation of a prior call to the Query API, and a result set is returned. * After 4 hours, any request with the same "ClientToken" is treated as a new request. This field is autopopulated if not provided. * **QueryInsights** (*dict*) -- Encapsulates settings for enabling "QueryInsights". Enabling "QueryInsights" returns insights and metrics in addition to query results for the query that you executed. You can use "QueryInsights" to tune your query performance. * **Mode** *(string) --* **[REQUIRED]** Provides the following modes to enable "QueryInsights": * "ENABLED_WITH_RATE_CONTROL" – Enables "QueryInsights" for the queries being processed. This mode also includes a rate control mechanism, which limits the "QueryInsights" feature to 1 query per second (QPS). * "DISABLED" – Disables "QueryInsights". * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'QueryId': 'string', 'Rows': [ { 'Data': [ { 'ScalarValue': 'string', 'TimeSeriesValue': [ { 'Time': 'string', 'Value': {'... recursive ...'} }, ], 'ArrayValue': {'... recursive ...'}, 'RowValue': {'... recursive ...'}, 'NullValue': True|False }, ] }, ], 'ColumnInfo': [ { 'Name': 'string', 'Type': { 'ScalarType': 'VARCHAR'|'BOOLEAN'|'BIGINT'|'DOUBLE'|'TIMESTAMP'|'DATE'|'TIME'|'INTERVAL_DAY_TO_SECOND'|'INTERVAL_YEAR_TO_MONTH'|'UNKNOWN'|'INTEGER', 'ArrayColumnInfo': {'... recursive ...'}, 'TimeSeriesMeasureValueColumnInfo': {'... recursive ...'}, 'RowColumnInfo': {'... recursive ...'} } }, ], 'QueryStatus': { 'ProgressPercentage': 123.0, 'CumulativeBytesScanned': 123, 'CumulativeBytesMetered': 123 }, 'QueryInsightsResponse': { 'QuerySpatialCoverage': { 'Max': { 'Value': 123.0, 'TableArn': 'string', 'PartitionKey': [ 'string', ] } }, 'QueryTemporalRange': { 'Max': { 'Value': 123, 'TableArn': 'string' } }, 'QueryTableCount': 123, 'OutputRows': 123, 'OutputBytes': 123, 'UnloadPartitionCount': 123, 'UnloadWrittenRows': 123, 'UnloadWrittenBytes': 123 } } **Response Structure** * *(dict) --* * **QueryId** *(string) --* A unique ID for the given query. * **Rows** *(list) --* The result set rows returned by the query. * *(dict) --* Represents a single row in the query results. * **Data** *(list) --* List of data points in a single row of the result set. * *(dict) --* Datum represents a single data point in a query result. 
* **ScalarValue** *(string) --* Indicates if the data point is a scalar value such as integer, string, double, or Boolean. * **TimeSeriesValue** *(list) --* Indicates if the data point is a timeseries data type. * *(dict) --* The timeseries data type represents the values of a measure over time. A time series is an array of rows of timestamps and measure values, with rows sorted in ascending order of time. A TimeSeriesDataPoint is a single data point in the time series. It represents a tuple of (time, measure value) in a time series. * **Time** *(string) --* The timestamp when the measure value was collected. * **Value** *(dict) --* The measure value for the data point. * **ArrayValue** *(list) --* Indicates if the data point is an array. * **RowValue** *(dict) --* Indicates if the data point is a row. * **NullValue** *(boolean) --* Indicates if the data point is null. * **ColumnInfo** *(list) --* The column data types of the returned result set. * *(dict) --* Contains the metadata for query results such as the column names, data types, and other attributes. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **ScalarType** *(string) --* Indicates if the column is of type string, integer, Boolean, double, timestamp, date, time. For more information, see Supported data types. * **ArrayColumnInfo** *(dict) --* Indicates if the column is an array. * **TimeSeriesMeasureValueColumnInfo** *(dict) --* Indicates if the column is a timeseries data type. * **RowColumnInfo** *(list) --* Indicates if the column is a row. * **QueryStatus** *(dict) --* Information about the status of the query, including progress and bytes scanned. * **ProgressPercentage** *(float) --* The progress of the query, expressed as a percentage. * **CumulativeBytesScanned** *(integer) --* The amount of data scanned by the query in bytes. This is a cumulative sum and represents the total amount of bytes scanned since the query was started. * **CumulativeBytesMetered** *(integer) --* The amount of data scanned by the query in bytes that you will be charged for. This is a cumulative sum and represents the total amount of data that you will be charged for since the query was started. The charge is applied only once and is either applied when the query completes running or when the query is cancelled. * **QueryInsightsResponse** *(dict) --* Encapsulates "QueryInsights" containing insights and metrics related to the query that you executed. * **QuerySpatialCoverage** *(dict) --* Provides insights into the spatial coverage of the query, including the table with sub-optimal (max) spatial pruning. This information can help you identify areas for improvement in your partitioning strategy to enhance spatial pruning. * **Max** *(dict) --* Provides insights into the spatial coverage of the executed query and the table with the most inefficient spatial pruning. * "Value" – The maximum ratio of spatial coverage. * "TableArn" – The Amazon Resource Name (ARN) of the table with sub-optimal spatial pruning. * "PartitionKey" – The partition key used for partitioning, which can be a default "measure_name" or a CDPK. * **Value** *(float) --* The maximum ratio of spatial coverage. 
* **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table with the most sub-optimal spatial pruning. * **PartitionKey** *(list) --* The partition key used for partitioning, which can be a default "measure_name" or a customer defined partition key. * *(string) --* * **QueryTemporalRange** *(dict) --* Provides insights into the temporal range of the query, including the table with the largest (max) time range. Following are some of the potential options for optimizing time-based pruning: * Add missing time-predicates. * Remove functions around the time predicates. * Add time predicates to all the sub-queries. * **Max** *(dict) --* Encapsulates the following properties that provide insights into the most sub-optimal performing table on the temporal axis: * "Value" – The maximum duration in nanoseconds between the start and end of the query. * "TableArn" – The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **Value** *(integer) --* The maximum duration in nanoseconds between the start and end of the query. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **QueryTableCount** *(integer) --* Indicates the number of tables in the query. * **OutputRows** *(integer) --* Indicates the total number of rows returned as part of the query result set. You can use this data to validate if the number of rows in the result set has changed as part of the query tuning exercise. * **OutputBytes** *(integer) --* Indicates the size of the query result set in bytes. You can use this data to validate if the result set has changed as part of the query tuning exercise. * **UnloadPartitionCount** *(integer) --* Indicates the partitions created by the "Unload" operation. * **UnloadWrittenRows** *(integer) --* Indicates the rows written by the "Unload" query. * **UnloadWrittenBytes** *(integer) --* Indicates the size, in bytes, written by the "Unload" operation. TimestreamQuery / Paginator / ListScheduledQueries ListScheduledQueries ******************** class TimestreamQuery.Paginator.ListScheduledQueries paginator = client.get_paginator('list_scheduled_queries') paginate(**kwargs) Creates an iterator that will paginate through responses from "TimestreamQuery.Client.list_scheduled_queries()". A brief usage sketch follows the parameters below. See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.
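The following is a minimal usage sketch for this paginator (region, credentials, and the presence of scheduled queries in the account are assumed to come from your environment; error handling is omitted):

import boto3

client = boto3.client('timestream-query')

# Iterate over every page of scheduled query summaries, ten per page.
paginator = client.get_paginator('list_scheduled_queries')
for page in paginator.paginate(PaginationConfig={'PageSize': 10}):
    for scheduled_query in page['ScheduledQueries']:
        print(scheduled_query['Arn'], scheduled_query['State'])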
Return type: dict Returns: **Response Syntax** { 'ScheduledQueries': [ { 'Arn': 'string', 'Name': 'string', 'CreationTime': datetime(2015, 1, 1), 'State': 'ENABLED'|'DISABLED', 'PreviousInvocationTime': datetime(2015, 1, 1), 'NextInvocationTime': datetime(2015, 1, 1), 'ErrorReportConfiguration': { 'S3Configuration': { 'BucketName': 'string', 'ObjectKeyPrefix': 'string', 'EncryptionOption': 'SSE_S3'|'SSE_KMS' } }, 'TargetDestination': { 'TimestreamDestination': { 'DatabaseName': 'string', 'TableName': 'string' } }, 'LastRunStatus': 'AUTO_TRIGGER_SUCCESS'|'AUTO_TRIGGER_FAILURE'|'MANUAL_TRIGGER_SUCCESS'|'MANUAL_TRIGGER_FAILURE' }, ], } **Response Structure** * *(dict) --* * **ScheduledQueries** *(list) --* A list of scheduled queries. * *(dict) --* Scheduled Query * **Arn** *(string) --* The Amazon Resource Name. * **Name** *(string) --* The name of the scheduled query. * **CreationTime** *(datetime) --* The creation time of the scheduled query. * **State** *(string) --* State of scheduled query. * **PreviousInvocationTime** *(datetime) --* The last time the scheduled query was run. * **NextInvocationTime** *(datetime) --* The next time the scheduled query is to be run. * **ErrorReportConfiguration** *(dict) --* Configuration for scheduled query error reporting. * **S3Configuration** *(dict) --* The S3 configuration for the error reports. * **BucketName** *(string) --* Name of the S3 bucket under which error reports will be created. * **ObjectKeyPrefix** *(string) --* Prefix for the error report key. Timestream by default adds the following prefix to the error report path. * **EncryptionOption** *(string) --* Encryption at rest options for the error reports. If no encryption option is specified, Timestream will choose SSE_S3 as default. * **TargetDestination** *(dict) --* Target data source where final scheduled query result will be written. * **TimestreamDestination** *(dict) --* Query result destination details for Timestream data source. * **DatabaseName** *(string) --* Timestream database name. * **TableName** *(string) --* Timestream table name. * **LastRunStatus** *(string) --* Status of the last scheduled query run. TimestreamQuery / Client / get_paginator get_paginator ************* TimestreamQuery.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. TimestreamQuery / Client / describe_account_settings describe_account_settings ************************* TimestreamQuery.Client.describe_account_settings() Describes the settings for your account that include the query pricing model and the configured maximum TCUs the service can use for your query workload. You're charged only for the duration of compute units used for your workloads. 
See also: AWS API Documentation **Request Syntax** response = client.describe_account_settings() Return type: dict Returns: **Response Syntax** { 'MaxQueryTCU': 123, 'QueryPricingModel': 'BYTES_SCANNED'|'COMPUTE_UNITS', 'QueryCompute': { 'ComputeMode': 'ON_DEMAND'|'PROVISIONED', 'ProvisionedCapacity': { 'ActiveQueryTCU': 123, 'NotificationConfiguration': { 'SnsConfiguration': { 'TopicArn': 'string' }, 'RoleArn': 'string' }, 'LastUpdate': { 'TargetQueryTCU': 123, 'Status': 'PENDING'|'FAILED'|'SUCCEEDED', 'StatusMessage': 'string' } } } } **Response Structure** * *(dict) --* * **MaxQueryTCU** *(integer) --* The maximum number of Timestream compute units (TCUs) the service will use at any point in time to serve your queries. To run queries, you must set a minimum capacity of 4 TCU. You can set the maximum number of TCU in multiples of 4, for example, 4, 8, 16, 32, and so on. This configuration is applicable only for on-demand usage of (TCUs). * **QueryPricingModel** *(string) --* The pricing model for queries in your account. Note: The "QueryPricingModel" parameter is used by several Timestream operations; however, the "UpdateAccountSettings" API operation doesn't recognize any values other than "COMPUTE_UNITS". * **QueryCompute** *(dict) --* An object that contains the usage settings for Timestream Compute Units (TCUs) in your account for the query workload. * **ComputeMode** *(string) --* The mode in which Timestream Compute Units (TCUs) are allocated and utilized within an account. Note that in the Asia Pacific (Mumbai) region, the API operation only recognizes the value "PROVISIONED". * **ProvisionedCapacity** *(dict) --* Configuration object that contains settings for provisioned Timestream Compute Units (TCUs) in your account. * **ActiveQueryTCU** *(integer) --* The number of Timestream Compute Units (TCUs) provisioned in the account. This field is only visible when the compute mode is "PROVISIONED". * **NotificationConfiguration** *(dict) --* An object that contains settings for notifications that are sent whenever the provisioned capacity settings are modified. This field is only visible when the compute mode is "PROVISIONED". * **SnsConfiguration** *(dict) --* Details on SNS that are required to send the notification. * **TopicArn** *(string) --* SNS topic ARN that the scheduled query status notifications will be sent to. * **RoleArn** *(string) --* An Amazon Resource Name (ARN) that grants Timestream permission to publish notifications. This field is only visible if SNS Topic is provided when updating the account settings. * **LastUpdate** *(dict) --* Information about the last update to the provisioned capacity settings. * **TargetQueryTCU** *(integer) --* The number of TimeStream Compute Units (TCUs) requested in the last account settings update. * **Status** *(string) --* The status of the last update. Can be either "PENDING", "FAILED", or "SUCCEEDED". * **StatusMessage** *(string) --* Error message describing the last account settings update status, visible only if an error occurred. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / create_scheduled_query create_scheduled_query ********************** TimestreamQuery.Client.create_scheduled_query(**kwargs) Create a scheduled query that will be run on your behalf at the configured schedule. 
Timestream assumes the execution role provided as part of the "ScheduledQueryExecutionRoleArn" parameter to run the query. You can use the "NotificationConfiguration" parameter to configure notifications for your scheduled query operations. See also: AWS API Documentation **Request Syntax** response = client.create_scheduled_query( Name='string', QueryString='string', ScheduleConfiguration={ 'ScheduleExpression': 'string' }, NotificationConfiguration={ 'SnsConfiguration': { 'TopicArn': 'string' } }, TargetConfiguration={ 'TimestreamConfiguration': { 'DatabaseName': 'string', 'TableName': 'string', 'TimeColumn': 'string', 'DimensionMappings': [ { 'Name': 'string', 'DimensionValueType': 'VARCHAR' }, ], 'MultiMeasureMappings': { 'TargetMultiMeasureName': 'string', 'MultiMeasureAttributeMappings': [ { 'SourceColumn': 'string', 'TargetMultiMeasureAttributeName': 'string', 'MeasureValueType': 'BIGINT'|'BOOLEAN'|'DOUBLE'|'VARCHAR'|'TIMESTAMP' }, ] }, 'MixedMeasureMappings': [ { 'MeasureName': 'string', 'SourceColumn': 'string', 'TargetMeasureName': 'string', 'MeasureValueType': 'BIGINT'|'BOOLEAN'|'DOUBLE'|'VARCHAR'|'MULTI', 'MultiMeasureAttributeMappings': [ { 'SourceColumn': 'string', 'TargetMultiMeasureAttributeName': 'string', 'MeasureValueType': 'BIGINT'|'BOOLEAN'|'DOUBLE'|'VARCHAR'|'TIMESTAMP' }, ] }, ], 'MeasureNameColumn': 'string' } }, ClientToken='string', ScheduledQueryExecutionRoleArn='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ], KmsKeyId='string', ErrorReportConfiguration={ 'S3Configuration': { 'BucketName': 'string', 'ObjectKeyPrefix': 'string', 'EncryptionOption': 'SSE_S3'|'SSE_KMS' } } ) Parameters: * **Name** (*string*) -- **[REQUIRED]** Name of the scheduled query. * **QueryString** (*string*) -- **[REQUIRED]** The query string to run. Parameter names can be specified in the query string by using the "@" character followed by an identifier. The named parameter "@scheduled_runtime" is reserved and can be used in the query to get the time at which the query is scheduled to run. The timestamp calculated according to the ScheduleConfiguration parameter will be the value of the "@scheduled_runtime" parameter for each query run. For example, consider an instance of a scheduled query executing on 2021-12-01 00:00:00. For this instance, the "@scheduled_runtime" parameter is initialized to the timestamp 2021-12-01 00:00:00 when invoking the query. * **ScheduleConfiguration** (*dict*) -- **[REQUIRED]** The schedule configuration for the query. * **ScheduleExpression** *(string) --* **[REQUIRED]** An expression that denotes when to trigger the scheduled query run. This can be a cron expression or a rate expression. * **NotificationConfiguration** (*dict*) -- **[REQUIRED]** Notification configuration for the scheduled query. A notification is sent by Timestream when a query run finishes, when the state is updated, or when you delete it. * **SnsConfiguration** *(dict) --* **[REQUIRED]** Details about the Amazon Simple Notification Service (SNS) configuration. This field is visible only when SNS Topic is provided when updating the account settings. * **TopicArn** *(string) --* **[REQUIRED]** SNS topic ARN that the scheduled query status notifications will be sent to. * **TargetConfiguration** (*dict*) -- Configuration used for writing the result of a query. * **TimestreamConfiguration** *(dict) --* **[REQUIRED]** Configuration needed to write data into the Timestream database and table.
* **DatabaseName** *(string) --* **[REQUIRED]** Name of Timestream database to which the query result will be written. * **TableName** *(string) --* **[REQUIRED]** Name of Timestream table that the query result will be written to. The table should be within the same database that is provided in Timestream configuration. * **TimeColumn** *(string) --* **[REQUIRED]** Column from query result that should be used as the time column in destination table. Column type for this should be TIMESTAMP. * **DimensionMappings** *(list) --* **[REQUIRED]** This is to allow mapping column(s) from the query result to the dimension in the destination table. * *(dict) --* This type is used to map column(s) from the query result to a dimension in the destination table. * **Name** *(string) --* **[REQUIRED]** Column name from query result. * **DimensionValueType** *(string) --* **[REQUIRED]** Type for the dimension. * **MultiMeasureMappings** *(dict) --* Multi-measure mappings. * **TargetMultiMeasureName** *(string) --* The name of the target multi-measure name in the derived table. This input is required when measureNameColumn is not provided. If MeasureNameColumn is provided, then value from that column will be used as multi-measure name. * **MultiMeasureAttributeMappings** *(list) --* **[REQUIRED]** Required. Attribute mappings to be used for mapping query results to ingest data for multi-measure attributes. * *(dict) --* Attribute mapping for MULTI value measures. * **SourceColumn** *(string) --* **[REQUIRED]** Source column from where the attribute value is to be read. * **TargetMultiMeasureAttributeName** *(string) --* Custom name to be used for attribute name in derived table. If not provided, source column name would be used. * **MeasureValueType** *(string) --* **[REQUIRED]** Type of the attribute to be read from the source column. * **MixedMeasureMappings** *(list) --* Specifies how to map measures to multi-measure records. * *(dict) --* MixedMeasureMappings are mappings that can be used to ingest data into a mixture of narrow and multi measures in the derived table. * **MeasureName** *(string) --* Refers to the value of measure_name in a result row. This field is required if MeasureNameColumn is provided. * **SourceColumn** *(string) --* This field refers to the source column from which measure-value is to be read for result materialization. * **TargetMeasureName** *(string) --* Target measure name to be used. If not provided, the target measure name by default would be measure-name if provided, or sourceColumn otherwise. * **MeasureValueType** *(string) --* **[REQUIRED]** Type of the value that is to be read from sourceColumn. If the mapping is for MULTI, use MeasureValueType.MULTI. * **MultiMeasureAttributeMappings** *(list) --* Required when measureValueType is MULTI. Attribute mappings for MULTI value measures. * *(dict) --* Attribute mapping for MULTI value measures. * **SourceColumn** *(string) --* **[REQUIRED]** Source column from where the attribute value is to be read. * **TargetMultiMeasureAttributeName** *(string) --* Custom name to be used for attribute name in derived table. If not provided, source column name would be used. * **MeasureValueType** *(string) --* **[REQUIRED]** Type of the attribute to be read from the source column. * **MeasureNameColumn** *(string) --* Name of the measure column. * **ClientToken** (*string*) -- Using a ClientToken makes the call to CreateScheduledQuery idempotent, in other words, making the same request repeatedly will produce the same result. 
Making multiple identical CreateScheduledQuery requests has the same effect as making a single request. * If CreateScheduledQuery is called without a "ClientToken", the Query SDK generates a "ClientToken" on your behalf. * After 8 hours, any request with the same "ClientToken" is treated as a new request. This field is autopopulated if not provided. * **ScheduledQueryExecutionRoleArn** (*string*) -- **[REQUIRED]** The ARN for the IAM role that Timestream will assume when running the scheduled query. * **Tags** (*list*) -- A list of key-value pairs to label the scheduled query. * *(dict) --* A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize databases and/or tables, for example, by purpose, owner, or environment. * **Key** *(string) --* **[REQUIRED]** The key of the tag. Tag keys are case sensitive. * **Value** *(string) --* **[REQUIRED]** The value of the tag. Tag values are case sensitive and can be null. * **KmsKeyId** (*string*) -- The Amazon KMS key used to encrypt the scheduled query resource, at-rest. If the Amazon KMS key is not specified, the scheduled query resource will be encrypted with a Timestream owned Amazon KMS key. To specify a KMS key, use the key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix the name with *alias/* If ErrorReportConfiguration uses "SSE_KMS" as encryption type, the same KmsKeyId is used to encrypt the error report at rest. * **ErrorReportConfiguration** (*dict*) -- **[REQUIRED]** Configuration for error reporting. Error reports will be generated when a problem is encountered when writing the query results. * **S3Configuration** *(dict) --* **[REQUIRED]** The S3 configuration for the error reports. * **BucketName** *(string) --* **[REQUIRED]** Name of the S3 bucket under which error reports will be created. * **ObjectKeyPrefix** *(string) --* Prefix for the error report key. Timestream by default adds the following prefix to the error report path. * **EncryptionOption** *(string) --* Encryption at rest options for the error reports. If no encryption option is specified, Timestream will choose SSE_S3 as default. Return type: dict Returns: **Response Syntax** { 'Arn': 'string' } **Response Structure** * *(dict) --* * **Arn** *(string) --* ARN for the created scheduled query. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.ConflictException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ServiceQuotaExceededException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / can_paginate can_paginate ************ TimestreamQuery.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. 
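As an informal sketch of how "can_paginate" and "get_paginator" fit together, the snippet below pages through "query" results when pagination is supported; the SQL text is a hypothetical placeholder:

import boto3

client = boto3.client('timestream-query')
query_string = 'SELECT 1'  # hypothetical query text, for illustration only

if client.can_paginate('query'):
    # Let the paginator handle NextToken bookkeeping.
    paginator = client.get_paginator('query')
    for page in paginator.paginate(QueryString=query_string):
        print(len(page['Rows']), 'rows in this page')
else:
    print(client.query(QueryString=query_string)['Rows'])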
TimestreamQuery / Client / describe_endpoints describe_endpoints ****************** TimestreamQuery.Client.describe_endpoints() DescribeEndpoints returns a list of available endpoints to make Timestream API calls against. This API is available through both Write and Query. Because the Timestream SDKs are designed to transparently work with the service’s architecture, including the management and mapping of the service endpoints, *it is not recommended that you use this API unless*: * You are using VPC endpoints (Amazon Web Services PrivateLink) with Timestream * Your application uses a programming language that does not yet have SDK support * You require better control over the client-side implementation For detailed information on how and when to use and implement DescribeEndpoints, see The Endpoint Discovery Pattern. See also: AWS API Documentation **Request Syntax** response = client.describe_endpoints() Return type: dict Returns: **Response Syntax** { 'Endpoints': [ { 'Address': 'string', 'CachePeriodInMinutes': 123 }, ] } **Response Structure** * *(dict) --* * **Endpoints** *(list) --* An "Endpoints" object is returned when a "DescribeEndpoints" request is made. * *(dict) --* Represents an available endpoint against which to make API calls, as well as the TTL for that endpoint. * **Address** *(string) --* An endpoint address. * **CachePeriodInMinutes** *(integer) --* The TTL for the endpoint, in minutes. **Exceptions** * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.ThrottlingException" TimestreamQuery / Client / delete_scheduled_query delete_scheduled_query ********************** TimestreamQuery.Client.delete_scheduled_query(**kwargs) Deletes a given scheduled query. This is an irreversible operation. See also: AWS API Documentation **Request Syntax** response = client.delete_scheduled_query( ScheduledQueryArn='string' ) Parameters: **ScheduledQueryArn** (*string*) -- **[REQUIRED]** The ARN of the scheduled query. Returns: None **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / prepare_query prepare_query ************* TimestreamQuery.Client.prepare_query(**kwargs) A synchronous operation that allows you to submit a query with parameters to be stored by Timestream for later running. Timestream only supports using this operation with "ValidateOnly" set to "true". A brief usage sketch follows the parameter descriptions below. See also: AWS API Documentation **Request Syntax** response = client.prepare_query( QueryString='string', ValidateOnly=True|False ) Parameters: * **QueryString** (*string*) -- **[REQUIRED]** The Timestream query string that you want to use as a prepared statement. Parameter names can be specified in the query string by using the "@" character followed by an identifier. * **ValidateOnly** (*boolean*) -- By setting this value to "true", Timestream will only validate that the query string is a valid Timestream query, and not store the prepared query for later use.
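A minimal sketch of validating a parameterized statement with "prepare_query" (the database, table, and parameter names below are made-up placeholders):

import boto3

client = boto3.client('timestream-query')

# Ask Timestream to validate the statement only; the prepared query is not stored.
response = client.prepare_query(
    QueryString='SELECT * FROM "my_db"."my_table" WHERE time > @start_time',
    ValidateOnly=True,
)
for parameter in response['Parameters']:
    print(parameter['Name'], parameter['Type'])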
Return type: dict Returns: **Response Syntax** { 'QueryString': 'string', 'Columns': [ { 'Name': 'string', 'Type': { 'ScalarType': 'VARCHAR'|'BOOLEAN'|'BIGINT'|'DOUBLE'|'TIMESTAMP'|'DATE'|'TIME'|'INTERVAL_DAY_TO_SECOND'|'INTERVAL_YEAR_TO_MONTH'|'UNKNOWN'|'INTEGER', 'ArrayColumnInfo': { 'Name': 'string', 'Type': {'... recursive ...'} }, 'TimeSeriesMeasureValueColumnInfo': { 'Name': 'string', 'Type': {'... recursive ...'} }, 'RowColumnInfo': [ { 'Name': 'string', 'Type': {'... recursive ...'} }, ] }, 'DatabaseName': 'string', 'TableName': 'string', 'Aliased': True|False }, ], 'Parameters': [ { 'Name': 'string', 'Type': { 'ScalarType': 'VARCHAR'|'BOOLEAN'|'BIGINT'|'DOUBLE'|'TIMESTAMP'|'DATE'|'TIME'|'INTERVAL_DAY_TO_SECOND'|'INTERVAL_YEAR_TO_MONTH'|'UNKNOWN'|'INTEGER', 'ArrayColumnInfo': { 'Name': 'string', 'Type': {'... recursive ...'} }, 'TimeSeriesMeasureValueColumnInfo': { 'Name': 'string', 'Type': {'... recursive ...'} }, 'RowColumnInfo': [ { 'Name': 'string', 'Type': {'... recursive ...'} }, ] } }, ] } **Response Structure** * *(dict) --* * **QueryString** *(string) --* The query string that you want prepare. * **Columns** *(list) --* A list of SELECT clause columns of the submitted query string. * *(dict) --* Details of the column that is returned by the query. * **Name** *(string) --* Name of the column. * **Type** *(dict) --* Contains the data type of a column in a query result set. The data type can be scalar or complex. The supported scalar data types are integers, Boolean, string, double, timestamp, date, time, and intervals. The supported complex data types are arrays, rows, and timeseries. * **ScalarType** *(string) --* Indicates if the column is of type string, integer, Boolean, double, timestamp, date, time. For more information, see Supported data types. * **ArrayColumnInfo** *(dict) --* Indicates if the column is an array. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **TimeSeriesMeasureValueColumnInfo** *(dict) --* Indicates if the column is a timeseries data type. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **RowColumnInfo** *(list) --* Indicates if the column is a row. * *(dict) --* Contains the metadata for query results such as the column names, data types, and other attributes. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **DatabaseName** *(string) --* Database that has this column. * **TableName** *(string) --* Table within the database that has this column. * **Aliased** *(boolean) --* True, if the column name was aliased by the query. 
False otherwise. * **Parameters** *(list) --* A list of parameters used in the submitted query string. * *(dict) --* Mapping for named parameters. * **Name** *(string) --* Parameter name. * **Type** *(dict) --* Contains the data type of a column in a query result set. The data type can be scalar or complex. The supported scalar data types are integers, Boolean, string, double, timestamp, date, time, and intervals. The supported complex data types are arrays, rows, and timeseries. * **ScalarType** *(string) --* Indicates if the column is of type string, integer, Boolean, double, timestamp, date, time. For more information, see Supported data types. * **ArrayColumnInfo** *(dict) --* Indicates if the column is an array. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **TimeSeriesMeasureValueColumnInfo** *(dict) --* Indicates if the column is a timeseries data type. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **RowColumnInfo** *(list) --* Indicates if the column is a row. * *(dict) --* Contains the metadata for query results such as the column names, data types, and other attributes. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / update_account_settings update_account_settings *********************** TimestreamQuery.Client.update_account_settings(**kwargs) Transitions your account to use TCUs for query pricing and modifies the maximum query compute units that you've configured. If you reduce the value of "MaxQueryTCU" to a desired configuration, the new value can take up to 24 hours to be effective. Note: After you've transitioned your account to use TCUs for query pricing, you can't transition to using bytes scanned for query pricing. See also: AWS API Documentation **Request Syntax** response = client.update_account_settings( MaxQueryTCU=123, QueryPricingModel='BYTES_SCANNED'|'COMPUTE_UNITS', QueryCompute={ 'ComputeMode': 'ON_DEMAND'|'PROVISIONED', 'ProvisionedCapacity': { 'TargetQueryTCU': 123, 'NotificationConfiguration': { 'SnsConfiguration': { 'TopicArn': 'string' }, 'RoleArn': 'string' } } } ) Parameters: * **MaxQueryTCU** (*integer*) -- The maximum number of compute units the service will use at any point in time to serve your queries. 
To run queries, you must set a minimum capacity of 4 TCU. You can set the maximum number of TCU in multiples of 4, for example, 4, 8, 16, 32, and so on. This configuration is applicable only for on-demand usage of Timestream Compute Units (TCUs). The maximum value supported for "MaxQueryTCU" is 1000. To request an increase to this soft limit, contact Amazon Web Services Support. For information about the default quota for "maxQueryTCU", see Default quotas. * **QueryPricingModel** (*string*) -- The pricing model for queries in an account. Note: The "QueryPricingModel" parameter is used by several Timestream operations; however, the "UpdateAccountSettings" API operation doesn't recognize any values other than "COMPUTE_UNITS". * **QueryCompute** (*dict*) -- Modifies the query compute settings configured in your account, including the query pricing model and provisioned Timestream Compute Units (TCUs) in your account. Note: This API is idempotent, meaning that making the same request multiple times will have the same effect as making the request once. * **ComputeMode** *(string) --* The mode in which Timestream Compute Units (TCUs) are allocated and utilized within an account. Note that in the Asia Pacific (Mumbai) region, the API operation only recognizes the value "PROVISIONED". * **ProvisionedCapacity** *(dict) --* Configuration object that contains settings for provisioned Timestream Compute Units (TCUs) in your account. * **TargetQueryTCU** *(integer) --* **[REQUIRED]** The target compute capacity for querying data, specified in Timestream Compute Units (TCUs). * **NotificationConfiguration** *(dict) --* Configuration settings for notifications related to the provisioned capacity update. * **SnsConfiguration** *(dict) --* Details on SNS that are required to send the notification. * **TopicArn** *(string) --* **[REQUIRED]** SNS topic ARN that the scheduled query status notifications will be sent to. * **RoleArn** *(string) --* **[REQUIRED]** An Amazon Resource Name (ARN) that grants Timestream permission to publish notifications. This field is only visible if SNS Topic is provided when updating the account settings. Return type: dict Returns: **Response Syntax** { 'MaxQueryTCU': 123, 'QueryPricingModel': 'BYTES_SCANNED'|'COMPUTE_UNITS', 'QueryCompute': { 'ComputeMode': 'ON_DEMAND'|'PROVISIONED', 'ProvisionedCapacity': { 'ActiveQueryTCU': 123, 'NotificationConfiguration': { 'SnsConfiguration': { 'TopicArn': 'string' }, 'RoleArn': 'string' }, 'LastUpdate': { 'TargetQueryTCU': 123, 'Status': 'PENDING'|'FAILED'|'SUCCEEDED', 'StatusMessage': 'string' } } } } **Response Structure** * *(dict) --* * **MaxQueryTCU** *(integer) --* The configured maximum number of compute units the service will use at any point in time to serve your queries. * **QueryPricingModel** *(string) --* The pricing model for an account. * **QueryCompute** *(dict) --* Confirms the updated account settings for querying data in your account. * **ComputeMode** *(string) --* The mode in which Timestream Compute Units (TCUs) are allocated and utilized within an account. Note that in the Asia Pacific (Mumbai) region, the API operation only recognizes the value "PROVISIONED". * **ProvisionedCapacity** *(dict) --* Configuration object that contains settings for provisioned Timestream Compute Units (TCUs) in your account.
* **ActiveQueryTCU** *(integer) --* The number of Timestream Compute Units (TCUs) provisioned in the account. This field is only visible when the compute mode is "PROVISIONED". * **NotificationConfiguration** *(dict) --* An object that contains settings for notifications that are sent whenever the provisioned capacity settings are modified. This field is only visible when the compute mode is "PROVISIONED". * **SnsConfiguration** *(dict) --* Details on SNS that are required to send the notification. * **TopicArn** *(string) --* SNS topic ARN that the scheduled query status notifications will be sent to. * **RoleArn** *(string) --* An Amazon Resource Name (ARN) that grants Timestream permission to publish notifications. This field is only visible if SNS Topic is provided when updating the account settings. * **LastUpdate** *(dict) --* Information about the last update to the provisioned capacity settings. * **TargetQueryTCU** *(integer) --* The number of TimeStream Compute Units (TCUs) requested in the last account settings update. * **Status** *(string) --* The status of the last update. Can be either "PENDING", "FAILED", or "SUCCEEDED". * **StatusMessage** *(string) --* Error message describing the last account settings update status, visible only if an error occurred. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / list_tags_for_resource list_tags_for_resource ********************** TimestreamQuery.Client.list_tags_for_resource(**kwargs) List all tags on a Timestream query resource. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( ResourceARN='string', MaxResults=123, NextToken='string' ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The Timestream resource with tags to be listed. This value is an Amazon Resource Name (ARN). * **MaxResults** (*integer*) -- The maximum number of tags to return. * **NextToken** (*string*) -- A pagination token to resume pagination. Return type: dict Returns: **Response Syntax** { 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **Tags** *(list) --* The tags currently associated with the Timestream resource. * *(dict) --* A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize databases and/or tables, for example, by purpose, owner, or environment. * **Key** *(string) --* The key of the tag. Tag keys are case sensitive. * **Value** *(string) --* The value of the tag. Tag values are case sensitive and can be null. * **NextToken** *(string) --* A pagination token to resume pagination with a subsequent call to "ListTagsForResourceResponse". **Exceptions** * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / update_scheduled_query update_scheduled_query ********************** TimestreamQuery.Client.update_scheduled_query(**kwargs) Update a scheduled query. 
See also: AWS API Documentation **Request Syntax** response = client.update_scheduled_query( ScheduledQueryArn='string', State='ENABLED'|'DISABLED' ) Parameters: * **ScheduledQueryArn** (*string*) -- **[REQUIRED]** ARN of the scheduled query. * **State** (*string*) -- **[REQUIRED]** State of the scheduled query. Returns: None **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / list_scheduled_queries list_scheduled_queries ********************** TimestreamQuery.Client.list_scheduled_queries(**kwargs) Gets a list of all scheduled queries in the caller's Amazon account and Region. "ListScheduledQueries" is eventually consistent. See also: AWS API Documentation **Request Syntax** response = client.list_scheduled_queries( MaxResults=123, NextToken='string' ) Parameters: * **MaxResults** (*integer*) -- The maximum number of items to return in the output. If the total number of items available is more than the value specified, a "NextToken" is provided in the output. To resume pagination, provide the "NextToken" value as the argument to the subsequent call to "ListScheduledQueriesRequest". * **NextToken** (*string*) -- A pagination token to resume pagination. Return type: dict Returns: **Response Syntax** { 'ScheduledQueries': [ { 'Arn': 'string', 'Name': 'string', 'CreationTime': datetime(2015, 1, 1), 'State': 'ENABLED'|'DISABLED', 'PreviousInvocationTime': datetime(2015, 1, 1), 'NextInvocationTime': datetime(2015, 1, 1), 'ErrorReportConfiguration': { 'S3Configuration': { 'BucketName': 'string', 'ObjectKeyPrefix': 'string', 'EncryptionOption': 'SSE_S3'|'SSE_KMS' } }, 'TargetDestination': { 'TimestreamDestination': { 'DatabaseName': 'string', 'TableName': 'string' } }, 'LastRunStatus': 'AUTO_TRIGGER_SUCCESS'|'AUTO_TRIGGER_FAILURE'|'MANUAL_TRIGGER_SUCCESS'|'MANUAL_TRIGGER_FAILURE' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **ScheduledQueries** *(list) --* A list of scheduled queries. * *(dict) --* Scheduled Query * **Arn** *(string) --* The Amazon Resource Name. * **Name** *(string) --* The name of the scheduled query. * **CreationTime** *(datetime) --* The creation time of the scheduled query. * **State** *(string) --* State of scheduled query. * **PreviousInvocationTime** *(datetime) --* The last time the scheduled query was run. * **NextInvocationTime** *(datetime) --* The next time the scheduled query is to be run. * **ErrorReportConfiguration** *(dict) --* Configuration for scheduled query error reporting. * **S3Configuration** *(dict) --* The S3 configuration for the error reports. * **BucketName** *(string) --* Name of the S3 bucket under which error reports will be created. * **ObjectKeyPrefix** *(string) --* Prefix for the error report key. Timestream by default adds the following prefix to the error report path. * **EncryptionOption** *(string) --* Encryption at rest options for the error reports. If no encryption option is specified, Timestream will choose SSE_S3 as default. * **TargetDestination** *(dict) --* Target data source where final scheduled query result will be written. * **TimestreamDestination** *(dict) --* Query result destination details for Timestream data source.
* **DatabaseName** *(string) --* Timestream database name. * **TableName** *(string) --* Timestream table name. * **LastRunStatus** *(string) --* Status of the last scheduled query run. * **NextToken** *(string) --* A token to specify where to start paginating. This is the NextToken from a previously truncated response. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / untag_resource untag_resource ************** TimestreamQuery.Client.untag_resource(**kwargs) Removes the association of tags from a Timestream query resource. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( ResourceARN='string', TagKeys=[ 'string', ] ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The Timestream resource that the tags will be removed from. This value is an Amazon Resource Name (ARN). * **TagKeys** (*list*) -- **[REQUIRED]** A list of tags keys. Existing tags of the resource whose keys are members of this list will be removed from the Timestream resource. * *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / get_waiter get_waiter ********** TimestreamQuery.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object. Return type: "botocore.waiter.Waiter" TimestreamQuery / Client / execute_scheduled_query execute_scheduled_query *********************** TimestreamQuery.Client.execute_scheduled_query(**kwargs) You can use this API to run a scheduled query manually. If you enabled "QueryInsights", this API also returns insights and metrics related to the query that you executed as part of an Amazon SNS notification. "QueryInsights" helps with performance tuning of your query. For more information about "QueryInsights", see Using query insights to optimize queries in Amazon Timestream. See also: AWS API Documentation **Request Syntax** response = client.execute_scheduled_query( ScheduledQueryArn='string', InvocationTime=datetime(2015, 1, 1), ClientToken='string', QueryInsights={ 'Mode': 'ENABLED_WITH_RATE_CONTROL'|'DISABLED' } ) Parameters: * **ScheduledQueryArn** (*string*) -- **[REQUIRED]** ARN of the scheduled query. * **InvocationTime** (*datetime*) -- **[REQUIRED]** The timestamp in UTC. Query will be run as if it was invoked at this timestamp. * **ClientToken** (*string*) -- Not used. This field is autopopulated if not provided. * **QueryInsights** (*dict*) -- Encapsulates settings for enabling "QueryInsights". Enabling "QueryInsights" returns insights and metrics as a part of the Amazon SNS notification for the query that you executed. You can use "QueryInsights" to tune your query performance and cost. 
* **Mode** *(string) --* **[REQUIRED]** Provides the following modes to enable "ScheduledQueryInsights": * "ENABLED_WITH_RATE_CONTROL" – Enables "ScheduledQueryInsights" for the queries being processed. This mode also includes a rate control mechanism, which limits the "QueryInsights" feature to 1 query per second (QPS). * "DISABLED" – Disables "ScheduledQueryInsights". Returns: None **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / query query ***** TimestreamQuery.Client.query(**kwargs) "Query" is a synchronous operation that enables you to run a query against your Amazon Timestream data. If you enabled "QueryInsights", this API also returns insights and metrics related to the query that you executed. "QueryInsights" helps with performance tuning of your query. For more information about "QueryInsights", see Using query insights to optimize queries in Amazon Timestream. Note: The maximum number of "Query" API requests you're allowed to make with "QueryInsights" enabled is 1 query per second (QPS). If you exceed this query rate, it might result in throttling. "Query" will time out after 60 seconds. You must update the default timeout in the SDK to support a timeout of 60 seconds. See the code sample for details. Your query request will fail in the following cases: * If you submit a "Query" request with the same client token outside of the 5-minute idempotency window. * If you submit a "Query" request with the same client token, but change other parameters, within the 5-minute idempotency window. * If the size of the row (including the query metadata) exceeds 1 MB, then the query will fail with the following error message: "Query aborted as max page response size has been exceeded by the output result row" * If the IAM principal of the query initiator and the result reader are not the same and/or the query initiator and the result reader do not have the same query string in the query requests, the query will fail with an "Invalid pagination token" error. See also: AWS API Documentation **Request Syntax** response = client.query( QueryString='string', ClientToken='string', NextToken='string', MaxRows=123, QueryInsights={ 'Mode': 'ENABLED_WITH_RATE_CONTROL'|'DISABLED' } ) Parameters: * **QueryString** (*string*) -- **[REQUIRED]** The query to be run by Timestream. * **ClientToken** (*string*) -- Unique, case-sensitive string of up to 64 ASCII characters specified when a "Query" request is made. Providing a "ClientToken" makes the call to "Query" *idempotent*. This means that running the same query repeatedly will produce the same result. In other words, making multiple identical "Query" requests has the same effect as making a single request. When using "ClientToken" in a query, note the following: * If the Query API is instantiated without a "ClientToken", the Query SDK generates a "ClientToken" on your behalf. * If the "Query" invocation only contains the "ClientToken" but does not include a "NextToken", that invocation of "Query" is assumed to be a new query run. * If the invocation contains "NextToken", that particular invocation is assumed to be a subsequent invocation of a prior call to the Query API, and a result set is returned. 
* After 4 hours, any request with the same "ClientToken" is treated as a new request. This field is autopopulated if not provided. * **NextToken** (*string*) -- A pagination token used to return a set of results. When the "Query" API is invoked using "NextToken", that particular invocation is assumed to be a subsequent invocation of a prior call to "Query", and a result set is returned. However, if the "Query" invocation only contains the "ClientToken", that invocation of "Query" is assumed to be a new query run. Note the following when using NextToken in a query: * A pagination token can be used for up to five "Query" invocations, OR for a duration of up to 1 hour – whichever comes first. * Using the same "NextToken" will return the same set of records. To keep paginating through the result set, you must use the most recent "NextToken". * Suppose a "Query" invocation returns two "NextToken" values, "TokenA" and "TokenB". If "TokenB" is used in a subsequent "Query" invocation, then "TokenA" is invalidated and cannot be reused. * To request a previous result set from a query after pagination has begun, you must re-invoke the Query API. * The latest "NextToken" should be used to paginate until "null" is returned, at which point a new "NextToken" should be used. * If the IAM principal of the query initiator and the result reader are not the same and/or the query initiator and the result reader do not have the same query string in the query requests, the query will fail with an "Invalid pagination token" error. * **MaxRows** (*integer*) -- The total number of rows to be returned in the "Query" output. The initial run of "Query" with a "MaxRows" value specified will return the result set of the query in two cases: * The size of the result is less than "1MB". * The number of rows in the result set is less than the value of "maxRows". Otherwise, the initial invocation of "Query" only returns a "NextToken", which can then be used in subsequent calls to fetch the result set. To resume pagination, provide the "NextToken" value in the subsequent command. If the row size is large (e.g. a row has many columns), Timestream may return fewer rows to keep the response size from exceeding the 1 MB limit. If "MaxRows" is not provided, Timestream will send the necessary number of rows to meet the 1 MB limit. * **QueryInsights** (*dict*) -- Encapsulates settings for enabling "QueryInsights". Enabling "QueryInsights" returns insights and metrics in addition to query results for the query that you executed. You can use "QueryInsights" to tune your query performance. * **Mode** *(string) --* **[REQUIRED]** Provides the following modes to enable "QueryInsights": * "ENABLED_WITH_RATE_CONTROL" – Enables "QueryInsights" for the queries being processed. This mode also includes a rate control mechanism, which limits the "QueryInsights" feature to 1 query per second (QPS). * "DISABLED" – Disables "QueryInsights". Return type: dict Returns: **Response Syntax** { 'QueryId': 'string', 'NextToken': 'string', 'Rows': [ { 'Data': [ { 'ScalarValue': 'string', 'TimeSeriesValue': [ { 'Time': 'string', 'Value': {'... recursive ...'} }, ], 'ArrayValue': {'... recursive ...'}, 'RowValue': {'... recursive ...'}, 'NullValue': True|False }, ] }, ], 'ColumnInfo': [ { 'Name': 'string', 'Type': { 'ScalarType': 'VARCHAR'|'BOOLEAN'|'BIGINT'|'DOUBLE'|'TIMESTAMP'|'DATE'|'TIME'|'INTERVAL_DAY_TO_SECOND'|'INTERVAL_YEAR_TO_MONTH'|'UNKNOWN'|'INTEGER', 'ArrayColumnInfo': {'... recursive ...'}, 'TimeSeriesMeasureValueColumnInfo': {'... 
recursive ...'}, 'RowColumnInfo': {'... recursive ...'} } }, ], 'QueryStatus': { 'ProgressPercentage': 123.0, 'CumulativeBytesScanned': 123, 'CumulativeBytesMetered': 123 }, 'QueryInsightsResponse': { 'QuerySpatialCoverage': { 'Max': { 'Value': 123.0, 'TableArn': 'string', 'PartitionKey': [ 'string', ] } }, 'QueryTemporalRange': { 'Max': { 'Value': 123, 'TableArn': 'string' } }, 'QueryTableCount': 123, 'OutputRows': 123, 'OutputBytes': 123, 'UnloadPartitionCount': 123, 'UnloadWrittenRows': 123, 'UnloadWrittenBytes': 123 } } **Response Structure** * *(dict) --* * **QueryId** *(string) --* A unique ID for the given query. * **NextToken** *(string) --* A pagination token that can be used again on a "Query" call to get the next set of results. * **Rows** *(list) --* The result set rows returned by the query. * *(dict) --* Represents a single row in the query results. * **Data** *(list) --* List of data points in a single row of the result set. * *(dict) --* Datum represents a single data point in a query result. * **ScalarValue** *(string) --* Indicates if the data point is a scalar value such as integer, string, double, or Boolean. * **TimeSeriesValue** *(list) --* Indicates if the data point is a timeseries data type. * *(dict) --* The timeseries data type represents the values of a measure over time. A time series is an array of rows of timestamps and measure values, with rows sorted in ascending order of time. A TimeSeriesDataPoint is a single data point in the time series. It represents a tuple of (time, measure value) in a time series. * **Time** *(string) --* The timestamp when the measure value was collected. * **Value** *(dict) --* The measure value for the data point. * **ArrayValue** *(list) --* Indicates if the data point is an array. * **RowValue** *(dict) --* Indicates if the data point is a row. * **NullValue** *(boolean) --* Indicates if the data point is null. * **ColumnInfo** *(list) --* The column data types of the returned result set. * *(dict) --* Contains the metadata for query results such as the column names, data types, and other attributes. * **Name** *(string) --* The name of the result set column. The name of the result set is available for columns of all data types except for arrays. * **Type** *(dict) --* The data type of the result set column. The data type can be a scalar or complex. Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. * **ScalarType** *(string) --* Indicates if the column is of type string, integer, Boolean, double, timestamp, date, time. For more information, see Supported data types. * **ArrayColumnInfo** *(dict) --* Indicates if the column is an array. * **TimeSeriesMeasureValueColumnInfo** *(dict) --* Indicates if the column is a timeseries data type. * **RowColumnInfo** *(list) --* Indicates if the column is a row. * **QueryStatus** *(dict) --* Information about the status of the query, including progress and bytes scanned. * **ProgressPercentage** *(float) --* The progress of the query, expressed as a percentage. * **CumulativeBytesScanned** *(integer) --* The amount of data scanned by the query in bytes. This is a cumulative sum and represents the total amount of bytes scanned since the query was started. * **CumulativeBytesMetered** *(integer) --* The amount of data scanned by the query in bytes that you will be charged for. This is a cumulative sum and represents the total amount of data that you will be charged for since the query was started. 
The charge is applied only once and is either applied when the query completes running or when the query is cancelled. * **QueryInsightsResponse** *(dict) --* Encapsulates "QueryInsights" containing insights and metrics related to the query that you executed. * **QuerySpatialCoverage** *(dict) --* Provides insights into the spatial coverage of the query, including the table with sub-optimal (max) spatial pruning. This information can help you identify areas for improvement in your partitioning strategy to enhance spatial pruning. * **Max** *(dict) --* Provides insights into the spatial coverage of the executed query and the table with the most inefficient spatial pruning. * "Value" – The maximum ratio of spatial coverage. * "TableArn" – The Amazon Resource Name (ARN) of the table with sub-optimal spatial pruning. * "PartitionKey" – The partition key used for partitioning, which can be a default "measure_name" or a CDPK. * **Value** *(float) --* The maximum ratio of spatial coverage. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table with the most sub-optimal spatial pruning. * **PartitionKey** *(list) --* The partition key used for partitioning, which can be a default "measure_name" or a customer defined partition key. * *(string) --* * **QueryTemporalRange** *(dict) --* Provides insights into the temporal range of the query, including the table with the largest (max) time range. Following are some of the potential options for optimizing time-based pruning: * Add missing time-predicates. * Remove functions around the time predicates. * Add time predicates to all the sub-queries. * **Max** *(dict) --* Encapsulates the following properties that provide insights into the most sub-optimal performing table on the temporal axis: * "Value" – The maximum duration in nanoseconds between the start and end of the query. * "TableArn" – The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **Value** *(integer) --* The maximum duration in nanoseconds between the start and end of the query. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **QueryTableCount** *(integer) --* Indicates the number of tables in the query. * **OutputRows** *(integer) --* Indicates the total number of rows returned as part of the query result set. You can use this data to validate if the number of rows in the result set have changed as part of the query tuning exercise. * **OutputBytes** *(integer) --* Indicates the size of query result set in bytes. You can use this data to validate if the result set has changed as part of the query tuning exercise. * **UnloadPartitionCount** *(integer) --* Indicates the partitions created by the "Unload" operation. * **UnloadWrittenRows** *(integer) --* Indicates the rows written by the "Unload" query. * **UnloadWrittenBytes** *(integer) --* Indicates the size, in bytes, written by the "Unload" operation. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.ConflictException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.QueryExecutionException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / close close ***** TimestreamQuery.Client.close() Closes underlying endpoint connections. 
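The "Query" documentation above notes that the SDK's default timeout should be raised to accommodate the service-side 60-second query timeout, and that results are paged with "NextToken" ("See the code sample for details"). The following is a minimal sketch of one way to do both, using botocore's "Config" and a manual pagination loop; the timeout value, retry setting, and query string are illustrative placeholders and not values prescribed by the service.

import boto3
from botocore.config import Config

# Raise the socket read timeout above the 60-second service-side query
# timeout so long-running queries are not cut off by the SDK first, and
# keep retries modest so a slow query is not re-driven unnecessarily.
client = boto3.client(
    'timestream-query',
    config=Config(read_timeout=70, retries={'max_attempts': 3}),
)

# Placeholder query -- substitute your own database, table, and statement.
query_string = 'SELECT * FROM "exampledb"."exampletable" LIMIT 100'

# Page through the result set by passing back each NextToken until the
# response no longer contains one.
rows = []
kwargs = {'QueryString': query_string}
while True:
    response = client.query(**kwargs)
    rows.extend(response['Rows'])
    next_token = response.get('NextToken')
    if not next_token:
        break
    kwargs['NextToken'] = next_token

print('Fetched', len(rows), 'rows for query', response['QueryId'])

The same loop can also be expressed with the "Query" paginator ("client.get_paginator('query')") described earlier, which handles "NextToken" internally.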
TimestreamQuery / Client / tag_resource tag_resource ************ TimestreamQuery.Client.tag_resource(**kwargs) Associate a set of tags with a Timestream resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( ResourceARN='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** Identifies the Timestream resource to which tags should be added. This value is an Amazon Resource Name (ARN). * **Tags** (*list*) -- **[REQUIRED]** The tags to be assigned to the Timestream resource. * *(dict) --* A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize databases and/or tables, for example, by purpose, owner, or environment. * **Key** *(string) --* **[REQUIRED]** The key of the tag. Tag keys are case sensitive. * **Value** *(string) --* **[REQUIRED]** The value of the tag. Tag values are case sensitive and can be null. Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.ServiceQuotaExceededException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / describe_scheduled_query describe_scheduled_query ************************ TimestreamQuery.Client.describe_scheduled_query(**kwargs) Provides detailed information about a scheduled query. See also: AWS API Documentation **Request Syntax** response = client.describe_scheduled_query( ScheduledQueryArn='string' ) Parameters: **ScheduledQueryArn** (*string*) -- **[REQUIRED]** The ARN of the scheduled query. 
Return type: dict Returns: **Response Syntax** { 'ScheduledQuery': { 'Arn': 'string', 'Name': 'string', 'QueryString': 'string', 'CreationTime': datetime(2015, 1, 1), 'State': 'ENABLED'|'DISABLED', 'PreviousInvocationTime': datetime(2015, 1, 1), 'NextInvocationTime': datetime(2015, 1, 1), 'ScheduleConfiguration': { 'ScheduleExpression': 'string' }, 'NotificationConfiguration': { 'SnsConfiguration': { 'TopicArn': 'string' } }, 'TargetConfiguration': { 'TimestreamConfiguration': { 'DatabaseName': 'string', 'TableName': 'string', 'TimeColumn': 'string', 'DimensionMappings': [ { 'Name': 'string', 'DimensionValueType': 'VARCHAR' }, ], 'MultiMeasureMappings': { 'TargetMultiMeasureName': 'string', 'MultiMeasureAttributeMappings': [ { 'SourceColumn': 'string', 'TargetMultiMeasureAttributeName': 'string', 'MeasureValueType': 'BIGINT'|'BOOLEAN'|'DOUBLE'|'VARCHAR'|'TIMESTAMP' }, ] }, 'MixedMeasureMappings': [ { 'MeasureName': 'string', 'SourceColumn': 'string', 'TargetMeasureName': 'string', 'MeasureValueType': 'BIGINT'|'BOOLEAN'|'DOUBLE'|'VARCHAR'|'MULTI', 'MultiMeasureAttributeMappings': [ { 'SourceColumn': 'string', 'TargetMultiMeasureAttributeName': 'string', 'MeasureValueType': 'BIGINT'|'BOOLEAN'|'DOUBLE'|'VARCHAR'|'TIMESTAMP' }, ] }, ], 'MeasureNameColumn': 'string' } }, 'ScheduledQueryExecutionRoleArn': 'string', 'KmsKeyId': 'string', 'ErrorReportConfiguration': { 'S3Configuration': { 'BucketName': 'string', 'ObjectKeyPrefix': 'string', 'EncryptionOption': 'SSE_S3'|'SSE_KMS' } }, 'LastRunSummary': { 'InvocationTime': datetime(2015, 1, 1), 'TriggerTime': datetime(2015, 1, 1), 'RunStatus': 'AUTO_TRIGGER_SUCCESS'|'AUTO_TRIGGER_FAILURE'|'MANUAL_TRIGGER_SUCCESS'|'MANUAL_TRIGGER_FAILURE', 'ExecutionStats': { 'ExecutionTimeInMillis': 123, 'DataWrites': 123, 'BytesMetered': 123, 'CumulativeBytesScanned': 123, 'RecordsIngested': 123, 'QueryResultRows': 123 }, 'QueryInsightsResponse': { 'QuerySpatialCoverage': { 'Max': { 'Value': 123.0, 'TableArn': 'string', 'PartitionKey': [ 'string', ] } }, 'QueryTemporalRange': { 'Max': { 'Value': 123, 'TableArn': 'string' } }, 'QueryTableCount': 123, 'OutputRows': 123, 'OutputBytes': 123 }, 'ErrorReportLocation': { 'S3ReportLocation': { 'BucketName': 'string', 'ObjectKey': 'string' } }, 'FailureReason': 'string' }, 'RecentlyFailedRuns': [ { 'InvocationTime': datetime(2015, 1, 1), 'TriggerTime': datetime(2015, 1, 1), 'RunStatus': 'AUTO_TRIGGER_SUCCESS'|'AUTO_TRIGGER_FAILURE'|'MANUAL_TRIGGER_SUCCESS'|'MANUAL_TRIGGER_FAILURE', 'ExecutionStats': { 'ExecutionTimeInMillis': 123, 'DataWrites': 123, 'BytesMetered': 123, 'CumulativeBytesScanned': 123, 'RecordsIngested': 123, 'QueryResultRows': 123 }, 'QueryInsightsResponse': { 'QuerySpatialCoverage': { 'Max': { 'Value': 123.0, 'TableArn': 'string', 'PartitionKey': [ 'string', ] } }, 'QueryTemporalRange': { 'Max': { 'Value': 123, 'TableArn': 'string' } }, 'QueryTableCount': 123, 'OutputRows': 123, 'OutputBytes': 123 }, 'ErrorReportLocation': { 'S3ReportLocation': { 'BucketName': 'string', 'ObjectKey': 'string' } }, 'FailureReason': 'string' }, ] } } **Response Structure** * *(dict) --* * **ScheduledQuery** *(dict) --* The scheduled query. * **Arn** *(string) --* Scheduled query ARN. * **Name** *(string) --* Name of the scheduled query. * **QueryString** *(string) --* The query to be run. * **CreationTime** *(datetime) --* Creation time of the scheduled query. * **State** *(string) --* State of the scheduled query. * **PreviousInvocationTime** *(datetime) --* Last time the query was run. 
* **NextInvocationTime** *(datetime) --* The next time the scheduled query is scheduled to run. * **ScheduleConfiguration** *(dict) --* Schedule configuration. * **ScheduleExpression** *(string) --* An expression that denotes when to trigger the scheduled query run. This can be a cron expression or a rate expression. * **NotificationConfiguration** *(dict) --* Notification configuration. * **SnsConfiguration** *(dict) --* Details about the Amazon Simple Notification Service (SNS) configuration. This field is visible only when SNS Topic is provided when updating the account settings. * **TopicArn** *(string) --* SNS topic ARN that the scheduled query status notifications will be sent to. * **TargetConfiguration** *(dict) --* Scheduled query target store configuration. * **TimestreamConfiguration** *(dict) --* Configuration needed to write data into the Timestream database and table. * **DatabaseName** *(string) --* Name of Timestream database to which the query result will be written. * **TableName** *(string) --* Name of Timestream table that the query result will be written to. The table should be within the same database that is provided in Timestream configuration. * **TimeColumn** *(string) --* Column from query result that should be used as the time column in destination table. Column type for this should be TIMESTAMP. * **DimensionMappings** *(list) --* This is to allow mapping column(s) from the query result to the dimension in the destination table. * *(dict) --* This type is used to map column(s) from the query result to a dimension in the destination table. * **Name** *(string) --* Column name from query result. * **DimensionValueType** *(string) --* Type for the dimension. * **MultiMeasureMappings** *(dict) --* Multi-measure mappings. * **TargetMultiMeasureName** *(string) --* The name of the target multi-measure name in the derived table. This input is required when measureNameColumn is not provided. If MeasureNameColumn is provided, then value from that column will be used as multi-measure name. * **MultiMeasureAttributeMappings** *(list) --* Required. Attribute mappings to be used for mapping query results to ingest data for multi-measure attributes. * *(dict) --* Attribute mapping for MULTI value measures. * **SourceColumn** *(string) --* Source column from where the attribute value is to be read. * **TargetMultiMeasureAttributeName** *(string) --* Custom name to be used for attribute name in derived table. If not provided, source column name would be used. * **MeasureValueType** *(string) --* Type of the attribute to be read from the source column. * **MixedMeasureMappings** *(list) --* Specifies how to map measures to multi-measure records. * *(dict) --* MixedMeasureMappings are mappings that can be used to ingest data into a mixture of narrow and multi measures in the derived table. * **MeasureName** *(string) --* Refers to the value of measure_name in a result row. This field is required if MeasureNameColumn is provided. * **SourceColumn** *(string) --* This field refers to the source column from which measure-value is to be read for result materialization. * **TargetMeasureName** *(string) --* Target measure name to be used. If not provided, the target measure name by default would be measure-name if provided, or sourceColumn otherwise. * **MeasureValueType** *(string) --* Type of the value that is to be read from sourceColumn. If the mapping is for MULTI, use MeasureValueType.MULTI. 
* **MultiMeasureAttributeMappings** *(list) --* Required when measureValueType is MULTI. Attribute mappings for MULTI value measures. * *(dict) --* Attribute mapping for MULTI value measures. * **SourceColumn** *(string) --* Source column from where the attribute value is to be read. * **TargetMultiMeasureAttributeName** *(string) --* Custom name to be used for attribute name in derived table. If not provided, source column name would be used. * **MeasureValueType** *(string) --* Type of the attribute to be read from the source column. * **MeasureNameColumn** *(string) --* Name of the measure column. * **ScheduledQueryExecutionRoleArn** *(string) --* IAM role that Timestream uses to run the scheduled query. * **KmsKeyId** *(string) --* A customer provided KMS key used to encrypt the scheduled query resource. * **ErrorReportConfiguration** *(dict) --* Error-reporting configuration for the scheduled query. * **S3Configuration** *(dict) --* The S3 configuration for the error reports. * **BucketName** *(string) --* Name of the S3 bucket under which error reports will be created. * **ObjectKeyPrefix** *(string) --* Prefix for the error report key. Timestream by default adds the following prefix to the error report path. * **EncryptionOption** *(string) --* Encryption at rest options for the error reports. If no encryption option is specified, Timestream will choose SSE_S3 as default. * **LastRunSummary** *(dict) --* Runtime summary for the last scheduled query run. * **InvocationTime** *(datetime) --* InvocationTime for this run. This is the time at which the query is scheduled to run. Parameter "@scheduled_runtime" can be used in the query to get the value. * **TriggerTime** *(datetime) --* The actual time when the query was run. * **RunStatus** *(string) --* The status of a scheduled query run. * **ExecutionStats** *(dict) --* Runtime statistics for a scheduled run. * **ExecutionTimeInMillis** *(integer) --* Total time, measured in milliseconds, that was needed for the scheduled query run to complete. * **DataWrites** *(integer) --* Data writes metered for records ingested in a single scheduled query run. * **BytesMetered** *(integer) --* Bytes metered for a single scheduled query run. * **CumulativeBytesScanned** *(integer) --* Bytes scanned for a single scheduled query run. * **RecordsIngested** *(integer) --* The number of records ingested for a single scheduled query run. * **QueryResultRows** *(integer) --* Number of rows present in the output from running a query before ingestion to destination data source. * **QueryInsightsResponse** *(dict) --* Provides various insights and metrics related to the run summary of the scheduled query. * **QuerySpatialCoverage** *(dict) --* Provides insights into the spatial coverage of the query, including the table with sub-optimal (max) spatial pruning. This information can help you identify areas for improvement in your partitioning strategy to enhance spatial pruning. * **Max** *(dict) --* Provides insights into the spatial coverage of the executed query and the table with the most inefficient spatial pruning. * "Value" – The maximum ratio of spatial coverage. * "TableArn" – The Amazon Resource Name (ARN) of the table with sub-optimal spatial pruning. * "PartitionKey" – The partition key used for partitioning, which can be a default "measure_name" or a CDPK. * **Value** *(float) --* The maximum ratio of spatial coverage. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table with the most sub-optimal spatial pruning. 
* **PartitionKey** *(list) --* The partition key used for partitioning, which can be a default "measure_name" or a customer defined partition key. * *(string) --* * **QueryTemporalRange** *(dict) --* Provides insights into the temporal range of the query, including the table with the largest (max) time range. Following are some of the potential options for optimizing time-based pruning: * Add missing time-predicates. * Remove functions around the time predicates. * Add time predicates to all the sub-queries. * **Max** *(dict) --* Encapsulates the following properties that provide insights into the most sub-optimal performing table on the temporal axis: * "Value" – The maximum duration in nanoseconds between the start and end of the query. * "TableArn" – The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **Value** *(integer) --* The maximum duration in nanoseconds between the start and end of the query. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **QueryTableCount** *(integer) --* Indicates the number of tables in the query. * **OutputRows** *(integer) --* Indicates the total number of rows returned as part of the query result set. You can use this data to validate if the number of rows in the result set have changed as part of the query tuning exercise. * **OutputBytes** *(integer) --* Indicates the size of query result set in bytes. You can use this data to validate if the result set has changed as part of the query tuning exercise. * **ErrorReportLocation** *(dict) --* S3 location for error report. * **S3ReportLocation** *(dict) --* The S3 location where error reports are written. * **BucketName** *(string) --* S3 bucket name. * **ObjectKey** *(string) --* S3 key. * **FailureReason** *(string) --* Error message for the scheduled query in case of failure. You might have to look at the error report to get more detailed error reasons. * **RecentlyFailedRuns** *(list) --* Runtime summary for the last five failed scheduled query runs. * *(dict) --* Run summary for the scheduled query * **InvocationTime** *(datetime) --* InvocationTime for this run. This is the time at which the query is scheduled to run. Parameter "@scheduled_runtime" can be used in the query to get the value. * **TriggerTime** *(datetime) --* The actual time when the query was run. * **RunStatus** *(string) --* The status of a scheduled query run. * **ExecutionStats** *(dict) --* Runtime statistics for a scheduled run. * **ExecutionTimeInMillis** *(integer) --* Total time, measured in milliseconds, that was needed for the scheduled query run to complete. * **DataWrites** *(integer) --* Data writes metered for records ingested in a single scheduled query run. * **BytesMetered** *(integer) --* Bytes metered for a single scheduled query run. * **CumulativeBytesScanned** *(integer) --* Bytes scanned for a single scheduled query run. * **RecordsIngested** *(integer) --* The number of records ingested for a single scheduled query run. * **QueryResultRows** *(integer) --* Number of rows present in the output from running a query before ingestion to destination data source. * **QueryInsightsResponse** *(dict) --* Provides various insights and metrics related to the run summary of the scheduled query. * **QuerySpatialCoverage** *(dict) --* Provides insights into the spatial coverage of the query, including the table with sub-optimal (max) spatial pruning. 
This information can help you identify areas for improvement in your partitioning strategy to enhance spatial pruning. * **Max** *(dict) --* Provides insights into the spatial coverage of the executed query and the table with the most inefficient spatial pruning. * "Value" – The maximum ratio of spatial coverage. * "TableArn" – The Amazon Resource Name (ARN) of the table with sub-optimal spatial pruning. * "PartitionKey" – The partition key used for partitioning, which can be a default "measure_name" or a CDPK. * **Value** *(float) --* The maximum ratio of spatial coverage. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table with the most sub-optimal spatial pruning. * **PartitionKey** *(list) --* The partition key used for partitioning, which can be a default "measure_name" or a customer defined partition key. * *(string) --* * **QueryTemporalRange** *(dict) --* Provides insights into the temporal range of the query, including the table with the largest (max) time range. Following are some of the potential options for optimizing time-based pruning: * Add missing time-predicates. * Remove functions around the time predicates. * Add time predicates to all the sub-queries. * **Max** *(dict) --* Encapsulates the following properties that provide insights into the most sub-optimal performing table on the temporal axis: * "Value" – The maximum duration in nanoseconds between the start and end of the query. * "TableArn" – The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **Value** *(integer) --* The maximum duration in nanoseconds between the start and end of the query. * **TableArn** *(string) --* The Amazon Resource Name (ARN) of the table which is queried with the largest time range. * **QueryTableCount** *(integer) --* Indicates the number of tables in the query. * **OutputRows** *(integer) --* Indicates the total number of rows returned as part of the query result set. You can use this data to validate if the number of rows in the result set have changed as part of the query tuning exercise. * **OutputBytes** *(integer) --* Indicates the size of query result set in bytes. You can use this data to validate if the result set has changed as part of the query tuning exercise. * **ErrorReportLocation** *(dict) --* S3 location for error report. * **S3ReportLocation** *(dict) --* The S3 location where error reports are written. * **BucketName** *(string) --* S3 bucket name. * **ObjectKey** *(string) --* S3 key. * **FailureReason** *(string) --* Error message for the scheduled query in case of failure. You might have to look at the error report to get more detailed error reasons. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ResourceNotFoundException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException" TimestreamQuery / Client / cancel_query cancel_query ************ TimestreamQuery.Client.cancel_query(**kwargs) Cancels a query that has been issued. Cancellation is provided only if the query has not completed running before the cancellation request was issued. Because cancellation is an idempotent operation, subsequent cancellation requests will return a "CancellationMessage", indicating that the query has already been canceled. See code sample for details. 
See also: AWS API Documentation **Request Syntax** response = client.cancel_query( QueryId='string' ) Parameters: **QueryId** (*string*) -- **[REQUIRED]** The ID of the query that needs to be cancelled. "QueryId" is returned as part of the query result. Return type: dict Returns: **Response Syntax** { 'CancellationMessage': 'string' } **Response Structure** * *(dict) --* * **CancellationMessage** *(string) --* A "CancellationMessage" is returned when a "CancelQuery" request for the query specified by "QueryId" has already been issued. **Exceptions** * "TimestreamQuery.Client.exceptions.AccessDeniedException" * "TimestreamQuery.Client.exceptions.InternalServerException" * "TimestreamQuery.Client.exceptions.ThrottlingException" * "TimestreamQuery.Client.exceptions.ValidationException" * "TimestreamQuery.Client.exceptions.InvalidEndpointException"
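The "cancel_query" description above refers to a code sample. A minimal sketch of issuing a cancellation and handling its idempotent response follows; the "QueryId" value is a placeholder and would normally come from a "Query" call that is still running (for example, one started in another thread or process).

import boto3

client = boto3.client('timestream-query')

# Placeholder ID -- use the QueryId returned by a Query call that has not
# yet completed.
result = client.cancel_query(QueryId='example-query-id')

# Cancellation is idempotent: if a CancelQuery request for this QueryId was
# already issued, the response carries a CancellationMessage saying so.
message = result.get('CancellationMessage')
if message:
    print(message)
else:
    print('Cancellation request submitted.')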