IoTAnalytics ************ Client ====== class IoTAnalytics.Client A low-level client representing AWS IoT Analytics IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight. Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources. IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices. import boto3 client = boto3.client('iotanalytics') These are the available methods: * batch_put_message * can_paginate * cancel_pipeline_reprocessing * close * create_channel * create_dataset * create_dataset_content * create_datastore * create_pipeline * delete_channel * delete_dataset * delete_dataset_content * delete_datastore * delete_pipeline * describe_channel * describe_dataset * describe_datastore * describe_logging_options * describe_pipeline * get_dataset_content * get_paginator * get_waiter * list_channels * list_dataset_contents * list_datasets * list_datastores * list_pipelines * list_tags_for_resource * put_logging_options * run_pipeline_activity * sample_channel_data * start_pipeline_reprocessing * tag_resource * untag_resource * update_channel * update_dataset * update_datastore * update_pipeline Paginators ========== Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide. The available paginators are: * ListChannels * ListDatasetContents * ListDatasets * ListDatastores * ListPipelines IoTAnalytics / Paginator / ListChannels ListChannels ************ class IoTAnalytics.Paginator.ListChannels paginator = client.get_paginator('list_channels') paginate(**kwargs) Creates an iterator that will paginate through responses from "IoTAnalytics.Client.list_channels()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. 
If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'channelSummaries': [ { 'channelName': 'string', 'channelStorage': { 'serviceManagedS3': {}, 'customerManagedS3': { 'bucket': 'string', 'keyPrefix': 'string', 'roleArn': 'string' } }, 'status': 'CREATING'|'ACTIVE'|'DELETING', 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1), 'lastMessageArrivalTime': datetime(2015, 1, 1) }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **channelSummaries** *(list) --* A list of "ChannelSummary" objects. * *(dict) --* A summary of information about a channel. * **channelName** *(string) --* The name of the channel. * **channelStorage** *(dict) --* Where channel data is stored. * **serviceManagedS3** *(dict) --* Used to store channel data in an S3 bucket managed by IoT Analytics. * **customerManagedS3** *(dict) --* Used to store channel data in an S3 bucket that you manage. * **bucket** *(string) --* The name of the S3 bucket in which channel data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier within the bucket (each object in a bucket has exactly one key). The prefix must end with a forward slash (/). * **roleArn** *(string) --* The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources. * **status** *(string) --* The status of the channel. * **creationTime** *(datetime) --* When the channel was created. * **lastUpdateTime** *(datetime) --* The last time the channel was updated. * **lastMessageArrivalTime** *(datetime) --* The last time when a new message arrived in the channel. IoT Analytics updates this value at most once per minute for one channel. Hence, the "lastMessageArrivalTime" value is an approximation. This feature only applies to messages that arrived in the data store after October 23, 2020. * **NextToken** *(string) --* A token to resume pagination. IoTAnalytics / Paginator / ListDatastores ListDatastores ************** class IoTAnalytics.Paginator.ListDatastores paginator = client.get_paginator('list_datastores') paginate(**kwargs) Creates an iterator that will paginate through responses from "IoTAnalytics.Client.list_datastores()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.
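For example, a minimal usage sketch (assuming a configured "iotanalytics" client and at least one existing data store) that caps the total number of summaries returned across pages:

import boto3

client = boto3.client('iotanalytics')

# Iterate over data store summaries: at most 50 items, fetched in pages of 10.
paginator = client.get_paginator('list_datastores')
for page in paginator.paginate(PaginationConfig={'MaxItems': 50, 'PageSize': 10}):
    for summary in page['datastoreSummaries']:
        print(summary['datastoreName'], summary['status'])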
Return type: dict Returns: **Response Syntax** { 'datastoreSummaries': [ { 'datastoreName': 'string', 'datastoreStorage': { 'serviceManagedS3': {}, 'customerManagedS3': { 'bucket': 'string', 'keyPrefix': 'string', 'roleArn': 'string' }, 'iotSiteWiseMultiLayerStorage': { 'customerManagedS3Storage': { 'bucket': 'string', 'keyPrefix': 'string' } } }, 'status': 'CREATING'|'ACTIVE'|'DELETING', 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1), 'lastMessageArrivalTime': datetime(2015, 1, 1), 'fileFormatType': 'JSON'|'PARQUET', 'datastorePartitions': { 'partitions': [ { 'attributePartition': { 'attributeName': 'string' }, 'timestampPartition': { 'attributeName': 'string', 'timestampFormat': 'string' } }, ] } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **datastoreSummaries** *(list) --* A list of "DatastoreSummary" objects. * *(dict) --* A summary of information about a data store. * **datastoreName** *(string) --* The name of the data store. * **datastoreStorage** *(dict) --* Where data in a data store is stored. * **serviceManagedS3** *(dict) --* Used to store data in an Amazon S3 bucket managed by IoT Analytics. * **customerManagedS3** *(dict) --* Used to store data in an Amazon S3 bucket that you manage. * **bucket** *(string) --* The name of the Amazon S3 bucket where your data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **roleArn** *(string) --* The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources. * **iotSiteWiseMultiLayerStorage** *(dict) --* Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. * **customerManagedS3Storage** *(dict) --* Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. * **bucket** *(string) --* The name of the Amazon S3 bucket where your data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **status** *(string) --* The status of the data store. * **creationTime** *(datetime) --* When the data store was created. * **lastUpdateTime** *(datetime) --* The last time the data store was updated. * **lastMessageArrivalTime** *(datetime) --* The last time when a new message arrived in the data store. IoT Analytics updates this value at most once per minute for one data store. Hence, the "lastMessageArrivalTime" value is an approximation. This feature only applies to messages that arrived in the data store after October 23, 2020. * **fileFormatType** *(string) --* The file format of the data in the data store. * **datastorePartitions** *(dict) --* Contains information about the partition dimensions in a data store. * **partitions** *(list) --* A list of partition dimensions in a data store. * *(dict) --* A single dimension to partition a data store. The dimension must be an "AttributePartition" or a "TimestampPartition". * **attributePartition** *(dict) --* A partition dimension defined by an "attributeName".
* **attributeName** *(string) --* The name of the attribute that defines a partition dimension. * **timestampPartition** *(dict) --* A partition dimension defined by a timestamp attribute. * **attributeName** *(string) --* The attribute name of the partition defined by a timestamp. * **timestampFormat** *(string) --* The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time). * **NextToken** *(string) --* A token to resume pagination. IoTAnalytics / Paginator / ListPipelines ListPipelines ************* class IoTAnalytics.Paginator.ListPipelines paginator = client.get_paginator('list_pipelines') paginate(**kwargs) Creates an iterator that will paginate through responses from "IoTAnalytics.Client.list_pipelines()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'pipelineSummaries': [ { 'pipelineName': 'string', 'reprocessingSummaries': [ { 'id': 'string', 'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED', 'creationTime': datetime(2015, 1, 1) }, ], 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1) }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **pipelineSummaries** *(list) --* A list of "PipelineSummary" objects. * *(dict) --* A summary of information about a pipeline. * **pipelineName** *(string) --* The name of the pipeline. * **reprocessingSummaries** *(list) --* A summary of information about the pipeline reprocessing. * *(dict) --* Information about pipeline reprocessing. * **id** *(string) --* The "reprocessingId" returned by "StartPipelineReprocessing". * **status** *(string) --* The status of the pipeline reprocessing. * **creationTime** *(datetime) --* The time the pipeline reprocessing was created. * **creationTime** *(datetime) --* When the pipeline was created. * **lastUpdateTime** *(datetime) --* When the pipeline was last updated. * **NextToken** *(string) --* A token to resume pagination. IoTAnalytics / Paginator / ListDatasets ListDatasets ************ class IoTAnalytics.Paginator.ListDatasets paginator = client.get_paginator('list_datasets') paginate(**kwargs) Creates an iterator that will paginate through responses from "IoTAnalytics.Client.list_datasets()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page.
* **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'datasetSummaries': [ { 'datasetName': 'string', 'status': 'CREATING'|'ACTIVE'|'DELETING', 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1), 'triggers': [ { 'schedule': { 'expression': 'string' }, 'dataset': { 'name': 'string' } }, ], 'actions': [ { 'actionName': 'string', 'actionType': 'QUERY'|'CONTAINER' }, ] }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **datasetSummaries** *(list) --* A list of "DatasetSummary" objects. * *(dict) --* A summary of information about a dataset. * **datasetName** *(string) --* The name of the dataset. * **status** *(string) --* The status of the dataset. * **creationTime** *(datetime) --* The time the dataset was created. * **lastUpdateTime** *(datetime) --* The last time the dataset was updated. * **triggers** *(list) --* A list of triggers. A trigger causes dataset content to be populated at a specified time interval or when another dataset is populated. The list of triggers can be empty or contain up to five "DataSetTrigger" objects. * *(dict) --* The "DatasetTrigger" that specifies when the dataset is automatically updated. * **schedule** *(dict) --* The Schedule when the trigger is initiated. * **expression** *(string) --* The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the *Amazon CloudWatch Events User Guide*. * **dataset** *(dict) --* The dataset whose content creation triggers the creation of this dataset's contents. * **name** *(string) --* The name of the dataset whose content generation triggers the new dataset content generation. * **actions** *(list) --* A list of "DataActionSummary" objects. * *(dict) --* Information about the action that automatically creates the dataset's contents. * **actionName** *(string) --* The name of the action that automatically creates the dataset's contents. * **actionType** *(string) --* The type of action by which the dataset's contents are automatically created. * **NextToken** *(string) --* A token to resume pagination. IoTAnalytics / Paginator / ListDatasetContents ListDatasetContents ******************* class IoTAnalytics.Paginator.ListDatasetContents paginator = client.get_paginator('list_dataset_contents') paginate(**kwargs) Creates an iterator that will paginate through responses from "IoTAnalytics.Client.list_dataset_contents()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( datasetName='string', scheduledOnOrAfter=datetime(2015, 1, 1), scheduledBefore=datetime(2015, 1, 1), PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **datasetName** (*string*) -- **[REQUIRED]** The name of the dataset whose contents information you want to list. * **scheduledOnOrAfter** (*datetime*) -- A filter to limit results to those dataset contents whose creation is scheduled on or after the given time. See the field "triggers.schedule" in the "CreateDataset" request. (timestamp) * **scheduledBefore** (*datetime*) -- A filter to limit results to those dataset contents whose creation is scheduled before the given time. See the field "triggers.schedule" in the "CreateDataset" request. (timestamp) * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.
* **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'datasetContentSummaries': [ { 'version': 'string', 'status': { 'state': 'CREATING'|'SUCCEEDED'|'FAILED', 'reason': 'string' }, 'creationTime': datetime(2015, 1, 1), 'scheduleTime': datetime(2015, 1, 1), 'completionTime': datetime(2015, 1, 1) }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **datasetContentSummaries** *(list) --* Summary information about dataset contents that have been created. * *(dict) --* Summary information about dataset contents. * **version** *(string) --* The version of the dataset contents. * **status** *(dict) --* The status of the dataset contents. * **state** *(string) --* The state of the dataset contents. Can be one of READY, CREATING, SUCCEEDED, or FAILED. * **reason** *(string) --* The reason the dataset contents are in this state. * **creationTime** *(datetime) --* The actual time the creation of the dataset contents was started. * **scheduleTime** *(datetime) --* The time the creation of the dataset contents was scheduled to start. * **completionTime** *(datetime) --* The time the dataset content status was updated to SUCCEEDED or FAILED. * **NextToken** *(string) --* A token to resume pagination. IoTAnalytics / Client / get_paginator get_paginator ************* IoTAnalytics.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. IoTAnalytics / Client / list_channels list_channels ************* IoTAnalytics.Client.list_channels(**kwargs) Retrieves a list of channels. See also: AWS API Documentation **Request Syntax** response = client.list_channels( nextToken='string', maxResults=123 ) Parameters: * **nextToken** (*string*) -- The token for the next set of results. * **maxResults** (*integer*) -- The maximum number of results to return in this request. The default value is 100. Return type: dict Returns: **Response Syntax** { 'channelSummaries': [ { 'channelName': 'string', 'channelStorage': { 'serviceManagedS3': {}, 'customerManagedS3': { 'bucket': 'string', 'keyPrefix': 'string', 'roleArn': 'string' } }, 'status': 'CREATING'|'ACTIVE'|'DELETING', 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1), 'lastMessageArrivalTime': datetime(2015, 1, 1) }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **channelSummaries** *(list) --* A list of "ChannelSummary" objects. * *(dict) --* A summary of information about a channel. * **channelName** *(string) --* The name of the channel. * **channelStorage** *(dict) --* Where channel data is stored. 
* **serviceManagedS3** *(dict) --* Used to store channel data in an S3 bucket managed by IoT Analytics. * **customerManagedS3** *(dict) --* Used to store channel data in an S3 bucket that you manage. * **bucket** *(string) --* The name of the S3 bucket in which channel data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier within the bucket (each object in a bucket has exactly one key). The prefix must end with a forward slash (/). * **roleArn** *(string) --* The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources. * **status** *(string) --* The status of the channel. * **creationTime** *(datetime) --* When the channel was created. * **lastUpdateTime** *(datetime) --* The last time the channel was updated. * **lastMessageArrivalTime** *(datetime) --* The last time when a new message arrived in the channel. IoT Analytics updates this value at most once per minute for one channel. Hence, the "lastMessageArrivalTime" value is an approximation. This feature only applies to messages that arrived in the data store after October 23, 2020. * **nextToken** *(string) --* The token to retrieve the next set of results, or "null" if there are no more results. **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / can_paginate can_paginate ************ IoTAnalytics.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. IoTAnalytics / Client / create_dataset create_dataset ************** IoTAnalytics.Client.create_dataset(**kwargs) Used to create a dataset. A dataset stores data retrieved from a data store by applying a "queryAction" (a SQL query) or a "containerAction" (executing a containerized application). This operation creates the skeleton of a dataset. The dataset can be populated manually by calling "CreateDatasetContent" or automatically according to a trigger you specify. 
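For example, a minimal sketch that creates a dataset from a SQL query and refreshes it daily; the dataset and data store names ('my_dataset', 'my_datastore') are hypothetical:

import boto3

client = boto3.client('iotanalytics')

# Create a dataset populated by a SQL query over a (hypothetical) data
# store named 'my_datastore', refreshed once per day by a schedule trigger.
response = client.create_dataset(
    datasetName='my_dataset',
    actions=[
        {
            'actionName': 'daily_query',
            'queryAction': {'sqlQuery': 'SELECT * FROM my_datastore'}
        },
    ],
    triggers=[
        {'schedule': {'expression': 'rate(1 day)'}},
    ]
)
print(response['datasetArn'])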
See also: AWS API Documentation **Request Syntax** response = client.create_dataset( datasetName='string', actions=[ { 'actionName': 'string', 'queryAction': { 'sqlQuery': 'string', 'filters': [ { 'deltaTime': { 'offsetSeconds': 123, 'timeExpression': 'string' } }, ] }, 'containerAction': { 'image': 'string', 'executionRoleArn': 'string', 'resourceConfiguration': { 'computeType': 'ACU_1'|'ACU_2', 'volumeSizeInGB': 123 }, 'variables': [ { 'name': 'string', 'stringValue': 'string', 'doubleValue': 123.0, 'datasetContentVersionValue': { 'datasetName': 'string' }, 'outputFileUriValue': { 'fileName': 'string' } }, ] } }, ], triggers=[ { 'schedule': { 'expression': 'string' }, 'dataset': { 'name': 'string' } }, ], contentDeliveryRules=[ { 'entryName': 'string', 'destination': { 'iotEventsDestinationConfiguration': { 'inputName': 'string', 'roleArn': 'string' }, 's3DestinationConfiguration': { 'bucket': 'string', 'key': 'string', 'glueConfiguration': { 'tableName': 'string', 'databaseName': 'string' }, 'roleArn': 'string' } } }, ], retentionPeriod={ 'unlimited': True|False, 'numberOfDays': 123 }, versioningConfiguration={ 'unlimited': True|False, 'maxVersions': 123 }, tags=[ { 'key': 'string', 'value': 'string' }, ], lateDataRules=[ { 'ruleName': 'string', 'ruleConfiguration': { 'deltaTimeSessionWindowConfiguration': { 'timeoutInMinutes': 123 } } }, ] ) Parameters: * **datasetName** (*string*) -- **[REQUIRED]** The name of the dataset. * **actions** (*list*) -- **[REQUIRED]** A list of actions that create the dataset contents. * *(dict) --* A "DatasetAction" object that specifies how dataset contents are automatically created. * **actionName** *(string) --* The name of the dataset action by which dataset contents are automatically created. * **queryAction** *(dict) --* An "SqlQueryDatasetAction" object that uses an SQL query to automatically create dataset contents. * **sqlQuery** *(string) --* **[REQUIRED]** A SQL query string. * **filters** *(list) --* Prefilters applied to message data. * *(dict) --* Information that is used to filter message data, to segregate it according to the timeframe in which it arrives. * **deltaTime** *(dict) --* Used to limit data to that which has arrived since the last execution of the action. * **offsetSeconds** *(integer) --* **[REQUIRED]** The number of seconds of estimated in-flight lag time of message data. When you create dataset contents using message data from a specified timeframe, some message data might still be in flight when processing begins, and so do not arrive in time to be processed. Use this field to make allowances for the in flight time of your message data, so that data not processed from a previous timeframe is included with the next timeframe. Otherwise, missed message data would be excluded from processing during the next timeframe too, because its timestamp places it within the previous timeframe. * **timeExpression** *(string) --* **[REQUIRED]** An expression by which the time of the message data might be determined. This can be the name of a timestamp field or a SQL expression that is used to derive the time the message data was generated. * **containerAction** *(dict) --* Information that allows the system to run a containerized application to create the dataset contents. The application must be in a Docker container along with any required support libraries. * **image** *(string) --* **[REQUIRED]** The ARN of the Docker container stored in your account. 
The Docker container contains an application and required support libraries and is used to generate dataset contents. * **executionRoleArn** *(string) --* **[REQUIRED]** The ARN of the role that gives permission to the system to access required resources to run the "containerAction". This includes, at minimum, permission to retrieve the dataset contents that are the input to the containerized application. * **resourceConfiguration** *(dict) --* **[REQUIRED]** Configuration of the resource that executes the "containerAction". * **computeType** *(string) --* **[REQUIRED]** The type of the compute resource used to execute the "containerAction". Possible values are: "ACU_1" (vCPU=4, memory=16 GiB) or "ACU_2" (vCPU=8, memory=32 GiB). * **volumeSizeInGB** *(integer) --* **[REQUIRED]** The size, in GB, of the persistent storage available to the resource instance used to execute the "containerAction" (min: 1, max: 50). * **variables** *(list) --* The values of variables used in the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue". * *(dict) --* An instance of a variable to be passed to the "containerAction" execution. Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue". * **name** *(string) --* **[REQUIRED]** The name of the variable. * **stringValue** *(string) --* The value of the variable as a string. * **doubleValue** *(float) --* The value of the variable as a double (numeric). * **datasetContentVersionValue** *(dict) --* The value of the variable as a structure that specifies a dataset content version. * **datasetName** *(string) --* **[REQUIRED]** The name of the dataset whose latest contents are used as input to the notebook or application. * **outputFileUriValue** *(dict) --* The value of the variable as a structure that specifies an output file URI. * **fileName** *(string) --* **[REQUIRED]** The URI of the location where dataset contents are stored, usually the URI of a file in an S3 bucket. * **triggers** (*list*) -- A list of triggers. A trigger causes dataset contents to be populated at a specified time interval or when another dataset's contents are created. The list of triggers can be empty or contain up to five "DataSetTrigger" objects. * *(dict) --* The "DatasetTrigger" that specifies when the dataset is automatically updated. * **schedule** *(dict) --* The Schedule when the trigger is initiated. * **expression** *(string) --* The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the *Amazon CloudWatch Events User Guide*. * **dataset** *(dict) --* The dataset whose content creation triggers the creation of this dataset's contents. * **name** *(string) --* **[REQUIRED]** The name of the dataset whose content generation triggers the new dataset content generation. * **contentDeliveryRules** (*list*) -- When dataset contents are created, they are delivered to destinations specified here. * *(dict) --* When dataset contents are created, they are delivered to destinations specified here. * **entryName** *(string) --* The name of the dataset content delivery rules entry. * **destination** *(dict) --* **[REQUIRED]** The destination to which dataset contents are delivered.
* **iotEventsDestinationConfiguration** *(dict) --* Configuration information for delivery of dataset contents to IoT Events. * **inputName** *(string) --* **[REQUIRED]** The name of the IoT Events input to which dataset contents are delivered. * **roleArn** *(string) --* **[REQUIRED]** The ARN of the role that grants IoT Analytics permission to deliver dataset contents to an IoT Events input. * **s3DestinationConfiguration** *(dict) --* Configuration information for delivery of dataset contents to Amazon S3. * **bucket** *(string) --* **[REQUIRED]** The name of the S3 bucket to which dataset contents are delivered. * **key** *(string) --* **[REQUIRED]** The key of the dataset contents object in an S3 bucket. Each object has a key that is a unique identifier. Each object has exactly one key. You can create a unique key with the following options: * Use "!{iotanalytics:scheduleTime}" to insert the time of a scheduled SQL query run. * Use "!{iotanalytics:versionId}" to insert a unique hash that identifies a dataset content. * Use "!{iotanalytics:creationTime}" to insert the creation time of a dataset content. The following example creates a unique key for a CSV file: "dataset/mydataset/!{iotanalytics:scheduleTime} /!{iotanalytics:versionId}.csv" Note: If you don't use "!{iotanalytics:versionId}" to specify the key, you might get duplicate keys. For example, you might have two dataset contents with the same "scheduleTime" but different "versionId" values. This means that one dataset content overwrites the other. IoTAnalytics / Client / batch_put_message batch_put_message ***************** IoTAnalytics.Client.batch_put_message(**kwargs) Sends messages to a channel. See also: AWS API Documentation **Request Syntax** response = client.batch_put_message( channelName='string', messages=[ { 'messageId': 'string', 'payload': b'bytes' }, ] ) Parameters: * **channelName** (*string*) -- **[REQUIRED]** The name of the channel where the messages are sent. * **messages** (*list*) -- **[REQUIRED]** The list of messages to be sent. Each message has the format: { "messageId": "string", "payload": "string"}. The field names of message payloads (data) that you send to IoT Analytics: * Must contain only alphanumeric characters and underscores (_). No other special characters are allowed. * Must begin with an alphabetic character or single underscore (_). * Cannot contain hyphens (-). * In regular expression terms: "^[A-Za-z_]([A-Za-z0-9]*|[A-Za-z0-9][A-Za-z0-9_]*)$". * Cannot be more than 255 characters. * Are case insensitive. (Fields named foo and FOO in the same payload are considered duplicates.) For example, {"temp_01": 29} or {"_temp_01": 29} are valid, but {"temp-01": 29}, {"01_temp": 29} or {"__temp_01": 29} are invalid in message payloads. * *(dict) --* Information about a message. * **messageId** *(string) --* **[REQUIRED]** The ID you want to assign to the message. Each "messageId" must be unique within each batch sent. * **payload** *(bytes) --* **[REQUIRED]** The payload of the message. This can be a JSON string or a base64-encoded string representing binary data, in which case you must decode it by means of a pipeline activity. Return type: dict Returns: **Response Syntax** { 'batchPutMessageErrorEntries': [ { 'messageId': 'string', 'errorCode': 'string', 'errorMessage': 'string' }, ] } **Response Structure** * *(dict) --* * **batchPutMessageErrorEntries** *(list) --* A list of any errors encountered when sending the messages to the channel. * *(dict) --* Contains information about errors. * **messageId** *(string) --* The ID of the message that caused the error. See the value corresponding to the "messageId" key in the message object. * **errorCode** *(string) --* The code associated with the error. * **errorMessage** *(string) --* The message associated with the error. **Exceptions** * "IoTAnalytics.Client.exceptions.ResourceNotFoundException" * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / get_waiter get_waiter ********** IoTAnalytics.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object.
Return type: "botocore.waiter.Waiter" IoTAnalytics / Client / sample_channel_data sample_channel_data ******************* IoTAnalytics.Client.sample_channel_data(**kwargs) Retrieves a sample of messages from the specified channel ingested during the specified timeframe. Up to 10 messages can be retrieved. See also: AWS API Documentation **Request Syntax** response = client.sample_channel_data( channelName='string', maxMessages=123, startTime=datetime(2015, 1, 1), endTime=datetime(2015, 1, 1) ) Parameters: * **channelName** (*string*) -- **[REQUIRED]** The name of the channel whose message samples are retrieved. * **maxMessages** (*integer*) -- The number of sample messages to be retrieved. The limit is 10. The default is also 10. * **startTime** (*datetime*) -- The start of the time window from which sample messages are retrieved. * **endTime** (*datetime*) -- The end of the time window from which sample messages are retrieved. Return type: dict Returns: **Response Syntax** { 'payloads': [ b'bytes', ] } **Response Structure** * *(dict) --* * **payloads** *(list) --* The list of message samples. Each sample message is returned as a base64-encoded string. * *(bytes) --* **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.ResourceNotFoundException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / delete_pipeline delete_pipeline *************** IoTAnalytics.Client.delete_pipeline(**kwargs) Deletes the specified pipeline. See also: AWS API Documentation **Request Syntax** response = client.delete_pipeline( pipelineName='string' ) Parameters: **pipelineName** (*string*) -- **[REQUIRED]** The name of the pipeline to delete. Returns: None **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.ResourceNotFoundException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / list_dataset_contents list_dataset_contents ********************* IoTAnalytics.Client.list_dataset_contents(**kwargs) Lists information about dataset contents that have been created. See also: AWS API Documentation **Request Syntax** response = client.list_dataset_contents( datasetName='string', nextToken='string', maxResults=123, scheduledOnOrAfter=datetime(2015, 1, 1), scheduledBefore=datetime(2015, 1, 1) ) Parameters: * **datasetName** (*string*) -- **[REQUIRED]** The name of the dataset whose contents information you want to list. * **nextToken** (*string*) -- The token for the next set of results. * **maxResults** (*integer*) -- The maximum number of results to return in this request. * **scheduledOnOrAfter** (*datetime*) -- A filter to limit results to those dataset contents whose creation is scheduled on or after the given time. See the field "triggers.schedule" in the "CreateDataset" request. (timestamp) * **scheduledBefore** (*datetime*) -- A filter to limit results to those dataset contents whose creation is scheduled before the given time. See the field "triggers.schedule" in the "CreateDataset" request. 
(timestamp) Return type: dict Returns: **Response Syntax** { 'datasetContentSummaries': [ { 'version': 'string', 'status': { 'state': 'CREATING'|'SUCCEEDED'|'FAILED', 'reason': 'string' }, 'creationTime': datetime(2015, 1, 1), 'scheduleTime': datetime(2015, 1, 1), 'completionTime': datetime(2015, 1, 1) }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datasetContentSummaries** *(list) --* Summary information about dataset contents that have been created. * *(dict) --* Summary information about dataset contents. * **version** *(string) --* The version of the dataset contents. * **status** *(dict) --* The status of the dataset contents. * **state** *(string) --* The state of the dataset contents. Can be one of READY, CREATING, SUCCEEDED, or FAILED. * **reason** *(string) --* The reason the dataset contents are in this state. * **creationTime** *(datetime) --* The actual time the creation of the dataset contents was started. * **scheduleTime** *(datetime) --* The time the creation of the dataset contents was scheduled to start. * **completionTime** *(datetime) --* The time the dataset content status was updated to SUCCEEDED or FAILED. * **nextToken** *(string) --* The token to retrieve the next set of results, or "null" if there are no more results. **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" * "IoTAnalytics.Client.exceptions.ResourceNotFoundException" IoTAnalytics / Client / delete_dataset_content delete_dataset_content ********************** IoTAnalytics.Client.delete_dataset_content(**kwargs) Deletes the content of the specified dataset. See also: AWS API Documentation **Request Syntax** response = client.delete_dataset_content( datasetName='string', versionId='string' ) Parameters: * **datasetName** (*string*) -- **[REQUIRED]** The name of the dataset whose content is deleted. * **versionId** (*string*) -- The version of the dataset whose content is deleted. You can also use the strings "$LATEST" or "$LATEST_SUCCEEDED" to delete the latest or latest successfully completed data set. If not specified, "$LATEST_SUCCEEDED" is the default. Returns: None **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.ResourceNotFoundException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / create_channel create_channel ************** IoTAnalytics.Client.create_channel(**kwargs) Used to create a channel. A channel collects data from an MQTT topic and archives the raw, unprocessed messages before publishing the data to a pipeline. See also: AWS API Documentation **Request Syntax** response = client.create_channel( channelName='string', channelStorage={ 'serviceManagedS3': {} , 'customerManagedS3': { 'bucket': 'string', 'keyPrefix': 'string', 'roleArn': 'string' } }, retentionPeriod={ 'unlimited': True|False, 'numberOfDays': 123 }, tags=[ { 'key': 'string', 'value': 'string' }, ] ) Parameters: * **channelName** (*string*) -- **[REQUIRED]** The name of the channel. * **channelStorage** (*dict*) -- Where channel data is stored. You can choose one of "serviceManagedS3" or "customerManagedS3" storage. If not specified, the default is "serviceManagedS3". 
You can't change this storage option after the channel is created. * **serviceManagedS3** *(dict) --* Used to store channel data in an S3 bucket managed by IoT Analytics. You can't change the choice of S3 storage after the channel is created. * **customerManagedS3** *(dict) --* Used to store channel data in an S3 bucket that you manage. If customer managed storage is selected, the "retentionPeriod" parameter is ignored. You can't change the choice of S3 storage after the channel is created. * **bucket** *(string) --* **[REQUIRED]** The name of the S3 bucket in which channel data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **roleArn** *(string) --* **[REQUIRED]** The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources. * **retentionPeriod** (*dict*) -- How long, in days, message data is kept for the channel. When "customerManagedS3" storage is selected, this parameter is ignored. * **unlimited** *(boolean) --* If true, message data is kept indefinitely. * **numberOfDays** *(integer) --* The number of days that message data is kept. The "unlimited" parameter must be false. * **tags** (*list*) -- Metadata which can be used to manage the channel. * *(dict) --* A set of key-value pairs that are used to manage the resource. * **key** *(string) --* **[REQUIRED]** The tag's key. * **value** *(string) --* **[REQUIRED]** The tag's value. Return type: dict Returns: **Response Syntax** { 'channelName': 'string', 'channelArn': 'string', 'retentionPeriod': { 'unlimited': True|False, 'numberOfDays': 123 } } **Response Structure** * *(dict) --* * **channelName** *(string) --* The name of the channel. * **channelArn** *(string) --* The ARN of the channel. * **retentionPeriod** *(dict) --* How long, in days, message data is kept for the channel. * **unlimited** *(boolean) --* If true, message data is kept indefinitely. * **numberOfDays** *(integer) --* The number of days that message data is kept. The "unlimited" parameter must be false. **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.ResourceAlreadyExistsException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" * "IoTAnalytics.Client.exceptions.LimitExceededException" IoTAnalytics / Client / delete_channel delete_channel ************** IoTAnalytics.Client.delete_channel(**kwargs) Deletes the specified channel. See also: AWS API Documentation **Request Syntax** response = client.delete_channel( channelName='string' ) Parameters: **channelName** (*string*) -- **[REQUIRED]** The name of the channel to delete. Returns: None **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.ResourceNotFoundException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / list_datasets list_datasets ************* IoTAnalytics.Client.list_datasets(**kwargs) Retrieves information about datasets.
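For example, a short sketch that walks every page manually by following "nextToken" (the "ListDatasets" paginator shown earlier automates the same loop):

import boto3

client = boto3.client('iotanalytics')

# Page through all datasets, following nextToken until it is absent.
kwargs = {'maxResults': 100}
while True:
    page = client.list_datasets(**kwargs)
    for summary in page['datasetSummaries']:
        print(summary['datasetName'], summary['status'])
    if 'nextToken' not in page:
        break
    kwargs['nextToken'] = page['nextToken']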
See also: AWS API Documentation **Request Syntax** response = client.list_datasets( nextToken='string', maxResults=123 ) Parameters: * **nextToken** (*string*) -- The token for the next set of results. * **maxResults** (*integer*) -- The maximum number of results to return in this request. The default value is 100. Return type: dict Returns: **Response Syntax** { 'datasetSummaries': [ { 'datasetName': 'string', 'status': 'CREATING'|'ACTIVE'|'DELETING', 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1), 'triggers': [ { 'schedule': { 'expression': 'string' }, 'dataset': { 'name': 'string' } }, ], 'actions': [ { 'actionName': 'string', 'actionType': 'QUERY'|'CONTAINER' }, ] }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datasetSummaries** *(list) --* A list of "DatasetSummary" objects. * *(dict) --* A summary of information about a dataset. * **datasetName** *(string) --* The name of the dataset. * **status** *(string) --* The status of the dataset. * **creationTime** *(datetime) --* The time the dataset was created. * **lastUpdateTime** *(datetime) --* The last time the dataset was updated. * **triggers** *(list) --* A list of triggers. A trigger causes dataset content to be populated at a specified time interval or when another dataset is populated. The list of triggers can be empty or contain up to five "DataSetTrigger" objects. * *(dict) --* The "DatasetTrigger" that specifies when the dataset is automatically updated. * **schedule** *(dict) --* The Schedule when the trigger is initiated. * **expression** *(string) --* The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the *Amazon CloudWatch Events User Guide*. * **dataset** *(dict) --* The dataset whose content creation triggers the creation of this dataset's contents. * **name** *(string) --* The name of the dataset whose content generation triggers the new dataset content generation. * **actions** *(list) --* A list of "DataActionSummary" objects. * *(dict) --* Information about the action that automatically creates the dataset's contents. * **actionName** *(string) --* The name of the action that automatically creates the dataset's contents. * **actionType** *(string) --* The type of action by which the dataset's contents are automatically created. * **nextToken** *(string) --* The token to retrieve the next set of results, or "null" if there are no more results. **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / list_datastores list_datastores *************** IoTAnalytics.Client.list_datastores(**kwargs) Retrieves a list of data stores. See also: AWS API Documentation **Request Syntax** response = client.list_datastores( nextToken='string', maxResults=123 ) Parameters: * **nextToken** (*string*) -- The token for the next set of results. * **maxResults** (*integer*) -- The maximum number of results to return in this request. The default value is 100.
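As a quick sketch, a single non-paginated call that reports which storage layer each data store uses (assuming a few data stores exist):

import boto3

client = boto3.client('iotanalytics')

# One page of up to 25 data stores; datastoreStorage typically contains
# exactly one of serviceManagedS3, customerManagedS3, or
# iotSiteWiseMultiLayerStorage.
resp = client.list_datastores(maxResults=25)
for ds in resp['datastoreSummaries']:
    storage_type = next(iter(ds['datastoreStorage']))
    print(ds['datastoreName'], storage_type, ds.get('fileFormatType'))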
Return type: dict Returns: **Response Syntax** { 'datastoreSummaries': [ { 'datastoreName': 'string', 'datastoreStorage': { 'serviceManagedS3': {}, 'customerManagedS3': { 'bucket': 'string', 'keyPrefix': 'string', 'roleArn': 'string' }, 'iotSiteWiseMultiLayerStorage': { 'customerManagedS3Storage': { 'bucket': 'string', 'keyPrefix': 'string' } } }, 'status': 'CREATING'|'ACTIVE'|'DELETING', 'creationTime': datetime(2015, 1, 1), 'lastUpdateTime': datetime(2015, 1, 1), 'lastMessageArrivalTime': datetime(2015, 1, 1), 'fileFormatType': 'JSON'|'PARQUET', 'datastorePartitions': { 'partitions': [ { 'attributePartition': { 'attributeName': 'string' }, 'timestampPartition': { 'attributeName': 'string', 'timestampFormat': 'string' } }, ] } }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datastoreSummaries** *(list) --* A list of "DatastoreSummary" objects. * *(dict) --* A summary of information about a data store. * **datastoreName** *(string) --* The name of the data store. * **datastoreStorage** *(dict) --* Where data in a data store is stored. * **serviceManagedS3** *(dict) --* Used to store data in an Amazon S3 bucket managed by IoT Analytics. * **customerManagedS3** *(dict) --* Used to store data in an Amazon S3 bucket that you manage. * **bucket** *(string) --* The name of the Amazon S3 bucket where your data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **roleArn** *(string) --* The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources. * **iotSiteWiseMultiLayerStorage** *(dict) --* Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. * **customerManagedS3Storage** *(dict) --* Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. * **bucket** *(string) --* The name of the Amazon S3 bucket where your data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **status** *(string) --* The status of the data store. * **creationTime** *(datetime) --* When the data store was created. * **lastUpdateTime** *(datetime) --* The last time the data store was updated. * **lastMessageArrivalTime** *(datetime) --* The last time when a new message arrived in the data store. IoT Analytics updates this value at most once per minute for one data store. Hence, the "lastMessageArrivalTime" value is an approximation. This feature only applies to messages that arrived in the data store after October 23, 2020. * **fileFormatType** *(string) --* The file format of the data in the data store. * **datastorePartitions** *(dict) --* Contains information about the partition dimensions in a data store. * **partitions** *(list) --* A list of partition dimensions in a data store. * *(dict) --* A single dimension to partition a data store. The dimension must be an "AttributePartition" or a "TimestampPartition". * **attributePartition** *(dict) --* A partition dimension defined by an "attributeName".
* **attributeName** *(string) --* The name of the attribute that defines a partition dimension. * **timestampPartition** *(dict) --* A partition dimension defined by a timestamp attribute. * **attributeName** *(string) --* The attribute name of the partition defined by a timestamp. * **timestampFormat** *(string) --* The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time). * **nextToken** *(string) --* The token to retrieve the next set of results, or "null" if there are no more results. **Exceptions** * "IoTAnalytics.Client.exceptions.InvalidRequestException" * "IoTAnalytics.Client.exceptions.InternalFailureException" * "IoTAnalytics.Client.exceptions.ServiceUnavailableException" * "IoTAnalytics.Client.exceptions.ThrottlingException" IoTAnalytics / Client / close close ***** IoTAnalytics.Client.close() Closes underlying endpoint connections. IoTAnalytics / Client / create_datastore create_datastore **************** IoTAnalytics.Client.create_datastore(**kwargs) Creates a data store, which is a repository for messages. See also: AWS API Documentation **Request Syntax** response = client.create_datastore( datastoreName='string', datastoreStorage={ 'serviceManagedS3': {} , 'customerManagedS3': { 'bucket': 'string', 'keyPrefix': 'string', 'roleArn': 'string' }, 'iotSiteWiseMultiLayerStorage': { 'customerManagedS3Storage': { 'bucket': 'string', 'keyPrefix': 'string' } } }, retentionPeriod={ 'unlimited': True|False, 'numberOfDays': 123 }, tags=[ { 'key': 'string', 'value': 'string' }, ], fileFormatConfiguration={ 'jsonConfiguration': {} , 'parquetConfiguration': { 'schemaDefinition': { 'columns': [ { 'name': 'string', 'type': 'string' }, ] } } }, datastorePartitions={ 'partitions': [ { 'attributePartition': { 'attributeName': 'string' }, 'timestampPartition': { 'attributeName': 'string', 'timestampFormat': 'string' } }, ] } ) Parameters: * **datastoreName** (*string*) -- **[REQUIRED]** The name of the data store. * **datastoreStorage** (*dict*) -- Where data in a data store is stored. You can choose "serviceManagedS3" storage, "customerManagedS3" storage, or "iotSiteWiseMultiLayerStorage" storage. The default is "serviceManagedS3". You can't change the choice of Amazon S3 storage after your data store is created. * **serviceManagedS3** *(dict) --* Used to store data in an Amazon S3 bucket managed by IoT Analytics. You can't change the choice of Amazon S3 storage after your data store is created. * **customerManagedS3** *(dict) --* Used to store data in an Amazon S3 bucket that you manage. When you choose customer-managed storage, the "retentionPeriod" parameter is ignored. You can't change the choice of Amazon S3 storage after your data store is created. * **bucket** *(string) --* **[REQUIRED]** The name of the Amazon S3 bucket where your data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **roleArn** *(string) --* **[REQUIRED]** The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources. * **iotSiteWiseMultiLayerStorage** *(dict) --* Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. You can't change the choice of Amazon S3 storage after your data store is created.
* **customerManagedS3Storage** *(dict) --* **[REQUIRED]** Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. * **bucket** *(string) --* **[REQUIRED]** The name of the Amazon S3 bucket where your data is stored. * **keyPrefix** *(string) --* (Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/). * **retentionPeriod** (*dict*) -- How long, in days, message data is kept for the data store. When "customerManagedS3" storage is selected, this parameter is ignored. * **unlimited** *(boolean) --* If true, message data is kept indefinitely. * **numberOfDays** *(integer) --* The number of days that message data is kept. The "unlimited" parameter must be false. * **tags** (*list*) -- Metadata which can be used to manage the data store. * *(dict) --* A set of key-value pairs that are used to manage the resource. * **key** *(string) --* **[REQUIRED]** The tag's key. * **value** *(string) --* **[REQUIRED]** The tag's value. * **fileFormatConfiguration** (*dict*) -- Contains the configuration information of file formats. IoT Analytics data stores support JSON and Parquet. The default file format is JSON. You can specify only one format. You can't change the file format after you create the data store. * **jsonConfiguration** *(dict) --* Contains the configuration information of the JSON format. * **parquetConfiguration** *(dict) --* Contains the configuration information of the Parquet format. * **schemaDefinition** *(dict) --* Information needed to define a schema. * **columns** *(list) --* Specifies one or more columns that store your data. Each schema can have up to 100 columns. Each column can have up to 100 nested types. * *(dict) --* Contains information about a column that stores your data. * **name** *(string) --* **[REQUIRED]** The name of the column. * **type** *(string) --* **[REQUIRED]** The type of data. For more information about the supported data types, see Common data types in the *Glue Developer Guide*. * **datastorePartitions** (*dict*) -- Contains information about the partition dimensions in a data store. * **partitions** *(list) --* A list of partition dimensions in a data store. * *(dict) --* A single dimension to partition a data store. The dimension must be an "AttributePartition" or a "TimestampPartition". * **attributePartition** *(dict) --* A partition dimension defined by an "attributeName". * **attributeName** *(string) --* **[REQUIRED]** The name of the attribute that defines a partition dimension. * **timestampPartition** *(dict) --* A partition dimension defined by a timestamp attribute. * **attributeName** *(string) --* **[REQUIRED]** The attribute name of the partition defined by a timestamp. * **timestampFormat** *(string) --* The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time). Return type: dict Returns: **Response Syntax** { 'datastoreName': 'string', 'datastoreArn': 'string', 'retentionPeriod': { 'unlimited': True|False, 'numberOfDays': 123 } } **Response Structure** * *(dict) --* * **datastoreName** *(string) --* The name of the data store. * **datastoreArn** *(string) --* The ARN of the data store. * **retentionPeriod** *(dict) --* How long, in days, message data is kept for the data store. 
IoTAnalytics / Client / delete_datastore

delete_datastore
****************

IoTAnalytics.Client.delete_datastore(**kwargs)

Deletes the specified data store.

See also: AWS API Documentation

**Request Syntax**

response = client.delete_datastore(
    datastoreName='string'
)

Parameters: **datastoreName** (*string*) -- **[REQUIRED]** The name of the data store to delete.

Returns: None

**Exceptions**

* "IoTAnalytics.Client.exceptions.InvalidRequestException"
* "IoTAnalytics.Client.exceptions.ResourceNotFoundException"
* "IoTAnalytics.Client.exceptions.InternalFailureException"
* "IoTAnalytics.Client.exceptions.ServiceUnavailableException"
* "IoTAnalytics.Client.exceptions.ThrottlingException"
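Since the call raises "ResourceNotFoundException" for a missing data store, cleanup code that should be idempotent can swallow that one error, as in this sketch (the name is a placeholder):

import boto3

client = boto3.client('iotanalytics')

# Delete if present; treat an already-deleted data store as success.
try:
    client.delete_datastore(datastoreName='example_datastore')
except client.exceptions.ResourceNotFoundException:
    pass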
IoTAnalytics / Client / describe_dataset

describe_dataset
****************

IoTAnalytics.Client.describe_dataset(**kwargs)

Retrieves information about a dataset.

See also: AWS API Documentation

**Request Syntax**

response = client.describe_dataset(
    datasetName='string'
)

Parameters: **datasetName** (*string*) -- **[REQUIRED]** The name of the dataset whose information is retrieved.

Return type: dict

Returns:

**Response Syntax**

{
    'dataset': {
        'name': 'string',
        'arn': 'string',
        'actions': [
            {
                'actionName': 'string',
                'queryAction': {
                    'sqlQuery': 'string',
                    'filters': [
                        {
                            'deltaTime': {
                                'offsetSeconds': 123,
                                'timeExpression': 'string'
                            }
                        },
                    ]
                },
                'containerAction': {
                    'image': 'string',
                    'executionRoleArn': 'string',
                    'resourceConfiguration': {
                        'computeType': 'ACU_1'|'ACU_2',
                        'volumeSizeInGB': 123
                    },
                    'variables': [
                        {
                            'name': 'string',
                            'stringValue': 'string',
                            'doubleValue': 123.0,
                            'datasetContentVersionValue': {
                                'datasetName': 'string'
                            },
                            'outputFileUriValue': {
                                'fileName': 'string'
                            }
                        },
                    ]
                }
            },
        ],
        'triggers': [
            {
                'schedule': {
                    'expression': 'string'
                },
                'dataset': {
                    'name': 'string'
                }
            },
        ],
        'contentDeliveryRules': [
            {
                'entryName': 'string',
                'destination': {
                    'iotEventsDestinationConfiguration': {
                        'inputName': 'string',
                        'roleArn': 'string'
                    },
                    's3DestinationConfiguration': {
                        'bucket': 'string',
                        'key': 'string',
                        'glueConfiguration': {
                            'tableName': 'string',
                            'databaseName': 'string'
                        },
                        'roleArn': 'string'
                    }
                }
            },
        ],
        'status': 'CREATING'|'ACTIVE'|'DELETING',
        'creationTime': datetime(2015, 1, 1),
        'lastUpdateTime': datetime(2015, 1, 1),
        'retentionPeriod': {
            'unlimited': True|False,
            'numberOfDays': 123
        },
        'versioningConfiguration': {
            'unlimited': True|False,
            'maxVersions': 123
        },
        'lateDataRules': [
            {
                'ruleName': 'string',
                'ruleConfiguration': {
                    'deltaTimeSessionWindowConfiguration': {
                        'timeoutInMinutes': 123
                    }
                }
            },
        ]
    }
}

**Response Structure**

* *(dict) --*

  * **dataset** *(dict) --* An object that contains information about the dataset.

    * **name** *(string) --* The name of the dataset.

    * **arn** *(string) --* The ARN of the dataset.

    * **actions** *(list) --* The "DatasetAction" objects that automatically create the dataset contents.

      * *(dict) --* A "DatasetAction" object that specifies how dataset contents are automatically created.

        * **actionName** *(string) --* The name of the dataset action by which dataset contents are automatically created.

        * **queryAction** *(dict) --* An "SqlQueryDatasetAction" object that uses an SQL query to automatically create dataset contents.

          * **sqlQuery** *(string) --* An SQL query string.

          * **filters** *(list) --* Prefilters applied to message data.

            * *(dict) --* Information that is used to filter message data, to segregate it according to the timeframe in which it arrives.

              * **deltaTime** *(dict) --* Used to limit data to that which has arrived since the last execution of the action.

                * **offsetSeconds** *(integer) --* The number of seconds of estimated in-flight lag time of message data. When you create dataset contents using message data from a specified timeframe, some message data might still be in flight when processing begins, and so might not arrive in time to be processed. Use this field to allow for the in-flight time of your message data, so that data not processed from a previous timeframe is included with the next timeframe. Otherwise, missed message data would be excluded from processing during the next timeframe too, because its timestamp places it within the previous timeframe.

                * **timeExpression** *(string) --* An expression by which the time of the message data might be determined. This can be the name of a timestamp field or an SQL expression that is used to derive the time the message data was generated.

        * **containerAction** *(dict) --* Information that allows the system to run a containerized application to create the dataset contents. The application must be in a Docker container along with any required support libraries.

          * **image** *(string) --* The ARN of the Docker container stored in your account. The Docker container contains an application and required support libraries and is used to generate dataset contents.

          * **executionRoleArn** *(string) --* The ARN of the role that gives permission to the system to access required resources to run the "containerAction". This includes, at minimum, permission to retrieve the dataset contents that are the input to the containerized application.

          * **resourceConfiguration** *(dict) --* Configuration of the resource that executes the "containerAction".

            * **computeType** *(string) --* The type of the compute resource used to execute the "containerAction". Possible values are: "ACU_1" (vCPU=4, memory=16 GiB) or "ACU_2" (vCPU=8, memory=32 GiB).

            * **volumeSizeInGB** *(integer) --* The size, in GB, of the persistent storage available to the resource instance used to execute the "containerAction" (min: 1, max: 50).

          * **variables** *(list) --* The values of variables used in the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".

            * *(dict) --* An instance of a variable to be passed to the "containerAction" execution. Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".

              * **name** *(string) --* The name of the variable.

              * **stringValue** *(string) --* The value of the variable as a string.

              * **doubleValue** *(float) --* The value of the variable as a double (numeric).

              * **datasetContentVersionValue** *(dict) --* The value of the variable as a structure that specifies a dataset content version.
                * **datasetName** *(string) --* The name of the dataset whose latest contents are used as input to the notebook or application.

              * **outputFileUriValue** *(dict) --* The value of the variable as a structure that specifies an output file URI.

                * **fileName** *(string) --* The URI of the location where dataset contents are stored, usually the URI of a file in an S3 bucket.

    * **triggers** *(list) --* The "DatasetTrigger" objects that specify when the dataset is automatically updated.

      * *(dict) --* The "DatasetTrigger" that specifies when the dataset is automatically updated.

        * **schedule** *(dict) --* The Schedule when the trigger is initiated.

          * **expression** *(string) --* The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the *Amazon CloudWatch Events User Guide*.

        * **dataset** *(dict) --* The dataset whose content creation triggers the creation of this dataset's contents.

          * **name** *(string) --* The name of the dataset whose content generation triggers the new dataset content generation.

    * **contentDeliveryRules** *(list) --* When dataset contents are created, they are delivered to the destinations specified here.

      * *(dict) --* When dataset contents are created, they are delivered to the destination specified here.

        * **entryName** *(string) --* The name of the dataset content delivery rules entry.

        * **destination** *(dict) --* The destination to which dataset contents are delivered.

          * **iotEventsDestinationConfiguration** *(dict) --* Configuration information for delivery of dataset contents to IoT Events.

            * **inputName** *(string) --* The name of the IoT Events input to which dataset contents are delivered.

            * **roleArn** *(string) --* The ARN of the role that grants IoT Analytics permission to deliver dataset contents to an IoT Events input.

          * **s3DestinationConfiguration** *(dict) --* Configuration information for delivery of dataset contents to Amazon S3.

            * **bucket** *(string) --* The name of the S3 bucket to which dataset contents are delivered.

            * **key** *(string) --* The key of the dataset contents object in an S3 bucket. Each object has a key that is a unique identifier. Each object has exactly one key.

              You can create a unique key with the following options:

              * Use "!{iotanalytics:scheduleTime}" to insert the time of a scheduled SQL query run.

              * Use "!{iotanalytics:versionId}" to insert a unique hash that identifies a dataset content.

              * Use "!{iotanalytics:creationTime}" to insert the creation time of a dataset content.

              The following example creates a unique key for a CSV file: "dataset/mydataset/!{iotanalytics:scheduleTime}/!{iotanalytics:versionId}.csv"

              Note: If you don't use "!{iotanalytics:versionId}" to specify the key, you might get duplicate keys. For example, you might have two dataset contents with the same "scheduleTime" but different "versionId" values. This means that one dataset content overwrites the other.
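To make the shape of this response concrete, here is a sketch that prints a dataset's SQL actions, schedule triggers, and S3 delivery targets. The dataset name is a placeholder, and the ".get()" defaults simply allow for fields that a given dataset doesn't use:

import boto3

client = boto3.client('iotanalytics')

# Hypothetical dataset name; keys mirror the response structure above.
dataset = client.describe_dataset(datasetName='example_dataset')['dataset']

for action in dataset.get('actions', []):
    query = action.get('queryAction')
    if query:
        print('action', action['actionName'], 'runs SQL:', query['sqlQuery'])

for trigger in dataset.get('triggers', []):
    if 'schedule' in trigger:
        print('runs on schedule', trigger['schedule']['expression'])

for rule in dataset.get('contentDeliveryRules', []):
    s3 = rule['destination'].get('s3DestinationConfiguration')
    if s3:
        print('delivers to s3://{}/{}'.format(s3['bucket'], s3['key']))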