KinesisAnalyticsV2 ****************** Client ====== class KinesisAnalyticsV2.Client A low-level client representing Amazon Kinesis Analytics (Kinesis Analytics V2) Note: Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink. Amazon Managed Service for Apache Flink is a fully managed service that you can use to process and analyze streaming data using Java, Python, SQL, or Scala. The service enables you to quickly author and run Java, SQL, or Scala code against streaming sources to perform time series analytics, feed real-time dashboards, and create real-time metrics. import boto3 client = boto3.client('kinesisanalyticsv2') These are the available methods: * add_application_cloud_watch_logging_option * add_application_input * add_application_input_processing_configuration * add_application_output * add_application_reference_data_source * add_application_vpc_configuration * can_paginate * close * create_application * create_application_presigned_url * create_application_snapshot * delete_application * delete_application_cloud_watch_logging_option * delete_application_input_processing_configuration * delete_application_output * delete_application_reference_data_source * delete_application_snapshot * delete_application_vpc_configuration * describe_application * describe_application_operation * describe_application_snapshot * describe_application_version * discover_input_schema * get_paginator * get_waiter * list_application_operations * list_application_snapshots * list_application_versions * list_applications * list_tags_for_resource * rollback_application * start_application * stop_application * tag_resource * untag_resource * update_application * update_application_maintenance_configuration Paginators ========== Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide. 
The available paginators are: * ListApplicationOperations * ListApplicationSnapshots * ListApplicationVersions * ListApplications KinesisAnalyticsV2 / Paginator / ListApplicationOperations ListApplicationOperations ************************* class KinesisAnalyticsV2.Paginator.ListApplicationOperations paginator = client.get_paginator('list_application_operations') paginate(**kwargs) Creates an iterator that will paginate through responses from "KinesisAnalyticsV2.Client.list_application_operations()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ApplicationName='string', Operation='string', OperationStatus='IN_PROGRESS'|'CANCELLED'|'SUCCESSFUL'|'FAILED', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application * **Operation** (*string*) -- Type of operation performed on an application * **OperationStatus** (*string*) -- Status of the operation performed on an application * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. 
Return type: dict Returns: **Response Syntax** { 'ApplicationOperationInfoList': [ { 'Operation': 'string', 'OperationId': 'string', 'StartTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'OperationStatus': 'IN_PROGRESS'|'CANCELLED'|'SUCCESSFUL'|'FAILED' }, ], } **Response Structure** * *(dict) --* Response with the list of operations for an application * **ApplicationOperationInfoList** *(list) --* List of ApplicationOperationInfo for an application * *(dict) --* Provides a description of the operation, such as the type and status of operation * **Operation** *(string) --* Type of operation performed on an application * **OperationId** *(string) --* Identifier of the Operation * **StartTime** *(datetime) --* The timestamp at which the operation was created * **EndTime** *(datetime) --* The timestamp at which the operation finished for the application * **OperationStatus** *(string) --* Status of the operation performed on an application KinesisAnalyticsV2 / Paginator / ListApplicationSnapshots ListApplicationSnapshots ************************ class KinesisAnalyticsV2.Paginator.ListApplicationSnapshots paginator = client.get_paginator('list_application_snapshots') paginate(**kwargs) Creates an iterator that will paginate through responses from "KinesisAnalyticsV2.Client.list_application_snapshots()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ApplicationName='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. 
* **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'SnapshotSummaries': [ { 'SnapshotName': 'string', 'SnapshotStatus': 'CREATING'|'READY'|'DELETING'|'FAILED', 'ApplicationVersionId': 123, 'SnapshotCreationTimestamp': datetime(2015, 1, 1), 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20' }, ], } **Response Structure** * *(dict) --* * **SnapshotSummaries** *(list) --* A collection of objects containing information about the application snapshots. * *(dict) --* Provides details about a snapshot of application state. * **SnapshotName** *(string) --* The identifier for the application snapshot. * **SnapshotStatus** *(string) --* The status of the application snapshot. * **ApplicationVersionId** *(integer) --* The current application version ID when the snapshot was created. * **SnapshotCreationTimestamp** *(datetime) --* The timestamp of the application snapshot. * **RuntimeEnvironment** *(string) --* The Flink Runtime for the application snapshot. KinesisAnalyticsV2 / Paginator / ListApplicationVersions ListApplicationVersions *********************** class KinesisAnalyticsV2.Paginator.ListApplicationVersions paginator = client.get_paginator('list_application_versions') paginate(**kwargs) Creates an iterator that will paginate through responses from "KinesisAnalyticsV2.Client.list_application_versions()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ApplicationName='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application for which you want to list all versions. 
* **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'ApplicationVersionSummaries': [ { 'ApplicationVersionId': 123, 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK' }, ], } **Response Structure** * *(dict) --* * **ApplicationVersionSummaries** *(list) --* A list of the application versions and the associated configuration summaries. The list includes application versions that were rolled back. To get the complete description of a specific application version, invoke the DescribeApplicationVersion operation. * *(dict) --* The summary of the application version. * **ApplicationVersionId** *(integer) --* The ID of the application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **ApplicationStatus** *(string) --* The status of the application. KinesisAnalyticsV2 / Paginator / ListApplications ListApplications **************** class KinesisAnalyticsV2.Paginator.ListApplications paginator = client.get_paginator('list_applications') paginate(**kwargs) Creates an iterator that will paginate through responses from "KinesisAnalyticsV2.Client.list_applications()". 
See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'ApplicationSummaries': [ { 'ApplicationName': 'string', 'ApplicationARN': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ApplicationMode': 'STREAMING'|'INTERACTIVE' }, ], } **Response Structure** * *(dict) --* * **ApplicationSummaries** *(list) --* A list of "ApplicationSummary" objects. * *(dict) --* Provides application summary information, including the application Amazon Resource Name (ARN), name, and status. * **ApplicationName** *(string) --* The name of the application. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ApplicationMode** *(string) --* For a Managed Service for Apache Flink application, the mode is "STREAMING". 
For a Managed Service for Apache Flink Studio notebook, it is "INTERACTIVE". KinesisAnalyticsV2 / Client / add_application_cloud_watch_logging_option add_application_cloud_watch_logging_option ****************************************** KinesisAnalyticsV2.Client.add_application_cloud_watch_logging_option(**kwargs) Adds an Amazon CloudWatch log stream to monitor application configuration errors. See also: AWS API Documentation **Request Syntax** response = client.add_application_cloud_watch_logging_option( ApplicationName='string', CurrentApplicationVersionId=123, CloudWatchLoggingOption={ 'LogStreamARN': 'string' }, ConditionalToken='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The Kinesis Data Analytics application name. * **CurrentApplicationVersionId** (*integer*) -- The version ID of the SQL-based Kinesis Data Analytics application. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You can retrieve the application version ID using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". * **CloudWatchLoggingOption** (*dict*) -- **[REQUIRED]** Provides the Amazon CloudWatch log stream Amazon Resource Name (ARN). * **LogStreamARN** *(string) --* **[REQUIRED]** The ARN of the CloudWatch log to receive application messages. * **ConditionalToken** (*string*) -- A value you use to implement strong concurrency for application updates. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You get the application's current "ConditionalToken" using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". 
Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' }, ], 'OperationId': 'string' } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The application's ARN. * **ApplicationVersionId** *(integer) --* The new version ID of the SQL-based Kinesis Data Analytics application. Kinesis Data Analytics updates the "ApplicationVersionId" each time you change the CloudWatch logging options. * **CloudWatchLoggingOptionDescriptions** *(list) --* The descriptions of the current CloudWatch logging options for the SQL-based Kinesis Data Analytics application. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. 
* **OperationId** *(string) --* The operation ID for tracking the AddApplicationCloudWatchLoggingOption request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" KinesisAnalyticsV2 / Client / start_application start_application ***************** KinesisAnalyticsV2.Client.start_application(**kwargs) Starts the specified Managed Service for Apache Flink application. After creating an application, you can start it only by calling this operation. See also: AWS API Documentation **Request Syntax** response = client.start_application( ApplicationName='string', RunConfiguration={ 'FlinkRunConfiguration': { 'AllowNonRestoredState': True|False }, 'SqlRunConfigurations': [ { 'InputId': 'string', 'InputStartingPositionConfiguration': { 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ], 'ApplicationRestoreConfiguration': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' } } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application. * **RunConfiguration** (*dict*) -- Identifies the run configuration (start parameters) of a Managed Service for Apache Flink application. * **FlinkRunConfiguration** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. 
This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **SqlRunConfigurations** *(list) --* Describes the starting parameters for a SQL-based Kinesis Data Analytics application. * *(dict) --* Describes the starting parameters for a SQL-based Kinesis Data Analytics application. * **InputId** *(string) --* **[REQUIRED]** The input source ID. You can get this ID by calling the DescribeApplication operation. * **InputStartingPositionConfiguration** *(dict) --* **[REQUIRED]** The point at which you want the application to start processing records from the streaming source. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **ApplicationRestoreConfiguration** *(dict) --* Describes the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* **[REQUIRED]** Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". 
Return type: dict Returns: **Response Syntax** { 'OperationId': 'string' } **Response Structure** * *(dict) --* * **OperationId** *(string) --* The operation ID for tracking the StartApplication request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / add_application_input add_application_input ********************* KinesisAnalyticsV2.Client.add_application_input(**kwargs) Adds a streaming source to your SQL-based Kinesis Data Analytics application. You can add a streaming source when you create an application, or you can use this operation to add a streaming source after you create an application. For more information, see CreateApplication. Any configuration update, including adding a streaming source using this operation, results in a new version of the application. You can use the DescribeApplication operation to find the current application version. 
See also: AWS API Documentation **Request Syntax** response = client.add_application_input( ApplicationName='string', CurrentApplicationVersionId=123, Input={ 'NamePrefix': 'string', 'InputProcessingConfiguration': { 'InputLambdaProcessor': { 'ResourceARN': 'string' } }, 'KinesisStreamsInput': { 'ResourceARN': 'string' }, 'KinesisFirehoseInput': { 'ResourceARN': 'string' }, 'InputParallelism': { 'Count': 123 }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of your existing application to which you want to add the streaming source. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The current version of your application. You must provide the "ApplicationVersionID" or the "ConditionalToken". You can use the DescribeApplication operation to find the current application version. * **Input** (*dict*) -- **[REQUIRED]** The Input to add. * **NamePrefix** *(string) --* **[REQUIRED]** The name prefix to use when creating an in-application stream. Suppose that you specify a prefix "MyInApplicationStream". Kinesis Data Analytics then creates one or more (as per the "InputParallelism" count you specified) in-application streams with the names "MyInApplicationStream_001", "MyInApplicationStream_002", and so on. * **InputProcessingConfiguration** *(dict) --* The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor. 
* **InputLambdaProcessor** *(dict) --* **[REQUIRED]** The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the Amazon Lambda function that operates on records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **KinesisStreamsInput** *(dict) --* If the streaming source is an Amazon Kinesis data stream, identifies the stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the input Kinesis data stream to read. * **KinesisFirehoseInput** *(dict) --* If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream's ARN. * **ResourceARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the delivery stream. * **InputParallelism** *(dict) --* Describes the number of in-application streams to create. * **Count** *(integer) --* The number of in-application streams to create. * **InputSchema** *(dict) --* **[REQUIRED]** Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. Also used to describe the format of the reference data source. * **RecordFormat** *(dict) --* **[REQUIRED]** Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* **[REQUIRED]** The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. 
* **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* **[REQUIRED]** The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* **[REQUIRED]** The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* **[REQUIRED]** The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* **[REQUIRED]** A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* **[REQUIRED]** The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* **[REQUIRED]** The type of column created in the in-application input stream or reference table. 
Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'InputDescriptions': [ { 'InputId': 'string', 'NamePrefix': 'string', 'InAppStreamNames': [ 'string', ], 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } }, 'KinesisStreamsInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelism': { 'Count': 123 }, 'InputStartingPositionConfiguration': { 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ] } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The Amazon Resource Name (ARN) of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. * **InputDescriptions** *(list) --* Describes the application input configuration. * *(dict) --* Describes the application input configuration for a SQL- based Kinesis Data Analytics application. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. * **NamePrefix** *(string) --* The in-application name prefix. * **InAppStreamNames** *(list) --* Returns the in-application stream names that are mapped to the stream source. * *(string) --* * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. 
* **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisStreamsInputDescription** *(dict) --* If a Kinesis data stream is configured as a streaming source, provides the Kinesis data stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseInputDescription** *(dict) --* If a Kinesis Data Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics assumes to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. 
* **InputSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. 
* **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **InputParallelism** *(dict) --* Describes the configured parallelism (number of in-application streams mapped to the streaming source). * **Count** *(integer) --* The number of in-application streams to create. * **InputStartingPositionConfiguration** *(dict) --* The point at which the application is configured to read from the input stream. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.CodeValidationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / list_applications list_applications ***************** KinesisAnalyticsV2.Client.list_applications(**kwargs) Returns a list of Managed Service for Apache Flink applications in your account. For each application, the response includes the application name, Amazon Resource Name (ARN), and status. If you want detailed information about a specific application, use DescribeApplication. 
See also: AWS API Documentation **Request Syntax** response = client.list_applications( Limit=123, NextToken='string' ) Parameters: * **Limit** (*integer*) -- The maximum number of applications to list. * **NextToken** (*string*) -- If a previous command returned a pagination token, pass it into this value to retrieve the next set of results. For more information about pagination, see Using the Amazon Command Line Interface's Pagination Options. Return type: dict Returns: **Response Syntax** { 'ApplicationSummaries': [ { 'ApplicationName': 'string', 'ApplicationARN': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ApplicationMode': 'STREAMING'|'INTERACTIVE' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **ApplicationSummaries** *(list) --* A list of "ApplicationSummary" objects. * *(dict) --* Provides application summary information, including the application Amazon Resource Name (ARN), name, and status. * **ApplicationName** *(string) --* The name of the application. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ApplicationMode** *(string) --* For a Managed Service for Apache Flink application, the mode is "STREAMING". For a Managed Service for Apache Flink Studio notebook, it is "INTERACTIVE". * **NextToken** *(string) --* The pagination token for the next set of results, or "null" if there are no additional results. 
Pass this token into a subsequent command to retrieve the next set of items. For more information about pagination, see Using the Amazon Command Line Interface's Pagination Options. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / get_paginator get_paginator ************* KinesisAnalyticsV2.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo" and you'd normally invoke the operation as "client.create_foo(**kwargs)", then, provided the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. KinesisAnalyticsV2 / Client / delete_application_vpc_configuration delete_application_vpc_configuration ************************************ KinesisAnalyticsV2.Client.delete_application_vpc_configuration(**kwargs) Removes a VPC configuration from a Managed Service for Apache Flink application. See also: AWS API Documentation **Request Syntax** response = client.delete_application_vpc_configuration( ApplicationName='string', CurrentApplicationVersionId=123, VpcConfigurationId='string', ConditionalToken='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **CurrentApplicationVersionId** (*integer*) -- The current application version ID. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You can retrieve the application version ID using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". 
* **VpcConfigurationId** (*string*) -- **[REQUIRED]** The ID of the VPC configuration to delete. * **ConditionalToken** (*string*) -- A value you use to implement strong concurrency for application updates. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You get the application's current "ConditionalToken" using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'OperationId': 'string' } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The ARN of the Managed Service for Apache Flink application. * **ApplicationVersionId** *(integer) --* The updated version ID of the application. * **OperationId** *(string) --* The operation ID for tracking the DeleteApplicationVpcConfiguration request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" KinesisAnalyticsV2 / Client / delete_application_reference_data_source delete_application_reference_data_source **************************************** KinesisAnalyticsV2.Client.delete_application_reference_data_source(**kwargs) Deletes a reference data source configuration from the specified SQL-based Kinesis Data Analytics application's configuration. If the application is running, Kinesis Data Analytics immediately removes the in-application table that you created using the AddApplicationReferenceDataSource operation. 
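Because this delete is conditioned on the current application version, callers typically read the version with DescribeApplication first. A minimal sketch; the helper name is ours, and "client" is assumed to be a boto3 "kinesisanalyticsv2" client:

```python
def remove_reference_source(client, app_name, reference_id):
    """Delete a reference data source, conditioned on the current version.

    Sketch only: ``client`` is a boto3 'kinesisanalyticsv2' client and
    the helper name is illustrative, not part of boto3.
    """
    # Read the current version so the delete is applied to the
    # version we just observed.
    detail = client.describe_application(ApplicationName=app_name)
    version = detail["ApplicationDetail"]["ApplicationVersionId"]
    return client.delete_application_reference_data_source(
        ApplicationName=app_name,
        CurrentApplicationVersionId=version,
        ReferenceId=reference_id,
    )
```

If another update lands between the two calls, the delete fails with ConcurrentModificationException and can simply be retried.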
See also: AWS API Documentation **Request Syntax** response = client.delete_application_reference_data_source( ApplicationName='string', CurrentApplicationVersionId=123, ReferenceId='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The current application version. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the "ConcurrentModificationException" is returned. * **ReferenceId** (*string*) -- **[REQUIRED]** The ID of the reference data source. When you add a reference data source to your application using the AddApplicationReferenceDataSource operation, Kinesis Data Analytics assigns an ID. You can use the DescribeApplication operation to get the reference ID. Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123 } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The application Amazon Resource Name (ARN). * **ApplicationVersionId** *(integer) --* The updated version ID of the application. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / list_application_snapshots list_application_snapshots ************************** KinesisAnalyticsV2.Client.list_application_snapshots(**kwargs) Lists information about the current application snapshots. 
See also: AWS API Documentation **Request Syntax** response = client.list_application_snapshots( ApplicationName='string', Limit=123, NextToken='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **Limit** (*integer*) -- The maximum number of application snapshots to list. * **NextToken** (*string*) -- Use this parameter if you receive a "NextToken" response in a previous request that indicates that there is more output available. Set it to the value of the previous call's "NextToken" response to indicate where the output should continue from. Return type: dict Returns: **Response Syntax** { 'SnapshotSummaries': [ { 'SnapshotName': 'string', 'SnapshotStatus': 'CREATING'|'READY'|'DELETING'|'FAILED', 'ApplicationVersionId': 123, 'SnapshotCreationTimestamp': datetime(2015, 1, 1), 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **SnapshotSummaries** *(list) --* A collection of objects containing information about the application snapshots. * *(dict) --* Provides details about a snapshot of application state. * **SnapshotName** *(string) --* The identifier for the application snapshot. * **SnapshotStatus** *(string) --* The status of the application snapshot. * **ApplicationVersionId** *(integer) --* The current application version ID when the snapshot was created. * **SnapshotCreationTimestamp** *(datetime) --* The timestamp of the application snapshot. * **RuntimeEnvironment** *(string) --* The Flink Runtime for the application snapshot. * **NextToken** *(string) --* The token for the next set of results, or "null" if there are no additional results. 
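Instead of threading "NextToken" by hand, the ListApplicationSnapshots paginator can drive this operation. A sketch that picks the newest READY snapshot; the helper name and the selection rule are ours, and "client" is assumed to be a boto3 "kinesisanalyticsv2" client:

```python
def latest_ready_snapshot(client, app_name):
    """Return the newest READY snapshot summary, or None (sketch).

    ``client`` is a boto3 'kinesisanalyticsv2' client; the helper name
    is illustrative, not part of boto3.
    """
    paginator = client.get_paginator("list_application_snapshots")
    ready = []
    for page in paginator.paginate(ApplicationName=app_name):
        ready.extend(
            s for s in page["SnapshotSummaries"]
            if s["SnapshotStatus"] == "READY"
        )
    if not ready:
        return None
    # SnapshotCreationTimestamp is a datetime, so max() picks the newest.
    return max(ready, key=lambda s: s["SnapshotCreationTimestamp"])
```

A helper like this is handy before StartApplication, which can restore from the latest snapshot.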
**Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / delete_application_input_processing_configuration delete_application_input_processing_configuration ************************************************* KinesisAnalyticsV2.Client.delete_application_input_processing_configuration(**kwargs) Deletes an InputProcessingConfiguration from an input. See also: AWS API Documentation **Request Syntax** response = client.delete_application_input_processing_configuration( ApplicationName='string', CurrentApplicationVersionId=123, InputId='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The application version. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the "ConcurrentModificationException" is returned. * **InputId** (*string*) -- **[REQUIRED]** The ID of the input configuration from which to delete the input processing configuration. You can get a list of the input IDs for an application by using the DescribeApplication operation. Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123 } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The Amazon Resource Name (ARN) of the application. * **ApplicationVersionId** *(integer) --* The current application version ID. 
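Since this call raises ConcurrentModificationException when the supplied version goes stale, one hedged pattern is to re-read the version and retry. The helper name and retry count below are ours, not part of boto3; "client" is assumed to be a boto3 "kinesisanalyticsv2" client:

```python
def delete_input_processor(client, app_name, input_id, attempts=3):
    """Version-conditioned delete with a re-read-and-retry loop (sketch).

    ``client`` is a boto3 'kinesisanalyticsv2' client; helper name and
    retry policy are illustrative.
    """
    for _ in range(attempts):
        version = client.describe_application(
            ApplicationName=app_name
        )["ApplicationDetail"]["ApplicationVersionId"]
        try:
            return client.delete_application_input_processing_configuration(
                ApplicationName=app_name,
                CurrentApplicationVersionId=version,
                InputId=input_id,
            )
        except client.exceptions.ConcurrentModificationException:
            # Someone updated the application in between; re-read the version.
            continue
    raise RuntimeError("application version kept changing; giving up")
```

The same read-then-mutate-then-retry shape applies to every operation in this client that takes "CurrentApplicationVersionId".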
**Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / can_paginate can_paginate ************ KinesisAnalyticsV2.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo" and you'd normally invoke the operation as "client.create_foo(**kwargs)", then, provided the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. KinesisAnalyticsV2 / Client / describe_application_snapshot describe_application_snapshot ***************************** KinesisAnalyticsV2.Client.describe_application_snapshot(**kwargs) Returns information about a snapshot of application state data. See also: AWS API Documentation **Request Syntax** response = client.describe_application_snapshot( ApplicationName='string', SnapshotName='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **SnapshotName** (*string*) -- **[REQUIRED]** The identifier of an application snapshot. You can retrieve this value using . 
Return type: dict Returns: **Response Syntax** { 'SnapshotDetails': { 'SnapshotName': 'string', 'SnapshotStatus': 'CREATING'|'READY'|'DELETING'|'FAILED', 'ApplicationVersionId': 123, 'SnapshotCreationTimestamp': datetime(2015, 1, 1), 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20' } } **Response Structure** * *(dict) --* * **SnapshotDetails** *(dict) --* An object containing information about the application snapshot. * **SnapshotName** *(string) --* The identifier for the application snapshot. * **SnapshotStatus** *(string) --* The status of the application snapshot. * **ApplicationVersionId** *(integer) --* The current application version ID when the snapshot was created. * **SnapshotCreationTimestamp** *(datetime) --* The timestamp of the application snapshot. * **RuntimeEnvironment** *(string) --* The Flink Runtime for the application snapshot. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / list_tags_for_resource list_tags_for_resource ********************** KinesisAnalyticsV2.Client.list_tags_for_resource(**kwargs) Retrieves the list of key-value tags assigned to the application. For more information, see Using Tagging. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( ResourceARN='string' ) Parameters: **ResourceARN** (*string*) -- **[REQUIRED]** The ARN of the application for which to retrieve tags. Return type: dict Returns: **Response Syntax** { 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ] } **Response Structure** * *(dict) --* * **Tags** *(list) --* The key-value tags assigned to the application. 
* *(dict) --* A key-value pair (the value is optional) that you can define and assign to Amazon resources. If you specify a tag that already exists, the tag value is replaced with the value that you specify in the request. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. For more information, see Using Tagging. * **Key** *(string) --* The key of the key-value tag. * **Value** *(string) --* The value of the key-value tag. The value is optional. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" KinesisAnalyticsV2 / Client / add_application_vpc_configuration add_application_vpc_configuration ********************************* KinesisAnalyticsV2.Client.add_application_vpc_configuration(**kwargs) Adds a Virtual Private Cloud (VPC) configuration to the application. Applications can use VPCs to store and access resources securely. Note the following about VPC configurations for Managed Service for Apache Flink applications: * VPC configurations are not supported for SQL applications. * When a VPC is added to a Managed Service for Apache Flink application, the application can no longer be accessed from the Internet directly. To enable Internet access to the application, add an Internet gateway to your VPC. See also: AWS API Documentation **Request Syntax** response = client.add_application_vpc_configuration( ApplicationName='string', CurrentApplicationVersionId=123, VpcConfiguration={ 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ConditionalToken='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **CurrentApplicationVersionId** (*integer*) -- The version of the application to which you want to add the VPC configuration. 
You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the "ConcurrentModificationException" is returned. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". * **VpcConfiguration** (*dict*) -- **[REQUIRED]** Description of the VPC to add to the application. * **SubnetIds** *(list) --* **[REQUIRED]** The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* **[REQUIRED]** The array of SecurityGroup IDs used by the VPC configuration. * *(string) --* * **ConditionalToken** (*string*) -- A value you use to implement strong concurrency for application updates. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You get the application's current "ConditionalToken" using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'VpcConfigurationDescription': { 'VpcConfigurationId': 'string', 'VpcId': 'string', 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, 'OperationId': 'string' } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. Managed Service for Apache Flink updates the ApplicationVersionId each time you update the application. * **VpcConfigurationDescription** *(dict) --* The parameters of the new VPC configuration. * **VpcConfigurationId** *(string) --* The ID of the VPC configuration. * **VpcId** *(string) --* The ID of the associated VPC. * **SubnetIds** *(list) --* The array of Subnet IDs used by the VPC configuration. 
* *(string) --* * **SecurityGroupIds** *(list) --* The array of SecurityGroup IDs used by the VPC configuration. * *(string) --* * **OperationId** *(string) --* The operation ID for tracking the AddApplicationVpcConfiguration request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" KinesisAnalyticsV2 / Client / add_application_input_processing_configuration add_application_input_processing_configuration ********************************************** KinesisAnalyticsV2.Client.add_application_input_processing_configuration(**kwargs) Adds an InputProcessingConfiguration to a SQL-based Kinesis Data Analytics application. An input processor pre-processes records on the input stream before the application's SQL code executes. Currently, the only input processor available is Amazon Lambda. See also: AWS API Documentation **Request Syntax** response = client.add_application_input_processing_configuration( ApplicationName='string', CurrentApplicationVersionId=123, InputId='string', InputProcessingConfiguration={ 'InputLambdaProcessor': { 'ResourceARN': 'string' } } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application to which you want to add the input processing configuration. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The version of the application to which you want to add the input processing configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the "ConcurrentModificationException" is returned. 
* **InputId** (*string*) -- **[REQUIRED]** The ID of the input configuration to add the input processing configuration to. You can get a list of the input IDs for an application using the DescribeApplication operation. * **InputProcessingConfiguration** (*dict*) -- **[REQUIRED]** The InputProcessingConfiguration to add to the application. * **InputLambdaProcessor** *(dict) --* **[REQUIRED]** The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the Amazon Lambda function that operates on records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'InputId': 'string', 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } } } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The Amazon Resource Name (ARN) of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. * **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. 
Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / untag_resource untag_resource ************** KinesisAnalyticsV2.Client.untag_resource(**kwargs) Removes one or more tags from a Managed Service for Apache Flink application. For more information, see Using Tagging. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( ResourceARN='string', TagKeys=[ 'string', ] ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The ARN of the Managed Service for Apache Flink application from which to remove the tags. * **TagKeys** (*list*) -- **[REQUIRED]** A list of keys of tags to remove from the specified application. 
* *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.TooManyTagsException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" KinesisAnalyticsV2 / Client / delete_application_snapshot delete_application_snapshot *************************** KinesisAnalyticsV2.Client.delete_application_snapshot(**kwargs) Deletes a snapshot of application state. See also: AWS API Documentation **Request Syntax** response = client.delete_application_snapshot( ApplicationName='string', SnapshotName='string', SnapshotCreationTimestamp=datetime(2015, 1, 1) ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **SnapshotName** (*string*) -- **[REQUIRED]** The identifier for the snapshot to delete. * **SnapshotCreationTimestamp** (*datetime*) -- **[REQUIRED]** The creation timestamp of the application snapshot to delete. You can retrieve this value using or . Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" KinesisAnalyticsV2 / Client / list_application_versions list_application_versions ************************* KinesisAnalyticsV2.Client.list_application_versions(**kwargs) Lists all the versions for the specified application, including versions that were rolled back. 
The response also includes a summary of the configuration associated with each version. To get the complete description of a specific application version, invoke the DescribeApplicationVersion operation. Note: This operation is supported only for Managed Service for Apache Flink. See also: AWS API Documentation **Request Syntax** response = client.list_application_versions( ApplicationName='string', Limit=123, NextToken='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application for which you want to list all versions. * **Limit** (*integer*) -- The maximum number of versions to list in this invocation of the operation. * **NextToken** (*string*) -- If a previous invocation of this operation returned a pagination token, pass it into this value to retrieve the next set of results. For more information about pagination, see Using the Amazon Command Line Interface's Pagination Options. Return type: dict Returns: **Response Syntax** { 'ApplicationVersionSummaries': [ { 'ApplicationVersionId': 123, 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **ApplicationVersionSummaries** *(list) --* A list of the application versions and the associated configuration summaries. The list includes application versions that were rolled back. To get the complete description of a specific application version, invoke the DescribeApplicationVersion operation. * *(dict) --* The summary of the application version. * **ApplicationVersionId** *(integer) --* The ID of the application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **ApplicationStatus** *(string) --* The status of the application. 
* **NextToken** *(string) --* The pagination token for the next set of results, or "null" if there are no additional results. To retrieve the next set of items, pass this token into a subsequent invocation of this operation. For more information about pagination, see Using the Amazon Command Line Interface's Pagination Options. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / get_waiter get_waiter ********** KinesisAnalyticsV2.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object. Return type: "botocore.waiter.Waiter" KinesisAnalyticsV2 / Client / create_application_snapshot create_application_snapshot *************************** KinesisAnalyticsV2.Client.create_application_snapshot(**kwargs) Creates a snapshot of the application's state data. See also: AWS API Documentation **Request Syntax** response = client.create_application_snapshot( ApplicationName='string', SnapshotName='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **SnapshotName** (*string*) -- **[REQUIRED]** An identifier for the application snapshot. 
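Snapshot creation is asynchronous and the response body is empty, so a caller typically polls DescribeApplicationSnapshot until the snapshot leaves CREATING. A minimal polling sketch; the helper name, interval, and timeout are ours, and "client" is assumed to be a boto3 "kinesisanalyticsv2" client:

```python
import time


def create_snapshot_and_wait(client, app_name, snapshot_name,
                             timeout=300, interval=10):
    """Create a snapshot, then poll until it is READY or FAILED (sketch).

    ``client`` is a boto3 'kinesisanalyticsv2' client; helper name,
    interval, and timeout are illustrative.
    """
    client.create_application_snapshot(
        ApplicationName=app_name, SnapshotName=snapshot_name
    )
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        details = client.describe_application_snapshot(
            ApplicationName=app_name, SnapshotName=snapshot_name
        )["SnapshotDetails"]
        status = details["SnapshotStatus"]
        if status == "READY":
            return details
        if status == "FAILED":
            raise RuntimeError(f"snapshot {snapshot_name} failed")
        time.sleep(interval)  # still CREATING; wait and re-check
    raise TimeoutError(f"snapshot {snapshot_name} not ready after {timeout}s")
```

Check the waiters section of the service docs before hand-rolling a loop like this; if a suitable waiter exists, prefer it.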
Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.LimitExceededException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" KinesisAnalyticsV2 / Client / discover_input_schema discover_input_schema ********************* KinesisAnalyticsV2.Client.discover_input_schema(**kwargs) Infers a schema for a SQL-based Kinesis Data Analytics application by evaluating sample records on the specified streaming source (Kinesis data stream or Kinesis Data Firehose delivery stream) or Amazon S3 object. In the response, the operation returns the inferred schema and also the sample records that the operation used to infer the schema. You can use the inferred schema when configuring a streaming source for your application. When you create an application using the Kinesis Data Analytics console, the console uses this operation to infer a schema and show it in the console user interface. See also: AWS API Documentation **Request Syntax** response = client.discover_input_schema( ResourceARN='string', ServiceExecutionRole='string', InputStartingPositionConfiguration={ 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' }, S3Configuration={ 'BucketARN': 'string', 'FileKey': 'string' }, InputProcessingConfiguration={ 'InputLambdaProcessor': { 'ResourceARN': 'string' } } ) Parameters: * **ResourceARN** (*string*) -- The Amazon Resource Name (ARN) of the streaming source. * **ServiceExecutionRole** (*string*) -- **[REQUIRED]** The ARN of the role that is used to access the streaming source. 
* **InputStartingPositionConfiguration** (*dict*) -- The point at which you want Kinesis Data Analytics to start reading records from the specified streaming source for discovery purposes. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **S3Configuration** (*dict*) -- Specify this parameter to discover a schema from data in an Amazon S3 object. * **BucketARN** *(string) --* **[REQUIRED]** The ARN of the S3 bucket that contains the data. * **FileKey** *(string) --* **[REQUIRED]** The name of the object that contains the data. * **InputProcessingConfiguration** (*dict*) -- The InputProcessingConfiguration to use to preprocess the records before discovering the schema of the records. * **InputLambdaProcessor** *(dict) --* **[REQUIRED]** The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the Amazon Lambda function that operates on records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. 
For more information about Lambda ARNs, see Example ARNs: Amazon Lambda Return type: dict Returns: **Response Syntax** { 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'ParsedInputRecords': [ [ 'string', ], ], 'ProcessedInputRecords': [ 'string', ], 'RawInputRecords': [ 'string', ] } **Response Structure** * *(dict) --* * **InputSchema** *(dict) --* The schema inferred from the streaming source. It identifies the format of the data in the streaming source and how each data element maps to corresponding columns in the in-application stream that you can create. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter.
* **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **ParsedInputRecords** *(list) --* An array of elements, where each element corresponds to a row in a stream record (a stream record can have more than one row). * *(list) --* * *(string) --* * **ProcessedInputRecords** *(list) --* The stream data that was modified by the processor specified in the "InputProcessingConfiguration" parameter. * *(string) --* * **RawInputRecords** *(list) --* The raw stream data that was sampled to infer the schema. * *(string) --* **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.UnableToDetectSchemaException" * "KinesisAnalyticsV2.Client.exceptions.ResourceProvisionedThroughputExceededException" * "KinesisAnalyticsV2.Client.exceptions.ServiceUnavailableException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / rollback_application rollback_application ******************** KinesisAnalyticsV2.Client.rollback_application(**kwargs) Reverts the application to the previous running version.
You can roll back an application if you suspect it is stuck in a transient status or in the running status. You can roll back an application only if it is in the "UPDATING", "AUTOSCALING", or "RUNNING" statuses. When you roll back an application, it loads state data from the last successful snapshot. If the application has no snapshots, Managed Service for Apache Flink rejects the rollback request. See also: AWS API Documentation **Request Syntax** response = client.rollback_application( ApplicationName='string', CurrentApplicationVersionId=123 ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The current application version ID. You can retrieve the application version ID using DescribeApplication. Return type: dict Returns: **Response Syntax** { 'ApplicationDetail': { 'ApplicationARN': 'string', 'ApplicationDescription': 'string', 'ApplicationName': 'string', 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ServiceExecutionRole': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'CreateTimestamp': datetime(2015, 1, 1), 'LastUpdateTimestamp': datetime(2015, 1, 1), 'ApplicationConfigurationDescription': { 'SqlApplicationConfigurationDescription': { 'InputDescriptions': [ { 'InputId': 'string', 'NamePrefix': 'string', 'InAppStreamNames': [ 'string', ], 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } }, 'KinesisStreamsInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'InputSchema': {
'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelism': { 'Count': 123 }, 'InputStartingPositionConfiguration': { 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ], 'OutputDescriptions': [ { 'OutputId': 'string', 'Name': 'string', 'KinesisStreamsOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'LambdaOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSourceDescriptions': [ { 'ReferenceId': 'string', 'TableName': 'string', 'S3ReferenceDataSourceDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ReferenceRoleARN': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'ApplicationCodeConfigurationDescription': { 'CodeContentType': 'PLAINTEXT'|'ZIPFILE', 'CodeContentDescription': { 'TextContent': 'string', 'CodeMD5': 'string', 'CodeSize': 123, 'S3ApplicationCodeLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' } } }, 'RunConfigurationDescription': { 'ApplicationRestoreConfigurationDescription': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' }, 
'FlinkRunConfigurationDescription': { 'AllowNonRestoredState': True|False } }, 'FlinkApplicationConfigurationDescription': { 'CheckpointConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabled': True|False, 'CheckpointInterval': 123, 'MinPauseBetweenCheckpoints': 123 }, 'MonitoringConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'MetricsLevel': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'Parallelism': 123, 'ParallelismPerKPU': 123, 'CurrentParallelism': 123, 'AutoScalingEnabled': True|False }, 'JobPlanDescription': 'string' }, 'EnvironmentPropertyDescriptions': { 'PropertyGroupDescriptions': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 'string': 'string' } }, ] }, 'ApplicationSnapshotConfigurationDescription': { 'SnapshotsEnabled': True|False }, 'ApplicationSystemRollbackConfigurationDescription': { 'RollbackEnabled': True|False }, 'VpcConfigurationDescriptions': [ { 'VpcConfigurationId': 'string', 'VpcId': 'string', 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ], 'ZeppelinApplicationConfigurationDescription': { 'MonitoringConfigurationDescription': { 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfigurationDescription': { 'GlueDataCatalogConfigurationDescription': { 'DatabaseARN': 'string' } }, 'DeployAsApplicationConfigurationDescription': { 'S3ContentLocationDescription': { 'BucketARN': 'string', 'BasePath': 'string' } }, 'CustomArtifactsConfigurationDescription': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReferenceDescription': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' 
}, ], 'ApplicationMaintenanceConfigurationDescription': { 'ApplicationMaintenanceWindowStartTime': 'string', 'ApplicationMaintenanceWindowEndTime': 'string' }, 'ApplicationVersionUpdatedFrom': 123, 'ApplicationVersionRolledBackFrom': 123, 'ApplicationVersionCreateTimestamp': datetime(2015, 1, 1), 'ConditionalToken': 'string', 'ApplicationVersionRolledBackTo': 123, 'ApplicationMode': 'STREAMING'|'INTERACTIVE' }, 'OperationId': 'string' } **Response Structure** * *(dict) --* * **ApplicationDetail** *(dict) --* Describes the application, including the application Amazon Resource Name (ARN), status, latest version, and input and output configurations. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationDescription** *(string) --* The description of the application. * **ApplicationName** *(string) --* The name of the application. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ServiceExecutionRole** *(string) --* Specifies the IAM role that the application uses to access external resources. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **CreateTimestamp** *(datetime) --* The current timestamp when the application was created. * **LastUpdateTimestamp** *(datetime) --* The current timestamp when the application was last updated. * **ApplicationConfigurationDescription** *(dict) --* Describes details about the application code and starting parameters for a Managed Service for Apache Flink application. * **SqlApplicationConfigurationDescription** *(dict) --* The details about inputs, outputs, and reference data sources for a SQL-based Kinesis Data Analytics application. * **InputDescriptions** *(list) --* The array of InputDescription objects describing the input streams used by the application. 
* *(dict) --* Describes the application input configuration for a SQL-based Kinesis Data Analytics application. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. * **NamePrefix** *(string) --* The in-application name prefix. * **InAppStreamNames** *(list) --* Returns the in-application stream names that are mapped to the stream source. * *(string) --* * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. * **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisStreamsInputDescription** *(dict) --* If a Kinesis data stream is configured as a streaming source, provides the Kinesis data stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility.
Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseInputDescription** *(dict) --* If a Kinesis Data Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics assumes to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **InputSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter.
* **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **InputParallelism** *(dict) --* Describes the configured parallelism (number of in-application streams mapped to the streaming source). * **Count** *(integer) --* The number of in-application streams to create. * **InputStartingPositionConfiguration** *(dict) --* The point at which the application is configured to read from the input stream. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **OutputDescriptions** *(list) --* The array of OutputDescription objects describing the destination streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the application output configuration, which includes the in-application stream name and the destination where the stream data is written.
The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **OutputId** *(string) --* A unique identifier for the output configuration. * **Name** *(string) --* The name of the in-application stream that is configured as output. * **KinesisStreamsOutputDescription** *(dict) --* Describes the Kinesis data stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseOutputDescription** *(dict) --* Describes the Kinesis Data Firehose delivery stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **LambdaOutputDescription** *(dict) --* Describes the Lambda function that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the destination Lambda function. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to write to the destination function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **DestinationSchema** *(dict) --* The data format used for writing data to the destination. 
* **RecordFormatType** *(string) --* Specifies the format of the records on the output stream. * **ReferenceDataSourceDescriptions** *(list) --* The array of ReferenceDataSourceDescription objects describing the reference data sources used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source configured for an application. * **ReferenceId** *(string) --* The ID of the reference data source. This is the ID that Kinesis Data Analytics assigns when you add the reference data source to your application using the CreateApplication or UpdateApplication operation. * **TableName** *(string) --* The in-application table name created by the specific reference data source configuration. * **S3ReferenceDataSourceDescription** *(dict) --* Provides the Amazon S3 bucket name, the object key name that contains the reference data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* Amazon S3 object key name. * **ReferenceRoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **ReferenceSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format.
* **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **ApplicationCodeConfigurationDescription** *(dict) --* The details about the application code for a Managed Service for Apache Flink application. * **CodeContentType** *(string) --* Specifies whether the code content is in text or zip format.
* **CodeContentDescription** *(dict) --* Describes details about the location and format of the application code. * **TextContent** *(string) --* The text-format code * **CodeMD5** *(string) --* The checksum that can be used to validate zip-format code. * **CodeSize** *(integer) --* The size in bytes of the application code. Can be used to validate zip-format code. * **S3ApplicationCodeLocationDescription** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the application code stored in Amazon S3. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **RunConfigurationDescription** *(dict) --* The details about the starting properties for a Managed Service for Apache Flink application. * **ApplicationRestoreConfigurationDescription** *(dict) --* Describes the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". * **FlinkRunConfigurationDescription** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. 
Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **FlinkApplicationConfigurationDescription** *(dict) --* The details about a Managed Service for Apache Flink application. * **CheckpointConfigurationDescription** *(dict) --* Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. * **ConfigurationType** *(string) --* Describes whether the application uses the default checkpointing behavior in Managed Service for Apache Flink. Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabled** *(boolean) --* Describes whether checkpointing is enabled for a Managed Service for Apache Flink application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointInterval** *(integer) --* Describes the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpoints** *(integer) --* Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. 
* **MonitoringConfigurationDescription** *(dict) --* Describes configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationType** *(string) --* Describes whether to use the default CloudWatch logging configuration for an application. * **MetricsLevel** *(string) --* Describes the granularity of the CloudWatch Logs for an application. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **ParallelismConfigurationDescription** *(dict) --* Describes parameters for how an application executes multiple tasks simultaneously. * **ConfigurationType** *(string) --* Describes whether the application uses the default parallelism for the Managed Service for Apache Flink service. * **Parallelism** *(integer) --* Describes the initial number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, then Managed Service for Apache Flink can increase the "CurrentParallelism" value in response to application load. The service can increase "CurrentParallelism" up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **ParallelismPerKPU** *(integer) --* Describes the number of parallel tasks that a Managed Service for Apache Flink application can perform per Kinesis Processing Unit (KPU) used by the application. * **CurrentParallelism** *(integer) --* Describes the current number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, Managed Service for Apache Flink can increase this value in response to application load.
The service can increase this value up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **AutoScalingEnabled** *(boolean) --* Describes whether the Managed Service for Apache Flink service can increase the parallelism of the application in response to increased throughput. * **JobPlanDescription** *(string) --* The job plan for an application. For more information about the job plan, see Jobs and Scheduling in the Apache Flink Documentation. To retrieve the job plan for the application, use the DescribeApplicationRequest$IncludeAdditionalDetails parameter of the DescribeApplication operation. * **EnvironmentPropertyDescriptions** *(dict) --* Describes execution properties for a Managed Service for Apache Flink application. * **PropertyGroupDescriptions** *(list) --* Describes the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationSnapshotConfigurationDescription** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabled** *(boolean) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application.
* **ApplicationSystemRollbackConfigurationDescription** *(dict) --* Describes system rollback configuration for a Managed Service for Apache Flink application * **RollbackEnabled** *(boolean) --* Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application * **VpcConfigurationDescriptions** *(list) --* The array of descriptions of VPC configurations available to the application. * *(dict) --* Describes the parameters of a VPC used by the application. * **VpcConfigurationId** *(string) --* The ID of the VPC configuration. * **VpcId** *(string) --* The ID of the associated VPC. * **SubnetIds** *(list) --* The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* The array of SecurityGroup IDs used by the VPC configuration. * *(string) --* * **ZeppelinApplicationConfigurationDescription** *(dict) --* The configuration parameters for a Managed Service for Apache Flink Studio notebook. * **MonitoringConfigurationDescription** *(dict) --* The monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **CatalogConfigurationDescription** *(dict) --* The Amazon Glue Data Catalog that is associated with the Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfigurationDescription** *(dict) --* The configuration parameters for the default Amazon Glue database. You use this database for SQL queries that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARN** *(string) --* The Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfigurationDescription** *(dict) --* The parameters required to deploy a Managed Service for Apache Flink Studio notebook as an application with durable state. 
* **S3ContentLocationDescription** *(dict) --* The location that holds the data required to specify an Amazon Data Analytics application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **BasePath** *(string) --* The base path for the S3 bucket. * **CustomArtifactsConfigurationDescription** *(list) --* Custom artifacts are dependency JARs and user-defined functions (UDF). * *(dict) --* Specifies a dependency JAR or a JAR of user-defined functions. * **ArtifactType** *(string) --* "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket. * **S3ContentLocationDescription** *(dict) --* For a Managed Service for Apache Flink application provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReferenceDescription** *(dict) --* The parameters that are required to specify a Maven dependency. * **GroupId** *(string) --* The group ID of the Maven reference. * **ArtifactId** *(string) --* The artifact ID of the Maven reference. * **Version** *(string) --* The version of the Maven reference. * **CloudWatchLoggingOptionDescriptions** *(list) --* Describes the application Amazon CloudWatch logging options. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. 
* **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. * **ApplicationMaintenanceConfigurationDescription** *(dict) --* The details of the maintenance configuration for the application. * **ApplicationMaintenanceWindowStartTime** *(string) --* The start time for the maintenance window. * **ApplicationMaintenanceWindowEndTime** *(string) --* The end time for the maintenance window. * **ApplicationVersionUpdatedFrom** *(integer) --* The previous application version before the latest application update. RollbackApplication reverts the application to this version. * **ApplicationVersionRolledBackFrom** *(integer) --* If you reverted the application using RollbackApplication, the application version when "RollbackApplication" was called. * **ApplicationVersionCreateTimestamp** *(datetime) --* The current timestamp when the application version was created. * **ConditionalToken** *(string) --* A value you use to implement strong concurrency for application updates. * **ApplicationVersionRolledBackTo** *(integer) --* The version to which you want to roll back the application. * **ApplicationMode** *(string) --* To create a Managed Service for Apache Flink Studio notebook, you must set the mode to "INTERACTIVE". However, for a Managed Service for Apache Flink application, the mode is optional. 
* **OperationId** *(string) --* Operation ID for tracking RollbackApplication request **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / list_application_operations list_application_operations *************************** KinesisAnalyticsV2.Client.list_application_operations(**kwargs) Lists information about operations performed on a Managed Service for Apache Flink application. See also: AWS API Documentation **Request Syntax** response = client.list_application_operations( ApplicationName='string', Limit=123, NextToken='string', Operation='string', OperationStatus='IN_PROGRESS'|'CANCELLED'|'SUCCESSFUL'|'FAILED' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application * **Limit** (*integer*) -- Limit on the number of records returned in the response * **NextToken** (*string*) -- If a previous command returned a pagination token, pass it into this value to retrieve the next set of results * **Operation** (*string*) -- Type of operation performed on an application * **OperationStatus** (*string*) -- Status of the operation performed on an application Return type: dict Returns: **Response Syntax** { 'ApplicationOperationInfoList': [ { 'Operation': 'string', 'OperationId': 'string', 'StartTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'OperationStatus': 'IN_PROGRESS'|'CANCELLED'|'SUCCESSFUL'|'FAILED' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* Response with the list of operations for an application * **ApplicationOperationInfoList** *(list) --* List of ApplicationOperationInfo for an application * 
*(dict) --* Provides a description of the operation, such as the type and status of operation * **Operation** *(string) --* Type of operation performed on an application * **OperationId** *(string) --* Identifier of the Operation * **StartTime** *(datetime) --* The timestamp at which the operation was created * **EndTime** *(datetime) --* The timestamp at which the operation finished for the application * **OperationStatus** *(string) --* Status of the operation performed on an application * **NextToken** *(string) --* If a previous command returned a pagination token, pass it into this value to retrieve the next set of results **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / update_application_maintenance_configuration update_application_maintenance_configuration ******************************************** KinesisAnalyticsV2.Client.update_application_maintenance_configuration(**kwargs) Updates the maintenance configuration of the Managed Service for Apache Flink application. You can invoke this operation on an application that is in one of the two following states: "READY" or "RUNNING". If you invoke it when the application is in a state other than these two states, it throws a "ResourceInUseException". The service makes use of the updated configuration the next time it schedules maintenance for the application. If you invoke this operation after the service schedules maintenance, the service will apply the configuration update the next time it schedules maintenance for the application. This means that you might not see the maintenance configuration update applied to the maintenance process that follows a successful invocation of this operation, but to the following maintenance process instead. 
To see the current maintenance configuration of your application, invoke the DescribeApplication operation. For information about application maintenance, see Managed Service for Apache Flink Maintenance. Note: This operation is supported only for Managed Service for Apache Flink. See also: AWS API Documentation **Request Syntax** response = client.update_application_maintenance_configuration( ApplicationName='string', ApplicationMaintenanceConfigurationUpdate={ 'ApplicationMaintenanceWindowStartTimeUpdate': 'string' } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application for which you want to update the maintenance configuration. * **ApplicationMaintenanceConfigurationUpdate** (*dict*) -- **[REQUIRED]** Describes the application maintenance configuration update. * **ApplicationMaintenanceWindowStartTimeUpdate** *(string) --* **[REQUIRED]** The updated start time for the maintenance window. Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationMaintenanceConfigurationDescription': { 'ApplicationMaintenanceWindowStartTime': 'string', 'ApplicationMaintenanceWindowEndTime': 'string' } } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The Amazon Resource Name (ARN) of the application. * **ApplicationMaintenanceConfigurationDescription** *(dict) --* The application maintenance configuration description after the update. * **ApplicationMaintenanceWindowStartTime** *(string) --* The start time for the maintenance window. * **ApplicationMaintenanceWindowEndTime** *(string) --* The end time for the maintenance window. 
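As a concrete illustration, here is a minimal boto3 sketch of invoking this operation. It is a hedged example, not the definitive procedure: the client is injected so it can be created however you prefer, the application name "MyFlinkApp" is hypothetical, and the '02:00' start-time value assumes a 24-hour "HH:mm" UTC format, which you should confirm against the service documentation.

```python
def update_maintenance_window(client, app_name, start_time):
    """Update the maintenance window start time for a Managed Service for
    Apache Flink application. `client` is a boto3 'kinesisanalyticsv2'
    client; `start_time` is the new window start (e.g. '02:00', assumed
    24-hour UTC). The application must be in READY or RUNNING state, or
    the call raises ResourceInUseException."""
    response = client.update_application_maintenance_configuration(
        ApplicationName=app_name,
        ApplicationMaintenanceConfigurationUpdate={
            'ApplicationMaintenanceWindowStartTimeUpdate': start_time
        }
    )
    # The response carries the full maintenance configuration after the
    # update, including the service-derived window end time.
    return response['ApplicationMaintenanceConfigurationDescription']

# Usage (requires credentials and an existing application, both assumed):
# import boto3
# client = boto3.client('kinesisanalyticsv2')
# desc = update_maintenance_window(client, 'MyFlinkApp', '02:00')
# print(desc['ApplicationMaintenanceWindowStartTime'])
```

Because the updated configuration only takes effect the next time maintenance is scheduled, inspecting the returned description (or calling DescribeApplication later) is the way to confirm what the service will actually use.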
**Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / add_application_output add_application_output ********************** KinesisAnalyticsV2.Client.add_application_output(**kwargs) Adds an external destination to your SQL-based Kinesis Data Analytics application. If you want Kinesis Data Analytics to deliver data from an in-application stream within your application to an external destination (such as a Kinesis data stream, a Kinesis Data Firehose delivery stream, or an Amazon Lambda function), you add the relevant configuration to your application using this operation. You can configure one or more outputs for your application. Each output configuration maps an in-application stream and an external destination. You can use one of the output configurations to deliver data from your in-application error stream to an external destination so that you can analyze the errors. Any configuration update, including adding a streaming source using this operation, results in a new version of the application. You can use the DescribeApplication operation to find the current application version. See also: AWS API Documentation **Request Syntax** response = client.add_application_output( ApplicationName='string', CurrentApplicationVersionId=123, Output={ 'Name': 'string', 'KinesisStreamsOutput': { 'ResourceARN': 'string' }, 'KinesisFirehoseOutput': { 'ResourceARN': 'string' }, 'LambdaOutput': { 'ResourceARN': 'string' }, 'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application to which you want to add the output configuration. 
* **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The version of the application to which you want to add the output configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the "ConcurrentModificationException" is returned. * **Output** (*dict*) -- **[REQUIRED]** An object describing one output configuration. In the output configuration, you specify the name of an in-application stream, a destination (that is, a Kinesis data stream, a Kinesis Data Firehose delivery stream, or an Amazon Lambda function), and the record format to use when writing to the destination. * **Name** *(string) --* **[REQUIRED]** The name of the in-application stream. * **KinesisStreamsOutput** *(dict) --* Identifies a Kinesis data stream as the destination. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the destination Kinesis data stream to write to. * **KinesisFirehoseOutput** *(dict) --* Identifies a Kinesis Data Firehose delivery stream as the destination. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the destination delivery stream to write to. * **LambdaOutput** *(dict) --* Identifies an Amazon Lambda function as the destination. * **ResourceARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the destination Lambda function to write to. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **DestinationSchema** *(dict) --* **[REQUIRED]** Describes the data format when records are written to the destination. * **RecordFormatType** *(string) --* **[REQUIRED]** Specifies the format of the records on the output stream. 
Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'OutputDescriptions': [ { 'OutputId': 'string', 'Name': 'string', 'KinesisStreamsOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'LambdaOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ] } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The application Amazon Resource Name (ARN). * **ApplicationVersionId** *(integer) --* The updated application version ID. Kinesis Data Analytics increments this ID when the application is updated. * **OutputDescriptions** *(list) --* Describes the application output configuration. For more information, see Configuring Application Output. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the application output configuration, which includes the in-application stream name and the destination where the stream data is written. The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **OutputId** *(string) --* A unique identifier for the output configuration. * **Name** *(string) --* The name of the in-application stream that is configured as output. * **KinesisStreamsOutputDescription** *(dict) --* Describes the Kinesis data stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. 
* **KinesisFirehoseOutputDescription** *(dict) --* Describes the Kinesis Data Firehose delivery stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **LambdaOutputDescription** *(dict) --* Describes the Lambda function that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the destination Lambda function. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to write to the destination function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **DestinationSchema** *(dict) --* The data format used for writing data to the destination. * **RecordFormatType** *(string) --* Specifies the format of the records on the output stream. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / update_application update_application ****************** KinesisAnalyticsV2.Client.update_application(**kwargs) Updates an existing Managed Service for Apache Flink application. Using this operation, you can update application code, input configuration, and output configuration. 
Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update your application. See also: AWS API Documentation **Request Syntax** response = client.update_application( ApplicationName='string', CurrentApplicationVersionId=123, ApplicationConfigurationUpdate={ 'SqlApplicationConfigurationUpdate': { 'InputUpdates': [ { 'InputId': 'string', 'NamePrefixUpdate': 'string', 'InputProcessingConfigurationUpdate': { 'InputLambdaProcessorUpdate': { 'ResourceARNUpdate': 'string' } }, 'KinesisStreamsInputUpdate': { 'ResourceARNUpdate': 'string' }, 'KinesisFirehoseInputUpdate': { 'ResourceARNUpdate': 'string' }, 'InputSchemaUpdate': { 'RecordFormatUpdate': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncodingUpdate': 'string', 'RecordColumnUpdates': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelismUpdate': { 'CountUpdate': 123 } }, ], 'OutputUpdates': [ { 'OutputId': 'string', 'NameUpdate': 'string', 'KinesisStreamsOutputUpdate': { 'ResourceARNUpdate': 'string' }, 'KinesisFirehoseOutputUpdate': { 'ResourceARNUpdate': 'string' }, 'LambdaOutputUpdate': { 'ResourceARNUpdate': 'string' }, 'DestinationSchemaUpdate': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSourceUpdates': [ { 'ReferenceId': 'string', 'TableNameUpdate': 'string', 'S3ReferenceDataSourceUpdate': { 'BucketARNUpdate': 'string', 'FileKeyUpdate': 'string' }, 'ReferenceSchemaUpdate': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'ApplicationCodeConfigurationUpdate': { 
'CodeContentTypeUpdate': 'PLAINTEXT'|'ZIPFILE', 'CodeContentUpdate': { 'TextContentUpdate': 'string', 'ZipFileContentUpdate': b'bytes', 'S3ContentLocationUpdate': { 'BucketARNUpdate': 'string', 'FileKeyUpdate': 'string', 'ObjectVersionUpdate': 'string' } } }, 'FlinkApplicationConfigurationUpdate': { 'CheckpointConfigurationUpdate': { 'ConfigurationTypeUpdate': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabledUpdate': True|False, 'CheckpointIntervalUpdate': 123, 'MinPauseBetweenCheckpointsUpdate': 123 }, 'MonitoringConfigurationUpdate': { 'ConfigurationTypeUpdate': 'DEFAULT'|'CUSTOM', 'MetricsLevelUpdate': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevelUpdate': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfigurationUpdate': { 'ConfigurationTypeUpdate': 'DEFAULT'|'CUSTOM', 'ParallelismUpdate': 123, 'ParallelismPerKPUUpdate': 123, 'AutoScalingEnabledUpdate': True|False } }, 'EnvironmentPropertyUpdates': { 'PropertyGroups': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 'string': 'string' } }, ] }, 'ApplicationSnapshotConfigurationUpdate': { 'SnapshotsEnabledUpdate': True|False }, 'ApplicationSystemRollbackConfigurationUpdate': { 'RollbackEnabledUpdate': True|False }, 'VpcConfigurationUpdates': [ { 'VpcConfigurationId': 'string', 'SubnetIdUpdates': [ 'string', ], 'SecurityGroupIdUpdates': [ 'string', ] }, ], 'ZeppelinApplicationConfigurationUpdate': { 'MonitoringConfigurationUpdate': { 'LogLevelUpdate': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfigurationUpdate': { 'GlueDataCatalogConfigurationUpdate': { 'DatabaseARNUpdate': 'string' } }, 'DeployAsApplicationConfigurationUpdate': { 'S3ContentLocationUpdate': { 'BucketARNUpdate': 'string', 'BasePathUpdate': 'string' } }, 'CustomArtifactsConfigurationUpdate': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocation': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReference': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, 
ServiceExecutionRoleUpdate='string', RunConfigurationUpdate={ 'FlinkRunConfiguration': { 'AllowNonRestoredState': True|False }, 'ApplicationRestoreConfiguration': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' } }, CloudWatchLoggingOptionUpdates=[ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARNUpdate': 'string' }, ], ConditionalToken='string', RuntimeEnvironmentUpdate='SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application to update. * **CurrentApplicationVersionId** (*integer*) -- The current application version ID. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken".You can retrieve the application version ID using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". * **ApplicationConfigurationUpdate** (*dict*) -- Describes application configuration updates. * **SqlApplicationConfigurationUpdate** *(dict) --* Describes updates to a SQL-based Kinesis Data Analytics application's configuration. * **InputUpdates** *(list) --* The array of InputUpdate objects describing the new input streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes updates to a specific input configuration (identified by the "InputId" of an application). * **InputId** *(string) --* **[REQUIRED]** The input ID of the application input to be updated. * **NamePrefixUpdate** *(string) --* The name prefix for in-application streams that Kinesis Data Analytics creates for the specific streaming source. * **InputProcessingConfigurationUpdate** *(dict) --* Describes updates to an InputProcessingConfiguration. 
* **InputLambdaProcessorUpdate** *(dict) --* **[REQUIRED]** Provides update information for an InputLambdaProcessor. * **ResourceARNUpdate** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the new Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **KinesisStreamsInputUpdate** *(dict) --* If a Kinesis data stream is the streaming source to be updated, provides an updated stream Amazon Resource Name (ARN). * **ResourceARNUpdate** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the input Kinesis data stream to read. * **KinesisFirehoseInputUpdate** *(dict) --* If a Kinesis Data Firehose delivery stream is the streaming source to be updated, provides an updated stream ARN. * **ResourceARNUpdate** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the input delivery stream to read. * **InputSchemaUpdate** *(dict) --* Describes the data format on the streaming source, and how record elements on the streaming source map to columns of the in-application stream that is created. * **RecordFormatUpdate** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* **[REQUIRED]** The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* **[REQUIRED]** The path to the top-level parent that contains the records. 
* **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* **[REQUIRED]** The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* **[REQUIRED]** The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncodingUpdate** *(string) --* Specifies the encoding of the records in the streaming source; for example, UTF-8. * **RecordColumnUpdates** *(list) --* A list of "RecordColumn" objects. Each object describes the mapping of the streaming source element to the corresponding column in the in-application stream. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* **[REQUIRED]** The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* **[REQUIRED]** The type of column created in the in-application input stream or reference table. * **InputParallelismUpdate** *(dict) --* Describes the parallelism updates (the number of in-application streams Kinesis Data Analytics creates for the specific streaming source). * **CountUpdate** *(integer) --* **[REQUIRED]** The number of in-application streams to create for the specified streaming source. * **OutputUpdates** *(list) --* The array of OutputUpdate objects describing the new destination streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes updates to the output configuration identified by the "OutputId". 
* **OutputId** *(string) --* **[REQUIRED]** Identifies the specific output configuration that you want to update. * **NameUpdate** *(string) --* If you want to specify a different in-application stream for this output configuration, use this field to specify the new in-application stream name. * **KinesisStreamsOutputUpdate** *(dict) --* Describes a Kinesis data stream as the destination for the output. * **ResourceARNUpdate** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the Kinesis data stream where you want to write the output. * **KinesisFirehoseOutputUpdate** *(dict) --* Describes a Kinesis Data Firehose delivery stream as the destination for the output. * **ResourceARNUpdate** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the delivery stream to write to. * **LambdaOutputUpdate** *(dict) --* Describes an Amazon Lambda function as the destination for the output. * **ResourceARNUpdate** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the destination Amazon Lambda function. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **DestinationSchemaUpdate** *(dict) --* Describes the data format when records are written to the destination. * **RecordFormatType** *(string) --* **[REQUIRED]** Specifies the format of the records on the output stream. * **ReferenceDataSourceUpdates** *(list) --* The array of ReferenceDataSourceUpdate objects describing the new reference data sources used by the application. 
* *(dict) --* When you update a reference data source configuration for a SQL-based Kinesis Data Analytics application, this object provides all the updated values (such as the source bucket name and object key name), the in- application table name that is created, and updated mapping information that maps the data in the Amazon S3 object to the in-application reference table that is created. * **ReferenceId** *(string) --* **[REQUIRED]** The ID of the reference data source that is being updated. You can use the DescribeApplication operation to get this value. * **TableNameUpdate** *(string) --* The in-application table name that is created by this update. * **S3ReferenceDataSourceUpdate** *(dict) --* Describes the S3 bucket name, object key name, and IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf and populate the in-application reference table. * **BucketARNUpdate** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKeyUpdate** *(string) --* The object key name. * **ReferenceSchemaUpdate** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* **[REQUIRED]** Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* **[REQUIRED]** The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* **[REQUIRED]** The path to the top-level parent that contains the records. 
* **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* **[REQUIRED]** The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* **[REQUIRED]** The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* **[REQUIRED]** A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* **[REQUIRED]** The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* **[REQUIRED]** The type of column created in the in-application input stream or reference table. * **ApplicationCodeConfigurationUpdate** *(dict) --* Describes updates to an application's code configuration. * **CodeContentTypeUpdate** *(string) --* Describes updates to the code content type. * **CodeContentUpdate** *(dict) --* Describes updates to the code content of an application. * **TextContentUpdate** *(string) --* Describes an update to the text code for an application. * **ZipFileContentUpdate** *(bytes) --* Describes an update to the zipped code for an application. * **S3ContentLocationUpdate** *(dict) --* Describes an update to the location of code for an application. * **BucketARNUpdate** *(string) --* The new Amazon Resource Name (ARN) for the S3 bucket containing the application code. 
* **FileKeyUpdate** *(string) --* The new file key for the object containing the application code. * **ObjectVersionUpdate** *(string) --* The new version of the object containing the application code. * **FlinkApplicationConfigurationUpdate** *(dict) --* Describes updates to a Managed Service for Apache Flink application's configuration. * **CheckpointConfigurationUpdate** *(dict) --* Describes updates to an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. * **ConfigurationTypeUpdate** *(string) --* Describes updates to whether the application uses the default checkpointing behavior of Managed Service for Apache Flink. You must set this property to "CUSTOM" in order to set the "CheckpointingEnabled", "CheckpointInterval", or "MinPauseBetweenCheckpoints" parameters. Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabledUpdate** *(boolean) --* Describes updates to whether checkpointing is enabled for an application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointIntervalUpdate** *(integer) --* Describes updates to the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpointsUpdate** *(integer) --* Describes updates to the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. 
Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. * **MonitoringConfigurationUpdate** *(dict) --* Describes updates to the configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationTypeUpdate** *(string) --* Describes updates to whether to use the default CloudWatch logging configuration for an application. You must set this property to "CUSTOM" in order to set the "LogLevel" or "MetricsLevel" parameters. * **MetricsLevelUpdate** *(string) --* Describes updates to the granularity of the CloudWatch Logs for an application. The "Parallelism" level is not recommended for applications with a Parallelism over 64 due to excessive costs. * **LogLevelUpdate** *(string) --* Describes updates to the verbosity of the CloudWatch Logs for an application. * **ParallelismConfigurationUpdate** *(dict) --* Describes updates to the parameters for how an application executes multiple tasks simultaneously. * **ConfigurationTypeUpdate** *(string) --* Describes updates to whether the application uses the default parallelism for the Managed Service for Apache Flink service, or if a custom parallelism is used. You must set this property to "CUSTOM" in order to change your application's "AutoScalingEnabled", "Parallelism", or "ParallelismPerKPU" properties. * **ParallelismUpdate** *(integer) --* Describes updates to the initial number of parallel tasks an application can perform. If "AutoScalingEnabled" is set to True, then Managed Service for Apache Flink can increase the "CurrentParallelism" value in response to application load. The service can increase "CurrentParallelism" up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase.
If application load is reduced, the service will reduce "CurrentParallelism" down to the "Parallelism" setting. * **ParallelismPerKPUUpdate** *(integer) --* Describes updates to the number of parallel tasks an application can perform per Kinesis Processing Unit (KPU) used by the application. * **AutoScalingEnabledUpdate** *(boolean) --* Describes updates to whether the Managed Service for Apache Flink service can increase the parallelism of a Managed Service for Apache Flink application in response to increased throughput. * **EnvironmentPropertyUpdates** *(dict) --* Describes updates to the environment properties for a Managed Service for Apache Flink application. * **PropertyGroups** *(list) --* **[REQUIRED]** Describes updates to the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* **[REQUIRED]** Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* **[REQUIRED]** Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationSnapshotConfigurationUpdate** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabledUpdate** *(boolean) --* **[REQUIRED]** Describes updates to whether snapshots are enabled for an application. * **ApplicationSystemRollbackConfigurationUpdate** *(dict) --* Describes system rollback configuration for a Managed Service for Apache Flink application. * **RollbackEnabledUpdate** *(boolean) --* **[REQUIRED]** Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application. * **VpcConfigurationUpdates** *(list) --* Updates to the array of descriptions of VPC configurations available to the application. * *(dict) --* Describes updates to the VPC configuration used by the application.
* **VpcConfigurationId** *(string) --* **[REQUIRED]** Describes an update to the ID of the VPC configuration. * **SubnetIdUpdates** *(list) --* Describes updates to the array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIdUpdates** *(list) --* Describes updates to the array of SecurityGroup IDs used by the VPC configuration. * *(string) --* * **ZeppelinApplicationConfigurationUpdate** *(dict) --* Updates to the configuration of a Managed Service for Apache Flink Studio notebook. * **MonitoringConfigurationUpdate** *(dict) --* Updates to the monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevelUpdate** *(string) --* **[REQUIRED]** Updates to the logging level for Apache Zeppelin within a Managed Service for Apache Flink Studio notebook. * **CatalogConfigurationUpdate** *(dict) --* Updates to the configuration of the Amazon Glue Data Catalog that is associated with the Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfigurationUpdate** *(dict) --* **[REQUIRED]** Updates to the configuration parameters for the default Amazon Glue database. You use this database for SQL queries that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARNUpdate** *(string) --* **[REQUIRED]** The updated Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfigurationUpdate** *(dict) --* Updates to the configuration information required to deploy an Amazon Data Analytics Studio notebook as an application with durable state. * **S3ContentLocationUpdate** *(dict) --* Updates to the location that holds the data required to specify an Amazon Data Analytics application. * **BucketARNUpdate** *(string) --* The updated Amazon Resource Name (ARN) of the S3 bucket. * **BasePathUpdate** *(string) --* The updated S3 bucket path. * **CustomArtifactsConfigurationUpdate** *(list) --* Updates to the custom artifacts.
Custom artifacts are dependency JAR files and user-defined functions (UDF). * *(dict) --* Specifies dependency JARs, as well as JAR files that contain user-defined functions (UDF). * **ArtifactType** *(string) --* **[REQUIRED]** "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket. * **S3ContentLocation** *(dict) --* For a Managed Service for Apache Flink application, provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* **[REQUIRED]** The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReference** *(dict) --* The parameters required to fully specify a Maven reference. * **GroupId** *(string) --* **[REQUIRED]** The group ID of the Maven reference. * **ArtifactId** *(string) --* **[REQUIRED]** The artifact ID of the Maven reference. * **Version** *(string) --* **[REQUIRED]** The version of the Maven reference. * **ServiceExecutionRoleUpdate** (*string*) -- Describes updates to the service execution role. * **RunConfigurationUpdate** (*dict*) -- Describes updates to the application's starting parameters. * **FlinkRunConfiguration** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program.
This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **ApplicationRestoreConfiguration** *(dict) --* Describes updates to the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* **[REQUIRED]** Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". * **CloudWatchLoggingOptionUpdates** (*list*) -- Describes application Amazon CloudWatch logging option updates. You can only update existing CloudWatch logging options with this action. To add a new CloudWatch logging option, use AddApplicationCloudWatchLoggingOption. * *(dict) --* Describes the Amazon CloudWatch logging option updates. * **CloudWatchLoggingOptionId** *(string) --* **[REQUIRED]** The ID of the CloudWatch logging option to update * **LogStreamARNUpdate** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **ConditionalToken** (*string*) -- A value you use to implement strong concurrency for application updates. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You get the application's current "ConditionalToken" using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". 
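The "ConditionalToken" flow described above can be sketched in a few lines of Python. This is a minimal, hedged sketch: the application name, logging-option ID, and log-stream ARN are hypothetical placeholders, and the stub dict merely stands in for a real `describe_application` response.

```python
# Hedged sketch: optimistic concurrency for update_application using
# ConditionalToken rather than CurrentApplicationVersionId, as recommended above.
# In practice, `describe_response` would come from:
#   boto3.client('kinesisanalyticsv2').describe_application(ApplicationName='MyApp')

def build_update_kwargs(describe_response, logging_option_updates):
    """Assemble update_application kwargs carrying the current ConditionalToken."""
    detail = describe_response['ApplicationDetail']
    return {
        'ApplicationName': detail['ApplicationName'],
        'ConditionalToken': detail['ConditionalToken'],
        'CloudWatchLoggingOptionUpdates': logging_option_updates,
    }

# Stub standing in for a describe_application response (values are invented).
stub = {'ApplicationDetail': {'ApplicationName': 'MyApp',
                              'ConditionalToken': 'abc123'}}

kwargs = build_update_kwargs(stub, [{
    'CloudWatchLoggingOptionId': '1.1',  # hypothetical existing option ID
    'LogStreamARNUpdate':
        'arn:aws:logs:us-east-1:123456789012:log-group:g:log-stream:s',
}])
# client.update_application(**kwargs)  # would perform the concurrency-safe update
```

If another writer updated the application after the token was fetched, the call would fail rather than silently clobber the newer version, which is the point of the token.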
* **RuntimeEnvironmentUpdate** (*string*) -- Updates the Managed Service for Apache Flink runtime environment used to run your code. To avoid issues you must: * Ensure your new jar and dependencies are compatible with the new runtime selected. * Ensure your new code's state is compatible with the snapshot from which your application will start Return type: dict Returns: **Response Syntax** { 'ApplicationDetail': { 'ApplicationARN': 'string', 'ApplicationDescription': 'string', 'ApplicationName': 'string', 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ServiceExecutionRole': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'CreateTimestamp': datetime(2015, 1, 1), 'LastUpdateTimestamp': datetime(2015, 1, 1), 'ApplicationConfigurationDescription': { 'SqlApplicationConfigurationDescription': { 'InputDescriptions': [ { 'InputId': 'string', 'NamePrefix': 'string', 'InAppStreamNames': [ 'string', ], 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } }, 'KinesisStreamsInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelism': { 'Count': 123 }, 'InputStartingPositionConfiguration': { 'InputStartingPosition': 
'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ], 'OutputDescriptions': [ { 'OutputId': 'string', 'Name': 'string', 'KinesisStreamsOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'LambdaOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSourceDescriptions': [ { 'ReferenceId': 'string', 'TableName': 'string', 'S3ReferenceDataSourceDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ReferenceRoleARN': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'ApplicationCodeConfigurationDescription': { 'CodeContentType': 'PLAINTEXT'|'ZIPFILE', 'CodeContentDescription': { 'TextContent': 'string', 'CodeMD5': 'string', 'CodeSize': 123, 'S3ApplicationCodeLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' } } }, 'RunConfigurationDescription': { 'ApplicationRestoreConfigurationDescription': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' }, 'FlinkRunConfigurationDescription': { 'AllowNonRestoredState': True|False } }, 'FlinkApplicationConfigurationDescription': { 'CheckpointConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabled': True|False, 'CheckpointInterval': 123, 'MinPauseBetweenCheckpoints': 123 }, 'MonitoringConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'MetricsLevel': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevel': 
'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'Parallelism': 123, 'ParallelismPerKPU': 123, 'CurrentParallelism': 123, 'AutoScalingEnabled': True|False }, 'JobPlanDescription': 'string' }, 'EnvironmentPropertyDescriptions': { 'PropertyGroupDescriptions': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 'string': 'string' } }, ] }, 'ApplicationSnapshotConfigurationDescription': { 'SnapshotsEnabled': True|False }, 'ApplicationSystemRollbackConfigurationDescription': { 'RollbackEnabled': True|False }, 'VpcConfigurationDescriptions': [ { 'VpcConfigurationId': 'string', 'VpcId': 'string', 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ], 'ZeppelinApplicationConfigurationDescription': { 'MonitoringConfigurationDescription': { 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfigurationDescription': { 'GlueDataCatalogConfigurationDescription': { 'DatabaseARN': 'string' } }, 'DeployAsApplicationConfigurationDescription': { 'S3ContentLocationDescription': { 'BucketARN': 'string', 'BasePath': 'string' } }, 'CustomArtifactsConfigurationDescription': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReferenceDescription': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' }, ], 'ApplicationMaintenanceConfigurationDescription': { 'ApplicationMaintenanceWindowStartTime': 'string', 'ApplicationMaintenanceWindowEndTime': 'string' }, 'ApplicationVersionUpdatedFrom': 123, 'ApplicationVersionRolledBackFrom': 123, 'ApplicationVersionCreateTimestamp': datetime(2015, 1, 1), 'ConditionalToken': 'string', 'ApplicationVersionRolledBackTo': 123, 'ApplicationMode': 'STREAMING'|'INTERACTIVE' }, 'OperationId': 'string' } **Response Structure** 
* *(dict) --* * **ApplicationDetail** *(dict) --* Describes application updates. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationDescription** *(string) --* The description of the application. * **ApplicationName** *(string) --* The name of the application. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ServiceExecutionRole** *(string) --* Specifies the IAM role that the application uses to access external resources. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **CreateTimestamp** *(datetime) --* The current timestamp when the application was created. * **LastUpdateTimestamp** *(datetime) --* The current timestamp when the application was last updated. * **ApplicationConfigurationDescription** *(dict) --* Describes details about the application code and starting parameters for a Managed Service for Apache Flink application. * **SqlApplicationConfigurationDescription** *(dict) --* The details about inputs, outputs, and reference data sources for a SQL-based Kinesis Data Analytics application. * **InputDescriptions** *(list) --* The array of InputDescription objects describing the input streams used by the application. * *(dict) --* Describes the application input configuration for a SQL-based Kinesis Data Analytics application. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. * **NamePrefix** *(string) --* The in-application name prefix. * **InAppStreamNames** *(list) --* Returns the in-application stream names that are mapped to the stream source. 
* *(string) --* * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. * **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription. * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisStreamsInputDescription** *(dict) --* If a Kinesis data stream is configured as a streaming source, provides the Kinesis data stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseInputDescription** *(dict) --* If a Kinesis Data Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics assumes to access the stream. Note: Provided for backward compatibility.
Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **InputSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table.
* **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **InputParallelism** *(dict) --* Describes the configured parallelism (number of in-application streams mapped to the streaming source). * **Count** *(integer) --* The number of in-application streams to create. * **InputStartingPositionConfiguration** *(dict) --* The point at which the application is configured to read from the input stream. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **OutputDescriptions** *(list) --* The array of OutputDescription objects describing the destination streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the application output configuration, which includes the in-application stream name and the destination where the stream data is written. The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **OutputId** *(string) --* A unique identifier for the output configuration. * **Name** *(string) --* The name of the in-application stream that is configured as output. * **KinesisStreamsOutputDescription** *(dict) --* Describes the Kinesis data stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream.
* **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseOutputDescription** *(dict) --* Describes the Kinesis Data Firehose delivery stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **LambdaOutputDescription** *(dict) --* Describes the Lambda function that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the destination Lambda function. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to write to the destination function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **DestinationSchema** *(dict) --* The data format used for writing data to the destination. * **RecordFormatType** *(string) --* Specifies the format of the records on the output stream. * **ReferenceDataSourceDescriptions** *(list) --* The array of ReferenceDataSourceDescription objects describing the reference data sources used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source configured for an application. * **ReferenceId** *(string) --* The ID of the reference data source. 
This is the ID that Kinesis Data Analytics assigns when you add the reference data source to your application using the CreateApplication or UpdateApplication operation. * **TableName** *(string) --* The in-application table name created by the specific reference data source configuration. * **S3ReferenceDataSourceDescription** *(dict) --* Provides the Amazon S3 bucket name and the object key name that contains the reference data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* Amazon S3 object key name. * **ReferenceRoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **ReferenceSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter.
For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **ApplicationCodeConfigurationDescription** *(dict) --* The details about the application code for a Managed Service for Apache Flink application. * **CodeContentType** *(string) --* Specifies whether the code content is in text or zip format. * **CodeContentDescription** *(dict) --* Describes details about the location and format of the application code. * **TextContent** *(string) --* The text-format code. * **CodeMD5** *(string) --* The checksum that can be used to validate zip-format code. * **CodeSize** *(integer) --* The size in bytes of the application code. Can be used to validate zip-format code. * **S3ApplicationCodeLocationDescription** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the application code stored in Amazon S3. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code.
* **ObjectVersion** *(string) --* The version of the object containing the application code. * **RunConfigurationDescription** *(dict) --* The details about the starting properties for a Managed Service for Apache Flink application. * **ApplicationRestoreConfigurationDescription** *(dict) --* Describes the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". * **FlinkRunConfigurationDescription** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **FlinkApplicationConfigurationDescription** *(dict) --* The details about a Managed Service for Apache Flink application. * **CheckpointConfigurationDescription** *(dict) --* Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. * **ConfigurationType** *(string) --* Describes whether the application uses the default checkpointing behavior in Managed Service for Apache Flink. 
Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabled** *(boolean) --* Describes whether checkpointing is enabled for a Managed Service for Apache Flink application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointInterval** *(integer) --* Describes the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpoints** *(integer) --* Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. * **MonitoringConfigurationDescription** *(dict) --* Describes configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationType** *(string) --* Describes whether to use the default CloudWatch logging configuration for an application. * **MetricsLevel** *(string) --* Describes the granularity of the CloudWatch Logs for an application. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **ParallelismConfigurationDescription** *(dict) --* Describes parameters for how an application executes multiple tasks simultaneously. 
* **ConfigurationType** *(string) --* Describes whether the application uses the default parallelism for the Managed Service for Apache Flink service. * **Parallelism** *(integer) --* Describes the initial number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, then Managed Service for Apache Flink can increase the "CurrentParallelism" value in response to application load. The service can increase "CurrentParallelism" up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **ParallelismPerKPU** *(integer) --* Describes the number of parallel tasks that a Managed Service for Apache Flink application can perform per Kinesis Processing Unit (KPU) used by the application. * **CurrentParallelism** *(integer) --* Describes the current number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, Managed Service for Apache Flink can increase this value in response to application load. The service can increase this value up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **AutoScalingEnabled** *(boolean) --* Describes whether the Managed Service for Apache Flink service can increase the parallelism of the application in response to increased throughput. * **JobPlanDescription** *(string) --* The job plan for an application.
For more information about the job plan, see Jobs and Scheduling in the Apache Flink Documentation. To retrieve the job plan for the application, use the DescribeApplicationRequest$IncludeAdditionalDetails parameter of the DescribeApplication operation. * **EnvironmentPropertyDescriptions** *(dict) --* Describes execution properties for a Managed Service for Apache Flink application. * **PropertyGroupDescriptions** *(list) --* Describes the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationSnapshotConfigurationDescription** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabled** *(boolean) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **ApplicationSystemRollbackConfigurationDescription** *(dict) --* Describes system rollback configuration for a Managed Service for Apache Flink application. * **RollbackEnabled** *(boolean) --* Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application. * **VpcConfigurationDescriptions** *(list) --* The array of descriptions of VPC configurations available to the application. * *(dict) --* Describes the parameters of a VPC used by the application. * **VpcConfigurationId** *(string) --* The ID of the VPC configuration. * **VpcId** *(string) --* The ID of the associated VPC. * **SubnetIds** *(list) --* The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* The array of SecurityGroup IDs used by the VPC configuration.
* *(string) --* * **ZeppelinApplicationConfigurationDescription** *(dict) --* The configuration parameters for a Managed Service for Apache Flink Studio notebook. * **MonitoringConfigurationDescription** *(dict) --* The monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **CatalogConfigurationDescription** *(dict) --* The Amazon Glue Data Catalog that is associated with the Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfigurationDescription** *(dict) --* The configuration parameters for the default Amazon Glue database. You use this database for SQL queries that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARN** *(string) --* The Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfigurationDescription** *(dict) --* The parameters required to deploy a Managed Service for Apache Flink Studio notebook as an application with durable state. * **S3ContentLocationDescription** *(dict) --* The location that holds the data required to specify an Amazon Data Analytics application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **BasePath** *(string) --* The base path for the S3 bucket. * **CustomArtifactsConfigurationDescription** *(list) --* Custom artifacts are dependency JARs and user-defined functions (UDF). * *(dict) --* Specifies a dependency JAR or a JAR of user-defined functions. * **ArtifactType** *(string) --* "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket. 
* **S3ContentLocationDescription** *(dict) --* For a Managed Service for Apache Flink application, provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReferenceDescription** *(dict) --* The parameters that are required to specify a Maven dependency. * **GroupId** *(string) --* The group ID of the Maven reference. * **ArtifactId** *(string) --* The artifact ID of the Maven reference. * **Version** *(string) --* The version of the Maven reference. * **CloudWatchLoggingOptionDescriptions** *(list) --* Describes the application Amazon CloudWatch logging options. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. * **ApplicationMaintenanceConfigurationDescription** *(dict) --* The details of the maintenance configuration for the application. * **ApplicationMaintenanceWindowStartTime** *(string) --* The start time for the maintenance window. * **ApplicationMaintenanceWindowEndTime** *(string) --* The end time for the maintenance window.
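For example, the "CloudWatchLoggingOptionDescriptions" list above is straightforward to consume client-side. A minimal sketch (the sample dict is hypothetical, shaped like the fields documented here; a real one comes from a describe_application response):

```python
# Extract the CloudWatch log stream ARNs from an ApplicationDetail-shaped
# dict. The sample data below is hypothetical, not a real API response.
def log_stream_arns(application_detail):
    options = application_detail.get('CloudWatchLoggingOptionDescriptions', [])
    return [option['LogStreamARN'] for option in options]

detail = {
    'ApplicationName': 'my-app',  # hypothetical application name
    'CloudWatchLoggingOptionDescriptions': [
        {
            'CloudWatchLoggingOptionId': '1.1',
            'LogStreamARN': 'arn:aws:logs:us-east-1:123456789012:log-group:my-group:log-stream:my-stream',
        },
    ],
}
print(log_stream_arns(detail))
```

Because the list may be absent when no logging option is configured, the sketch defaults to an empty list rather than assuming the key exists.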
* **ApplicationVersionUpdatedFrom** *(integer) --* The previous application version before the latest application update. RollbackApplication reverts the application to this version. * **ApplicationVersionRolledBackFrom** *(integer) --* If you reverted the application using RollbackApplication, the application version when "RollbackApplication" was called. * **ApplicationVersionCreateTimestamp** *(datetime) --* The current timestamp when the application version was created. * **ConditionalToken** *(string) --* A value you use to implement strong concurrency for application updates. * **ApplicationVersionRolledBackTo** *(integer) --* The version to which you want to roll back the application. * **ApplicationMode** *(string) --* To create a Managed Service for Apache Flink Studio notebook, you must set the mode to "INTERACTIVE". However, for a Managed Service for Apache Flink application, the mode is optional. * **OperationId** *(string) --* The operation ID for tracking the UpdateApplication request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.CodeValidationException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" * "KinesisAnalyticsV2.Client.exceptions.LimitExceededException" KinesisAnalyticsV2 / Client / create_application_presigned_url create_application_presigned_url ******************************** KinesisAnalyticsV2.Client.create_application_presigned_url(**kwargs) Creates and returns a URL that you can use to connect to an application's extension. The IAM role or user used to call this API defines the permissions to access the extension.
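A minimal usage sketch (the application name is hypothetical, and the helper below is illustrative rather than part of the boto3 API):

```python
# Assemble the request parameters for create_application_presigned_url.
# 'my-app' is a hypothetical application name.
def presigned_url_request(application_name, expiration_seconds=None):
    kwargs = {
        'ApplicationName': application_name,
        'UrlType': 'FLINK_DASHBOARD_URL',
    }
    # SessionExpirationDurationInSeconds is optional; when omitted, the
    # returned URL is valid for twelve hours.
    if expiration_seconds is not None:
        kwargs['SessionExpirationDurationInSeconds'] = expiration_seconds
    return kwargs

# With a real client, the call would look like:
#   client = boto3.client('kinesisanalyticsv2')
#   url = client.create_application_presigned_url(
#       **presigned_url_request('my-app', 1800))['AuthorizedUrl']
print(presigned_url_request('my-app', 1800))
```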
After the presigned URL is created, no additional permission is required to access this URL. IAM authorization policies for this API are also enforced for every HTTP request that attempts to connect to the extension. You control the amount of time that the URL will be valid using the "SessionExpirationDurationInSeconds" parameter. If you do not provide this parameter, the returned URL is valid for twelve hours. Note: The URL that you get from a call to CreateApplicationPresignedUrl must be used within 3 minutes to be valid. If you first try to use the URL after the 3-minute limit expires, the service returns an HTTP 403 Forbidden error. See also: AWS API Documentation **Request Syntax** response = client.create_application_presigned_url( ApplicationName='string', UrlType='FLINK_DASHBOARD_URL'|'ZEPPELIN_UI_URL', SessionExpirationDurationInSeconds=123 ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application. * **UrlType** (*string*) -- **[REQUIRED]** The type of the extension for which to create and return a URL. Currently, the only valid extension URL type is "FLINK_DASHBOARD_URL". * **SessionExpirationDurationInSeconds** (*integer*) -- The duration in seconds for which the returned URL will be valid. Return type: dict Returns: **Response Syntax** { 'AuthorizedUrl': 'string' } **Response Structure** * *(dict) --* * **AuthorizedUrl** *(string) --* The URL of the extension. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" KinesisAnalyticsV2 / Client / close close ***** KinesisAnalyticsV2.Client.close() Closes underlying endpoint connections. KinesisAnalyticsV2 / Client / describe_application describe_application ******************** KinesisAnalyticsV2.Client.describe_application(**kwargs) Returns information about a specific Managed Service for Apache Flink application. 
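For example, a sketch that pulls the headline fields out of the returned "ApplicationDetail" (the sample dict is hypothetical and heavily abbreviated):

```python
# Reduce an ApplicationDetail-shaped dict to its headline fields. The sample
# below is hypothetical and abbreviated; a real detail comes from
# client.describe_application(ApplicationName='my-app')['ApplicationDetail'].
def summarize(detail):
    return (detail['ApplicationName'],
            detail['ApplicationStatus'],
            detail['ApplicationVersionId'])

detail = {
    'ApplicationName': 'my-app',
    'ApplicationStatus': 'RUNNING',
    'ApplicationVersionId': 7,
}
print(summarize(detail))  # ('my-app', 'RUNNING', 7)
```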
If you want to retrieve a list of all applications in your account, use the ListApplications operation. See also: AWS API Documentation **Request Syntax** response = client.describe_application( ApplicationName='string', IncludeAdditionalDetails=True|False ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application. * **IncludeAdditionalDetails** (*boolean*) -- Displays verbose information about a Managed Service for Apache Flink application, including the application's job plan. Return type: dict Returns: **Response Syntax** { 'ApplicationDetail': { 'ApplicationARN': 'string', 'ApplicationDescription': 'string', 'ApplicationName': 'string', 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ServiceExecutionRole': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'CreateTimestamp': datetime(2015, 1, 1), 'LastUpdateTimestamp': datetime(2015, 1, 1), 'ApplicationConfigurationDescription': { 'SqlApplicationConfigurationDescription': { 'InputDescriptions': [ { 'InputId': 'string', 'NamePrefix': 'string', 'InAppStreamNames': [ 'string', ], 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } }, 'KinesisStreamsInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 
'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelism': { 'Count': 123 }, 'InputStartingPositionConfiguration': { 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ], 'OutputDescriptions': [ { 'OutputId': 'string', 'Name': 'string', 'KinesisStreamsOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'LambdaOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSourceDescriptions': [ { 'ReferenceId': 'string', 'TableName': 'string', 'S3ReferenceDataSourceDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ReferenceRoleARN': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'ApplicationCodeConfigurationDescription': { 'CodeContentType': 'PLAINTEXT'|'ZIPFILE', 'CodeContentDescription': { 'TextContent': 'string', 'CodeMD5': 'string', 'CodeSize': 123, 'S3ApplicationCodeLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' } } }, 'RunConfigurationDescription': { 'ApplicationRestoreConfigurationDescription': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' }, 'FlinkRunConfigurationDescription': { 'AllowNonRestoredState': True|False } }, 'FlinkApplicationConfigurationDescription': { 'CheckpointConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabled': True|False, 'CheckpointInterval': 123, 'MinPauseBetweenCheckpoints': 123 }, 
'MonitoringConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'MetricsLevel': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'Parallelism': 123, 'ParallelismPerKPU': 123, 'CurrentParallelism': 123, 'AutoScalingEnabled': True|False }, 'JobPlanDescription': 'string' }, 'EnvironmentPropertyDescriptions': { 'PropertyGroupDescriptions': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 'string': 'string' } }, ] }, 'ApplicationSnapshotConfigurationDescription': { 'SnapshotsEnabled': True|False }, 'ApplicationSystemRollbackConfigurationDescription': { 'RollbackEnabled': True|False }, 'VpcConfigurationDescriptions': [ { 'VpcConfigurationId': 'string', 'VpcId': 'string', 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ], 'ZeppelinApplicationConfigurationDescription': { 'MonitoringConfigurationDescription': { 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfigurationDescription': { 'GlueDataCatalogConfigurationDescription': { 'DatabaseARN': 'string' } }, 'DeployAsApplicationConfigurationDescription': { 'S3ContentLocationDescription': { 'BucketARN': 'string', 'BasePath': 'string' } }, 'CustomArtifactsConfigurationDescription': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReferenceDescription': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' }, ], 'ApplicationMaintenanceConfigurationDescription': { 'ApplicationMaintenanceWindowStartTime': 'string', 'ApplicationMaintenanceWindowEndTime': 'string' }, 'ApplicationVersionUpdatedFrom': 123, 'ApplicationVersionRolledBackFrom': 123, 'ApplicationVersionCreateTimestamp': datetime(2015, 1, 1), 
'ConditionalToken': 'string', 'ApplicationVersionRolledBackTo': 123, 'ApplicationMode': 'STREAMING'|'INTERACTIVE' } } **Response Structure** * *(dict) --* * **ApplicationDetail** *(dict) --* Provides a description of the application, such as the application's Amazon Resource Name (ARN), status, and latest version. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationDescription** *(string) --* The description of the application. * **ApplicationName** *(string) --* The name of the application. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ServiceExecutionRole** *(string) --* Specifies the IAM role that the application uses to access external resources. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **CreateTimestamp** *(datetime) --* The current timestamp when the application was created. * **LastUpdateTimestamp** *(datetime) --* The current timestamp when the application was last updated. * **ApplicationConfigurationDescription** *(dict) --* Describes details about the application code and starting parameters for a Managed Service for Apache Flink application. * **SqlApplicationConfigurationDescription** *(dict) --* The details about inputs, outputs, and reference data sources for a SQL-based Kinesis Data Analytics application. * **InputDescriptions** *(list) --* The array of InputDescription objects describing the input streams used by the application. * *(dict) --* Describes the application input configuration for a SQL-based Kinesis Data Analytics application. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. 
* **NamePrefix** *(string) --* The in-application name prefix. * **InAppStreamNames** *(list) --* Returns the in-application stream names that are mapped to the stream source. * *(string) --* * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. * **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription. * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda. * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisStreamsInputDescription** *(dict) --* If a Kinesis data stream is configured as a streaming source, provides the Kinesis data stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseInputDescription** *(dict) --* If a Kinesis Data Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream.
* **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics assumes to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **InputSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream.
Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **InputParallelism** *(dict) --* Describes the configured parallelism (number of in-application streams mapped to the streaming source). * **Count** *(integer) --* The number of in-application streams to create. * **InputStartingPositionConfiguration** *(dict) --* The point at which the application is configured to read from the input stream. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **OutputDescriptions** *(list) --* The array of OutputDescription objects describing the destination streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the application output configuration, which includes the in-application stream name and the destination where the stream data is written. The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **OutputId** *(string) --* A unique identifier for the output configuration. * **Name** *(string) --* The name of the in-application stream that is configured as output.
* **KinesisStreamsOutputDescription** *(dict) --* Describes the Kinesis data stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseOutputDescription** *(dict) --* Describes the Kinesis Data Firehose delivery stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **LambdaOutputDescription** *(dict) --* Describes the Lambda function that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the destination Lambda function. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to write to the destination function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **DestinationSchema** *(dict) --* The data format used for writing data to the destination. * **RecordFormatType** *(string) --* Specifies the format of the records on the output stream. * **ReferenceDataSourceDescriptions** *(list) --* The array of ReferenceDataSourceDescription objects describing the reference data sources used by the application. 
* *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source configured for an application. * **ReferenceId** *(string) --* The ID of the reference data source. This is the ID that Kinesis Data Analytics assigns when you add the reference data source to your application using the CreateApplication or UpdateApplication operation. * **TableName** *(string) --* The in-application table name created by the specific reference data source configuration. * **S3ReferenceDataSourceDescription** *(dict) --* Provides the Amazon S3 bucket name and the object key name that contains the reference data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* Amazon S3 object key name. * **ReferenceRoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **ReferenceSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source.
* **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **ApplicationCodeConfigurationDescription** *(dict) --* The details about the application code for a Managed Service for Apache Flink application. * **CodeContentType** *(string) --* Specifies whether the code content is in text or zip format. * **CodeContentDescription** *(dict) --* Describes details about the location and format of the application code. * **TextContent** *(string) --* The text-format code. * **CodeMD5** *(string) --* The checksum that can be used to validate zip-format code. * **CodeSize** *(integer) --* The size in bytes of the application code. Can be used to validate zip-format code.
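For example, "CodeMD5" and "CodeSize" can be checked against a local copy of the zip-format code. A sketch, assuming "CodeMD5" is a hex-encoded MD5 digest (the byte string below is a placeholder, not real application code):

```python
import hashlib

# Check local zip-format application code against the CodeMD5 and CodeSize
# reported by the service, assuming CodeMD5 is a hex-encoded MD5 digest.
# The byte string used below is a placeholder, not real application code.
def code_matches(code_bytes, code_md5, code_size):
    return (len(code_bytes) == code_size
            and hashlib.md5(code_bytes).hexdigest() == code_md5)

print(code_matches(b'hello', '5d41402abc4b2a76b9719d911017c592', 5))  # True
```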
* **S3ApplicationCodeLocationDescription** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the application code stored in Amazon S3. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **RunConfigurationDescription** *(dict) --* The details about the starting properties for a Managed Service for Apache Flink application. * **ApplicationRestoreConfigurationDescription** *(dict) --* Describes the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". * **FlinkRunConfigurationDescription** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **FlinkApplicationConfigurationDescription** *(dict) --* The details about a Managed Service for Apache Flink application. 
* **CheckpointConfigurationDescription** *(dict) --* Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. * **ConfigurationType** *(string) --* Describes whether the application uses the default checkpointing behavior in Managed Service for Apache Flink. Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabled** *(boolean) --* Describes whether checkpointing is enabled for a Managed Service for Apache Flink application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointInterval** *(integer) --* Describes the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpoints** *(integer) --* Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. * **MonitoringConfigurationDescription** *(dict) --* Describes configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationType** *(string) --* Describes whether to use the default CloudWatch logging configuration for an application. 
* **MetricsLevel** *(string) --* Describes the granularity of the CloudWatch Logs for an application. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **ParallelismConfigurationDescription** *(dict) --* Describes parameters for how an application executes multiple tasks simultaneously. * **ConfigurationType** *(string) --* Describes whether the application uses the default parallelism for the Managed Service for Apache Flink service. * **Parallelism** *(integer) --* Describes the initial number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, then Managed Service for Apache Flink can increase the "CurrentParallelism" value in response to application load. The service can increase "CurrentParallelism" up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **ParallelismPerKPU** *(integer) --* Describes the number of parallel tasks that a Managed Service for Apache Flink application can perform per Kinesis Processing Unit (KPU) used by the application. * **CurrentParallelism** *(integer) --* Describes the current number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, Managed Service for Apache Flink can increase this value in response to application load. The service can increase this value up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. 
If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **AutoScalingEnabled** *(boolean) --* Describes whether the Managed Service for Apache Flink service can increase the parallelism of the application in response to increased throughput. * **JobPlanDescription** *(string) --* The job plan for an application. For more information about the job plan, see Jobs and Scheduling in the Apache Flink Documentation. To retrieve the job plan for the application, use the DescribeApplicationRequest$IncludeAdditionalDetails parameter of the DescribeApplication operation. * **EnvironmentPropertyDescriptions** *(dict) --* Describes execution properties for a Managed Service for Apache Flink application. * **PropertyGroupDescriptions** *(list) --* Describes the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationSnapshotConfigurationDescription** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabled** *(boolean) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **ApplicationSystemRollbackConfigurationDescription** *(dict) --* Describes system rollback configuration for a Managed Service for Apache Flink application. * **RollbackEnabled** *(boolean) --* Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application. * **VpcConfigurationDescriptions** *(list) --* The array of descriptions of VPC configurations available to the application. * *(dict) --* Describes the parameters of a VPC used by the application. 
* **VpcConfigurationId** *(string) --* The ID of the VPC configuration. * **VpcId** *(string) --* The ID of the associated VPC. * **SubnetIds** *(list) --* The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* The array of SecurityGroup IDs used by the VPC configuration. * *(string) --* * **ZeppelinApplicationConfigurationDescription** *(dict) --* The configuration parameters for a Managed Service for Apache Flink Studio notebook. * **MonitoringConfigurationDescription** *(dict) --* The monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **CatalogConfigurationDescription** *(dict) --* The Amazon Glue Data Catalog that is associated with the Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfigurationDescription** *(dict) --* The configuration parameters for the default Amazon Glue database. You use this database for SQL queries that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARN** *(string) --* The Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfigurationDescription** *(dict) --* The parameters required to deploy a Managed Service for Apache Flink Studio notebook as an application with durable state. * **S3ContentLocationDescription** *(dict) --* The location that holds the data required to specify an Amazon Data Analytics application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **BasePath** *(string) --* The base path for the S3 bucket. * **CustomArtifactsConfigurationDescription** *(list) --* Custom artifacts are dependency JARs and user-defined functions (UDF). * *(dict) --* Specifies a dependency JAR or a JAR of user-defined functions. * **ArtifactType** *(string) --* "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. 
A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket. * **S3ContentLocationDescription** *(dict) --* For a Managed Service for Apache Flink application, provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReferenceDescription** *(dict) --* The parameters that are required to specify a Maven dependency. * **GroupId** *(string) --* The group ID of the Maven reference. * **ArtifactId** *(string) --* The artifact ID of the Maven reference. * **Version** *(string) --* The version of the Maven reference. * **CloudWatchLoggingOptionDescriptions** *(list) --* Describes the application Amazon CloudWatch logging options. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. * **ApplicationMaintenanceConfigurationDescription** *(dict) --* The details of the maintenance configuration for the application. * **ApplicationMaintenanceWindowStartTime** *(string) --* The start time for the maintenance window. * **ApplicationMaintenanceWindowEndTime** *(string) --* The end time for the maintenance window. 
* **ApplicationVersionUpdatedFrom** *(integer) --* The previous application version before the latest application update. RollbackApplication reverts the application to this version. * **ApplicationVersionRolledBackFrom** *(integer) --* If you reverted the application using RollbackApplication, the application version when "RollbackApplication" was called. * **ApplicationVersionCreateTimestamp** *(datetime) --* The current timestamp when the application version was created. * **ConditionalToken** *(string) --* A value you use to implement strong concurrency for application updates. * **ApplicationVersionRolledBackTo** *(integer) --* The version to which you want to roll back the application. * **ApplicationMode** *(string) --* To create a Managed Service for Apache Flink Studio notebook, you must set the mode to "INTERACTIVE". However, for a Managed Service for Apache Flink application, the mode is optional. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / describe_application_version describe_application_version **************************** KinesisAnalyticsV2.Client.describe_application_version(**kwargs) Provides a detailed description of a specified version of the application. To see a list of all the versions of an application, invoke the ListApplicationVersions operation. Note: This operation is supported only for Managed Service for Apache Flink. See also: AWS API Documentation **Request Syntax** response = client.describe_application_version( ApplicationName='string', ApplicationVersionId=123 ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application for which you want to get the version description. * **ApplicationVersionId** (*integer*) -- **[REQUIRED]** The ID of the application version for which you want to get the description. 
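As a minimal usage sketch, the request above can be issued and a few commonly used fields pulled out of the response. The application name and version ID here are hypothetical, and `client` can be any object exposing this method, such as `boto3.client('kinesisanalyticsv2')`:

```python
def summarize_application_version(client, application_name, version_id):
    """Call describe_application_version and return a few common fields.

    `client` is anything exposing describe_application_version, e.g.
    boto3.client('kinesisanalyticsv2'); the names used here are
    illustrative, not part of the API.
    """
    response = client.describe_application_version(
        ApplicationName=application_name,
        ApplicationVersionId=version_id,
    )
    # The response nests everything under ApplicationVersionDetail.
    detail = response['ApplicationVersionDetail']
    return {
        'Name': detail['ApplicationName'],
        'Runtime': detail['RuntimeEnvironment'],
        'Status': detail['ApplicationStatus'],
        'VersionId': detail['ApplicationVersionId'],
    }
```

With credentials configured, this would be invoked as `summarize_application_version(boto3.client('kinesisanalyticsv2'), 'my-app', 1)` (application name hypothetical); the keys accessed correspond to the Response Syntax that follows.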
Return type: dict Returns: **Response Syntax** { 'ApplicationVersionDetail': { 'ApplicationARN': 'string', 'ApplicationDescription': 'string', 'ApplicationName': 'string', 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ServiceExecutionRole': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'CreateTimestamp': datetime(2015, 1, 1), 'LastUpdateTimestamp': datetime(2015, 1, 1), 'ApplicationConfigurationDescription': { 'SqlApplicationConfigurationDescription': { 'InputDescriptions': [ { 'InputId': 'string', 'NamePrefix': 'string', 'InAppStreamNames': [ 'string', ], 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } }, 'KinesisStreamsInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelism': { 'Count': 123 }, 'InputStartingPositionConfiguration': { 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ], 'OutputDescriptions': [ { 'OutputId': 'string', 'Name': 'string', 'KinesisStreamsOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'LambdaOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 
'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSourceDescriptions': [ { 'ReferenceId': 'string', 'TableName': 'string', 'S3ReferenceDataSourceDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ReferenceRoleARN': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'ApplicationCodeConfigurationDescription': { 'CodeContentType': 'PLAINTEXT'|'ZIPFILE', 'CodeContentDescription': { 'TextContent': 'string', 'CodeMD5': 'string', 'CodeSize': 123, 'S3ApplicationCodeLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' } } }, 'RunConfigurationDescription': { 'ApplicationRestoreConfigurationDescription': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' }, 'FlinkRunConfigurationDescription': { 'AllowNonRestoredState': True|False } }, 'FlinkApplicationConfigurationDescription': { 'CheckpointConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabled': True|False, 'CheckpointInterval': 123, 'MinPauseBetweenCheckpoints': 123 }, 'MonitoringConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'MetricsLevel': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'Parallelism': 123, 'ParallelismPerKPU': 123, 'CurrentParallelism': 123, 'AutoScalingEnabled': True|False }, 'JobPlanDescription': 'string' }, 'EnvironmentPropertyDescriptions': { 'PropertyGroupDescriptions': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 
'string': 'string' } }, ] }, 'ApplicationSnapshotConfigurationDescription': { 'SnapshotsEnabled': True|False }, 'ApplicationSystemRollbackConfigurationDescription': { 'RollbackEnabled': True|False }, 'VpcConfigurationDescriptions': [ { 'VpcConfigurationId': 'string', 'VpcId': 'string', 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ], 'ZeppelinApplicationConfigurationDescription': { 'MonitoringConfigurationDescription': { 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfigurationDescription': { 'GlueDataCatalogConfigurationDescription': { 'DatabaseARN': 'string' } }, 'DeployAsApplicationConfigurationDescription': { 'S3ContentLocationDescription': { 'BucketARN': 'string', 'BasePath': 'string' } }, 'CustomArtifactsConfigurationDescription': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReferenceDescription': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' }, ], 'ApplicationMaintenanceConfigurationDescription': { 'ApplicationMaintenanceWindowStartTime': 'string', 'ApplicationMaintenanceWindowEndTime': 'string' }, 'ApplicationVersionUpdatedFrom': 123, 'ApplicationVersionRolledBackFrom': 123, 'ApplicationVersionCreateTimestamp': datetime(2015, 1, 1), 'ConditionalToken': 'string', 'ApplicationVersionRolledBackTo': 123, 'ApplicationMode': 'STREAMING'|'INTERACTIVE' } } **Response Structure** * *(dict) --* * **ApplicationVersionDetail** *(dict) --* Describes the application, including the application Amazon Resource Name (ARN), status, latest version, and input and output configurations. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationDescription** *(string) --* The description of the application. 
* **ApplicationName** *(string) --* The name of the application. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ServiceExecutionRole** *(string) --* Specifies the IAM role that the application uses to access external resources. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **CreateTimestamp** *(datetime) --* The current timestamp when the application was created. * **LastUpdateTimestamp** *(datetime) --* The current timestamp when the application was last updated. * **ApplicationConfigurationDescription** *(dict) --* Describes details about the application code and starting parameters for a Managed Service for Apache Flink application. * **SqlApplicationConfigurationDescription** *(dict) --* The details about inputs, outputs, and reference data sources for a SQL-based Kinesis Data Analytics application. * **InputDescriptions** *(list) --* The array of InputDescription objects describing the input streams used by the application. * *(dict) --* Describes the application input configuration for a SQL-based Kinesis Data Analytics application. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. * **NamePrefix** *(string) --* The in-application name prefix. * **InAppStreamNames** *(list) --* Returns the in-application stream names that are mapped to the stream source. * *(string) --* * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. 
* **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription. * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda. * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisStreamsInputDescription** *(dict) --* If a Kinesis data stream is configured as a streaming source, provides the Kinesis data stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseInputDescription** *(dict) --* If a Kinesis Data Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics assumes to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. 
* **InputSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. 
* **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **InputParallelism** *(dict) --* Describes the configured parallelism (number of in-application streams mapped to the streaming source). * **Count** *(integer) --* The number of in-application streams to create. * **InputStartingPositionConfiguration** *(dict) --* The point at which the application is configured to read from the input stream. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **OutputDescriptions** *(list) --* The array of OutputDescription objects describing the destination streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the application output configuration, which includes the in-application stream name and the destination where the stream data is written. The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **OutputId** *(string) --* A unique identifier for the output configuration. * **Name** *(string) --* The name of the in-application stream that is configured as output. * **KinesisStreamsOutputDescription** *(dict) --* Describes the Kinesis data stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. 
Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseOutputDescription** *(dict) --* Describes the Kinesis Data Firehose delivery stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **LambdaOutputDescription** *(dict) --* Describes the Lambda function that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the destination Lambda function. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to write to the destination function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **DestinationSchema** *(dict) --* The data format used for writing data to the destination. * **RecordFormatType** *(string) --* Specifies the format of the records on the output stream. * **ReferenceDataSourceDescriptions** *(list) --* The array of ReferenceDataSourceDescription objects describing the reference data sources used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source configured for an application. * **ReferenceId** *(string) --* The ID of the reference data source. This is the ID that Kinesis Data Analytics assigns when you add the reference data source to your application using the CreateApplication or UpdateApplication operation. 
* **TableName** *(string) --* The in-application table name created by the specific reference data source configuration. * **S3ReferenceDataSourceDescription** *(dict) --* Provides the Amazon S3 bucket name and the object key name that contains the reference data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* Amazon S3 object key name. * **ReferenceRoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **ReferenceSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. 
For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **ApplicationCodeConfigurationDescription** *(dict) --* The details about the application code for a Managed Service for Apache Flink application. * **CodeContentType** *(string) --* Specifies whether the code content is in text or zip format. * **CodeContentDescription** *(dict) --* Describes details about the location and format of the application code. * **TextContent** *(string) --* The text-format code. * **CodeMD5** *(string) --* The checksum that can be used to validate zip-format code. * **CodeSize** *(integer) --* The size in bytes of the application code. Can be used to validate zip-format code. * **S3ApplicationCodeLocationDescription** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the application code stored in Amazon S3. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. 
* **RunConfigurationDescription** *(dict) --* The details about the starting properties for a Managed Service for Apache Flink application. * **ApplicationRestoreConfigurationDescription** *(dict) --* Describes the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". * **FlinkRunConfigurationDescription** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **FlinkApplicationConfigurationDescription** *(dict) --* The details about a Managed Service for Apache Flink application. * **CheckpointConfigurationDescription** *(dict) --* Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. * **ConfigurationType** *(string) --* Describes whether the application uses the default checkpointing behavior in Managed Service for Apache Flink. 
Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabled** *(boolean) --* Describes whether checkpointing is enabled for a Managed Service for Apache Flink application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointInterval** *(integer) --* Describes the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpoints** *(integer) --* Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. * **MonitoringConfigurationDescription** *(dict) --* Describes configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationType** *(string) --* Describes whether to use the default CloudWatch logging configuration for an application. * **MetricsLevel** *(string) --* Describes the granularity of the CloudWatch Logs for an application. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **ParallelismConfigurationDescription** *(dict) --* Describes parameters for how an application executes multiple tasks simultaneously. 
* **ConfigurationType** *(string) --* Describes whether the application uses the default parallelism for the Managed Service for Apache Flink service. * **Parallelism** *(integer) --* Describes the initial number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, then Managed Service for Apache Flink can increase the "CurrentParallelism" value in response to application load. The service can increase "CurrentParallelism" up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **ParallelismPerKPU** *(integer) --* Describes the number of parallel tasks that a Managed Service for Apache Flink application can perform per Kinesis Processing Unit (KPU) used by the application. * **CurrentParallelism** *(integer) --* Describes the current number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, Managed Service for Apache Flink can increase this value in response to application load. The service can increase this value up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **AutoScalingEnabled** *(boolean) --* Describes whether the Managed Service for Apache Flink service can increase the parallelism of the application in response to increased throughput. * **JobPlanDescription** *(string) --* The job plan for an application. 
For more information about the job plan, see Jobs and Scheduling in the Apache Flink Documentation. To retrieve the job plan for the application, use the DescribeApplicationRequest$IncludeAdditionalDetails parameter of the DescribeApplication operation. * **EnvironmentPropertyDescriptions** *(dict) --* Describes execution properties for a Managed Service for Apache Flink application. * **PropertyGroupDescriptions** *(list) --* Describes the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationSnapshotConfigurationDescription** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabled** *(boolean) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **ApplicationSystemRollbackConfigurationDescription** *(dict) --* Describes the system rollback configuration for a Managed Service for Apache Flink application. * **RollbackEnabled** *(boolean) --* Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application. * **VpcConfigurationDescriptions** *(list) --* The array of descriptions of VPC configurations available to the application. * *(dict) --* Describes the parameters of a VPC used by the application. * **VpcConfigurationId** *(string) --* The ID of the VPC configuration. * **VpcId** *(string) --* The ID of the associated VPC. * **SubnetIds** *(list) --* The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* The array of SecurityGroup IDs used by the VPC configuration. 
* *(string) --* * **ZeppelinApplicationConfigurationDescription** *(dict) --* The configuration parameters for a Managed Service for Apache Flink Studio notebook. * **MonitoringConfigurationDescription** *(dict) --* The monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **CatalogConfigurationDescription** *(dict) --* The Amazon Glue Data Catalog that is associated with the Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfigurationDescription** *(dict) --* The configuration parameters for the default Amazon Glue database. You use this database for SQL queries that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARN** *(string) --* The Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfigurationDescription** *(dict) --* The parameters required to deploy a Managed Service for Apache Flink Studio notebook as an application with durable state. * **S3ContentLocationDescription** *(dict) --* The location that holds the data required to specify a Kinesis Data Analytics application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **BasePath** *(string) --* The base path for the S3 bucket. * **CustomArtifactsConfigurationDescription** *(list) --* Custom artifacts are dependency JARs and user-defined functions (UDFs). * *(dict) --* Specifies a dependency JAR or a JAR of user-defined functions. * **ArtifactType** *(string) --* "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket. 
* **S3ContentLocationDescription** *(dict) --* For a Managed Service for Apache Flink application, provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReferenceDescription** *(dict) --* The parameters that are required to specify a Maven dependency. * **GroupId** *(string) --* The group ID of the Maven reference. * **ArtifactId** *(string) --* The artifact ID of the Maven reference. * **Version** *(string) --* The version of the Maven reference. * **CloudWatchLoggingOptionDescriptions** *(list) --* Describes the application Amazon CloudWatch logging options. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. * **ApplicationMaintenanceConfigurationDescription** *(dict) --* The details of the maintenance configuration for the application. * **ApplicationMaintenanceWindowStartTime** *(string) --* The start time for the maintenance window. * **ApplicationMaintenanceWindowEndTime** *(string) --* The end time for the maintenance window. 
* **ApplicationVersionUpdatedFrom** *(integer) --* The previous application version before the latest application update. RollbackApplication reverts the application to this version. * **ApplicationVersionRolledBackFrom** *(integer) --* If you reverted the application using RollbackApplication, the application version when "RollbackApplication" was called. * **ApplicationVersionCreateTimestamp** *(datetime) --* The current timestamp when the application version was created. * **ConditionalToken** *(string) --* A value you use to implement strong concurrency for application updates. * **ApplicationVersionRolledBackTo** *(integer) --* The version to which you want to roll back the application. * **ApplicationMode** *(string) --* To create a Managed Service for Apache Flink Studio notebook, you must set the mode to "INTERACTIVE". However, for a Managed Service for Apache Flink application, the mode is optional. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / describe_application_operation describe_application_operation ****************************** KinesisAnalyticsV2.Client.describe_application_operation(**kwargs) Returns information about a specific operation performed on a Managed Service for Apache Flink application. See also: AWS API Documentation **Request Syntax** response = client.describe_application_operation( ApplicationName='string', OperationId='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application. * **OperationId** (*string*) -- **[REQUIRED]** The identifier of the operation. Return type: dict Returns: **Response Syntax** { 'ApplicationOperationInfoDetails': { 'Operation': 'string', 'StartTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'OperationStatus': 'IN_PROGRESS'|'CANCELLED'|'SUCCESSFUL'|'FAILED', 
'ApplicationVersionChangeDetails': { 'ApplicationVersionUpdatedFrom': 123, 'ApplicationVersionUpdatedTo': 123 }, 'OperationFailureDetails': { 'RollbackOperationId': 'string', 'ErrorInfo': { 'ErrorString': 'string' } } } } **Response Structure** * *(dict) --* Provides details of the operation corresponding to the operation ID on a Managed Service for Apache Flink application. * **ApplicationOperationInfoDetails** *(dict) --* Provides a description of the operation, such as the operation type and status. * **Operation** *(string) --* The type of operation performed on an application. * **StartTime** *(datetime) --* The timestamp at which the operation was created. * **EndTime** *(datetime) --* The timestamp at which the operation finished for the application. * **OperationStatus** *(string) --* The status of the operation performed on an application. * **ApplicationVersionChangeDetails** *(dict) --* Contains information about the application version changes due to an operation. * **ApplicationVersionUpdatedFrom** *(integer) --* The operation was performed on this version of the application. * **ApplicationVersionUpdatedTo** *(integer) --* The operation execution resulted in the transition to the following version of the application. * **OperationFailureDetails** *(dict) --* Provides a description of the operation failure. * **RollbackOperationId** *(string) --* Provides the operation ID of a system-rollback operation executed due to failure in the current operation. * **ErrorInfo** *(dict) --* Provides a description of the operation failure error. * **ErrorString** *(string) --* The error message resulting in failure of the operation. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / delete_application delete_application ****************** KinesisAnalyticsV2.Client.delete_application(**kwargs) 
Deletes the specified application. Managed Service for Apache Flink halts application execution and deletes the application. See also: AWS API Documentation **Request Syntax** response = client.delete_application( ApplicationName='string', CreateTimestamp=datetime(2015, 1, 1) ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the application to delete. * **CreateTimestamp** (*datetime*) -- **[REQUIRED]** Use the "DescribeApplication" operation to get this value. Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" KinesisAnalyticsV2 / Client / delete_application_cloud_watch_logging_option delete_application_cloud_watch_logging_option ********************************************* KinesisAnalyticsV2.Client.delete_application_cloud_watch_logging_option(**kwargs) Deletes an Amazon CloudWatch log stream from an SQL-based Kinesis Data Analytics application. See also: AWS API Documentation **Request Syntax** response = client.delete_application_cloud_watch_logging_option( ApplicationName='string', CurrentApplicationVersionId=123, CloudWatchLoggingOptionId='string', ConditionalToken='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The application name. * **CurrentApplicationVersionId** (*integer*) -- The version ID of the application. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You can retrieve the application version ID using DescribeApplication. 
For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". * **CloudWatchLoggingOptionId** (*string*) -- **[REQUIRED]** The "CloudWatchLoggingOptionId" of the Amazon CloudWatch logging option to delete. You can get the "CloudWatchLoggingOptionId" by using the DescribeApplication operation. * **ConditionalToken** (*string*) -- A value you use to implement strong concurrency for application updates. You must provide the "CurrentApplicationVersionId" or the "ConditionalToken". You get the application's current "ConditionalToken" using DescribeApplication. For better concurrency support, use the "ConditionalToken" parameter instead of "CurrentApplicationVersionId". Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' }, ], 'OperationId': 'string' } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The application's Amazon Resource Name (ARN). * **ApplicationVersionId** *(integer) --* The version ID of the application. Kinesis Data Analytics updates the "ApplicationVersionId" each time you change the CloudWatch logging options. * **CloudWatchLoggingOptionDescriptions** *(list) --* The descriptions of the remaining CloudWatch logging options for the application. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. 
* **OperationId** *(string) --* The operation ID for tracking the DeleteApplicationCloudWatchLoggingOption request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" KinesisAnalyticsV2 / Client / create_application create_application ****************** KinesisAnalyticsV2.Client.create_application(**kwargs) Creates a Managed Service for Apache Flink application. For information about creating a Managed Service for Apache Flink application, see Creating an Application. See also: AWS API Documentation **Request Syntax** response = client.create_application( ApplicationName='string', ApplicationDescription='string', RuntimeEnvironment='SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', ServiceExecutionRole='string', ApplicationConfiguration={ 'SqlApplicationConfiguration': { 'Inputs': [ { 'NamePrefix': 'string', 'InputProcessingConfiguration': { 'InputLambdaProcessor': { 'ResourceARN': 'string' } }, 'KinesisStreamsInput': { 'ResourceARN': 'string' }, 'KinesisFirehoseInput': { 'ResourceARN': 'string' }, 'InputParallelism': { 'Count': 123 }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ], 'Outputs': [ { 'Name': 'string', 'KinesisStreamsOutput': { 'ResourceARN': 
'string' }, 'KinesisFirehoseOutput': { 'ResourceARN': 'string' }, 'LambdaOutput': { 'ResourceARN': 'string' }, 'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSources': [ { 'TableName': 'string', 'S3ReferenceDataSource': { 'BucketARN': 'string', 'FileKey': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'FlinkApplicationConfiguration': { 'CheckpointConfiguration': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabled': True|False, 'CheckpointInterval': 123, 'MinPauseBetweenCheckpoints': 123 }, 'MonitoringConfiguration': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'MetricsLevel': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfiguration': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'Parallelism': 123, 'ParallelismPerKPU': 123, 'AutoScalingEnabled': True|False } }, 'EnvironmentProperties': { 'PropertyGroups': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 'string': 'string' } }, ] }, 'ApplicationCodeConfiguration': { 'CodeContent': { 'TextContent': 'string', 'ZipFileContent': b'bytes', 'S3ContentLocation': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' } }, 'CodeContentType': 'PLAINTEXT'|'ZIPFILE' }, 'ApplicationSnapshotConfiguration': { 'SnapshotsEnabled': True|False }, 'ApplicationSystemRollbackConfiguration': { 'RollbackEnabled': True|False }, 'VpcConfigurations': [ { 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ], 'ZeppelinApplicationConfiguration': { 'MonitoringConfiguration': { 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfiguration': { 'GlueDataCatalogConfiguration': { 
'DatabaseARN': 'string' } }, 'DeployAsApplicationConfiguration': { 'S3ContentLocation': { 'BucketARN': 'string', 'BasePath': 'string' } }, 'CustomArtifactsConfiguration': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocation': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReference': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, CloudWatchLoggingOptions=[ { 'LogStreamARN': 'string' }, ], Tags=[ { 'Key': 'string', 'Value': 'string' }, ], ApplicationMode='STREAMING'|'INTERACTIVE' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of your application (for example, "sample-app"). * **ApplicationDescription** (*string*) -- A summary description of the application. * **RuntimeEnvironment** (*string*) -- **[REQUIRED]** The runtime environment for the application. * **ServiceExecutionRole** (*string*) -- **[REQUIRED]** The IAM role used by the application to access Kinesis data streams, Kinesis Data Firehose delivery streams, Amazon S3 objects, and other external resources. * **ApplicationConfiguration** (*dict*) -- Use this parameter to configure the application. * **SqlApplicationConfiguration** *(dict) --* The creation and update parameters for a SQL-based Kinesis Data Analytics application. * **Inputs** *(list) --* The array of Input objects describing the input streams used by the application. * *(dict) --* When you configure the application input for a SQL-based Kinesis Data Analytics application, you specify the streaming source, the in-application stream name that is created, and the mapping between the two. * **NamePrefix** *(string) --* **[REQUIRED]** The name prefix to use when creating an in-application stream. Suppose that you specify a prefix "MyInApplicationStream". 
Kinesis Data Analytics then creates one or more (as per the "InputParallelism" count you specified) in-application streams with the names "MyInApplicationStream_001", "MyInApplicationStream_002", and so on. * **InputProcessingConfiguration** *(dict) --* The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor. * **InputLambdaProcessor** *(dict) --* **[REQUIRED]** The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the Amazon Lambda function that operates on records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **KinesisStreamsInput** *(dict) --* If the streaming source is an Amazon Kinesis data stream, identifies the stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the input Kinesis data stream to read. * **KinesisFirehoseInput** *(dict) --* If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream's ARN. * **ResourceARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the delivery stream. * **InputParallelism** *(dict) --* Describes the number of in-application streams to create. * **Count** *(integer) --* The number of in-application streams to create. * **InputSchema** *(dict) --* **[REQUIRED]** Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. Also used to describe the format of the reference data source. 
* **RecordFormat** *(dict) --* **[REQUIRED]** Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* **[REQUIRED]** The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* **[REQUIRED]** The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* **[REQUIRED]** The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* **[REQUIRED]** The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* **[REQUIRED]** A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* **[REQUIRED]** The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* **[REQUIRED]** The type of column created in the in-application input stream or reference table. 
* **Outputs** *(list) --* The array of Output objects describing the destination streams used by the application. * *(dict) --* Describes a SQL-based Kinesis Data Analytics application's output configuration, in which you identify an in-application stream and a destination where you want the in-application stream data to be written. The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **Name** *(string) --* **[REQUIRED]** The name of the in-application stream. * **KinesisStreamsOutput** *(dict) --* Identifies a Kinesis data stream as the destination. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the destination Kinesis data stream to write to. * **KinesisFirehoseOutput** *(dict) --* Identifies a Kinesis Data Firehose delivery stream as the destination. * **ResourceARN** *(string) --* **[REQUIRED]** The ARN of the destination delivery stream to write to. * **LambdaOutput** *(dict) --* Identifies an Amazon Lambda function as the destination. * **ResourceARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the destination Lambda function to write to. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **DestinationSchema** *(dict) --* **[REQUIRED]** Describes the data format when records are written to the destination. * **RecordFormatType** *(string) --* **[REQUIRED]** Specifies the format of the records on the output stream. * **ReferenceDataSources** *(list) --* The array of ReferenceDataSource objects describing the reference data sources used by the application. 
* *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source by providing the source information (Amazon S3 bucket name and object key name), the resulting in-application table name that is created, and the necessary schema to map the data elements in the Amazon S3 object to the in-application table. * **TableName** *(string) --* **[REQUIRED]** The name of the in-application table to create. * **S3ReferenceDataSource** *(dict) --* Identifies the S3 bucket and object that contains the reference data. A SQL-based Kinesis Data Analytics application loads reference data only once. If the data changes, you call the UpdateApplication operation to trigger reloading of data into your application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* The object key name containing the reference data. * **ReferenceSchema** *(dict) --* **[REQUIRED]** Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* **[REQUIRED]** Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* **[REQUIRED]** The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* **[REQUIRED]** The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). 
* **RecordRowDelimiter** *(string) --* **[REQUIRED]** The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* **[REQUIRED]** The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* **[REQUIRED]** A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* **[REQUIRED]** The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* **[REQUIRED]** The type of column created in the in-application input stream or reference table. * **FlinkApplicationConfiguration** *(dict) --* The creation and update parameters for a Managed Service for Apache Flink application. * **CheckpointConfiguration** *(dict) --* Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. For more information, see Checkpoints for Fault Tolerance in the Apache Flink Documentation. * **ConfigurationType** *(string) --* **[REQUIRED]** Describes whether the application uses Managed Service for Apache Flink's default checkpointing behavior. You must set this property to "CUSTOM" in order to set the "CheckpointingEnabled", "CheckpointInterval", or "MinPauseBetweenCheckpoints" parameters. 
Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabled** *(boolean) --* Describes whether checkpointing is enabled for a Managed Service for Apache Flink application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointInterval** *(integer) --* Describes the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpoints** *(integer) --* Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. If a checkpoint operation takes longer than the "CheckpointInterval", the application otherwise performs continual checkpoint operations. For more information, see Tuning Checkpointing in the Apache Flink Documentation. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. * **MonitoringConfiguration** *(dict) --* Describes configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationType** *(string) --* **[REQUIRED]** Describes whether to use the default CloudWatch logging configuration for an application. You must set this property to "CUSTOM" in order to set the "LogLevel" or "MetricsLevel" parameters. 
* **MetricsLevel** *(string) --* Describes the granularity of the CloudWatch Logs for an application. The "Parallelism" level is not recommended for applications with a Parallelism over 64 due to excessive costs. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **ParallelismConfiguration** *(dict) --* Describes parameters for how an application executes multiple tasks simultaneously. * **ConfigurationType** *(string) --* **[REQUIRED]** Describes whether the application uses the default parallelism for the Managed Service for Apache Flink service. You must set this property to "CUSTOM" in order to change your application's "AutoScalingEnabled", "Parallelism", or "ParallelismPerKPU" properties. * **Parallelism** *(integer) --* Describes the initial number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, Managed Service for Apache Flink increases the "CurrentParallelism" value in response to application load. The service can increase the "CurrentParallelism" value up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **ParallelismPerKPU** *(integer) --* Describes the number of parallel tasks that a Managed Service for Apache Flink application can perform per Kinesis Processing Unit (KPU) used by the application. For more information about KPUs, see Amazon Managed Service for Apache Flink Pricing. * **AutoScalingEnabled** *(boolean) --* Describes whether the Managed Service for Apache Flink service can increase the parallelism of the application in response to increased throughput. 
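Taken together, the checkpointing, monitoring, and parallelism properties above form the "FlinkApplicationConfiguration" parameter. A minimal sketch follows; the numeric values are illustrative, not recommendations, and every "ConfigurationType" is set to "CUSTOM" so that the sibling fields take effect:

```python
# Illustrative FlinkApplicationConfiguration for create_application.
# With ConfigurationType 'DEFAULT', the service ignores the sibling
# fields and applies its own defaults (see the notes above).
flink_application_configuration = {
    'CheckpointConfiguration': {
        'ConfigurationType': 'CUSTOM',
        'CheckpointingEnabled': True,
        'CheckpointInterval': 60000,         # milliseconds
        'MinPauseBetweenCheckpoints': 5000,  # milliseconds
    },
    'MonitoringConfiguration': {
        'ConfigurationType': 'CUSTOM',
        'MetricsLevel': 'TASK',
        'LogLevel': 'INFO',
    },
    'ParallelismConfiguration': {
        'ConfigurationType': 'CUSTOM',
        'Parallelism': 2,
        'ParallelismPerKPU': 1,
        'AutoScalingEnabled': True,
    },
}
```

This dict would be passed as the "FlinkApplicationConfiguration" key of the "ApplicationConfiguration" request parameter.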
* **EnvironmentProperties** *(dict) --* Describes execution properties for a Managed Service for Apache Flink application. * **PropertyGroups** *(list) --* **[REQUIRED]** Describes the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* **[REQUIRED]** Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* **[REQUIRED]** Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationCodeConfiguration** *(dict) --* The code location and type parameters for a Managed Service for Apache Flink application. * **CodeContent** *(dict) --* The location and type of the application code. * **TextContent** *(string) --* The text-format code for a Managed Service for Apache Flink application. * **ZipFileContent** *(bytes) --* The zip-format code for a Managed Service for Apache Flink application. * **S3ContentLocation** *(dict) --* Information about the Amazon S3 bucket that contains the application code. * **BucketARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* **[REQUIRED]** The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **CodeContentType** *(string) --* **[REQUIRED]** Specifies whether the code content is in text or zip format. * **ApplicationSnapshotConfiguration** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabled** *(boolean) --* **[REQUIRED]** Describes whether snapshots are enabled for a Managed Service for Apache Flink application. 
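The code-location properties above combine into an "ApplicationCodeConfiguration" value. A sketch for zip-format code stored in Amazon S3 (the bucket ARN and file key are placeholders, not real resources):

```python
# Illustrative ApplicationCodeConfiguration: zip-format code in S3.
# BucketARN and FileKey below are placeholders chosen for the example.
application_code_configuration = {
    'CodeContent': {
        'S3ContentLocation': {
            'BucketARN': 'arn:aws:s3:::my-app-bucket',  # placeholder
            'FileKey': 'flink-app-1.0.jar',             # placeholder
        },
    },
    # 'ZIPFILE' for packaged code in S3; 'PLAINTEXT' pairs with TextContent.
    'CodeContentType': 'ZIPFILE',
}
```

For text-format code, "TextContent" would be set under "CodeContent" instead of "S3ContentLocation", with "CodeContentType" set to "PLAINTEXT".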
* **ApplicationSystemRollbackConfiguration** *(dict) --* Describes system rollback configuration for a Managed Service for Apache Flink application. * **RollbackEnabled** *(boolean) --* **[REQUIRED]** Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application. * **VpcConfigurations** *(list) --* The array of descriptions of VPC configurations available to the application. * *(dict) --* Describes the parameters of a VPC used by the application. * **SubnetIds** *(list) --* **[REQUIRED]** The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* **[REQUIRED]** The array of SecurityGroup IDs used by the VPC configuration. * *(string) --* * **ZeppelinApplicationConfiguration** *(dict) --* The configuration parameters for a Managed Service for Apache Flink Studio notebook. * **MonitoringConfiguration** *(dict) --* The monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevel** *(string) --* **[REQUIRED]** The verbosity of the CloudWatch Logs for an application. * **CatalogConfiguration** *(dict) --* The Amazon Glue Data Catalog that you use in queries in a Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfiguration** *(dict) --* **[REQUIRED]** The configuration parameters for the default Amazon Glue database. You use this database for Apache Flink SQL queries and table API transforms that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfiguration** *(dict) --* The information required to deploy a Managed Service for Apache Flink Studio notebook as an application with durable state. 
* **S3ContentLocation** *(dict) --* **[REQUIRED]** The description of an Amazon S3 object that contains the Amazon Data Analytics application, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the S3 bucket. * **BasePath** *(string) --* The base path for the S3 bucket. * **CustomArtifactsConfiguration** *(list) --* Custom artifacts are dependency JARs and user-defined functions (UDF). * *(dict) --* Specifies dependency JARs, as well as JAR files that contain user-defined functions (UDF). * **ArtifactType** *(string) --* **[REQUIRED]** "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket. * **S3ContentLocation** *(dict) --* For a Managed Service for Apache Flink application provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* **[REQUIRED]** The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReference** *(dict) --* The parameters required to fully specify a Maven reference. * **GroupId** *(string) --* **[REQUIRED]** The group ID of the Maven reference. * **ArtifactId** *(string) --* **[REQUIRED]** The artifact ID of the Maven reference. * **Version** *(string) --* **[REQUIRED]** The version of the Maven reference. 
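For a Studio notebook, the properties above come together in a "ZeppelinApplicationConfiguration" value. A sketch with a Glue catalog and one Maven dependency JAR (the Glue database ARN and the Maven coordinates are placeholders chosen for the example):

```python
# Illustrative ZeppelinApplicationConfiguration for an INTERACTIVE
# (Studio notebook) application. The database ARN and Maven
# coordinates are placeholders, not verified values.
zeppelin_application_configuration = {
    'MonitoringConfiguration': {'LogLevel': 'INFO'},
    'CatalogConfiguration': {
        'GlueDataCatalogConfiguration': {
            # Placeholder ARN of the default Glue database.
            'DatabaseARN': 'arn:aws:glue:us-east-1:111122223333:database/default',
        },
    },
    'CustomArtifactsConfiguration': [
        {
            # A DEPENDENCY_JAR may be referenced from Maven; a UDF must be in S3.
            'ArtifactType': 'DEPENDENCY_JAR',
            'MavenReference': {
                'GroupId': 'org.apache.flink',          # placeholder coordinates
                'ArtifactId': 'flink-connector-kafka',
                'Version': '1.15.4',
            },
        },
    ],
}
```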
* **CloudWatchLoggingOptions** (*list*) -- Use this parameter to configure an Amazon CloudWatch log stream to monitor application configuration errors. * *(dict) --* Provides a description of Amazon CloudWatch logging options, including the log stream Amazon Resource Name (ARN). * **LogStreamARN** *(string) --* **[REQUIRED]** The ARN of the CloudWatch log to receive application messages. * **Tags** (*list*) -- A list of one or more tags to assign to the application. A tag is a key-value pair that identifies an application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. For more information, see Using Tagging. * *(dict) --* A key-value pair (the value is optional) that you can define and assign to Amazon resources. If you specify a tag that already exists, the tag value is replaced with the value that you specify in the request. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. For more information, see Using Tagging. * **Key** *(string) --* **[REQUIRED]** The key of the key-value tag. * **Value** *(string) --* The value of the key-value tag. The value is optional. * **ApplicationMode** (*string*) -- Use the "STREAMING" mode to create a Managed Service for Apache Flink application. To create a Managed Service for Apache Flink Studio notebook, use the "INTERACTIVE" mode. 
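Putting the request parameters together, a minimal "create_application" request for a "STREAMING" application might be sketched as follows. The application name, role ARN, bucket ARN, file key, and log stream ARN are all placeholders; in a real call you would pass these kwargs to the client shown at the top of this page:

```python
# Minimal sketch of create_application kwargs for a Managed Service
# for Apache Flink application. All ARNs and names are placeholders.
create_kwargs = {
    'ApplicationName': 'my-flink-app',
    'RuntimeEnvironment': 'FLINK-1_20',
    'ServiceExecutionRole': 'arn:aws:iam::111122223333:role/my-app-role',
    'ApplicationMode': 'STREAMING',
    'ApplicationConfiguration': {
        'ApplicationCodeConfiguration': {
            'CodeContent': {
                'S3ContentLocation': {
                    'BucketARN': 'arn:aws:s3:::my-app-bucket',
                    'FileKey': 'flink-app-1.0.jar',
                },
            },
            'CodeContentType': 'ZIPFILE',
        },
    },
    'CloudWatchLoggingOptions': [
        # Placeholder log stream ARN for application configuration errors.
        {'LogStreamARN': 'arn:aws:logs:us-east-1:111122223333:log-group:my-group:log-stream:my-stream'},
    ],
    'Tags': [{'Key': 'team', 'Value': 'analytics'}],
}

# client = boto3.client('kinesisanalyticsv2')
# response = client.create_application(**create_kwargs)
```

The response dict described below is returned under the "ApplicationDetail" key.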
Return type: dict Returns: **Response Syntax** { 'ApplicationDetail': { 'ApplicationARN': 'string', 'ApplicationDescription': 'string', 'ApplicationName': 'string', 'RuntimeEnvironment': 'SQL-1_0'|'FLINK-1_6'|'FLINK-1_8'|'ZEPPELIN-FLINK-1_0'|'FLINK-1_11'|'FLINK-1_13'|'ZEPPELIN-FLINK-2_0'|'FLINK-1_15'|'ZEPPELIN-FLINK-3_0'|'FLINK-1_18'|'FLINK-1_19'|'FLINK-1_20', 'ServiceExecutionRole': 'string', 'ApplicationStatus': 'DELETING'|'STARTING'|'STOPPING'|'READY'|'RUNNING'|'UPDATING'|'AUTOSCALING'|'FORCE_STOPPING'|'ROLLING_BACK'|'MAINTENANCE'|'ROLLED_BACK', 'ApplicationVersionId': 123, 'CreateTimestamp': datetime(2015, 1, 1), 'LastUpdateTimestamp': datetime(2015, 1, 1), 'ApplicationConfigurationDescription': { 'SqlApplicationConfigurationDescription': { 'InputDescriptions': [ { 'InputId': 'string', 'NamePrefix': 'string', 'InAppStreamNames': [ 'string', ], 'InputProcessingConfigurationDescription': { 'InputLambdaProcessorDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' } }, 'KinesisStreamsInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseInputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'InputSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] }, 'InputParallelism': { 'Count': 123 }, 'InputStartingPositionConfiguration': { 'InputStartingPosition': 'NOW'|'TRIM_HORIZON'|'LAST_STOPPED_POINT' } }, ], 'OutputDescriptions': [ { 'OutputId': 'string', 'Name': 'string', 'KinesisStreamsOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'KinesisFirehoseOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 'LambdaOutputDescription': { 'ResourceARN': 'string', 'RoleARN': 'string' }, 
'DestinationSchema': { 'RecordFormatType': 'JSON'|'CSV' } }, ], 'ReferenceDataSourceDescriptions': [ { 'ReferenceId': 'string', 'TableName': 'string', 'S3ReferenceDataSourceDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ReferenceRoleARN': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] }, 'ApplicationCodeConfigurationDescription': { 'CodeContentType': 'PLAINTEXT'|'ZIPFILE', 'CodeContentDescription': { 'TextContent': 'string', 'CodeMD5': 'string', 'CodeSize': 123, 'S3ApplicationCodeLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' } } }, 'RunConfigurationDescription': { 'ApplicationRestoreConfigurationDescription': { 'ApplicationRestoreType': 'SKIP_RESTORE_FROM_SNAPSHOT'|'RESTORE_FROM_LATEST_SNAPSHOT'|'RESTORE_FROM_CUSTOM_SNAPSHOT', 'SnapshotName': 'string' }, 'FlinkRunConfigurationDescription': { 'AllowNonRestoredState': True|False } }, 'FlinkApplicationConfigurationDescription': { 'CheckpointConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'CheckpointingEnabled': True|False, 'CheckpointInterval': 123, 'MinPauseBetweenCheckpoints': 123 }, 'MonitoringConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'MetricsLevel': 'APPLICATION'|'TASK'|'OPERATOR'|'PARALLELISM', 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'ParallelismConfigurationDescription': { 'ConfigurationType': 'DEFAULT'|'CUSTOM', 'Parallelism': 123, 'ParallelismPerKPU': 123, 'CurrentParallelism': 123, 'AutoScalingEnabled': True|False }, 'JobPlanDescription': 'string' }, 'EnvironmentPropertyDescriptions': { 'PropertyGroupDescriptions': [ { 'PropertyGroupId': 'string', 'PropertyMap': { 
'string': 'string' } }, ] }, 'ApplicationSnapshotConfigurationDescription': { 'SnapshotsEnabled': True|False }, 'ApplicationSystemRollbackConfigurationDescription': { 'RollbackEnabled': True|False }, 'VpcConfigurationDescriptions': [ { 'VpcConfigurationId': 'string', 'VpcId': 'string', 'SubnetIds': [ 'string', ], 'SecurityGroupIds': [ 'string', ] }, ], 'ZeppelinApplicationConfigurationDescription': { 'MonitoringConfigurationDescription': { 'LogLevel': 'INFO'|'WARN'|'ERROR'|'DEBUG' }, 'CatalogConfigurationDescription': { 'GlueDataCatalogConfigurationDescription': { 'DatabaseARN': 'string' } }, 'DeployAsApplicationConfigurationDescription': { 'S3ContentLocationDescription': { 'BucketARN': 'string', 'BasePath': 'string' } }, 'CustomArtifactsConfigurationDescription': [ { 'ArtifactType': 'UDF'|'DEPENDENCY_JAR', 'S3ContentLocationDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ObjectVersion': 'string' }, 'MavenReferenceDescription': { 'GroupId': 'string', 'ArtifactId': 'string', 'Version': 'string' } }, ] } }, 'CloudWatchLoggingOptionDescriptions': [ { 'CloudWatchLoggingOptionId': 'string', 'LogStreamARN': 'string', 'RoleARN': 'string' }, ], 'ApplicationMaintenanceConfigurationDescription': { 'ApplicationMaintenanceWindowStartTime': 'string', 'ApplicationMaintenanceWindowEndTime': 'string' }, 'ApplicationVersionUpdatedFrom': 123, 'ApplicationVersionRolledBackFrom': 123, 'ApplicationVersionCreateTimestamp': datetime(2015, 1, 1), 'ConditionalToken': 'string', 'ApplicationVersionRolledBackTo': 123, 'ApplicationMode': 'STREAMING'|'INTERACTIVE' } } **Response Structure** * *(dict) --* * **ApplicationDetail** *(dict) --* In response to your "CreateApplication" request, Managed Service for Apache Flink returns a response with details of the application it created. * **ApplicationARN** *(string) --* The ARN of the application. * **ApplicationDescription** *(string) --* The description of the application. 
* **ApplicationName** *(string) --* The name of the application. * **RuntimeEnvironment** *(string) --* The runtime environment for the application. * **ServiceExecutionRole** *(string) --* Specifies the IAM role that the application uses to access external resources. * **ApplicationStatus** *(string) --* The status of the application. * **ApplicationVersionId** *(integer) --* Provides the current application version. Managed Service for Apache Flink updates the "ApplicationVersionId" each time you update the application. * **CreateTimestamp** *(datetime) --* The current timestamp when the application was created. * **LastUpdateTimestamp** *(datetime) --* The current timestamp when the application was last updated. * **ApplicationConfigurationDescription** *(dict) --* Describes details about the application code and starting parameters for a Managed Service for Apache Flink application. * **SqlApplicationConfigurationDescription** *(dict) --* The details about inputs, outputs, and reference data sources for a SQL-based Kinesis Data Analytics application. * **InputDescriptions** *(list) --* The array of InputDescription objects describing the input streams used by the application. * *(dict) --* Describes the application input configuration for a SQL-based Kinesis Data Analytics application. * **InputId** *(string) --* The input ID that is associated with the application input. This is the ID that Kinesis Data Analytics assigns to each input configuration that you add to your application. * **NamePrefix** *(string) --* The in-application name prefix. * **InAppStreamNames** *(list) --* Returns the in-application stream names that are mapped to the stream source. * *(string) --* * **InputProcessingConfigurationDescription** *(dict) --* The description of the preprocessor that executes on records in this input before the application's code is run. 
* **InputLambdaProcessorDescription** *(dict) --* Provides configuration information about the associated InputLambdaProcessorDescription. * **ResourceARN** *(string) --* The ARN of the Amazon Lambda function that is used to preprocess the records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda * **RoleARN** *(string) --* The ARN of the IAM role that is used to access the Amazon Lambda function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisStreamsInputDescription** *(dict) --* If a Kinesis data stream is configured as a streaming source, provides the Kinesis data stream's Amazon Resource Name (ARN). * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseInputDescription** *(dict) --* If a Kinesis Data Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics assumes to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. 
* **InputSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. 
* **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **InputParallelism** *(dict) --* Describes the configured parallelism (number of in-application streams mapped to the streaming source). * **Count** *(integer) --* The number of in-application streams to create. * **InputStartingPositionConfiguration** *(dict) --* The point at which the application is configured to read from the input stream. * **InputStartingPosition** *(string) --* The starting position on the stream. * "NOW" - Start reading just after the most recent record in the stream, and start at the request timestamp that the customer issued. * "TRIM_HORIZON" - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Data Firehose delivery stream. * "LAST_STOPPED_POINT" - Resume reading from where the application last stopped reading. * **OutputDescriptions** *(list) --* The array of OutputDescription objects describing the destination streams used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the application output configuration, which includes the in-application stream name and the destination where the stream data is written. The destination can be a Kinesis data stream or a Kinesis Data Firehose delivery stream. * **OutputId** *(string) --* A unique identifier for the output configuration. * **Name** *(string) --* The name of the in-application stream that is configured as output. * **KinesisStreamsOutputDescription** *(dict) --* Describes the Kinesis data stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the Kinesis data stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. 
Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **KinesisFirehoseOutputDescription** *(dict) --* Describes the Kinesis Data Firehose delivery stream that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the delivery stream. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to access the stream. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **LambdaOutputDescription** *(dict) --* Describes the Lambda function that is configured as the destination where output is written. * **ResourceARN** *(string) --* The Amazon Resource Name (ARN) of the destination Lambda function. * **RoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to write to the destination function. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **DestinationSchema** *(dict) --* The data format used for writing data to the destination. * **RecordFormatType** *(string) --* Specifies the format of the records on the output stream. * **ReferenceDataSourceDescriptions** *(list) --* The array of ReferenceDataSourceDescription objects describing the reference data sources used by the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source configured for an application. * **ReferenceId** *(string) --* The ID of the reference data source. This is the ID that Kinesis Data Analytics assigns when you add the reference data source to your application using the CreateApplication or UpdateApplication operation. 
* **TableName** *(string) --* The in-application table name created by the specific reference data source configuration. * **S3ReferenceDataSourceDescription** *(dict) --* Provides the Amazon S3 bucket name and the object key name that contains the reference data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* Amazon S3 object key name. * **ReferenceRoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **ReferenceSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. 
For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. * **ApplicationCodeConfigurationDescription** *(dict) --* The details about the application code for a Managed Service for Apache Flink application. * **CodeContentType** *(string) --* Specifies whether the code content is in text or zip format. * **CodeContentDescription** *(dict) --* Describes details about the location and format of the application code. * **TextContent** *(string) --* The text-format code. * **CodeMD5** *(string) --* The checksum that can be used to validate zip-format code. * **CodeSize** *(integer) --* The size in bytes of the application code. Can be used to validate zip-format code. * **S3ApplicationCodeLocationDescription** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the application code stored in Amazon S3. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. 
* **RunConfigurationDescription** *(dict) --* The details about the starting properties for a Managed Service for Apache Flink application. * **ApplicationRestoreConfigurationDescription** *(dict) --* Describes the restore behavior of a restarting application. * **ApplicationRestoreType** *(string) --* Specifies how the application should be restored. * **SnapshotName** *(string) --* The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if "RESTORE_FROM_CUSTOM_SNAPSHOT" is specified for the "ApplicationRestoreType". * **FlinkRunConfigurationDescription** *(dict) --* Describes the starting parameters for a Managed Service for Apache Flink application. * **AllowNonRestoredState** *(boolean) --* When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation. Note: This value defaults to "false". If you update your application without specifying this parameter, "AllowNonRestoredState" will be set to "false", even if it was previously set to "true". * **FlinkApplicationConfigurationDescription** *(dict) --* The details about a Managed Service for Apache Flink application. * **CheckpointConfigurationDescription** *(dict) --* Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. * **ConfigurationType** *(string) --* Describes whether the application uses the default checkpointing behavior in Managed Service for Apache Flink. 
Note: If this value is set to "DEFAULT", the application will use the following values, even if they are set to other values using APIs or application code: * **CheckpointingEnabled:** true * **CheckpointInterval:** 60000 * **MinPauseBetweenCheckpoints:** 5000 * **CheckpointingEnabled** *(boolean) --* Describes whether checkpointing is enabled for a Managed Service for Apache Flink application. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointingEnabled" value of "true", even if this value is set to another value using this API or in application code. * **CheckpointInterval** *(integer) --* Describes the interval in milliseconds between checkpoint operations. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "CheckpointInterval" value of 60000, even if this value is set to another value using this API or in application code. * **MinPauseBetweenCheckpoints** *(integer) --* Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. Note: If "CheckpointConfiguration.ConfigurationType" is "DEFAULT", the application will use a "MinPauseBetweenCheckpoints" value of 5000, even if this value is set using this API or in application code. * **MonitoringConfigurationDescription** *(dict) --* Describes configuration parameters for Amazon CloudWatch logging for an application. * **ConfigurationType** *(string) --* Describes whether to use the default CloudWatch logging configuration for an application. * **MetricsLevel** *(string) --* Describes the granularity of the CloudWatch Logs for an application. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **ParallelismConfigurationDescription** *(dict) --* Describes parameters for how an application executes multiple tasks simultaneously. 
* **ConfigurationType** *(string) --* Describes whether the application uses the default parallelism for the Managed Service for Apache Flink service. * **Parallelism** *(integer) --* Describes the initial number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, then Managed Service for Apache Flink can increase the "CurrentParallelism" value in response to application load. The service can increase "CurrentParallelism" up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **ParallelismPerKPU** *(integer) --* Describes the number of parallel tasks that a Managed Service for Apache Flink application can perform per Kinesis Processing Unit (KPU) used by the application. * **CurrentParallelism** *(integer) --* Describes the current number of parallel tasks that a Managed Service for Apache Flink application can perform. If "AutoScalingEnabled" is set to True, Managed Service for Apache Flink can increase this value in response to application load. The service can increase this value up to the maximum parallelism, which is "ParallelismPerKPU" times the maximum KPUs for the application. The maximum KPUs for an application is 32 by default, and can be increased by requesting a limit increase. If application load is reduced, the service can reduce the "CurrentParallelism" value down to the "Parallelism" setting. * **AutoScalingEnabled** *(boolean) --* Describes whether the Managed Service for Apache Flink service can increase the parallelism of the application in response to increased throughput. * **JobPlanDescription** *(string) --* The job plan for an application.
For more information about the job plan, see Jobs and Scheduling in the Apache Flink Documentation. To retrieve the job plan for the application, use the DescribeApplicationRequest$IncludeAdditionalDetails parameter of the DescribeApplication operation. * **EnvironmentPropertyDescriptions** *(dict) --* Describes execution properties for a Managed Service for Apache Flink application. * **PropertyGroupDescriptions** *(list) --* Describes the execution property groups. * *(dict) --* Property key-value pairs passed into an application. * **PropertyGroupId** *(string) --* Describes the key of an application execution property key-value pair. * **PropertyMap** *(dict) --* Describes the value of an application execution property key-value pair. * *(string) --* * *(string) --* * **ApplicationSnapshotConfigurationDescription** *(dict) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **SnapshotsEnabled** *(boolean) --* Describes whether snapshots are enabled for a Managed Service for Apache Flink application. * **ApplicationSystemRollbackConfigurationDescription** *(dict) --* Describes the system rollback configuration for a Managed Service for Apache Flink application. * **RollbackEnabled** *(boolean) --* Describes whether system rollbacks are enabled for a Managed Service for Apache Flink application. * **VpcConfigurationDescriptions** *(list) --* The array of descriptions of VPC configurations available to the application. * *(dict) --* Describes the parameters of a VPC used by the application. * **VpcConfigurationId** *(string) --* The ID of the VPC configuration. * **VpcId** *(string) --* The ID of the associated VPC. * **SubnetIds** *(list) --* The array of Subnet IDs used by the VPC configuration. * *(string) --* * **SecurityGroupIds** *(list) --* The array of SecurityGroup IDs used by the VPC configuration.
* *(string) --* * **ZeppelinApplicationConfigurationDescription** *(dict) --* The configuration parameters for a Managed Service for Apache Flink Studio notebook. * **MonitoringConfigurationDescription** *(dict) --* The monitoring configuration of a Managed Service for Apache Flink Studio notebook. * **LogLevel** *(string) --* Describes the verbosity of the CloudWatch Logs for an application. * **CatalogConfigurationDescription** *(dict) --* The Amazon Glue Data Catalog that is associated with the Managed Service for Apache Flink Studio notebook. * **GlueDataCatalogConfigurationDescription** *(dict) --* The configuration parameters for the default Amazon Glue database. You use this database for SQL queries that you write in a Managed Service for Apache Flink Studio notebook. * **DatabaseARN** *(string) --* The Amazon Resource Name (ARN) of the database. * **DeployAsApplicationConfigurationDescription** *(dict) --* The parameters required to deploy a Managed Service for Apache Flink Studio notebook as an application with durable state. * **S3ContentLocationDescription** *(dict) --* The location that holds the data required to specify a Kinesis Data Analytics application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **BasePath** *(string) --* The base path for the S3 bucket. * **CustomArtifactsConfigurationDescription** *(list) --* Custom artifacts are dependency JARs and user-defined functions (UDFs). * *(dict) --* Specifies a dependency JAR or a JAR of user-defined functions. * **ArtifactType** *(string) --* "UDF" stands for user-defined functions. This type of artifact must be in an S3 bucket. A "DEPENDENCY_JAR" can be in either Maven or an S3 bucket.
* **S3ContentLocationDescription** *(dict) --* For a Managed Service for Apache Flink application, provides a description of an Amazon S3 object, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) for the S3 bucket containing the application code. * **FileKey** *(string) --* The file key for the object containing the application code. * **ObjectVersion** *(string) --* The version of the object containing the application code. * **MavenReferenceDescription** *(dict) --* The parameters that are required to specify a Maven dependency. * **GroupId** *(string) --* The group ID of the Maven reference. * **ArtifactId** *(string) --* The artifact ID of the Maven reference. * **Version** *(string) --* The version of the Maven reference. * **CloudWatchLoggingOptionDescriptions** *(list) --* Describes the application Amazon CloudWatch logging options. * *(dict) --* Describes the Amazon CloudWatch logging option. * **CloudWatchLoggingOptionId** *(string) --* The ID of the CloudWatch logging option description. * **LogStreamARN** *(string) --* The Amazon Resource Name (ARN) of the CloudWatch log to receive application messages. * **RoleARN** *(string) --* The IAM ARN of the role to use to send application messages. Note: Provided for backward compatibility. Applications created with the current API version have an application-level service execution role rather than a resource-level role. * **ApplicationMaintenanceConfigurationDescription** *(dict) --* The details of the maintenance configuration for the application. * **ApplicationMaintenanceWindowStartTime** *(string) --* The start time for the maintenance window. * **ApplicationMaintenanceWindowEndTime** *(string) --* The end time for the maintenance window.
* **ApplicationVersionUpdatedFrom** *(integer) --* The previous application version before the latest application update. RollbackApplication reverts the application to this version. * **ApplicationVersionRolledBackFrom** *(integer) --* If you reverted the application using RollbackApplication, the application version when "RollbackApplication" was called. * **ApplicationVersionCreateTimestamp** *(datetime) --* The timestamp when the application version was created. * **ConditionalToken** *(string) --* A value you use to implement strong concurrency for application updates. * **ApplicationVersionRolledBackTo** *(integer) --* The version to which you want to roll back the application. * **ApplicationMode** *(string) --* To create a Managed Service for Apache Flink Studio notebook, you must set the mode to "INTERACTIVE". However, for a Managed Service for Apache Flink application, the mode is optional. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.CodeValidationException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.LimitExceededException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.TooManyTagsException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.UnsupportedOperationException" KinesisAnalyticsV2 / Client / stop_application stop_application **************** KinesisAnalyticsV2.Client.stop_application(**kwargs) Stops the application from processing data. You can stop an application only if it is in the running status, unless you set the "Force" parameter to "true". You can use the DescribeApplication operation to find the application status. Managed Service for Apache Flink takes a snapshot when the application is stopped, unless "Force" is set to "true".
See also: AWS API Documentation **Request Syntax** response = client.stop_application( ApplicationName='string', Force=True|False ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of the running application to stop. * **Force** (*boolean*) -- Set to "true" to force the application to stop. If you set "Force" to "true", Managed Service for Apache Flink stops the application without taking a snapshot. Note: Force-stopping your application may lead to data loss or duplication. To prevent data loss or duplicate processing of data during application restarts, we recommend that you take frequent snapshots of your application. You can only force stop a Managed Service for Apache Flink application. You can't force stop a SQL-based Kinesis Data Analytics application. The application must be in the "STARTING", "UPDATING", "STOPPING", "AUTOSCALING", or "RUNNING" status. Return type: dict Returns: **Response Syntax** { 'OperationId': 'string' } **Response Structure** * *(dict) --* * **OperationId** *(string) --* The operation ID for tracking the StopApplication request. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" * "KinesisAnalyticsV2.Client.exceptions.InvalidApplicationConfigurationException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" KinesisAnalyticsV2 / Client / stop_application delete_application_output
See also: AWS API Documentation **Request Syntax** response = client.delete_application_output( ApplicationName='string', CurrentApplicationVersionId=123, OutputId='string' ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The application name. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The application version. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, a "ConcurrentModificationException" is returned. * **OutputId** (*string*) -- **[REQUIRED]** The ID of the configuration to delete. Each output configuration that is added to the application (either when the application is created or later) using the AddApplicationOutput operation has a unique ID. You need to provide the ID to uniquely identify the output configuration that you want to delete from the application configuration. You can use the DescribeApplication operation to get the specific "OutputId". Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123 } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The application Amazon Resource Name (ARN). * **ApplicationVersionId** *(integer) --* The current application version ID. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException" KinesisAnalyticsV2 / Client / tag_resource tag_resource ************ KinesisAnalyticsV2.Client.tag_resource(**kwargs) Adds one or more key-value tags to a Managed Service for Apache Flink application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50.
For more information, see Using Tagging. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( ResourceARN='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The ARN of the application to which to assign the tags. * **Tags** (*list*) -- **[REQUIRED]** The key-value tags to assign to the application. * *(dict) --* A key-value pair (the value is optional) that you can define and assign to Amazon resources. If you specify a tag that already exists, the tag value is replaced with the value that you specify in the request. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. For more information, see Using Tagging. * **Key** *(string) --* **[REQUIRED]** The key of the key-value tag. * **Value** *(string) --* The value of the key-value tag. The value is optional. Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.TooManyTagsException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" KinesisAnalyticsV2 / Client / add_application_reference_data_source add_application_reference_data_source ************************************* KinesisAnalyticsV2.Client.add_application_reference_data_source(**kwargs) Adds a reference data source to an existing SQL-based Kinesis Data Analytics application. Kinesis Data Analytics reads reference data (that is, an Amazon S3 object) and creates an in-application table within your application.
In the request, you provide the source (S3 bucket name and object key name), name of the in-application table to create, and the necessary mapping information that describes how data in an Amazon S3 object maps to columns in the resulting in-application table. See also: AWS API Documentation **Request Syntax** response = client.add_application_reference_data_source( ApplicationName='string', CurrentApplicationVersionId=123, ReferenceDataSource={ 'TableName': 'string', 'S3ReferenceDataSource': { 'BucketARN': 'string', 'FileKey': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } } ) Parameters: * **ApplicationName** (*string*) -- **[REQUIRED]** The name of an existing application. * **CurrentApplicationVersionId** (*integer*) -- **[REQUIRED]** The version of the application for which you are adding the reference data source. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, a "ConcurrentModificationException" is returned. * **ReferenceDataSource** (*dict*) -- **[REQUIRED]** The reference data source can be an object in your Amazon S3 bucket. Kinesis Data Analytics reads the object and copies the data into the in-application table that is created. You provide an S3 bucket, object key name, and the resulting in-application table that is created. * **TableName** *(string) --* **[REQUIRED]** The name of the in-application table to create. * **S3ReferenceDataSource** *(dict) --* Identifies the S3 bucket and object that contains the reference data. A SQL-based Kinesis Data Analytics application loads reference data only once.
If the data changes, you call the UpdateApplication operation to trigger reloading of data into your application. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* The object key name containing the reference data. * **ReferenceSchema** *(dict) --* **[REQUIRED]** Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* **[REQUIRED]** Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* **[REQUIRED]** The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* **[REQUIRED]** The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* **[REQUIRED]** The row delimiter. For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* **[REQUIRED]** The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* **[REQUIRED]** A list of "RecordColumn" objects.
* *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* **[REQUIRED]** The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* **[REQUIRED]** The type of column created in the in-application input stream or reference table. Return type: dict Returns: **Response Syntax** { 'ApplicationARN': 'string', 'ApplicationVersionId': 123, 'ReferenceDataSourceDescriptions': [ { 'ReferenceId': 'string', 'TableName': 'string', 'S3ReferenceDataSourceDescription': { 'BucketARN': 'string', 'FileKey': 'string', 'ReferenceRoleARN': 'string' }, 'ReferenceSchema': { 'RecordFormat': { 'RecordFormatType': 'JSON'|'CSV', 'MappingParameters': { 'JSONMappingParameters': { 'RecordRowPath': 'string' }, 'CSVMappingParameters': { 'RecordRowDelimiter': 'string', 'RecordColumnDelimiter': 'string' } } }, 'RecordEncoding': 'string', 'RecordColumns': [ { 'Name': 'string', 'Mapping': 'string', 'SqlType': 'string' }, ] } }, ] } **Response Structure** * *(dict) --* * **ApplicationARN** *(string) --* The application Amazon Resource Name (ARN). * **ApplicationVersionId** *(integer) --* The updated application version ID. Kinesis Data Analytics increments this ID when the application is updated. * **ReferenceDataSourceDescriptions** *(list) --* Describes reference data sources configured for the application. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the reference data source configured for an application. * **ReferenceId** *(string) --* The ID of the reference data source.
This is the ID that Kinesis Data Analytics assigns when you add the reference data source to your application using the CreateApplication or UpdateApplication operation. * **TableName** *(string) --* The in-application table name created by the specific reference data source configuration. * **S3ReferenceDataSourceDescription** *(dict) --* Provides the Amazon S3 bucket name and the object key name that contains the reference data. * **BucketARN** *(string) --* The Amazon Resource Name (ARN) of the S3 bucket. * **FileKey** *(string) --* Amazon S3 object key name. * **ReferenceRoleARN** *(string) --* The ARN of the IAM role that Kinesis Data Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table. Note: Provided for backward compatibility. Applications that are created with the current API version have an application-level service execution role rather than a resource-level role. * **ReferenceSchema** *(dict) --* Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. * **RecordFormat** *(dict) --* Specifies the format of the records on the streaming source. * **RecordFormatType** *(string) --* The type of record format. * **MappingParameters** *(dict) --* When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. * **JSONMappingParameters** *(dict) --* Provides additional mapping information when JSON is the record format on the streaming source. * **RecordRowPath** *(string) --* The path to the top-level parent that contains the records. * **CSVMappingParameters** *(dict) --* Provides additional mapping information when the record format uses delimiters (for example, CSV). * **RecordRowDelimiter** *(string) --* The row delimiter.
For example, in a CSV format, *'\n'* is the typical row delimiter. * **RecordColumnDelimiter** *(string) --* The column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter. * **RecordEncoding** *(string) --* Specifies the encoding of the records in the streaming source. For example, UTF-8. * **RecordColumns** *(list) --* A list of "RecordColumn" objects. * *(dict) --* For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream. Also used to describe the format of the reference data source. * **Name** *(string) --* The name of the column that is created in the in-application input stream or reference table. * **Mapping** *(string) --* A reference to the data element in the streaming input or the reference data source. * **SqlType** *(string) --* The type of column created in the in-application input stream or reference table. **Exceptions** * "KinesisAnalyticsV2.Client.exceptions.ResourceNotFoundException" * "KinesisAnalyticsV2.Client.exceptions.ResourceInUseException" * "KinesisAnalyticsV2.Client.exceptions.InvalidArgumentException" * "KinesisAnalyticsV2.Client.exceptions.ConcurrentModificationException" * "KinesisAnalyticsV2.Client.exceptions.InvalidRequestException"
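The nested ReferenceDataSource argument above can be assembled with a small helper. A sketch for the CSV case; the helper name, bucket, and column names are illustrative, and the delimiters shown are the typical defaults described in the parameter documentation:

```python
def csv_reference_source(table_name, bucket_arn, file_key, columns):
    """Build the ReferenceDataSource argument for a CSV object in S3.

    `columns` is a sequence of (name, sql_type) pairs, mapped to the CSV
    columns in order.
    """
    return {
        'TableName': table_name,
        'S3ReferenceDataSource': {
            'BucketARN': bucket_arn,
            'FileKey': file_key,
        },
        'ReferenceSchema': {
            'RecordFormat': {
                'RecordFormatType': 'CSV',
                'MappingParameters': {
                    'CSVMappingParameters': {
                        'RecordRowDelimiter': '\n',    # typical row delimiter
                        'RecordColumnDelimiter': ',',  # typical column delimiter
                    },
                },
            },
            'RecordColumns': [
                {'Name': name, 'SqlType': sql_type}
                for name, sql_type in columns
            ],
        },
    }
```

The result is passed as ReferenceDataSource= together with ApplicationName and the current CurrentApplicationVersionId (obtained from DescribeApplication) when calling add_application_reference_data_source.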