KafkaConnect ************ Client ====== class KafkaConnect.Client A low-level client representing Managed Streaming for Kafka Connect import boto3 client = boto3.client('kafkaconnect') These are the available methods: * can_paginate * close * create_connector * create_custom_plugin * create_worker_configuration * delete_connector * delete_custom_plugin * delete_worker_configuration * describe_connector * describe_connector_operation * describe_custom_plugin * describe_worker_configuration * get_paginator * get_waiter * list_connector_operations * list_connectors * list_custom_plugins * list_tags_for_resource * list_worker_configurations * tag_resource * untag_resource * update_connector Paginators ========== Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide. The available paginators are: * ListConnectorOperations * ListConnectors * ListCustomPlugins * ListWorkerConfigurations KafkaConnect / Paginator / ListWorkerConfigurations ListWorkerConfigurations ************************ class KafkaConnect.Paginator.ListWorkerConfigurations paginator = client.get_paginator('list_worker_configurations') paginate(**kwargs) Creates an iterator that will paginate through responses from "KafkaConnect.Client.list_worker_configurations()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( namePrefix='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **namePrefix** (*string*) -- Lists worker configuration names that start with the specified text string. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. 
If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'workerConfigurations': [ { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'latestRevision': { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'revision': 123 }, 'name': 'string', 'workerConfigurationArn': 'string', 'workerConfigurationState': 'ACTIVE'|'DELETING' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **workerConfigurations** *(list) --* An array of worker configuration descriptions. * *(dict) --* The summary of a worker configuration. * **creationTime** *(datetime) --* The time that a worker configuration was created. * **description** *(string) --* The description of a worker configuration. * **latestRevision** *(dict) --* The latest revision of a worker configuration. * **creationTime** *(datetime) --* The time that a worker configuration revision was created. * **description** *(string) --* The description of a worker configuration revision. * **revision** *(integer) --* The revision of a worker configuration. * **name** *(string) --* The name of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration. * **workerConfigurationState** *(string) --* The state of the worker configuration. * **NextToken** *(string) --* A token to resume pagination. 
KafkaConnect / Paginator / ListCustomPlugins ListCustomPlugins ***************** class KafkaConnect.Paginator.ListCustomPlugins paginator = client.get_paginator('list_custom_plugins') paginate(**kwargs) Creates an iterator that will paginate through responses from "KafkaConnect.Client.list_custom_plugins()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( namePrefix='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **namePrefix** (*string*) -- Lists custom plugin names that start with the specified text string. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'customPlugins': [ { 'creationTime': datetime(2015, 1, 1), 'customPluginArn': 'string', 'customPluginState': 'CREATING'|'CREATE_FAILED'|'ACTIVE'|'UPDATING'|'UPDATE_FAILED'|'DELETING', 'description': 'string', 'latestRevision': { 'contentType': 'JAR'|'ZIP', 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'fileDescription': { 'fileMd5': 'string', 'fileSize': 123 }, 'location': { 's3Location': { 'bucketArn': 'string', 'fileKey': 'string', 'objectVersion': 'string' } }, 'revision': 123 }, 'name': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **customPlugins** *(list) --* An array of custom plugin descriptions. * *(dict) --* A summary of the custom plugin. * **creationTime** *(datetime) --* The time that the custom plugin was created. 
* **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin. * **customPluginState** *(string) --* The state of the custom plugin. * **description** *(string) --* A description of the custom plugin. * **latestRevision** *(dict) --* The latest revision of the custom plugin. * **contentType** *(string) --* The format of the plugin file. * **creationTime** *(datetime) --* The time that the custom plugin was created. * **description** *(string) --* The description of the custom plugin. * **fileDescription** *(dict) --* Details about the custom plugin file. * **fileMd5** *(string) --* The hex-encoded MD5 checksum of the custom plugin file. You can use it to validate the file. * **fileSize** *(integer) --* The size in bytes of the custom plugin file. You can use it to validate the file. * **location** *(dict) --* Information about the location of the custom plugin. * **s3Location** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the plugin file stored in Amazon S3. * **bucketArn** *(string) --* The Amazon Resource Name (ARN) of an S3 bucket. * **fileKey** *(string) --* The file key for an object in an S3 bucket. * **objectVersion** *(string) --* The version of an object in an S3 bucket. * **revision** *(integer) --* The revision of the custom plugin. * **name** *(string) --* The name of the custom plugin. * **NextToken** *(string) --* A token to resume pagination. KafkaConnect / Paginator / ListConnectorOperations ListConnectorOperations *********************** class KafkaConnect.Paginator.ListConnectorOperations paginator = client.get_paginator('list_connector_operations') paginate(**kwargs) Creates an iterator that will paginate through responses from "KafkaConnect.Client.list_connector_operations()". 
See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( connectorArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **connectorArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the connector for which to list operations. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'connectorOperations': [ { 'connectorOperationArn': 'string', 'connectorOperationType': 'UPDATE_WORKER_SETTING'|'UPDATE_CONNECTOR_CONFIGURATION'|'ISOLATE_CONNECTOR'|'RESTORE_CONNECTOR', 'connectorOperationState': 'PENDING'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED'|'ROLLBACK_IN_PROGRESS'|'ROLLBACK_FAILED'|'ROLLBACK_COMPLETE', 'creationTime': datetime(2015, 1, 1), 'endTime': datetime(2015, 1, 1) }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **connectorOperations** *(list) --* An array of connector operation descriptions. * *(dict) --* Summary of a connector operation. * **connectorOperationArn** *(string) --* The Amazon Resource Name (ARN) of the connector operation. * **connectorOperationType** *(string) --* The type of connector operation performed. * **connectorOperationState** *(string) --* The state of the connector operation. * **creationTime** *(datetime) --* The time when the operation was created. * **endTime** *(datetime) --* The time when the operation ended. * **NextToken** *(string) --* A token to resume pagination. 
KafkaConnect / Paginator / ListConnectors ListConnectors ************** class KafkaConnect.Paginator.ListConnectors paginator = client.get_paginator('list_connectors') paginate(**kwargs) Creates an iterator that will paginate through responses from "KafkaConnect.Client.list_connectors()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( connectorNamePrefix='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **connectorNamePrefix** (*string*) -- The name prefix that you want to use to search for and list connectors. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. 
Return type: dict Returns: **Response Syntax** { 'connectors': [ { 'capacity': { 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } }, 'connectorArn': 'string', 'connectorDescription': 'string', 'connectorName': 'string', 'connectorState': 'RUNNING'|'CREATING'|'UPDATING'|'DELETING'|'FAILED', 'creationTime': datetime(2015, 1, 1), 'currentVersion': 'string', 'kafkaCluster': { 'apacheKafkaCluster': { 'bootstrapServers': 'string', 'vpc': { 'securityGroups': [ 'string', ], 'subnets': [ 'string', ] } } }, 'kafkaClusterClientAuthentication': { 'authenticationType': 'NONE'|'IAM' }, 'kafkaClusterEncryptionInTransit': { 'encryptionType': 'PLAINTEXT'|'TLS' }, 'kafkaConnectVersion': 'string', 'logDelivery': { 'workerLogDelivery': { 'cloudWatchLogs': { 'enabled': True|False, 'logGroup': 'string' }, 'firehose': { 'deliveryStream': 'string', 'enabled': True|False }, 's3': { 'bucket': 'string', 'enabled': True|False, 'prefix': 'string' } } }, 'plugins': [ { 'customPlugin': { 'customPluginArn': 'string', 'revision': 123 } }, ], 'serviceExecutionRoleArn': 'string', 'workerConfiguration': { 'revision': 123, 'workerConfigurationArn': 'string' } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **connectors** *(list) --* An array of connector descriptions. * *(dict) --* Summary of a connector. * **capacity** *(dict) --* The connector's compute capacity settings. * **autoScaling** *(dict) --* Describes the connector's auto scaling capacity. * **maxWorkerCount** *(integer) --* The maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **minWorkerCount** *(integer) --* The minimum number of workers allocated to the connector. 
* **scaleInPolicy** *(dict) --* The scale-in policy for the connector. * **cpuUtilizationPercentage** *(integer) --* Specifies the CPU utilization percentage threshold at which you want connector scale in to be triggered. * **scaleOutPolicy** *(dict) --* The scale-out policy for the connector. * **cpuUtilizationPercentage** *(integer) --* The CPU utilization percentage threshold at which you want connector scale out to be triggered. * **provisionedCapacity** *(dict) --* Describes a connector's provisioned capacity. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **workerCount** *(integer) --* The number of workers that are allocated to the connector. * **connectorArn** *(string) --* The Amazon Resource Name (ARN) of the connector. * **connectorDescription** *(string) --* The description of the connector. * **connectorName** *(string) --* The name of the connector. * **connectorState** *(string) --* The state of the connector. * **creationTime** *(datetime) --* The time that the connector was created. * **currentVersion** *(string) --* The current version of the connector. * **kafkaCluster** *(dict) --* The details of the Apache Kafka cluster to which the connector is connected. * **apacheKafkaCluster** *(dict) --* The Apache Kafka cluster to which the connector is connected. * **bootstrapServers** *(string) --* The bootstrap servers of the cluster. * **vpc** *(dict) --* Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster. * **securityGroups** *(list) --* The security groups for the connector. * *(string) --* * **subnets** *(list) --* The subnets for the connector. * *(string) --* * **kafkaClusterClientAuthentication** *(dict) --* The type of client authentication used to connect to the Apache Kafka cluster. The value is NONE when no client authentication is used. 
* **authenticationType** *(string) --* The type of client authentication used to connect to the Apache Kafka cluster. Value NONE means that no client authentication is used. * **kafkaClusterEncryptionInTransit** *(dict) --* Details of encryption in transit to the Apache Kafka cluster. * **encryptionType** *(string) --* The type of encryption in transit to the Apache Kafka cluster. * **kafkaConnectVersion** *(string) --* The version of Kafka Connect. It has to be compatible with both the Apache Kafka cluster's version and the plugins. * **logDelivery** *(dict) --* The settings for delivering connector logs to Amazon CloudWatch Logs. * **workerLogDelivery** *(dict) --* The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. * **cloudWatchLogs** *(dict) --* Details about delivering logs to Amazon CloudWatch Logs. * **enabled** *(boolean) --* Whether log delivery to Amazon CloudWatch Logs is enabled. * **logGroup** *(string) --* The name of the CloudWatch log group that is the destination for log delivery. * **firehose** *(dict) --* Details about delivering logs to Amazon Kinesis Data Firehose. * **deliveryStream** *(string) --* The name of the Kinesis Data Firehose delivery stream that is the destination for log delivery. * **enabled** *(boolean) --* Specifies whether connector logs get delivered to Amazon Kinesis Data Firehose. * **s3** *(dict) --* Details about delivering logs to Amazon S3. * **bucket** *(string) --* The name of the S3 bucket that is the destination for log delivery. * **enabled** *(boolean) --* Specifies whether connector logs get sent to the specified Amazon S3 destination. * **prefix** *(string) --* The S3 prefix that is the destination for log delivery. * **plugins** *(list) --* Specifies which plugins were used for this connector. * *(dict) --* The description of the plugin. * **customPlugin** *(dict) --* Details about a custom plugin. 
* **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin. * **revision** *(integer) --* The revision of the custom plugin. * **serviceExecutionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role used by the connector to access Amazon Web Services resources. * **workerConfiguration** *(dict) --* The worker configurations that are in use with the connector. * **revision** *(integer) --* The revision of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration. * **NextToken** *(string) --* A token to resume pagination. KafkaConnect / Client / get_paginator get_paginator ************* KafkaConnect.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. KafkaConnect / Client / list_worker_configurations list_worker_configurations ************************** KafkaConnect.Client.list_worker_configurations(**kwargs) Returns a list of all of the worker configurations in this account and Region. See also: AWS API Documentation **Request Syntax** response = client.list_worker_configurations( maxResults=123, nextToken='string', namePrefix='string' ) Parameters: * **maxResults** (*integer*) -- The maximum number of worker configurations to list in one response. 
* **nextToken** (*string*) -- If the response of a ListWorkerConfigurations operation is truncated, it will include a NextToken. Send this NextToken in a subsequent request to continue listing from where the previous operation left off. * **namePrefix** (*string*) -- Lists worker configuration names that start with the specified text string. Return type: dict Returns: **Response Syntax** { 'nextToken': 'string', 'workerConfigurations': [ { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'latestRevision': { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'revision': 123 }, 'name': 'string', 'workerConfigurationArn': 'string', 'workerConfigurationState': 'ACTIVE'|'DELETING' }, ] } **Response Structure** * *(dict) --* * **nextToken** *(string) --* If the response of a ListWorkerConfigurations operation is truncated, it will include a NextToken. Send this NextToken in a subsequent request to continue listing from where the previous operation left off. * **workerConfigurations** *(list) --* An array of worker configuration descriptions. * *(dict) --* The summary of a worker configuration. * **creationTime** *(datetime) --* The time that a worker configuration was created. * **description** *(string) --* The description of a worker configuration. * **latestRevision** *(dict) --* The latest revision of a worker configuration. * **creationTime** *(datetime) --* The time that a worker configuration revision was created. * **description** *(string) --* The description of a worker configuration revision. * **revision** *(integer) --* The revision of a worker configuration. * **name** *(string) --* The name of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration. * **workerConfigurationState** *(string) --* The state of the worker configuration. 
**Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / describe_connector describe_connector ****************** KafkaConnect.Client.describe_connector(**kwargs) Returns summary information about the connector. See also: AWS API Documentation **Request Syntax** response = client.describe_connector( connectorArn='string' ) Parameters: **connectorArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the connector that you want to describe. Return type: dict Returns: **Response Syntax** { 'capacity': { 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } }, 'connectorArn': 'string', 'connectorConfiguration': { 'string': 'string' }, 'connectorDescription': 'string', 'connectorName': 'string', 'connectorState': 'RUNNING'|'CREATING'|'UPDATING'|'DELETING'|'FAILED', 'creationTime': datetime(2015, 1, 1), 'currentVersion': 'string', 'kafkaCluster': { 'apacheKafkaCluster': { 'bootstrapServers': 'string', 'vpc': { 'securityGroups': [ 'string', ], 'subnets': [ 'string', ] } } }, 'kafkaClusterClientAuthentication': { 'authenticationType': 'NONE'|'IAM' }, 'kafkaClusterEncryptionInTransit': { 'encryptionType': 'PLAINTEXT'|'TLS' }, 'kafkaConnectVersion': 'string', 'logDelivery': { 'workerLogDelivery': { 'cloudWatchLogs': { 'enabled': True|False, 'logGroup': 'string' }, 'firehose': { 'deliveryStream': 'string', 'enabled': True|False }, 's3': { 'bucket': 'string', 'enabled': True|False, 
'prefix': 'string' } } }, 'plugins': [ { 'customPlugin': { 'customPluginArn': 'string', 'revision': 123 } }, ], 'serviceExecutionRoleArn': 'string', 'workerConfiguration': { 'revision': 123, 'workerConfigurationArn': 'string' }, 'stateDescription': { 'code': 'string', 'message': 'string' } } **Response Structure** * *(dict) --* * **capacity** *(dict) --* Information about the capacity of the connector, whether it is auto scaled or provisioned. * **autoScaling** *(dict) --* Describes the connector's auto scaling capacity. * **maxWorkerCount** *(integer) --* The maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **minWorkerCount** *(integer) --* The minimum number of workers allocated to the connector. * **scaleInPolicy** *(dict) --* The scale-in policy for the connector. * **cpuUtilizationPercentage** *(integer) --* Specifies the CPU utilization percentage threshold at which you want connector scale in to be triggered. * **scaleOutPolicy** *(dict) --* The scale-out policy for the connector. * **cpuUtilizationPercentage** *(integer) --* The CPU utilization percentage threshold at which you want connector scale out to be triggered. * **provisionedCapacity** *(dict) --* Describes a connector's provisioned capacity. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **workerCount** *(integer) --* The number of workers that are allocated to the connector. * **connectorArn** *(string) --* The Amazon Resource Name (ARN) of the connector. * **connectorConfiguration** *(dict) --* A map of keys to values that represent the configuration for the connector. * *(string) --* * *(string) --* * **connectorDescription** *(string) --* A summary description of the connector. * **connectorName** *(string) --* The name of the connector. 
* **connectorState** *(string) --* The state of the connector. * **creationTime** *(datetime) --* The time the connector was created. * **currentVersion** *(string) --* The current version of the connector. * **kafkaCluster** *(dict) --* The Apache Kafka cluster that the connector is connected to. * **apacheKafkaCluster** *(dict) --* The Apache Kafka cluster to which the connector is connected. * **bootstrapServers** *(string) --* The bootstrap servers of the cluster. * **vpc** *(dict) --* Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster. * **securityGroups** *(list) --* The security groups for the connector. * *(string) --* * **subnets** *(list) --* The subnets for the connector. * *(string) --* * **kafkaClusterClientAuthentication** *(dict) --* The type of client authentication used to connect to the Apache Kafka cluster. The value is NONE when no client authentication is used. * **authenticationType** *(string) --* The type of client authentication used to connect to the Apache Kafka cluster. Value NONE means that no client authentication is used. * **kafkaClusterEncryptionInTransit** *(dict) --* Details of encryption in transit to the Apache Kafka cluster. * **encryptionType** *(string) --* The type of encryption in transit to the Apache Kafka cluster. * **kafkaConnectVersion** *(string) --* The version of Kafka Connect. It has to be compatible with both the Apache Kafka cluster's version and the plugins. * **logDelivery** *(dict) --* Details about delivering logs to Amazon CloudWatch Logs. * **workerLogDelivery** *(dict) --* The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. * **cloudWatchLogs** *(dict) --* Details about delivering logs to Amazon CloudWatch Logs. * **enabled** *(boolean) --* Whether log delivery to Amazon CloudWatch Logs is enabled. 
* **logGroup** *(string) --* The name of the CloudWatch log group that is the destination for log delivery. * **firehose** *(dict) --* Details about delivering logs to Amazon Kinesis Data Firehose. * **deliveryStream** *(string) --* The name of the Kinesis Data Firehose delivery stream that is the destination for log delivery. * **enabled** *(boolean) --* Specifies whether connector logs get delivered to Amazon Kinesis Data Firehose. * **s3** *(dict) --* Details about delivering logs to Amazon S3. * **bucket** *(string) --* The name of the S3 bucket that is the destination for log delivery. * **enabled** *(boolean) --* Specifies whether connector logs get sent to the specified Amazon S3 destination. * **prefix** *(string) --* The S3 prefix that is the destination for log delivery. * **plugins** *(list) --* Specifies which plugins were used for this connector. * *(dict) --* The description of the plugin. * **customPlugin** *(dict) --* Details about a custom plugin. * **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin. * **revision** *(integer) --* The revision of the custom plugin. * **serviceExecutionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role used by the connector to access Amazon Web Services resources. * **workerConfiguration** *(dict) --* Specifies which worker configuration was used for the connector. * **revision** *(integer) --* The revision of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration. * **stateDescription** *(dict) --* Details about the state of a connector. * **code** *(string) --* A code that describes the state of a resource. * **message** *(string) --* A message that describes the state of a resource. 
**Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / describe_connector_operation describe_connector_operation **************************** KafkaConnect.Client.describe_connector_operation(**kwargs) Returns information about the specified connector's operations. See also: AWS API Documentation **Request Syntax** response = client.describe_connector_operation( connectorOperationArn='string' ) Parameters: **connectorOperationArn** (*string*) -- **[REQUIRED]** ARN of the connector operation to be described. Return type: dict Returns: **Response Syntax** { 'connectorArn': 'string', 'connectorOperationArn': 'string', 'connectorOperationState': 'PENDING'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED'|'ROLLBACK_IN_PROGRESS'|'ROLLBACK_FAILED'|'ROLLBACK_COMPLETE', 'connectorOperationType': 'UPDATE_WORKER_SETTING'|'UPDATE_CONNECTOR_CONFIGURATION'|'ISOLATE_CONNECTOR'|'RESTORE_CONNECTOR', 'operationSteps': [ { 'stepType': 'INITIALIZE_UPDATE'|'FINALIZE_UPDATE'|'UPDATE_WORKER_SETTING'|'UPDATE_CONNECTOR_CONFIGURATION'|'VALIDATE_UPDATE', 'stepState': 'PENDING'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'CANCELLED' }, ], 'originWorkerSetting': { 'capacity': { 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } } }, 'originConnectorConfiguration': { 'string': 'string' }, 'targetWorkerSetting': { 'capacity': { 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 
'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } } }, 'targetConnectorConfiguration': { 'string': 'string' }, 'errorInfo': { 'code': 'string', 'message': 'string' }, 'creationTime': datetime(2015, 1, 1), 'endTime': datetime(2015, 1, 1) } **Response Structure** * *(dict) --* * **connectorArn** *(string) --* The Amazon Resource Name (ARN) of the connector. * **connectorOperationArn** *(string) --* The Amazon Resource Name (ARN) of the connector operation. * **connectorOperationState** *(string) --* The state of the connector operation. * **connectorOperationType** *(string) --* The type of connector operation performed. * **operationSteps** *(list) --* The array of operation steps taken. * *(dict) --* Details of a step that is involved in a connector's operation. * **stepType** *(string) --* The step type of the operation. * **stepState** *(string) --* The step state of the operation. * **originWorkerSetting** *(dict) --* The origin worker setting. * **capacity** *(dict) --* A description of the connector's capacity. * **autoScaling** *(dict) --* Describes the connector's auto scaling capacity. * **maxWorkerCount** *(integer) --* The maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **minWorkerCount** *(integer) --* The minimum number of workers allocated to the connector. * **scaleInPolicy** *(dict) --* The scale-in policy for the connector. * **cpuUtilizationPercentage** *(integer) --* Specifies the CPU utilization percentage threshold at which you want connector scale in to be triggered. * **scaleOutPolicy** *(dict) --* The scale-out policy for the connector. * **cpuUtilizationPercentage** *(integer) --* The CPU utilization percentage threshold at which you want connector scale out to be triggered. 
* **provisionedCapacity** *(dict) --* Describes a connector's provisioned capacity. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **workerCount** *(integer) --* The number of workers that are allocated to the connector. * **originConnectorConfiguration** *(dict) --* The origin connector configuration. * *(string) --* * *(string) --* * **targetWorkerSetting** *(dict) --* The target worker setting. * **capacity** *(dict) --* A description of the connector's capacity. * **autoScaling** *(dict) --* Describes the connector's auto scaling capacity. * **maxWorkerCount** *(integer) --* The maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **minWorkerCount** *(integer) --* The minimum number of workers allocated to the connector. * **scaleInPolicy** *(dict) --* The scale-in policy for the connector. * **cpuUtilizationPercentage** *(integer) --* Specifies the CPU utilization percentage threshold at which you want connector scale in to be triggered. * **scaleOutPolicy** *(dict) --* The scale-out policy for the connector. * **cpuUtilizationPercentage** *(integer) --* The CPU utilization percentage threshold at which you want connector scale out to be triggered. * **provisionedCapacity** *(dict) --* Describes a connector's provisioned capacity. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **workerCount** *(integer) --* The number of workers that are allocated to the connector. * **targetConnectorConfiguration** *(dict) --* The target connector configuration. * *(string) --* * *(string) --* * **errorInfo** *(dict) --* Details about the state of a resource. * **code** *(string) --* A code that describes the state of a resource. 
* **message** *(string) --* A message that describes the state of a resource. * **creationTime** *(datetime) --* The time when the operation was created. * **endTime** *(datetime) --* The time when the operation ended. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / can_paginate can_paginate ************ KafkaConnect.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. KafkaConnect / Client / describe_custom_plugin describe_custom_plugin ********************** KafkaConnect.Client.describe_custom_plugin(**kwargs) Returns a summary description of the specified custom plugin. See also: AWS API Documentation **Request Syntax** response = client.describe_custom_plugin( customPluginArn='string' ) Parameters: **customPluginArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the custom plugin that you want to describe. 
Return type: dict Returns: **Response Syntax** { 'creationTime': datetime(2015, 1, 1), 'customPluginArn': 'string', 'customPluginState': 'CREATING'|'CREATE_FAILED'|'ACTIVE'|'UPDATING'|'UPDATE_FAILED'|'DELETING', 'description': 'string', 'latestRevision': { 'contentType': 'JAR'|'ZIP', 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'fileDescription': { 'fileMd5': 'string', 'fileSize': 123 }, 'location': { 's3Location': { 'bucketArn': 'string', 'fileKey': 'string', 'objectVersion': 'string' } }, 'revision': 123 }, 'name': 'string', 'stateDescription': { 'code': 'string', 'message': 'string' } } **Response Structure** * *(dict) --* * **creationTime** *(datetime) --* The time that the custom plugin was created. * **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin. * **customPluginState** *(string) --* The state of the custom plugin. * **description** *(string) --* The description of the custom plugin. * **latestRevision** *(dict) --* The latest successfully created revision of the custom plugin. If there are no successfully created revisions, this field will be absent. * **contentType** *(string) --* The format of the plugin file. * **creationTime** *(datetime) --* The time that the custom plugin was created. * **description** *(string) --* The description of the custom plugin. * **fileDescription** *(dict) --* Details about the custom plugin file. * **fileMd5** *(string) --* The hex-encoded MD5 checksum of the custom plugin file. You can use it to validate the file. * **fileSize** *(integer) --* The size in bytes of the custom plugin file. You can use it to validate the file. * **location** *(dict) --* Information about the location of the custom plugin. * **s3Location** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the plugin file stored in Amazon S3. * **bucketArn** *(string) --* The Amazon Resource Name (ARN) of an S3 bucket. 
* **fileKey** *(string) --* The file key for an object in an S3 bucket. * **objectVersion** *(string) --* The version of an object in an S3 bucket. * **revision** *(integer) --* The revision of the custom plugin. * **name** *(string) --* The name of the custom plugin. * **stateDescription** *(dict) --* Details about the state of a custom plugin. * **code** *(string) --* A code that describes the state of a resource. * **message** *(string) --* A message that describes the state of a resource. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / list_tags_for_resource list_tags_for_resource ********************** KafkaConnect.Client.list_tags_for_resource(**kwargs) Lists all the tags attached to the specified resource. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( resourceArn='string' ) Parameters: **resourceArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the resource for which you want to list all attached tags. Return type: dict Returns: **Response Syntax** { 'tags': { 'string': 'string' } } **Response Structure** * *(dict) --* * **tags** *(dict) --* Lists the tags attached to the specified resource in the corresponding request. 
* *(string) --* * *(string) --* **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / untag_resource untag_resource ************** KafkaConnect.Client.untag_resource(**kwargs) Removes tags from the specified resource. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( resourceArn='string', tagKeys=[ 'string', ] ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the resource from which you want to remove tags. * **tagKeys** (*list*) -- **[REQUIRED]** The keys of the tags that you want to remove from the resource. * *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / list_custom_plugins list_custom_plugins ******************* KafkaConnect.Client.list_custom_plugins(**kwargs) Returns a list of all of the custom plugins in this account and Region. See also: AWS API Documentation **Request Syntax** response = client.list_custom_plugins( maxResults=123, nextToken='string', namePrefix='string' ) Parameters: * **maxResults** (*integer*) -- The maximum number of custom plugins to list in one response. 
* **nextToken** (*string*) -- If the response of a ListCustomPlugins operation is truncated, it will include a NextToken. Send this NextToken in a subsequent request to continue listing from where the previous operation left off. * **namePrefix** (*string*) -- Lists custom plugin names that start with the specified text string. Return type: dict Returns: **Response Syntax** { 'customPlugins': [ { 'creationTime': datetime(2015, 1, 1), 'customPluginArn': 'string', 'customPluginState': 'CREATING'|'CREATE_FAILED'|'ACTIVE'|'UPDATING'|'UPDATE_FAILED'|'DELETING', 'description': 'string', 'latestRevision': { 'contentType': 'JAR'|'ZIP', 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'fileDescription': { 'fileMd5': 'string', 'fileSize': 123 }, 'location': { 's3Location': { 'bucketArn': 'string', 'fileKey': 'string', 'objectVersion': 'string' } }, 'revision': 123 }, 'name': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **customPlugins** *(list) --* An array of custom plugin descriptions. * *(dict) --* A summary of the custom plugin. * **creationTime** *(datetime) --* The time that the custom plugin was created. * **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin. * **customPluginState** *(string) --* The state of the custom plugin. * **description** *(string) --* A description of the custom plugin. * **latestRevision** *(dict) --* The latest revision of the custom plugin. * **contentType** *(string) --* The format of the plugin file. * **creationTime** *(datetime) --* The time that the custom plugin was created. * **description** *(string) --* The description of the custom plugin. * **fileDescription** *(dict) --* Details about the custom plugin file. * **fileMd5** *(string) --* The hex-encoded MD5 checksum of the custom plugin file. You can use it to validate the file. * **fileSize** *(integer) --* The size in bytes of the custom plugin file. You can use it to validate the file. 
* **location** *(dict) --* Information about the location of the custom plugin. * **s3Location** *(dict) --* The S3 bucket Amazon Resource Name (ARN), file key, and object version of the plugin file stored in Amazon S3. * **bucketArn** *(string) --* The Amazon Resource Name (ARN) of an S3 bucket. * **fileKey** *(string) --* The file key for an object in an S3 bucket. * **objectVersion** *(string) --* The version of an object in an S3 bucket. * **revision** *(integer) --* The revision of the custom plugin. * **name** *(string) --* The name of the custom plugin. * **nextToken** *(string) --* If the response of a ListCustomPlugins operation is truncated, it will include a NextToken. Send this NextToken in a subsequent request to continue listing from where the previous operation left off. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / get_waiter get_waiter ********** KafkaConnect.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object. Return type: "botocore.waiter.Waiter" KafkaConnect / Client / create_custom_plugin create_custom_plugin ******************** KafkaConnect.Client.create_custom_plugin(**kwargs) Creates a custom plugin using the specified properties. 
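The request documented below can be assembled as a plain dictionary before making the call. The following is a hedged sketch, not official usage: the plugin name, bucket ARN, and file key are invented placeholders, and the actual "create_custom_plugin" call is left commented out because it requires AWS credentials.

```python
# Hedged sketch: assemble kwargs for KafkaConnect.Client.create_custom_plugin.
# Every name and ARN below is an invented placeholder.
def build_create_custom_plugin_request(name, bucket_arn, file_key,
                                       content_type="ZIP", description=None):
    """Build the kwargs dict; contentType must be 'JAR' or 'ZIP'."""
    if content_type not in ("JAR", "ZIP"):
        raise ValueError("contentType must be 'JAR' or 'ZIP'")
    request = {
        "name": name,
        "contentType": content_type,
        "location": {"s3Location": {"bucketArn": bucket_arn, "fileKey": file_key}},
    }
    if description is not None:
        request["description"] = description
    return request

request = build_create_custom_plugin_request(
    "debezium-mysql",                   # placeholder plugin name
    "arn:aws:s3:::my-plugin-bucket",    # placeholder bucket ARN
    "plugins/debezium-mysql.zip",       # placeholder file key
)
# client = boto3.client("kafkaconnect")  # requires AWS credentials
# response = client.create_custom_plugin(**request)
```

Building the dictionary separately makes it easy to validate the required members before the network call is made.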
See also: AWS API Documentation **Request Syntax** response = client.create_custom_plugin( contentType='JAR'|'ZIP', description='string', location={ 's3Location': { 'bucketArn': 'string', 'fileKey': 'string', 'objectVersion': 'string' } }, name='string', tags={ 'string': 'string' } ) Parameters: * **contentType** (*string*) -- **[REQUIRED]** The type of the plugin file. * **description** (*string*) -- A summary description of the custom plugin. * **location** (*dict*) -- **[REQUIRED]** Information about the location of a custom plugin. * **s3Location** *(dict) --* **[REQUIRED]** The S3 bucket Amazon Resource Name (ARN), file key, and object version of the plugin file stored in Amazon S3. * **bucketArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of an S3 bucket. * **fileKey** *(string) --* **[REQUIRED]** The file key for an object in an S3 bucket. * **objectVersion** *(string) --* The version of an object in an S3 bucket. * **name** (*string*) -- **[REQUIRED]** The name of the custom plugin. * **tags** (*dict*) -- The tags you want to attach to the custom plugin. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'customPluginArn': 'string', 'customPluginState': 'CREATING'|'CREATE_FAILED'|'ACTIVE'|'UPDATING'|'UPDATE_FAILED'|'DELETING', 'name': 'string', 'revision': 123 } **Response Structure** * *(dict) --* * **customPluginArn** *(string) --* The Amazon Resource Name (ARN) that Amazon assigned to the custom plugin. * **customPluginState** *(string) --* The state of the custom plugin. * **name** *(string) --* The name of the custom plugin. * **revision** *(integer) --* The revision of the custom plugin. 
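After "create_custom_plugin" returns, the plugin typically starts in the "CREATING" state. This service does not, to our knowledge, ship a predefined waiter for plugin state, so the hedged sketch below hand-rolls a poll against "describe_custom_plugin". The "describe" argument is any callable with that method's signature, so the real client method can be passed in; the stub at the bottom exists only to keep the example self-contained.

```python
import time

# Hedged sketch: poll until the custom plugin leaves the CREATING state.
def wait_for_plugin_active(describe, plugin_arn, delay=5.0, max_attempts=36,
                           sleep=time.sleep):
    for _ in range(max_attempts):
        state = describe(customPluginArn=plugin_arn)["customPluginState"]
        if state == "ACTIVE":
            return state
        if state == "CREATE_FAILED":
            raise RuntimeError("custom plugin creation failed")
        sleep(delay)
    raise TimeoutError("custom plugin did not become ACTIVE in time")

# Stub standing in for KafkaConnect.Client.describe_custom_plugin;
# a real client would be used as: wait_for_plugin_active(
#     boto3.client("kafkaconnect").describe_custom_plugin, plugin_arn)
_states = iter(["CREATING", "ACTIVE"])
def _stub_describe(customPluginArn):
    return {"customPluginState": next(_states)}

state = wait_for_plugin_active(
    _stub_describe, "arn:aws:kafkaconnect:placeholder:custom-plugin/x",
    sleep=lambda s: None)
```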
**Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.ConflictException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / update_connector update_connector **************** KafkaConnect.Client.update_connector(**kwargs) Updates the specified connector. See also: AWS API Documentation **Request Syntax** response = client.update_connector( capacity={ 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } }, connectorConfiguration={ 'string': 'string' }, connectorArn='string', currentVersion='string' ) Parameters: * **capacity** (*dict*) -- The target capacity. * **autoScaling** *(dict) --* The target auto scaling setting. * **maxWorkerCount** *(integer) --* **[REQUIRED]** The target maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* **[REQUIRED]** The target number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **minWorkerCount** *(integer) --* **[REQUIRED]** The target minimum number of workers allocated to the connector. * **scaleInPolicy** *(dict) --* **[REQUIRED]** The target scale-in policy for the connector. * **cpuUtilizationPercentage** *(integer) --* **[REQUIRED]** The target CPU utilization percentage threshold at which you want connector scale in to be triggered. * **scaleOutPolicy** *(dict) --* **[REQUIRED]** The target scale-out policy for the connector. 
* **cpuUtilizationPercentage** *(integer) --* **[REQUIRED]** The target CPU utilization percentage threshold at which you want connector scale out to be triggered. * **provisionedCapacity** *(dict) --* The target settings for provisioned capacity. * **mcuCount** *(integer) --* **[REQUIRED]** The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **workerCount** *(integer) --* **[REQUIRED]** The number of workers that are allocated to the connector. * **connectorConfiguration** (*dict*) -- A map of keys to values that represent the configuration for the connector. * *(string) --* * *(string) --* * **connectorArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the connector that you want to update. * **currentVersion** (*string*) -- **[REQUIRED]** The current version of the connector that you want to update. Return type: dict Returns: **Response Syntax** { 'connectorArn': 'string', 'connectorState': 'RUNNING'|'CREATING'|'UPDATING'|'DELETING'|'FAILED', 'connectorOperationArn': 'string' } **Response Structure** * *(dict) --* * **connectorArn** *(string) --* The Amazon Resource Name (ARN) of the connector. * **connectorState** *(string) --* The state of the connector. * **connectorOperationArn** *(string) --* The Amazon Resource Name (ARN) of the connector operation. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / create_connector create_connector **************** KafkaConnect.Client.create_connector(**kwargs) Creates a connector using the specified properties. 
See also: AWS API Documentation **Request Syntax** response = client.create_connector( capacity={ 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } }, connectorConfiguration={ 'string': 'string' }, connectorDescription='string', connectorName='string', kafkaCluster={ 'apacheKafkaCluster': { 'bootstrapServers': 'string', 'vpc': { 'securityGroups': [ 'string', ], 'subnets': [ 'string', ] } } }, kafkaClusterClientAuthentication={ 'authenticationType': 'NONE'|'IAM' }, kafkaClusterEncryptionInTransit={ 'encryptionType': 'PLAINTEXT'|'TLS' }, kafkaConnectVersion='string', logDelivery={ 'workerLogDelivery': { 'cloudWatchLogs': { 'enabled': True|False, 'logGroup': 'string' }, 'firehose': { 'deliveryStream': 'string', 'enabled': True|False }, 's3': { 'bucket': 'string', 'enabled': True|False, 'prefix': 'string' } } }, plugins=[ { 'customPlugin': { 'customPluginArn': 'string', 'revision': 123 } }, ], serviceExecutionRoleArn='string', workerConfiguration={ 'revision': 123, 'workerConfigurationArn': 'string' }, tags={ 'string': 'string' } ) Parameters: * **capacity** (*dict*) -- **[REQUIRED]** Information about the capacity allocated to the connector. Exactly one of the two properties must be specified. * **autoScaling** *(dict) --* Information about the auto scaling parameters for the connector. * **maxWorkerCount** *(integer) --* **[REQUIRED]** The maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* **[REQUIRED]** The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **minWorkerCount** *(integer) --* **[REQUIRED]** The minimum number of workers allocated to the connector. * **scaleInPolicy** *(dict) --* The scale-in policy for the connector. 
* **cpuUtilizationPercentage** *(integer) --* **[REQUIRED]** Specifies the CPU utilization percentage threshold at which you want connector scale in to be triggered. * **scaleOutPolicy** *(dict) --* The scale-out policy for the connector. * **cpuUtilizationPercentage** *(integer) --* **[REQUIRED]** The CPU utilization percentage threshold at which you want connector scale out to be triggered. * **provisionedCapacity** *(dict) --* Details about a fixed capacity allocated to a connector. * **mcuCount** *(integer) --* **[REQUIRED]** The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1,2,4,8. * **workerCount** *(integer) --* **[REQUIRED]** The number of workers that are allocated to the connector. * **connectorConfiguration** (*dict*) -- **[REQUIRED]** A map of keys to values that represent the configuration for the connector. * *(string) --* * *(string) --* * **connectorDescription** (*string*) -- A summary description of the connector. * **connectorName** (*string*) -- **[REQUIRED]** The name of the connector. * **kafkaCluster** (*dict*) -- **[REQUIRED]** Specifies which Apache Kafka cluster to connect to. * **apacheKafkaCluster** *(dict) --* **[REQUIRED]** The Apache Kafka cluster to which the connector is connected. * **bootstrapServers** *(string) --* **[REQUIRED]** The bootstrap servers of the cluster. * **vpc** *(dict) --* **[REQUIRED]** Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster. * **securityGroups** *(list) --* The security groups for the connector. * *(string) --* * **subnets** *(list) --* **[REQUIRED]** The subnets for the connector. * *(string) --* * **kafkaClusterClientAuthentication** (*dict*) -- **[REQUIRED]** Details of the client authentication used by the Apache Kafka cluster. * **authenticationType** *(string) --* **[REQUIRED]** The type of client authentication used to connect to the Apache Kafka cluster. Value NONE means that no client authentication is used. 
* **kafkaClusterEncryptionInTransit** (*dict*) -- **[REQUIRED]** Details of encryption in transit to the Apache Kafka cluster. * **encryptionType** *(string) --* **[REQUIRED]** The type of encryption in transit to the Apache Kafka cluster. * **kafkaConnectVersion** (*string*) -- **[REQUIRED]** The version of Kafka Connect. It has to be compatible with both the Apache Kafka cluster's version and the plugins. * **logDelivery** (*dict*) -- Details about log delivery. * **workerLogDelivery** *(dict) --* **[REQUIRED]** The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. * **cloudWatchLogs** *(dict) --* Details about delivering logs to Amazon CloudWatch Logs. * **enabled** *(boolean) --* **[REQUIRED]** Whether log delivery to Amazon CloudWatch Logs is enabled. * **logGroup** *(string) --* The name of the CloudWatch log group that is the destination for log delivery. * **firehose** *(dict) --* Details about delivering logs to Amazon Kinesis Data Firehose. * **deliveryStream** *(string) --* The name of the Kinesis Data Firehose delivery stream that is the destination for log delivery. * **enabled** *(boolean) --* **[REQUIRED]** Specifies whether connector logs get delivered to Amazon Kinesis Data Firehose. * **s3** *(dict) --* Details about delivering logs to Amazon S3. * **bucket** *(string) --* The name of the S3 bucket that is the destination for log delivery. * **enabled** *(boolean) --* **[REQUIRED]** Specifies whether connector logs get sent to the specified Amazon S3 destination. * **prefix** *(string) --* The S3 prefix that is the destination for log delivery. * **plugins** (*list*) -- **[REQUIRED]** Warning: Amazon MSK Connect does not currently support specifying multiple plugins as a list. To use more than one plugin for your connector, you can create a single custom plugin using a ZIP file that bundles multiple plugins together. Specifies which plugin to use for the connector. 
You must specify a single-element list containing one "customPlugin" object. * *(dict) --* A plugin is an Amazon Web Services resource that contains the code that defines your connector logic. * **customPlugin** *(dict) --* **[REQUIRED]** Details about a custom plugin. * **customPluginArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the custom plugin. * **revision** *(integer) --* **[REQUIRED]** The revision of the custom plugin. * **serviceExecutionRoleArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the IAM role used by the connector to access the Amazon Web Services resources that it needs. The types of resources depend on the logic of the connector. For example, a connector that has Amazon S3 as a destination must have permissions that allow it to write to the S3 destination bucket. * **workerConfiguration** (*dict*) -- Specifies which worker configuration to use with the connector. * **revision** *(integer) --* **[REQUIRED]** The revision of the worker configuration. * **workerConfigurationArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the worker configuration. * **tags** (*dict*) -- The tags you want to attach to the connector. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'connectorArn': 'string', 'connectorName': 'string', 'connectorState': 'RUNNING'|'CREATING'|'UPDATING'|'DELETING'|'FAILED' } **Response Structure** * *(dict) --* * **connectorArn** *(string) --* The Amazon Resource Name (ARN) that Amazon assigned to the connector. * **connectorName** *(string) --* The name of the connector. * **connectorState** *(string) --* The state of the connector. 
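Pulling the required parameters above together, a minimal "create_connector" request might look like the following hedged sketch. Every ARN, subnet ID, bootstrap server, and configuration value is an invented placeholder; the "connector.class" settings belong to whichever plugin you actually bundle, and the call itself is left commented out because it requires AWS credentials.

```python
# Hedged, minimal create_connector sketch; all values are placeholders.
# Note the plugins member: a single-element list, per the warning above.
connector_request = {
    "connectorName": "s3-sink",
    "kafkaConnectVersion": "2.7.1",
    "capacity": {"provisionedCapacity": {"mcuCount": 1, "workerCount": 2}},
    "connectorConfiguration": {
        # Example keys for an S3 sink plugin; your plugin defines its own.
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "tasks.max": "2",
        "topics": "my-topic",
    },
    "kafkaCluster": {
        "apacheKafkaCluster": {
            "bootstrapServers": "b-1.my-cluster.example.com:9098",
            "vpc": {
                "securityGroups": ["sg-0123456789abcdef0"],
                "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
            },
        }
    },
    "kafkaClusterClientAuthentication": {"authenticationType": "IAM"},
    "kafkaClusterEncryptionInTransit": {"encryptionType": "TLS"},
    "plugins": [{
        "customPlugin": {
            "customPluginArn": "arn:aws:kafkaconnect:us-east-1:123456789012:custom-plugin/my-plugin/abc123",
            "revision": 1,
        }
    }],
    "serviceExecutionRoleArn": "arn:aws:iam::123456789012:role/msk-connect-role",
}
# response = boto3.client("kafkaconnect").create_connector(**connector_request)
```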
**Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.ConflictException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / close close ***** KafkaConnect.Client.close() Closes underlying endpoint connections. KafkaConnect / Client / describe_worker_configuration describe_worker_configuration ***************************** KafkaConnect.Client.describe_worker_configuration(**kwargs) Returns information about a worker configuration. See also: AWS API Documentation **Request Syntax** response = client.describe_worker_configuration( workerConfigurationArn='string' ) Parameters: **workerConfigurationArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the worker configuration that you want to get information about. Return type: dict Returns: **Response Syntax** { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'latestRevision': { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'propertiesFileContent': 'string', 'revision': 123 }, 'name': 'string', 'workerConfigurationArn': 'string', 'workerConfigurationState': 'ACTIVE'|'DELETING' } **Response Structure** * *(dict) --* * **creationTime** *(datetime) --* The time that the worker configuration was created. * **description** *(string) --* The description of the worker configuration. * **latestRevision** *(dict) --* The latest revision of the worker configuration. * **creationTime** *(datetime) --* The time that the worker configuration was created. * **description** *(string) --* The description of the worker configuration revision. 
* **propertiesFileContent** *(string) --* Base64 encoded contents of the connect-distributed.properties file. * **revision** *(integer) --* The revision of the worker configuration. * **name** *(string) --* The name of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration. * **workerConfigurationState** *(string) --* The state of the worker configuration. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / list_connector_operations list_connector_operations ************************* KafkaConnect.Client.list_connector_operations(**kwargs) Lists information about a connector's operations. See also: AWS API Documentation **Request Syntax** response = client.list_connector_operations( connectorArn='string', maxResults=123, nextToken='string' ) Parameters: * **connectorArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the connector for which to list operations. * **maxResults** (*integer*) -- Maximum number of connector operations to fetch in one get request. * **nextToken** (*string*) -- If the response is truncated, it includes a NextToken. Send this NextToken in a subsequent request to continue listing from where it left off. 
Return type: dict Returns: **Response Syntax** { 'connectorOperations': [ { 'connectorOperationArn': 'string', 'connectorOperationType': 'UPDATE_WORKER_SETTING'|'UPDATE_CONNECTOR_CONFIGURATION'|'ISOLATE_CONNECTOR'|'RESTORE_CONNECTOR', 'connectorOperationState': 'PENDING'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED'|'ROLLBACK_IN_PROGRESS'|'ROLLBACK_FAILED'|'ROLLBACK_COMPLETE', 'creationTime': datetime(2015, 1, 1), 'endTime': datetime(2015, 1, 1) }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **connectorOperations** *(list) --* An array of connector operation descriptions. * *(dict) --* Summary of a connector operation. * **connectorOperationArn** *(string) --* The Amazon Resource Name (ARN) of the connector operation. * **connectorOperationType** *(string) --* The type of connector operation performed. * **connectorOperationState** *(string) --* The state of the connector operation. * **creationTime** *(datetime) --* The time when the operation was created. * **endTime** *(datetime) --* The time when the operation ended. * **nextToken** *(string) --* If the response is truncated, it includes a NextToken. Send this NextToken in a subsequent request to continue listing from where it left off. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / delete_custom_plugin delete_custom_plugin ******************** KafkaConnect.Client.delete_custom_plugin(**kwargs) Deletes a custom plugin. 
See also: AWS API Documentation **Request Syntax** response = client.delete_custom_plugin( customPluginArn='string' ) Parameters: **customPluginArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the custom plugin that you want to delete. Return type: dict Returns: **Response Syntax** { 'customPluginArn': 'string', 'customPluginState': 'CREATING'|'CREATE_FAILED'|'ACTIVE'|'UPDATING'|'UPDATE_FAILED'|'DELETING' } **Response Structure** * *(dict) --* * **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin that you requested to delete. * **customPluginState** *(string) --* The state of the custom plugin. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / delete_worker_configuration delete_worker_configuration *************************** KafkaConnect.Client.delete_worker_configuration(**kwargs) Deletes the specified worker configuration. See also: AWS API Documentation **Request Syntax** response = client.delete_worker_configuration( workerConfigurationArn='string' ) Parameters: **workerConfigurationArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the worker configuration that you want to delete. Return type: dict Returns: **Response Syntax** { 'workerConfigurationArn': 'string', 'workerConfigurationState': 'ACTIVE'|'DELETING' } **Response Structure** * *(dict) --* * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration that you requested to delete. * **workerConfigurationState** *(string) --* The state of the worker configuration. 
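The two delete operations above can be combined into a small teardown helper. This is a hedged sketch: "client" is anything that exposes "delete_custom_plugin" and "delete_worker_configuration", so the real boto3 client drops in directly, and the stub class at the bottom exists only to keep the example self-contained.

```python
# Hedged sketch: tear down a custom plugin and a worker configuration,
# collecting the transitional state each delete call reports.
def cleanup(client, plugin_arn=None, worker_config_arn=None):
    results = {}
    if plugin_arn:
        resp = client.delete_custom_plugin(customPluginArn=plugin_arn)
        results["plugin"] = resp["customPluginState"]
    if worker_config_arn:
        resp = client.delete_worker_configuration(
            workerConfigurationArn=worker_config_arn)
        results["workerConfiguration"] = resp["workerConfigurationState"]
    return results

# Stub standing in for the boto3 client; ARNs below are placeholders.
class _StubClient:
    def delete_custom_plugin(self, customPluginArn):
        return {"customPluginArn": customPluginArn,
                "customPluginState": "DELETING"}
    def delete_worker_configuration(self, workerConfigurationArn):
        return {"workerConfigurationArn": workerConfigurationArn,
                "workerConfigurationState": "DELETING"}

results = cleanup(
    _StubClient(),
    plugin_arn="arn:aws:kafkaconnect:placeholder:custom-plugin/x",
    worker_config_arn="arn:aws:kafkaconnect:placeholder:worker-configuration/y",
)
```

Both operations return a "DELETING" state rather than blocking, so a caller that needs to confirm removal would poll the corresponding describe call afterward.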
**Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / tag_resource tag_resource ************ KafkaConnect.Client.tag_resource(**kwargs) Attaches tags to the specified resource. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( resourceArn='string', tags={ 'string': 'string' } ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the resource to which you want to attach tags. * **tags** (*dict*) -- **[REQUIRED]** The tags that you want to attach to the resource. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.ConflictException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / delete_connector delete_connector **************** KafkaConnect.Client.delete_connector(**kwargs) Deletes the specified connector. See also: AWS API Documentation **Request Syntax** response = client.delete_connector( connectorArn='string', currentVersion='string' ) Parameters: * **connectorArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the connector that you want to delete. 
* **currentVersion** (*string*) -- The current version of the connector that you want to delete. Return type: dict Returns: **Response Syntax** { 'connectorArn': 'string', 'connectorState': 'RUNNING'|'CREATING'|'UPDATING'|'DELETING'|'FAILED' } **Response Structure** * *(dict) --* * **connectorArn** *(string) --* The Amazon Resource Name (ARN) of the connector that you requested to delete. * **connectorState** *(string) --* The state of the connector that you requested to delete. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / create_worker_configuration create_worker_configuration *************************** KafkaConnect.Client.create_worker_configuration(**kwargs) Creates a worker configuration using the specified properties. See also: AWS API Documentation **Request Syntax** response = client.create_worker_configuration( description='string', name='string', propertiesFileContent='string', tags={ 'string': 'string' } ) Parameters: * **description** (*string*) -- A summary description of the worker configuration. * **name** (*string*) -- **[REQUIRED]** The name of the worker configuration. * **propertiesFileContent** (*string*) -- **[REQUIRED]** The Base64-encoded contents of the connect-distributed.properties file. * **tags** (*dict*) -- The tags that you want to attach to the worker configuration. 
* *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'creationTime': datetime(2015, 1, 1), 'latestRevision': { 'creationTime': datetime(2015, 1, 1), 'description': 'string', 'revision': 123 }, 'name': 'string', 'workerConfigurationArn': 'string', 'workerConfigurationState': 'ACTIVE'|'DELETING' } **Response Structure** * *(dict) --* * **creationTime** *(datetime) --* The time that the worker configuration was created. * **latestRevision** *(dict) --* The latest revision of the worker configuration. * **creationTime** *(datetime) --* The time that a worker configuration revision was created. * **description** *(string) --* The description of a worker configuration revision. * **revision** *(integer) --* The revision of a worker configuration. * **name** *(string) --* The name of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) that Amazon assigned to the worker configuration. * **workerConfigurationState** *(string) --* The state of the worker configuration. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.ConflictException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException" KafkaConnect / Client / list_connectors list_connectors *************** KafkaConnect.Client.list_connectors(**kwargs) Returns a list of all the connectors in this account and Region. The list is limited to connectors whose name starts with the specified prefix. The response also includes a description of each of the listed connectors. 
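Because the propertiesFileContent parameter of create_worker_configuration above must be Base64 encoded, a common first step is encoding the raw properties text; a small sketch (the property keys shown are ordinary Kafka Connect worker settings, chosen for illustration):

```python
import base64

# Illustrative contents of a connect-distributed.properties file.
properties = (
    "key.converter=org.apache.kafka.connect.storage.StringConverter\n"
    "value.converter=org.apache.kafka.connect.storage.StringConverter\n"
)

# propertiesFileContent must be the Base64 encoding of the raw file bytes.
properties_file_content = base64.b64encode(
    properties.encode("utf-8")
).decode("ascii")

# Round-trip check: decoding recovers the original properties text.
decoded = base64.b64decode(properties_file_content).decode("utf-8")
```

The resulting string would then be passed as `propertiesFileContent=properties_file_content` in the create_worker_configuration request.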
See also: AWS API Documentation **Request Syntax** response = client.list_connectors( connectorNamePrefix='string', maxResults=123, nextToken='string' ) Parameters: * **connectorNamePrefix** (*string*) -- The name prefix that you want to use to search for and list connectors. * **maxResults** (*integer*) -- The maximum number of connectors to list in one response. * **nextToken** (*string*) -- If the response of a ListConnectors operation is truncated, it will include a NextToken. Send this NextToken in a subsequent request to continue listing from where the previous operation left off. Return type: dict Returns: **Response Syntax** { 'connectors': [ { 'capacity': { 'autoScaling': { 'maxWorkerCount': 123, 'mcuCount': 123, 'minWorkerCount': 123, 'scaleInPolicy': { 'cpuUtilizationPercentage': 123 }, 'scaleOutPolicy': { 'cpuUtilizationPercentage': 123 } }, 'provisionedCapacity': { 'mcuCount': 123, 'workerCount': 123 } }, 'connectorArn': 'string', 'connectorDescription': 'string', 'connectorName': 'string', 'connectorState': 'RUNNING'|'CREATING'|'UPDATING'|'DELETING'|'FAILED', 'creationTime': datetime(2015, 1, 1), 'currentVersion': 'string', 'kafkaCluster': { 'apacheKafkaCluster': { 'bootstrapServers': 'string', 'vpc': { 'securityGroups': [ 'string', ], 'subnets': [ 'string', ] } } }, 'kafkaClusterClientAuthentication': { 'authenticationType': 'NONE'|'IAM' }, 'kafkaClusterEncryptionInTransit': { 'encryptionType': 'PLAINTEXT'|'TLS' }, 'kafkaConnectVersion': 'string', 'logDelivery': { 'workerLogDelivery': { 'cloudWatchLogs': { 'enabled': True|False, 'logGroup': 'string' }, 'firehose': { 'deliveryStream': 'string', 'enabled': True|False }, 's3': { 'bucket': 'string', 'enabled': True|False, 'prefix': 'string' } } }, 'plugins': [ { 'customPlugin': { 'customPluginArn': 'string', 'revision': 123 } }, ], 'serviceExecutionRoleArn': 'string', 'workerConfiguration': { 'revision': 123, 'workerConfigurationArn': 'string' } }, ], 'nextToken': 'string' } **Response Structure** * 
*(dict) --* * **connectors** *(list) --* An array of connector descriptions. * *(dict) --* Summary of a connector. * **capacity** *(dict) --* The connector's compute capacity settings. * **autoScaling** *(dict) --* Describes the connector's auto scaling capacity. * **maxWorkerCount** *(integer) --* The maximum number of workers allocated to the connector. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1, 2, 4, and 8. * **minWorkerCount** *(integer) --* The minimum number of workers allocated to the connector. * **scaleInPolicy** *(dict) --* The scale-in policy for the connector. * **cpuUtilizationPercentage** *(integer) --* Specifies the CPU utilization percentage threshold at which you want connector scale-in to be triggered. * **scaleOutPolicy** *(dict) --* The scale-out policy for the connector. * **cpuUtilizationPercentage** *(integer) --* The CPU utilization percentage threshold at which you want connector scale-out to be triggered. * **provisionedCapacity** *(dict) --* Describes a connector's provisioned capacity. * **mcuCount** *(integer) --* The number of microcontroller units (MCUs) allocated to each connector worker. The valid values are 1, 2, 4, and 8. * **workerCount** *(integer) --* The number of workers that are allocated to the connector. * **connectorArn** *(string) --* The Amazon Resource Name (ARN) of the connector. * **connectorDescription** *(string) --* The description of the connector. * **connectorName** *(string) --* The name of the connector. * **connectorState** *(string) --* The state of the connector. * **creationTime** *(datetime) --* The time that the connector was created. * **currentVersion** *(string) --* The current version of the connector. * **kafkaCluster** *(dict) --* The details of the Apache Kafka cluster to which the connector is connected. * **apacheKafkaCluster** *(dict) --* The Apache Kafka cluster to which the connector is connected. 
* **bootstrapServers** *(string) --* The bootstrap servers of the cluster. * **vpc** *(dict) --* Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster. * **securityGroups** *(list) --* The security groups for the connector. * *(string) --* * **subnets** *(list) --* The subnets for the connector. * *(string) --* * **kafkaClusterClientAuthentication** *(dict) --* The type of client authentication used to connect to the Apache Kafka cluster. The value is NONE when no client authentication is used. * **authenticationType** *(string) --* The type of client authentication used to connect to the Apache Kafka cluster. Value NONE means that no client authentication is used. * **kafkaClusterEncryptionInTransit** *(dict) --* Details of encryption in transit to the Apache Kafka cluster. * **encryptionType** *(string) --* The type of encryption in transit to the Apache Kafka cluster. * **kafkaConnectVersion** *(string) --* The version of Kafka Connect. It has to be compatible with both the Apache Kafka cluster's version and the plugins. * **logDelivery** *(dict) --* The settings for delivering connector logs to Amazon CloudWatch Logs. * **workerLogDelivery** *(dict) --* The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. * **cloudWatchLogs** *(dict) --* Details about delivering logs to Amazon CloudWatch Logs. * **enabled** *(boolean) --* Whether log delivery to Amazon CloudWatch Logs is enabled. * **logGroup** *(string) --* The name of the CloudWatch log group that is the destination for log delivery. * **firehose** *(dict) --* Details about delivering logs to Amazon Kinesis Data Firehose. * **deliveryStream** *(string) --* The name of the Kinesis Data Firehose delivery stream that is the destination for log delivery. * **enabled** *(boolean) --* Specifies whether connector logs get delivered to Amazon Kinesis Data Firehose. 
* **s3** *(dict) --* Details about delivering logs to Amazon S3. * **bucket** *(string) --* The name of the S3 bucket that is the destination for log delivery. * **enabled** *(boolean) --* Specifies whether connector logs get sent to the specified Amazon S3 destination. * **prefix** *(string) --* The S3 prefix that is the destination for log delivery. * **plugins** *(list) --* Specifies which plugins were used for this connector. * *(dict) --* The description of the plugin. * **customPlugin** *(dict) --* Details about a custom plugin. * **customPluginArn** *(string) --* The Amazon Resource Name (ARN) of the custom plugin. * **revision** *(integer) --* The revision of the custom plugin. * **serviceExecutionRoleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role used by the connector to access Amazon Web Services resources. * **workerConfiguration** *(dict) --* The worker configurations that are in use with the connector. * **revision** *(integer) --* The revision of the worker configuration. * **workerConfigurationArn** *(string) --* The Amazon Resource Name (ARN) of the worker configuration. * **nextToken** *(string) --* If the response of a ListConnectors operation is truncated, it will include a NextToken. Send this NextToken in a subsequent request to continue listing from where it left off. **Exceptions** * "KafkaConnect.Client.exceptions.NotFoundException" * "KafkaConnect.Client.exceptions.BadRequestException" * "KafkaConnect.Client.exceptions.ForbiddenException" * "KafkaConnect.Client.exceptions.ServiceUnavailableException" * "KafkaConnect.Client.exceptions.TooManyRequestsException" * "KafkaConnect.Client.exceptions.UnauthorizedException" * "KafkaConnect.Client.exceptions.InternalServerErrorException"