ChimeSDKMediaPipelines
**********************

Client
======

class ChimeSDKMediaPipelines.Client

   A low-level client representing Amazon Chime SDK Media Pipelines

   The Amazon Chime SDK media pipeline APIs in this section allow software developers to create Amazon Chime SDK media pipelines that capture, concatenate, or stream your Amazon Chime SDK meetings. For more information about media pipelines, see Amazon Chime SDK media pipelines.

      import boto3

      client = boto3.client('chime-sdk-media-pipelines')

   These are the available methods:

   * can_paginate
   * close
   * create_media_capture_pipeline
   * create_media_concatenation_pipeline
   * create_media_insights_pipeline
   * create_media_insights_pipeline_configuration
   * create_media_live_connector_pipeline
   * create_media_pipeline_kinesis_video_stream_pool
   * create_media_stream_pipeline
   * delete_media_capture_pipeline
   * delete_media_insights_pipeline_configuration
   * delete_media_pipeline
   * delete_media_pipeline_kinesis_video_stream_pool
   * get_media_capture_pipeline
   * get_media_insights_pipeline_configuration
   * get_media_pipeline
   * get_media_pipeline_kinesis_video_stream_pool
   * get_paginator
   * get_speaker_search_task
   * get_voice_tone_analysis_task
   * get_waiter
   * list_media_capture_pipelines
   * list_media_insights_pipeline_configurations
   * list_media_pipeline_kinesis_video_stream_pools
   * list_media_pipelines
   * list_tags_for_resource
   * start_speaker_search_task
   * start_voice_tone_analysis_task
   * stop_speaker_search_task
   * stop_voice_tone_analysis_task
   * tag_resource
   * untag_resource
   * update_media_insights_pipeline_configuration
   * update_media_insights_pipeline_status
   * update_media_pipeline_kinesis_video_stream_pool

ChimeSDKMediaPipelines / Client / delete_media_pipeline_kinesis_video_stream_pool

delete_media_pipeline_kinesis_video_stream_pool
***********************************************

ChimeSDKMediaPipelines.Client.delete_media_pipeline_kinesis_video_stream_pool(**kwargs)

   Deletes an Amazon Kinesis Video Stream pool.
   See also: AWS API Documentation

   **Request Syntax**

      response = client.delete_media_pipeline_kinesis_video_stream_pool(
          Identifier='string'
      )

   Parameters:
      **Identifier** (*string*) -- **[REQUIRED]**

      The unique identifier of the requested resource. Valid values include the name and ARN of the media pipeline Kinesis Video Stream pool.

   Returns:
      None

   **Exceptions**

   * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException"
   * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException"
   * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException"
   * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException"
   * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException"
   * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException"
   * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException"
   * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException"

ChimeSDKMediaPipelines / Client / create_media_insights_pipeline_configuration

create_media_insights_pipeline_configuration
********************************************

ChimeSDKMediaPipelines.Client.create_media_insights_pipeline_configuration(**kwargs)

   Creates a media insights pipeline configuration, a structure that contains the static configurations for a media insights pipeline.
   See also: AWS API Documentation

   **Request Syntax**

      response = client.create_media_insights_pipeline_configuration(
          MediaInsightsPipelineConfigurationName='string',
          ResourceAccessRoleArn='string',
          RealTimeAlertConfiguration={
              'Disabled': True|False,
              'Rules': [
                  {
                      'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                      'KeywordMatchConfiguration': {
                          'RuleName': 'string',
                          'Keywords': [
                              'string',
                          ],
                          'Negate': True|False
                      },
                      'SentimentConfiguration': {
                          'RuleName': 'string',
                          'SentimentType': 'NEGATIVE',
                          'TimePeriod': 123
                      },
                      'IssueDetectionConfiguration': {
                          'RuleName': 'string'
                      }
                  },
              ]
          },
          Elements=[
              {
                  'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                  'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                      'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                      'VocabularyName': 'string',
                      'VocabularyFilterName': 'string',
                      'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                      'LanguageModelName': 'string',
                      'EnablePartialResultsStabilization': True|False,
                      'PartialResultsStability': 'high'|'medium'|'low',
                      'ContentIdentificationType': 'PII',
                      'ContentRedactionType': 'PII',
                      'PiiEntityTypes': 'string',
                      'FilterPartialResults': True|False,
                      'PostCallAnalyticsSettings': {
                          'OutputLocation': 'string',
                          'DataAccessRoleArn': 'string',
                          'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                          'OutputEncryptionKMSKeyId': 'string'
                      },
                      'CallAnalyticsStreamCategories': [
                          'string',
                      ]
                  },
                  'AmazonTranscribeProcessorConfiguration': {
                      'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                      'VocabularyName': 'string',
                      'VocabularyFilterName': 'string',
                      'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                      'ShowSpeakerLabel': True|False,
                      'EnablePartialResultsStabilization': True|False,
                      'PartialResultsStability': 'high'|'medium'|'low',
                      'ContentIdentificationType': 'PII',
                      'ContentRedactionType': 'PII',
                      'PiiEntityTypes': 'string',
                      'LanguageModelName': 'string',
                      'FilterPartialResults': True|False,
                      'IdentifyLanguage': True|False,
                      'IdentifyMultipleLanguages': True|False,
                      'LanguageOptions': 'string',
                      'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                      'VocabularyNames': 'string',
                      'VocabularyFilterNames': 'string'
                  },
                  'KinesisDataStreamSinkConfiguration': {
                      'InsightsTarget': 'string'
                  },
                  'S3RecordingSinkConfiguration': {
                      'Destination': 'string',
                      'RecordingFileFormat': 'Wav'|'Opus'
                  },
                  'VoiceAnalyticsProcessorConfiguration': {
                      'SpeakerSearchStatus': 'Enabled'|'Disabled',
                      'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
                  },
                  'LambdaFunctionSinkConfiguration': {
                      'InsightsTarget': 'string'
                  },
                  'SqsQueueSinkConfiguration': {
                      'InsightsTarget': 'string'
                  },
                  'SnsTopicSinkConfiguration': {
                      'InsightsTarget': 'string'
                  },
                  'VoiceEnhancementSinkConfiguration': {
                      'Disabled': True|False
                  }
              },
          ],
          Tags=[
              {
                  'Key': 'string',
                  'Value': 'string'
              },
          ],
          ClientRequestToken='string'
      )

   Parameters:
      * **MediaInsightsPipelineConfigurationName** (*string*) -- **[REQUIRED]**

        The name of the media insights pipeline configuration.

      * **ResourceAccessRoleArn** (*string*) -- **[REQUIRED]**

        The ARN of the role used by the service to access Amazon Web Services resources, including "Transcribe" and "Transcribe Call Analytics", on the caller’s behalf.

      * **RealTimeAlertConfiguration** (*dict*) --

        The configuration settings for the real-time alerts in a media insights pipeline configuration.

        * **Disabled** *(boolean) --* Turns off real-time alerts.

        * **Rules** *(list) --* The rules in the alert. Rules specify the words or phrases that you want to be notified about.

          * *(dict) --* Specifies the words or phrases that trigger an alert.

            * **Type** *(string) --* **[REQUIRED]** The type of alert rule.

            * **KeywordMatchConfiguration** *(dict) --* Specifies the settings for matching the keywords in a real-time alert rule.
* **RuleName** *(string) --* **[REQUIRED]** The name of the keyword match rule. * **Keywords** *(list) --* **[REQUIRED]** The keywords or phrases that you want to match. * *(string) --* * **Negate** *(boolean) --* Matches keywords or phrases on their presence or absence. If set to "TRUE", the rule matches when all the specified keywords or phrases are absent. Default: "FALSE". * **SentimentConfiguration** *(dict) --* Specifies the settings for predicting sentiment in a real-time alert rule. * **RuleName** *(string) --* **[REQUIRED]** The name of the rule in the sentiment configuration. * **SentimentType** *(string) --* **[REQUIRED]** The type of sentiment, "POSITIVE", "NEGATIVE", or "NEUTRAL". * **TimePeriod** *(integer) --* **[REQUIRED]** Specifies the analysis interval. * **IssueDetectionConfiguration** *(dict) --* Specifies the issue detection settings for a real-time alert rule. * **RuleName** *(string) --* **[REQUIRED]** The name of the issue detection rule. * **Elements** (*list*) -- **[REQUIRED]** The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream. * *(dict) --* An element in a media insights pipeline configuration. * **Type** *(string) --* **[REQUIRED]** The element type. * **AmazonTranscribeCallAnalyticsProcessorConfiguration** *(dict) --* The analytics configuration settings for transcribing audio in a media insights pipeline configuration element. * **LanguageCode** *(string) --* **[REQUIRED]** The language code in the configuration. * **VocabularyName** *(string) --* Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive. If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription. For more information, see Custom vocabularies in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. 
Maximum length of 200. * **VocabularyFilterName** *(string) --* Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive. If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription. For more information, see Using vocabulary filtering with unwanted words in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* Specifies how to apply a vocabulary filter to a transcript. To replace words with *******, choose "mask". To delete words, choose "remove". To flag words without changing them, choose "tag". * **LanguageModelName** *(string) --* Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* Specifies the level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. 
* **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in "PiiEntityTypes" is redacted upon complete transcription of an audio segment. You can’t set "ContentRedactionType" and "ContentIdentificationType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". Length Constraints: Minimum length of 1. Maximum length of 300. * **FilterPartialResults** *(boolean) --* If true, "UtteranceEvents" with "IsPartial: true" are filtered out of the insights target. 
* **PostCallAnalyticsSettings** *(dict) --* The settings for a post-call analysis task in an analytics configuration. * **OutputLocation** *(string) --* **[REQUIRED]** The URL of the Amazon S3 bucket that contains the post-call data. * **DataAccessRoleArn** *(string) --* **[REQUIRED]** The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the *Amazon Transcribe Developer Guide*. * **ContentRedactionOutput** *(string) --* The content redaction output settings for a post-call analysis task. * **OutputEncryptionKMSKeyId** *(string) --* The ID of the KMS (Key Management Service) key used to encrypt the output. * **CallAnalyticsStreamCategories** *(list) --* By default, all "CategoryEvents" are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target. * *(string) --* * **AmazonTranscribeProcessorConfiguration** *(dict) --* The transcription processor configuration settings in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code that represents the language spoken in your audio. If you're unsure of the language spoken in your audio, consider using "IdentifyLanguage" to enable automatic language identification. For a list of languages that real-time Call Analytics supports, see the Supported languages table in the *Amazon Transcribe Developer Guide*. * **VocabularyName** *(string) --* The name of the custom vocabulary that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* The name of the custom vocabulary filter that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* The vocabulary filtering method used in your Call Analytics transcription. 
* **ShowSpeakerLabel** *(boolean) --* Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file. For more information, see Partitioning speakers (diarization) in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* The level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment. You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a "BadRequestException". 
For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". If you leave this parameter empty, the default behavior is equivalent to "ALL". * **LanguageModelName** *(string) --* The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **FilterPartialResults** *(boolean) --* If true, "TranscriptEvents" with "IsPartial: true" are filtered out of the insights target. * **IdentifyLanguage** *(boolean) --* Turns language identification on or off. * **IdentifyMultipleLanguages** *(boolean) --* Turns language identification on or off for multiple languages. Note: Calls to this API must include a "LanguageCode", "IdentifyLanguage", or "IdentifyMultipleLanguages" parameter. If you include more than one of those parameters, your transcription job fails. * **LanguageOptions** *(string) --* The language options for the transcription, such as automatic language detection. 
* **PreferredLanguage** *(string) --* The preferred language for the transcription. * **VocabularyNames** *(string) --* The names of the custom vocabulary or vocabularies used during transcription. * **VocabularyFilterNames** *(string) --* The names of the custom vocabulary filter or filters used during transcription. * **KinesisDataStreamSinkConfiguration** *(dict) --* The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **S3RecordingSinkConfiguration** *(dict) --* The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element. * **Destination** *(string) --* The default URI of the Amazon S3 bucket used as the recording sink. * **RecordingFileFormat** *(string) --* The default file format for the media files sent to the Amazon S3 bucket. * **VoiceAnalyticsProcessorConfiguration** *(dict) --* The voice analytics configuration settings in a media insights pipeline configuration element. * **SpeakerSearchStatus** *(string) --* The status of the speaker search task. * **VoiceToneAnalysisStatus** *(string) --* The status of the voice tone analysis task. * **LambdaFunctionSinkConfiguration** *(dict) --* The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **SqsQueueSinkConfiguration** *(dict) --* The configuration settings for an SQS queue sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SQS sink. * **SnsTopicSinkConfiguration** *(dict) --* The configuration settings for an SNS topic sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SNS sink. 
* **VoiceEnhancementSinkConfiguration** *(dict) --*

  The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

  * **Disabled** *(boolean) --* Disables the "VoiceEnhancementSinkConfiguration" element.

* **Tags** (*list*) --

  The tags assigned to the media insights pipeline configuration.

  * *(dict) --* A key/value pair that grants users access to meeting resources.

    * **Key** *(string) --* **[REQUIRED]** The key half of a tag.

    * **Value** *(string) --* **[REQUIRED]** The value half of a tag.

* **ClientRequestToken** (*string*) --

  The unique identifier for the media insights pipeline configuration request. This field is autopopulated if not provided.

   Return type:
      dict

   Returns:
      **Response Syntax**

         {
             'MediaInsightsPipelineConfiguration': {
                 'MediaInsightsPipelineConfigurationName': 'string',
                 'MediaInsightsPipelineConfigurationArn': 'string',
                 'ResourceAccessRoleArn': 'string',
                 'RealTimeAlertConfiguration': {
                     'Disabled': True|False,
                     'Rules': [
                         {
                             'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                             'KeywordMatchConfiguration': {
                                 'RuleName': 'string',
                                 'Keywords': [
                                     'string',
                                 ],
                                 'Negate': True|False
                             },
                             'SentimentConfiguration': {
                                 'RuleName': 'string',
                                 'SentimentType': 'NEGATIVE',
                                 'TimePeriod': 123
                             },
                             'IssueDetectionConfiguration': {
                                 'RuleName': 'string'
                             }
                         },
                     ]
                 },
                 'Elements': [
                     {
                         'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                         'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                             'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                             'VocabularyName': 'string',
                             'VocabularyFilterName': 'string',
                             'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                             'LanguageModelName': 'string',
                             'EnablePartialResultsStabilization': True|False,
                             'PartialResultsStability': 'high'|'medium'|'low',
                             'ContentIdentificationType': 'PII',
                             'ContentRedactionType': 'PII',
                             'PiiEntityTypes': 'string',
                             'FilterPartialResults': True|False,
                             'PostCallAnalyticsSettings': {
                                 'OutputLocation': 'string',
                                 'DataAccessRoleArn': 'string',
                                 'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                                 'OutputEncryptionKMSKeyId': 'string'
                             },
                             'CallAnalyticsStreamCategories': [
                                 'string',
                             ]
                         },
                         'AmazonTranscribeProcessorConfiguration': {
                             'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                             'VocabularyName': 'string',
                             'VocabularyFilterName': 'string',
                             'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                             'ShowSpeakerLabel': True|False,
                             'EnablePartialResultsStabilization': True|False,
                             'PartialResultsStability': 'high'|'medium'|'low',
                             'ContentIdentificationType': 'PII',
                             'ContentRedactionType': 'PII',
                             'PiiEntityTypes': 'string',
                             'LanguageModelName': 'string',
                             'FilterPartialResults': True|False,
                             'IdentifyLanguage': True|False,
                             'IdentifyMultipleLanguages': True|False,
                             'LanguageOptions': 'string',
                             'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                             'VocabularyNames': 'string',
                             'VocabularyFilterNames': 'string'
                         },
                         'KinesisDataStreamSinkConfiguration': {
                             'InsightsTarget': 'string'
                         },
                         'S3RecordingSinkConfiguration': {
                             'Destination': 'string',
                             'RecordingFileFormat': 'Wav'|'Opus'
                         },
                         'VoiceAnalyticsProcessorConfiguration': {
                             'SpeakerSearchStatus': 'Enabled'|'Disabled',
                             'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
                         },
                         'LambdaFunctionSinkConfiguration': {
                             'InsightsTarget': 'string'
                         },
                         'SqsQueueSinkConfiguration': {
                             'InsightsTarget': 'string'
                         },
                         'SnsTopicSinkConfiguration': {
                             'InsightsTarget': 'string'
                         },
                         'VoiceEnhancementSinkConfiguration': {
                             'Disabled': True|False
                         }
                     },
                 ],
                 'MediaInsightsPipelineConfigurationId': 'string',
                 'CreatedTimestamp': datetime(2015, 1, 1),
                 'UpdatedTimestamp': datetime(2015, 1, 1)
             }
         }

      **Response Structure**

      * *(dict) --*

        * **MediaInsightsPipelineConfiguration** *(dict) --*

          The configuration settings for the media insights pipeline.
* **MediaInsightsPipelineConfigurationName** *(string) --* The name of the configuration. * **MediaInsightsPipelineConfigurationArn** *(string) --* The ARN of the configuration. * **ResourceAccessRoleArn** *(string) --* The ARN of the role used by the service to access Amazon Web Services resources. * **RealTimeAlertConfiguration** *(dict) --* Lists the rules that trigger a real-time alert. * **Disabled** *(boolean) --* Turns off real-time alerts. * **Rules** *(list) --* The rules in the alert. Rules specify the words or phrases that you want to be notified about. * *(dict) --* Specifies the words or phrases that trigger an alert. * **Type** *(string) --* The type of alert rule. * **KeywordMatchConfiguration** *(dict) --* Specifies the settings for matching the keywords in a real-time alert rule. * **RuleName** *(string) --* The name of the keyword match rule. * **Keywords** *(list) --* The keywords or phrases that you want to match. * *(string) --* * **Negate** *(boolean) --* Matches keywords or phrases on their presence or absence. If set to "TRUE", the rule matches when all the specified keywords or phrases are absent. Default: "FALSE". * **SentimentConfiguration** *(dict) --* Specifies the settings for predicting sentiment in a real-time alert rule. * **RuleName** *(string) --* The name of the rule in the sentiment configuration. * **SentimentType** *(string) --* The type of sentiment, "POSITIVE", "NEGATIVE", or "NEUTRAL". * **TimePeriod** *(integer) --* Specifies the analysis interval. * **IssueDetectionConfiguration** *(dict) --* Specifies the issue detection settings for a real-time alert rule. * **RuleName** *(string) --* The name of the issue detection rule. * **Elements** *(list) --* The elements in the configuration. * *(dict) --* An element in a media insights pipeline configuration. * **Type** *(string) --* The element type. 
* **AmazonTranscribeCallAnalyticsProcessorConfiguration** *(dict) --* The analytics configuration settings for transcribing audio in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code in the configuration. * **VocabularyName** *(string) --* Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive. If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription. For more information, see Custom vocabularies in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive. If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription. For more information, see Using vocabulary filtering with unwanted words in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* Specifies how to apply a vocabulary filter to a transcript. To replace words with *******, choose "mask". To delete words, choose "remove". To flag words without changing them, choose "tag". * **LanguageModelName** *(string) --* Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings. 
For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* Specifies the level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in "PiiEntityTypes" is redacted upon complete transcription of an audio segment. You can’t set "ContentRedactionType" and "ContentIdentificationType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. 
* **PiiEntityTypes** *(string) --* Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". Length Constraints: Minimum length of 1. Maximum length of 300. * **FilterPartialResults** *(boolean) --* If true, "UtteranceEvents" with "IsPartial: true" are filtered out of the insights target. * **PostCallAnalyticsSettings** *(dict) --* The settings for a post-call analysis task in an analytics configuration. * **OutputLocation** *(string) --* The URL of the Amazon S3 bucket that contains the post-call data. * **DataAccessRoleArn** *(string) --* The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the *Amazon Transcribe Developer Guide*. * **ContentRedactionOutput** *(string) --* The content redaction output settings for a post-call analysis task. * **OutputEncryptionKMSKeyId** *(string) --* The ID of the KMS (Key Management Service) key used to encrypt the output. * **CallAnalyticsStreamCategories** *(list) --* By default, all "CategoryEvents" are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target. * *(string) --* * **AmazonTranscribeProcessorConfiguration** *(dict) --* The transcription processor configuration settings in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code that represents the language spoken in your audio. 
If you're unsure of the language spoken in your audio, consider using "IdentifyLanguage" to enable automatic language identification. For a list of languages that real-time Call Analytics supports, see the Supported languages table in the *Amazon Transcribe Developer Guide*. * **VocabularyName** *(string) --* The name of the custom vocabulary that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* The name of the custom vocabulary filter that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* The vocabulary filtering method used in your Call Analytics transcription. * **ShowSpeakerLabel** *(boolean) --* Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file. For more information, see Partitioning speakers (diarization) in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* The level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. 
Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in "PiiEntityTypes" is redacted upon complete transcription of an audio segment. You can’t set "ContentRedactionType" and "ContentIdentificationType" in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". If you leave this parameter empty, the default behavior is equivalent to "ALL". * **LanguageModelName** *(string) --* The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code you specify in your transcription request. 
If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **FilterPartialResults** *(boolean) --* If true, "TranscriptEvents" with "IsPartial: true" are filtered out of the insights target. * **IdentifyLanguage** *(boolean) --* Turns language identification on or off. * **IdentifyMultipleLanguages** *(boolean) --* Turns language identification on or off for multiple languages. Note: Calls to this API must include a "LanguageCode", "IdentifyLanguage", or "IdentifyMultipleLanguages" parameter. If you include more than one of those parameters, your transcription job fails. * **LanguageOptions** *(string) --* The language options for the transcription, such as automatic language detection. * **PreferredLanguage** *(string) --* The preferred language for the transcription. * **VocabularyNames** *(string) --* The names of the custom vocabulary or vocabularies used during transcription. * **VocabularyFilterNames** *(string) --* The names of the custom vocabulary filter or filters used during transcription. * **KinesisDataStreamSinkConfiguration** *(dict) --* The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **S3RecordingSinkConfiguration** *(dict) --* The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element. * **Destination** *(string) --* The default URI of the Amazon S3 bucket used as the recording sink. * **RecordingFileFormat** *(string) --* The default file format for the media files sent to the Amazon S3 bucket. * **VoiceAnalyticsProcessorConfiguration** *(dict) --* The voice analytics configuration settings in a media insights pipeline configuration element. 
* **SpeakerSearchStatus** *(string) --* The status of the speaker search task. * **VoiceToneAnalysisStatus** *(string) --* The status of the voice tone analysis task. * **LambdaFunctionSinkConfiguration** *(dict) --* The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **SqsQueueSinkConfiguration** *(dict) --* The configuration settings for an SQS queue sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SQS sink. * **SnsTopicSinkConfiguration** *(dict) --* The configuration settings for an SNS topic sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SNS sink. * **VoiceEnhancementSinkConfiguration** *(dict) --* The configuration settings for the voice enhancement sink in a media insights pipeline configuration element. * **Disabled** *(boolean) --* Disables the "VoiceEnhancementSinkConfiguration" element. * **MediaInsightsPipelineConfigurationId** *(string) --* The ID of the configuration. * **CreatedTimestamp** *(datetime) --* The time at which the configuration was created. * **UpdatedTimestamp** *(datetime) --* The time at which the configuration was last updated. 
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / delete_media_pipeline delete_media_pipeline ********************* ChimeSDKMediaPipelines.Client.delete_media_pipeline(**kwargs) Deletes the media pipeline. See also: AWS API Documentation **Request Syntax** response = client.delete_media_pipeline( MediaPipelineId='string' ) Parameters: **MediaPipelineId** (*string*) -- **[REQUIRED]** The ID of the media pipeline to delete. Returns: None **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / get_paginator get_paginator ************* ChimeSDKMediaPipelines.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. 
For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. ChimeSDKMediaPipelines / Client / update_media_insights_pipeline_status update_media_insights_pipeline_status ************************************* ChimeSDKMediaPipelines.Client.update_media_insights_pipeline_status(**kwargs) Updates the status of a media insights pipeline. See also: AWS API Documentation **Request Syntax** response = client.update_media_insights_pipeline_status( Identifier='string', UpdateStatus='Pause'|'Resume' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline. * **UpdateStatus** (*string*) -- **[REQUIRED]** The requested status of the media insights pipeline. Returns: None **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / create_media_stream_pipeline create_media_stream_pipeline **************************** ChimeSDKMediaPipelines.Client.create_media_stream_pipeline(**kwargs) Creates a streaming media pipeline. 
See also: AWS API Documentation **Request Syntax** response = client.create_media_stream_pipeline( Sources=[ { 'SourceType': 'ChimeSdkMeeting', 'SourceArn': 'string' }, ], Sinks=[ { 'SinkArn': 'string', 'SinkType': 'KinesisVideoStreamPool', 'ReservedStreamCapacity': 123, 'MediaStreamType': 'MixedAudio'|'IndividualAudio' }, ], ClientRequestToken='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **Sources** (*list*) -- **[REQUIRED]** The data sources for the media pipeline. * *(dict) --* Structure that contains the settings for media stream sources. * **SourceType** *(string) --* **[REQUIRED]** The type of media stream source. * **SourceArn** *(string) --* **[REQUIRED]** The ARN of the meeting. * **Sinks** (*list*) -- **[REQUIRED]** The data sink for the media pipeline. * *(dict) --* Structure that contains the settings for a media stream sink. * **SinkArn** *(string) --* **[REQUIRED]** The ARN of the Kinesis Video Stream pool returned by the CreateMediaPipelineKinesisVideoStreamPool API. * **SinkType** *(string) --* **[REQUIRED]** The media stream sink's type. * **ReservedStreamCapacity** *(integer) --* **[REQUIRED]** Specifies the number of streams that the sink can accept. * **MediaStreamType** *(string) --* **[REQUIRED]** The media stream sink's media stream type. * **ClientRequestToken** (*string*) -- The token assigned to the client making the request. This field is autopopulated if not provided. * **Tags** (*list*) -- The tags assigned to the media pipeline. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. * **Value** *(string) --* **[REQUIRED]** The value half of a tag. 
Return type: dict Returns: **Response Syntax** { 'MediaStreamPipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1), 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'Sources': [ { 'SourceType': 'ChimeSdkMeeting', 'SourceArn': 'string' }, ], 'Sinks': [ { 'SinkArn': 'string', 'SinkType': 'KinesisVideoStreamPool', 'ReservedStreamCapacity': 123, 'MediaStreamType': 'MixedAudio'|'IndividualAudio' }, ] } } **Response Structure** * *(dict) --* * **MediaStreamPipeline** *(dict) --* The requested media pipeline. * **MediaPipelineId** *(string) --* The ID of the media stream pipeline * **MediaPipelineArn** *(string) --* The ARN of the media stream pipeline. * **CreatedTimestamp** *(datetime) --* The time at which the media stream pipeline was created. * **UpdatedTimestamp** *(datetime) --* The time at which the media stream pipeline was updated. * **Status** *(string) --* The status of the media stream pipeline. * **Sources** *(list) --* The media stream pipeline's data sources. * *(dict) --* Structure that contains the settings for media stream sources. * **SourceType** *(string) --* The type of media stream source. * **SourceArn** *(string) --* The ARN of the meeting. * **Sinks** *(list) --* The media stream pipeline's data sinks. * *(dict) --* Structure that contains the settings for a media stream sink. * **SinkArn** *(string) --* The ARN of the Kinesis Video Stream pool returned by the CreateMediaPipelineKinesisVideoStreamPool API. * **SinkType** *(string) --* The media stream sink's type. * **ReservedStreamCapacity** *(integer) --* Specifies the number of streams that the sink can accept. * **MediaStreamType** *(string) --* The media stream sink's media stream type. 
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / start_speaker_search_task start_speaker_search_task ************************* ChimeSDKMediaPipelines.Client.start_speaker_search_task(**kwargs) Starts a speaker search task. Warning: Before starting any speaker search tasks, you must provide all notices and obtain all consents from the speaker as required under applicable privacy and biometrics laws, and as required under the AWS service terms for the Amazon Chime SDK. See also: AWS API Documentation **Request Syntax** response = client.start_speaker_search_task( Identifier='string', VoiceProfileDomainArn='string', KinesisVideoStreamSourceTaskConfiguration={ 'StreamArn': 'string', 'ChannelId': 123, 'FragmentNumber': 'string' }, ClientRequestToken='string' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline. * **VoiceProfileDomainArn** (*string*) -- **[REQUIRED]** The ARN of the voice profile domain that will store the voice profile. * **KinesisVideoStreamSourceTaskConfiguration** (*dict*) -- The task configuration for the Kinesis video stream source of the media insights pipeline. * **StreamArn** *(string) --* **[REQUIRED]** The ARN of the stream. * **ChannelId** *(integer) --* **[REQUIRED]** The channel ID. 
* **FragmentNumber** *(string) --* The unique identifier of the fragment to begin processing. * **ClientRequestToken** (*string*) -- The unique identifier for the client request. Use a different token for different speaker search tasks. This field is autopopulated if not provided. Return type: dict Returns: **Response Syntax** { 'SpeakerSearchTask': { 'SpeakerSearchTaskId': 'string', 'SpeakerSearchTaskStatus': 'NotStarted'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **SpeakerSearchTask** *(dict) --* The details of the speaker search task. * **SpeakerSearchTaskId** *(string) --* The speaker search task ID. * **SpeakerSearchTaskStatus** *(string) --* The status of the speaker search task. * **CreatedTimestamp** *(datetime) --* The time at which a speaker search task was created. * **UpdatedTimestamp** *(datetime) --* The time at which a speaker search task was updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / can_paginate can_paginate ************ ChimeSDKMediaPipelines.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. 
For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. ChimeSDKMediaPipelines / Client / create_media_concatenation_pipeline create_media_concatenation_pipeline *********************************** ChimeSDKMediaPipelines.Client.create_media_concatenation_pipeline(**kwargs) Creates a media concatenation pipeline. See also: AWS API Documentation **Request Syntax** response = client.create_media_concatenation_pipeline( Sources=[ { 'Type': 'MediaCapturePipeline', 'MediaCapturePipelineSourceConfiguration': { 'MediaPipelineArn': 'string', 'ChimeSdkMeetingConfiguration': { 'ArtifactsConfiguration': { 'Audio': { 'State': 'Enabled' }, 'Video': { 'State': 'Enabled'|'Disabled' }, 'Content': { 'State': 'Enabled'|'Disabled' }, 'DataChannel': { 'State': 'Enabled'|'Disabled' }, 'TranscriptionMessages': { 'State': 'Enabled'|'Disabled' }, 'MeetingEvents': { 'State': 'Enabled'|'Disabled' }, 'CompositedVideo': { 'State': 'Enabled'|'Disabled' } } } } }, ], Sinks=[ { 'Type': 'S3Bucket', 'S3BucketSinkConfiguration': { 'Destination': 'string' } }, ], ClientRequestToken='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **Sources** (*list*) -- **[REQUIRED]** An object that specifies the sources for the media concatenation pipeline. * *(dict) --* The source type and media pipeline configuration settings in a configuration object. * **Type** *(string) --* **[REQUIRED]** The type of concatenation source in a configuration object. * **MediaCapturePipelineSourceConfiguration** *(dict) --* **[REQUIRED]** The concatenation settings for the media pipeline in a configuration object. * **MediaPipelineArn** *(string) --* **[REQUIRED]** The media pipeline ARN in the configuration object of a media capture pipeline. 
* **ChimeSdkMeetingConfiguration** *(dict) --* **[REQUIRED]** The meeting configuration settings in a media capture pipeline configuration object. * **ArtifactsConfiguration** *(dict) --* **[REQUIRED]** The configuration for the artifacts in an Amazon Chime SDK meeting concatenation. * **Audio** *(dict) --* **[REQUIRED]** The configuration for the audio artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **Video** *(dict) --* **[REQUIRED]** The configuration for the video artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **Content** *(dict) --* **[REQUIRED]** The configuration for the content artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **DataChannel** *(dict) --* **[REQUIRED]** The configuration for the data channel artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **TranscriptionMessages** *(dict) --* **[REQUIRED]** The configuration for the transcription messages artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **MeetingEvents** *(dict) --* **[REQUIRED]** The configuration for the meeting events artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **CompositedVideo** *(dict) --* **[REQUIRED]** The configuration for the composited video artifacts concatenation. * **State** *(string) --* **[REQUIRED]** Enables or disables the configuration object. * **Sinks** (*list*) -- **[REQUIRED]** An object that specifies the data sinks for the media concatenation pipeline. * *(dict) --* The data sink of the configuration object. * **Type** *(string) --* **[REQUIRED]** The type of data sink in the configuration object. 
* **S3BucketSinkConfiguration** *(dict) --* **[REQUIRED]** The configuration settings for an Amazon S3 bucket sink. * **Destination** *(string) --* **[REQUIRED]** The destination URL of the S3 bucket. * **ClientRequestToken** (*string*) -- The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media concatenation pipeline request. This field is autopopulated if not provided. * **Tags** (*list*) -- The tags associated with the media concatenation pipeline. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. * **Value** *(string) --* **[REQUIRED]** The value half of a tag. Return type: dict Returns: **Response Syntax** { 'MediaConcatenationPipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'Sources': [ { 'Type': 'MediaCapturePipeline', 'MediaCapturePipelineSourceConfiguration': { 'MediaPipelineArn': 'string', 'ChimeSdkMeetingConfiguration': { 'ArtifactsConfiguration': { 'Audio': { 'State': 'Enabled' }, 'Video': { 'State': 'Enabled'|'Disabled' }, 'Content': { 'State': 'Enabled'|'Disabled' }, 'DataChannel': { 'State': 'Enabled'|'Disabled' }, 'TranscriptionMessages': { 'State': 'Enabled'|'Disabled' }, 'MeetingEvents': { 'State': 'Enabled'|'Disabled' }, 'CompositedVideo': { 'State': 'Enabled'|'Disabled' } } } } }, ], 'Sinks': [ { 'Type': 'S3Bucket', 'S3BucketSinkConfiguration': { 'Destination': 'string' } }, ], 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **MediaConcatenationPipeline** *(dict) --* A media concatenation pipeline object, the ID, source type, "MediaPipelineARN", and sink of a media concatenation pipeline object. * **MediaPipelineId** *(string) --* The ID of the media pipeline being concatenated. 
* **MediaPipelineArn** *(string) --* The ARN of the media pipeline that you specify in the "SourceConfiguration" object. * **Sources** *(list) --* The data sources being concatenated. * *(dict) --* The source type and media pipeline configuration settings in a configuration object. * **Type** *(string) --* The type of concatenation source in a configuration object. * **MediaCapturePipelineSourceConfiguration** *(dict) --* The concatenation settings for the media pipeline in a configuration object. * **MediaPipelineArn** *(string) --* The media pipeline ARN in the configuration object of a media capture pipeline. * **ChimeSdkMeetingConfiguration** *(dict) --* The meeting configuration settings in a media capture pipeline configuration object. * **ArtifactsConfiguration** *(dict) --* The configuration for the artifacts in an Amazon Chime SDK meeting concatenation. * **Audio** *(dict) --* The configuration for the audio artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **Video** *(dict) --* The configuration for the video artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **Content** *(dict) --* The configuration for the content artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **DataChannel** *(dict) --* The configuration for the data channel artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **TranscriptionMessages** *(dict) --* The configuration for the transcription messages artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **MeetingEvents** *(dict) --* The configuration for the meeting events artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **CompositedVideo** *(dict) --* The configuration for the composited video artifacts concatenation. 
* **State** *(string) --* Enables or disables the configuration object. * **Sinks** *(list) --* The data sinks of the concatenation pipeline. * *(dict) --* The data sink of the configuration object. * **Type** *(string) --* The type of data sink in the configuration object. * **S3BucketSinkConfiguration** *(dict) --* The configuration settings for an Amazon S3 bucket sink. * **Destination** *(string) --* The destination URL of the S3 bucket. * **Status** *(string) --* The status of the concatenation pipeline. * **CreatedTimestamp** *(datetime) --* The time at which the concatenation pipeline was created. * **UpdatedTimestamp** *(datetime) --* The time at which the concatenation pipeline was last updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / create_media_live_connector_pipeline create_media_live_connector_pipeline ************************************ ChimeSDKMediaPipelines.Client.create_media_live_connector_pipeline(**kwargs) Creates a media live connector pipeline in an Amazon Chime SDK meeting. 
See also: AWS API Documentation **Request Syntax** response = client.create_media_live_connector_pipeline( Sources=[ { 'SourceType': 'ChimeSdkMeeting', 'ChimeSdkMeetingLiveConnectorConfiguration': { 'Arn': 'string', 'MuxType': 'AudioWithCompositedVideo'|'AudioWithActiveSpeakerVideo', 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } }, 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } } } }, ], Sinks=[ { 'SinkType': 'RTMP', 'RTMPConfiguration': { 'Url': 'string', 'AudioChannels': 'Stereo'|'Mono', 'AudioSampleRate': 'string' } }, ], ClientRequestToken='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **Sources** (*list*) -- **[REQUIRED]** The media live connector pipeline's data sources. * *(dict) --* The data source configuration object of a streaming media pipeline. * **SourceType** *(string) --* **[REQUIRED]** The source configuration's media source type. * **ChimeSdkMeetingLiveConnectorConfiguration** *(dict) --* **[REQUIRED]** The configuration settings of the connector pipeline. 
* **Arn** *(string) --* **[REQUIRED]** The configuration object's Chime SDK meeting ARN. * **MuxType** *(string) --* **[REQUIRED]** The configuration object's multiplex type. * **CompositedVideo** *(dict) --* The media pipeline's composited video. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. * **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* **[REQUIRED]** The "GridView" configuration setting. * **ContentShareLayout** *(string) --* **[REQUIRED]** Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. * **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. 
* **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. * **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SourceConfiguration** *(dict) --* The source configuration settings of the media pipeline's configuration object. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. * *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **Sinks** (*list*) -- **[REQUIRED]** The media live connector pipeline's data sinks. * *(dict) --* The media pipeline's sink configuration settings. * **SinkType** *(string) --* **[REQUIRED]** The sink configuration's sink type. * **RTMPConfiguration** *(dict) --* **[REQUIRED]** The sink configuration's RTMP configuration settings. * **Url** *(string) --* **[REQUIRED]** The URL of the RTMP configuration. * **AudioChannels** *(string) --* The audio channels set for the RTMP configuration * **AudioSampleRate** *(string) --* The audio sample rate set for the RTMP configuration. Default: 48000. * **ClientRequestToken** (*string*) -- The token assigned to the client making the request. This field is autopopulated if not provided. * **Tags** (*list*) -- The tags associated with the media live connector pipeline. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. 
* **Value** *(string) --* **[REQUIRED]** The value half of a tag. Return type: dict Returns: **Response Syntax** { 'MediaLiveConnectorPipeline': { 'Sources': [ { 'SourceType': 'ChimeSdkMeeting', 'ChimeSdkMeetingLiveConnectorConfiguration': { 'Arn': 'string', 'MuxType': 'AudioWithCompositedVideo'|'AudioWithActiveSpeakerVideo', 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } }, 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } } } }, ], 'Sinks': [ { 'SinkType': 'RTMP', 'RTMPConfiguration': { 'Url': 'string', 'AudioChannels': 'Stereo'|'Mono', 'AudioSampleRate': 'string' } }, ], 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **MediaLiveConnectorPipeline** *(dict) --* The new media live connector pipeline. * **Sources** *(list) --* The connector pipeline's data sources. 
* *(dict) --* The data source configuration object of a streaming media pipeline. * **SourceType** *(string) --* The source configuration's media source type. * **ChimeSdkMeetingLiveConnectorConfiguration** *(dict) --* The configuration settings of the connector pipeline. * **Arn** *(string) --* The configuration object's Chime SDK meeting ARN. * **MuxType** *(string) --* The configuration object's multiplex type. * **CompositedVideo** *(dict) --* The media pipeline's composited video. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. * **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* The "GridView" configuration setting. * **ContentShareLayout** *(string) --* Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. * **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. 
* **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. * **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. * **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SourceConfiguration** *(dict) --* The source configuration settings of the media pipeline's configuration object. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. * *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **Sinks** *(list) --* The connector pipeline's data sinks. * *(dict) --* The media pipeline's sink configuration settings. * **SinkType** *(string) --* The sink configuration's sink type. * **RTMPConfiguration** *(dict) --* The sink configuration's RTMP configuration settings. * **Url** *(string) --* The URL of the RTMP configuration. * **AudioChannels** *(string) --* The audio channels set for the RTMP configuration. * **AudioSampleRate** *(string) --* The audio sample rate set for the RTMP configuration. Default: 48000. * **MediaPipelineId** *(string) --* The connector pipeline's ID. * **MediaPipelineArn** *(string) --* The connector pipeline's ARN. * **Status** *(string) --* The connector pipeline's status. 
* **CreatedTimestamp** *(datetime) --* The time at which the connector pipeline was created. * **UpdatedTimestamp** *(datetime) --* The time at which the connector pipeline was last updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / get_media_capture_pipeline get_media_capture_pipeline ************************** ChimeSDKMediaPipelines.Client.get_media_capture_pipeline(**kwargs) Gets an existing media pipeline. See also: AWS API Documentation **Request Syntax** response = client.get_media_capture_pipeline( MediaPipelineId='string' ) Parameters: **MediaPipelineId** (*string*) -- **[REQUIRED]** The ID of the pipeline that you want to get. 
Return type: dict Returns: **Response Syntax** { 'MediaCapturePipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'SourceType': 'ChimeSdkMeeting', 'SourceArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'SinkType': 'S3Bucket', 'SinkArn': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1), 'ChimeSdkMeetingConfiguration': { 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } }, 'ArtifactsConfiguration': { 'Audio': { 'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo' }, 'Video': { 'State': 'Enabled'|'Disabled', 'MuxType': 'VideoOnly' }, 'Content': { 'State': 'Enabled'|'Disabled', 'MuxType': 'ContentOnly' }, 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } } } }, 'SseAwsKeyManagementParams': { 'AwsKmsKeyId': 'string', 'AwsKmsEncryptionContext': 'string' }, 'SinkIamRoleArn': 'string' } } **Response Structure** * *(dict) --* * **MediaCapturePipeline** *(dict) --* The 
media pipeline object. * **MediaPipelineId** *(string) --* The ID of a media pipeline. * **MediaPipelineArn** *(string) --* The ARN of the media capture pipeline. * **SourceType** *(string) --* Source type from which media artifacts are saved. You must use "ChimeSdkMeeting". * **SourceArn** *(string) --* ARN of the source from which the media artifacts are saved. * **Status** *(string) --* The status of the media pipeline. * **SinkType** *(string) --* Destination type to which the media artifacts are saved. You must use an S3 Bucket. * **SinkArn** *(string) --* ARN of the destination to which the media artifacts are saved. * **CreatedTimestamp** *(datetime) --* The time at which the pipeline was created, in ISO 8601 format. * **UpdatedTimestamp** *(datetime) --* The time at which the pipeline was updated, in ISO 8601 format. * **ChimeSdkMeetingConfiguration** *(dict) --* The configuration for a specified media pipeline. "SourceType" must be "ChimeSdkMeeting". * **SourceConfiguration** *(dict) --* The source configuration for a specified media pipeline. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. * *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **ArtifactsConfiguration** *(dict) --* The configuration for the artifacts in an Amazon Chime SDK meeting. * **Audio** *(dict) --* The configuration for the audio artifacts. * **MuxType** *(string) --* The MUX type of the audio artifact configuration object. * **Video** *(dict) --* The configuration for the video artifacts. * **State** *(string) --* Indicates whether the video artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the video artifact configuration object. 
* **Content** *(dict) --* The configuration for the content artifacts. * **State** *(string) --* Indicates whether the content artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the artifact configuration. * **CompositedVideo** *(dict) --* Enables video compositing. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. * **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* The "GridView" configuration setting. * **ContentShareLayout** *(string) --* Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. * **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. 
* **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. * **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SseAwsKeyManagementParams** *(dict) --* An object that contains server-side encryption parameters used by the media capture pipeline. A media concatenation pipeline that takes a media capture pipeline as its media source can also use these parameters. * **AwsKmsKeyId** *(string) --* The KMS key you want to use to encrypt your media pipeline output. A concatenation pipeline requires decryption. If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways: * Use the KMS key ID itself. For example, "1234abcd-12ab-34cd-56ef-1234567890ab". * Use an alias for the KMS key ID. For example, "alias/ExampleAlias". * Use the Amazon Resource Name (ARN) for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways: * Use the ARN for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If you don't specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3). 
Note that the role specified in the "SinkIamRoleArn" request parameter must have permission to use the specified KMS key. * **AwsKmsEncryptionContext** *(string) --* A base64-encoded string of UTF-8 encoded JSON that contains the encryption context as non-secret key-value pairs, known as encryption context pairs, which provide an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS in the *Key Management Service Developer Guide*. * **SinkIamRoleArn** *(string) --* The Amazon Resource Name (ARN) of the sink role to be used with "AwsKmsKeyId" in "SseAwsKeyManagementParams". **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / update_media_insights_pipeline_configuration update_media_insights_pipeline_configuration ******************************************** ChimeSDKMediaPipelines.Client.update_media_insights_pipeline_configuration(**kwargs) Updates the media insights pipeline's configuration settings. 
See also: AWS API Documentation **Request Syntax** response = client.update_media_insights_pipeline_configuration( Identifier='string', ResourceAccessRoleArn='string', RealTimeAlertConfiguration={ 'Disabled': True|False, 'Rules': [ { 'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection', 'KeywordMatchConfiguration': { 'RuleName': 'string', 'Keywords': [ 'string', ], 'Negate': True|False }, 'SentimentConfiguration': { 'RuleName': 'string', 'SentimentType': 'NEGATIVE', 'TimePeriod': 123 }, 'IssueDetectionConfiguration': { 'RuleName': 'string' } }, ] }, Elements=[ { 'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink', 'AmazonTranscribeCallAnalyticsProcessorConfiguration': { 'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'LanguageModelName': 'string', 'EnablePartialResultsStabilization': True|False, 'PartialResultsStability': 'high'|'medium'|'low', 'ContentIdentificationType': 'PII', 'ContentRedactionType': 'PII', 'PiiEntityTypes': 'string', 'FilterPartialResults': True|False, 'PostCallAnalyticsSettings': { 'OutputLocation': 'string', 'DataAccessRoleArn': 'string', 'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted', 'OutputEncryptionKMSKeyId': 'string' }, 'CallAnalyticsStreamCategories': [ 'string', ] }, 'AmazonTranscribeProcessorConfiguration': { 'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'ShowSpeakerLabel': True|False, 'EnablePartialResultsStabilization': True|False, 'PartialResultsStability': 'high'|'medium'|'low', 'ContentIdentificationType': 'PII', 'ContentRedactionType': 'PII', 
'PiiEntityTypes': 'string', 'LanguageModelName': 'string', 'FilterPartialResults': True|False, 'IdentifyLanguage': True|False, 'IdentifyMultipleLanguages': True|False, 'LanguageOptions': 'string', 'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyNames': 'string', 'VocabularyFilterNames': 'string' }, 'KinesisDataStreamSinkConfiguration': { 'InsightsTarget': 'string' }, 'S3RecordingSinkConfiguration': { 'Destination': 'string', 'RecordingFileFormat': 'Wav'|'Opus' }, 'VoiceAnalyticsProcessorConfiguration': { 'SpeakerSearchStatus': 'Enabled'|'Disabled', 'VoiceToneAnalysisStatus': 'Enabled'|'Disabled' }, 'LambdaFunctionSinkConfiguration': { 'InsightsTarget': 'string' }, 'SqsQueueSinkConfiguration': { 'InsightsTarget': 'string' }, 'SnsTopicSinkConfiguration': { 'InsightsTarget': 'string' }, 'VoiceEnhancementSinkConfiguration': { 'Disabled': True|False } }, ] ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier for the resource to be updated. Valid values include the name and ARN of the media insights pipeline configuration. * **ResourceAccessRoleArn** (*string*) -- **[REQUIRED]** The ARN of the role used by the service to access Amazon Web Services resources. * **RealTimeAlertConfiguration** (*dict*) -- The configuration settings for real-time alerts for the media insights pipeline. * **Disabled** *(boolean) --* Turns off real-time alerts. * **Rules** *(list) --* The rules in the alert. Rules specify the words or phrases that you want to be notified about. * *(dict) --* Specifies the words or phrases that trigger an alert. * **Type** *(string) --* **[REQUIRED]** The type of alert rule. * **KeywordMatchConfiguration** *(dict) --* Specifies the settings for matching the keywords in a real-time alert rule. * **RuleName** *(string) --* **[REQUIRED]** The name of the keyword match rule. * **Keywords** *(list) --* **[REQUIRED]** The keywords or phrases that you want to match. 
* *(string) --* * **Negate** *(boolean) --* Matches keywords or phrases on their presence or absence. If set to "TRUE", the rule matches when all the specified keywords or phrases are absent. Default: "FALSE". * **SentimentConfiguration** *(dict) --* Specifies the settings for predicting sentiment in a real-time alert rule. * **RuleName** *(string) --* **[REQUIRED]** The name of the rule in the sentiment configuration. * **SentimentType** *(string) --* **[REQUIRED]** The type of sentiment, "POSITIVE", "NEGATIVE", or "NEUTRAL". * **TimePeriod** *(integer) --* **[REQUIRED]** Specifies the analysis interval. * **IssueDetectionConfiguration** *(dict) --* Specifies the issue detection settings for a real-time alert rule. * **RuleName** *(string) --* **[REQUIRED]** The name of the issue detection rule. * **Elements** (*list*) -- **[REQUIRED]** The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream. * *(dict) --* An element in a media insights pipeline configuration. * **Type** *(string) --* **[REQUIRED]** The element type. * **AmazonTranscribeCallAnalyticsProcessorConfiguration** *(dict) --* The analytics configuration settings for transcribing audio in a media insights pipeline configuration element. * **LanguageCode** *(string) --* **[REQUIRED]** The language code in the configuration. * **VocabularyName** *(string) --* Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive. If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription. For more information, see Custom vocabularies in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* Specifies the name of the custom vocabulary filter to use when processing a transcription. 
Note that vocabulary filter names are case sensitive. If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription. For more information, see Using vocabulary filtering with unwanted words in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* Specifies how to apply a vocabulary filter to a transcript. To replace words with *******, choose "mask". To delete words, choose "remove". To flag words without changing them, choose "tag". * **LanguageModelName** *(string) --* Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* Specifies the level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. 
Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in "PiiEntityTypes" is redacted upon complete transcription of an audio segment. You can’t set "ContentRedactionType" and "ContentIdentificationType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". Length Constraints: Minimum length of 1. Maximum length of 300. * **FilterPartialResults** *(boolean) --* If true, "UtteranceEvents" with "IsPartial: true" are filtered out of the insights target. * **PostCallAnalyticsSettings** *(dict) --* The settings for a post-call analysis task in an analytics configuration. * **OutputLocation** *(string) --* **[REQUIRED]** The URL of the Amazon S3 bucket that contains the post-call data. 
* **DataAccessRoleArn** *(string) --* **[REQUIRED]** The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the *Amazon Transcribe Developer Guide*. * **ContentRedactionOutput** *(string) --* The content redaction output settings for a post-call analysis task. * **OutputEncryptionKMSKeyId** *(string) --* The ID of the KMS (Key Management Service) key used to encrypt the output. * **CallAnalyticsStreamCategories** *(list) --* By default, all "CategoryEvents" are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target. * *(string) --* * **AmazonTranscribeProcessorConfiguration** *(dict) --* The transcription processor configuration settings in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code that represents the language spoken in your audio. If you're unsure of the language spoken in your audio, consider using "IdentifyLanguage" to enable automatic language identification. For a list of languages that real-time Call Analytics supports, see the Supported languages table in the *Amazon Transcribe Developer Guide*. * **VocabularyName** *(string) --* The name of the custom vocabulary that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* The name of the custom vocabulary filter that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* The vocabulary filtering method used in your Call Analytics transcription. * **ShowSpeakerLabel** *(boolean) --* Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file. 
For more information, see Partitioning speakers (diarization) in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* The level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment. You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. 
* **PiiEntityTypes** *(string) --* The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". If you leave this parameter empty, the default behavior is equivalent to "ALL". * **LanguageModelName** *(string) --* The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **FilterPartialResults** *(boolean) --* If true, "TranscriptEvents" with "IsPartial: true" are filtered out of the insights target. * **IdentifyLanguage** *(boolean) --* Turns language identification on or off. * **IdentifyMultipleLanguages** *(boolean) --* Turns language identification on or off for multiple languages. Note: Calls to this API must include a "LanguageCode", "IdentifyLanguage", or "IdentifyMultipleLanguages" parameter. If you include more than one of those parameters, your transcription job fails. * **LanguageOptions** *(string) --* The language options for the transcription, such as automatic language detection. * **PreferredLanguage** *(string) --* The preferred language for the transcription. 
* **VocabularyNames** *(string) --* The names of the custom vocabulary or vocabularies used during transcription. * **VocabularyFilterNames** *(string) --* The names of the custom vocabulary filter or filters used during transcription. * **KinesisDataStreamSinkConfiguration** *(dict) --* The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **S3RecordingSinkConfiguration** *(dict) --* The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element. * **Destination** *(string) --* The default URI of the Amazon S3 bucket used as the recording sink. * **RecordingFileFormat** *(string) --* The default file format for the media files sent to the Amazon S3 bucket. * **VoiceAnalyticsProcessorConfiguration** *(dict) --* The voice analytics configuration settings in a media insights pipeline configuration element. * **SpeakerSearchStatus** *(string) --* The status of the speaker search task. * **VoiceToneAnalysisStatus** *(string) --* The status of the voice tone analysis task. * **LambdaFunctionSinkConfiguration** *(dict) --* The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **SqsQueueSinkConfiguration** *(dict) --* The configuration settings for an SQS queue sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SQS sink. * **SnsTopicSinkConfiguration** *(dict) --* The configuration settings for an SNS topic sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SNS sink. * **VoiceEnhancementSinkConfiguration** *(dict) --* The configuration settings for the voice enhancement sink in a media insights pipeline configuration element. 
* **Disabled** *(boolean) --* Disables the "VoiceEnhancementSinkConfiguration" element. Return type: dict Returns: **Response Syntax** { 'MediaInsightsPipelineConfiguration': { 'MediaInsightsPipelineConfigurationName': 'string', 'MediaInsightsPipelineConfigurationArn': 'string', 'ResourceAccessRoleArn': 'string', 'RealTimeAlertConfiguration': { 'Disabled': True|False, 'Rules': [ { 'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection', 'KeywordMatchConfiguration': { 'RuleName': 'string', 'Keywords': [ 'string', ], 'Negate': True|False }, 'SentimentConfiguration': { 'RuleName': 'string', 'SentimentType': 'NEGATIVE', 'TimePeriod': 123 }, 'IssueDetectionConfiguration': { 'RuleName': 'string' } }, ] }, 'Elements': [ { 'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink', 'AmazonTranscribeCallAnalyticsProcessorConfiguration': { 'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'LanguageModelName': 'string', 'EnablePartialResultsStabilization': True|False, 'PartialResultsStability': 'high'|'medium'|'low', 'ContentIdentificationType': 'PII', 'ContentRedactionType': 'PII', 'PiiEntityTypes': 'string', 'FilterPartialResults': True|False, 'PostCallAnalyticsSettings': { 'OutputLocation': 'string', 'DataAccessRoleArn': 'string', 'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted', 'OutputEncryptionKMSKeyId': 'string' }, 'CallAnalyticsStreamCategories': [ 'string', ] }, 'AmazonTranscribeProcessorConfiguration': { 'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'ShowSpeakerLabel': True|False, 
'EnablePartialResultsStabilization': True|False, 'PartialResultsStability': 'high'|'medium'|'low', 'ContentIdentificationType': 'PII', 'ContentRedactionType': 'PII', 'PiiEntityTypes': 'string', 'LanguageModelName': 'string', 'FilterPartialResults': True|False, 'IdentifyLanguage': True|False, 'IdentifyMultipleLanguages': True|False, 'LanguageOptions': 'string', 'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyNames': 'string', 'VocabularyFilterNames': 'string' }, 'KinesisDataStreamSinkConfiguration': { 'InsightsTarget': 'string' }, 'S3RecordingSinkConfiguration': { 'Destination': 'string', 'RecordingFileFormat': 'Wav'|'Opus' }, 'VoiceAnalyticsProcessorConfiguration': { 'SpeakerSearchStatus': 'Enabled'|'Disabled', 'VoiceToneAnalysisStatus': 'Enabled'|'Disabled' }, 'LambdaFunctionSinkConfiguration': { 'InsightsTarget': 'string' }, 'SqsQueueSinkConfiguration': { 'InsightsTarget': 'string' }, 'SnsTopicSinkConfiguration': { 'InsightsTarget': 'string' }, 'VoiceEnhancementSinkConfiguration': { 'Disabled': True|False } }, ], 'MediaInsightsPipelineConfigurationId': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **MediaInsightsPipelineConfiguration** *(dict) --* The updated configuration settings. * **MediaInsightsPipelineConfigurationName** *(string) --* The name of the configuration. * **MediaInsightsPipelineConfigurationArn** *(string) --* The ARN of the configuration. * **ResourceAccessRoleArn** *(string) --* The ARN of the role used by the service to access Amazon Web Services resources. * **RealTimeAlertConfiguration** *(dict) --* Lists the rules that trigger a real-time alert. * **Disabled** *(boolean) --* Turns off real-time alerts. * **Rules** *(list) --* The rules in the alert. Rules specify the words or phrases that you want to be notified about. * *(dict) --* Specifies the words or phrases that trigger an alert. 
* **Type** *(string) --* The type of alert rule. * **KeywordMatchConfiguration** *(dict) --* Specifies the settings for matching the keywords in a real-time alert rule. * **RuleName** *(string) --* The name of the keyword match rule. * **Keywords** *(list) --* The keywords or phrases that you want to match. * *(string) --* * **Negate** *(boolean) --* Matches keywords or phrases on their presence or absence. If set to "TRUE", the rule matches when all the specified keywords or phrases are absent. Default: "FALSE". * **SentimentConfiguration** *(dict) --* Specifies the settings for predicting sentiment in a real-time alert rule. * **RuleName** *(string) --* The name of the rule in the sentiment configuration. * **SentimentType** *(string) --* The type of sentiment, "POSITIVE", "NEGATIVE", or "NEUTRAL". * **TimePeriod** *(integer) --* Specifies the analysis interval. * **IssueDetectionConfiguration** *(dict) --* Specifies the issue detection settings for a real-time alert rule. * **RuleName** *(string) --* The name of the issue detection rule. * **Elements** *(list) --* The elements in the configuration. * *(dict) --* An element in a media insights pipeline configuration. * **Type** *(string) --* The element type. * **AmazonTranscribeCallAnalyticsProcessorConfiguration** *(dict) --* The analytics configuration settings for transcribing audio in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code in the configuration. * **VocabularyName** *(string) --* Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive. If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription. For more information, see Custom vocabularies in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. 
* **VocabularyFilterName** *(string) --* Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive. If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription. For more information, see Using vocabulary filtering with unwanted words in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* Specifies how to apply a vocabulary filter to a transcript. To replace words with *******, choose "mask". To delete words, choose "remove". To flag words without changing them, choose "tag". * **LanguageModelName** *(string) --* Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* Specifies the level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. 
* **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in "PiiEntityTypes" is redacted upon complete transcription of an audio segment. You can’t set "ContentRedactionType" and "ContentIdentificationType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". Length Constraints: Minimum length of 1. Maximum length of 300. * **FilterPartialResults** *(boolean) --* If true, "UtteranceEvents" with "IsPartial: true" are filtered out of the insights target. 
* **PostCallAnalyticsSettings** *(dict) --* The settings for a post-call analysis task in an analytics configuration. * **OutputLocation** *(string) --* The URL of the Amazon S3 bucket that contains the post-call data. * **DataAccessRoleArn** *(string) --* The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the *Amazon Transcribe Developer Guide*. * **ContentRedactionOutput** *(string) --* The content redaction output settings for a post-call analysis task. * **OutputEncryptionKMSKeyId** *(string) --* The ID of the KMS (Key Management Service) key used to encrypt the output. * **CallAnalyticsStreamCategories** *(list) --* By default, all "CategoryEvents" are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target. * *(string) --* * **AmazonTranscribeProcessorConfiguration** *(dict) --* The transcription processor configuration settings in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code that represents the language spoken in your audio. If you're unsure of the language spoken in your audio, consider using "IdentifyLanguage" to enable automatic language identification. For a list of languages that real-time Call Analytics supports, see the Supported languages table in the *Amazon Transcribe Developer Guide*. * **VocabularyName** *(string) --* The name of the custom vocabulary that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* The name of the custom vocabulary filter that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* The vocabulary filtering method used in your Call Analytics transcription. 
* **ShowSpeakerLabel** *(boolean) --* Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file. For more information, see Partitioning speakers (diarization) in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* The level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment. You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a "BadRequestException". 
For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". If you leave this parameter empty, the default behavior is equivalent to "ALL". * **LanguageModelName** *(string) --* The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **FilterPartialResults** *(boolean) --* If true, "TranscriptEvents" with "IsPartial: true" are filtered out of the insights target. * **IdentifyLanguage** *(boolean) --* Turns language identification on or off. * **IdentifyMultipleLanguages** *(boolean) --* Turns language identification on or off for multiple languages. Note: Calls to this API must include a "LanguageCode", "IdentifyLanguage", or "IdentifyMultipleLanguages" parameter. If you include more than one of those parameters, your transcription job fails. * **LanguageOptions** *(string) --* The language options for the transcription, such as automatic language detection. 
* **PreferredLanguage** *(string) --* The preferred language for the transcription. * **VocabularyNames** *(string) --* The names of the custom vocabulary or vocabularies used during transcription. * **VocabularyFilterNames** *(string) --* The names of the custom vocabulary filter or filters used during transcription. * **KinesisDataStreamSinkConfiguration** *(dict) --* The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **S3RecordingSinkConfiguration** *(dict) --* The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element. * **Destination** *(string) --* The default URI of the Amazon S3 bucket used as the recording sink. * **RecordingFileFormat** *(string) --* The default file format for the media files sent to the Amazon S3 bucket. * **VoiceAnalyticsProcessorConfiguration** *(dict) --* The voice analytics configuration settings in a media insights pipeline configuration element. * **SpeakerSearchStatus** *(string) --* The status of the speaker search task. * **VoiceToneAnalysisStatus** *(string) --* The status of the voice tone analysis task. * **LambdaFunctionSinkConfiguration** *(dict) --* The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **SqsQueueSinkConfiguration** *(dict) --* The configuration settings for an SQS queue sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SQS sink. * **SnsTopicSinkConfiguration** *(dict) --* The configuration settings for an SNS topic sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SNS sink. 
* **VoiceEnhancementSinkConfiguration** *(dict) --* The configuration settings for the voice enhancement sink in a media insights pipeline configuration element. * **Disabled** *(boolean) --* Disables the "VoiceEnhancementSinkConfiguration" element. * **MediaInsightsPipelineConfigurationId** *(string) --* The ID of the configuration. * **CreatedTimestamp** *(datetime) --* The time at which the configuration was created. * **UpdatedTimestamp** *(datetime) --* The time at which the configuration was last updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / stop_speaker_search_task stop_speaker_search_task ************************ ChimeSDKMediaPipelines.Client.stop_speaker_search_task(**kwargs) Stops a speaker search task. See also: AWS API Documentation **Request Syntax** response = client.stop_speaker_search_task( Identifier='string', SpeakerSearchTaskId='string' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline. * **SpeakerSearchTaskId** (*string*) -- **[REQUIRED]** The speaker search task ID. 
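The call above returns no body, and a "ConflictException" typically indicates the task is not in a state that allows stopping. A minimal sketch of an idempotent wrapper (the helper name is illustrative, and the client is passed in rather than created inline):

```python
def stop_speaker_search(client, pipeline_id, task_id):
    """Stop a speaker search task on a media insights pipeline.

    Returns True when the stop call succeeds, and False when the service
    raises ConflictException (for example, the task has already stopped).
    Works with any boto3 'chime-sdk-media-pipelines' client passed in.
    """
    try:
        client.stop_speaker_search_task(
            Identifier=pipeline_id,
            SpeakerSearchTaskId=task_id,
        )
        return True
    except client.exceptions.ConflictException:
        return False
```

With a real client this would be called as `stop_speaker_search(boto3.client('chime-sdk-media-pipelines'), pipeline_id, task_id)`, where both identifiers are placeholders for your own values.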
Returns: None **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / create_media_insights_pipeline create_media_insights_pipeline ****************************** ChimeSDKMediaPipelines.Client.create_media_insights_pipeline(**kwargs) Creates a media insights pipeline. See also: AWS API Documentation **Request Syntax** response = client.create_media_insights_pipeline( MediaInsightsPipelineConfigurationArn='string', KinesisVideoStreamSourceRuntimeConfiguration={ 'Streams': [ { 'StreamArn': 'string', 'FragmentNumber': 'string', 'StreamChannelDefinition': { 'NumberOfChannels': 123, 'ChannelDefinitions': [ { 'ChannelId': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER' }, ] } }, ], 'MediaEncoding': 'pcm', 'MediaSampleRate': 123 }, MediaInsightsRuntimeMetadata={ 'string': 'string' }, KinesisVideoStreamRecordingSourceRuntimeConfiguration={ 'Streams': [ { 'StreamArn': 'string' }, ], 'FragmentSelector': { 'FragmentSelectorType': 'ProducerTimestamp'|'ServerTimestamp', 'TimestampRange': { 'StartTimestamp': datetime(2015, 1, 1), 'EndTimestamp': datetime(2015, 1, 1) } } }, S3RecordingSinkRuntimeConfiguration={ 'Destination': 'string', 'RecordingFileFormat': 'Wav'|'Opus' }, Tags=[ { 'Key': 'string', 'Value': 'string' }, ], ClientRequestToken='string' ) Parameters: * **MediaInsightsPipelineConfigurationArn** (*string*) -- **[REQUIRED]** The ARN of the pipeline's configuration. 
* **KinesisVideoStreamSourceRuntimeConfiguration** (*dict*) -- The runtime configuration for the Kinesis video stream source of the media insights pipeline. * **Streams** *(list) --* **[REQUIRED]** The streams in the source runtime configuration of a Kinesis video stream. * *(dict) --* The configuration settings for a stream. * **StreamArn** *(string) --* **[REQUIRED]** The ARN of the stream. * **FragmentNumber** *(string) --* The unique identifier of the fragment to begin processing. * **StreamChannelDefinition** *(dict) --* **[REQUIRED]** The streaming channel definition in the stream configuration. * **NumberOfChannels** *(integer) --* **[REQUIRED]** The number of channels in a streaming channel. * **ChannelDefinitions** *(list) --* The definitions of the channels in a streaming channel. * *(dict) --* Defines an audio channel in a Kinesis video stream. * **ChannelId** *(integer) --* **[REQUIRED]** The channel ID. * **ParticipantRole** *(string) --* Specifies whether the audio in a channel belongs to the "AGENT" or "CUSTOMER". * **MediaEncoding** *(string) --* **[REQUIRED]** Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV) For more information, see Media formats in the *Amazon Transcribe Developer Guide*. * **MediaSampleRate** *(integer) --* **[REQUIRED]** The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio. Valid Range: Minimum value of 8000. Maximum value of 48000. * **MediaInsightsRuntimeMetadata** (*dict*) -- The runtime metadata for the media insights pipeline. Consists of a key-value map of strings. 
* *(string) --* * *(string) --* * **KinesisVideoStreamRecordingSourceRuntimeConfiguration** (*dict*) -- The runtime configuration for the Kinesis video recording stream source. * **Streams** *(list) --* **[REQUIRED]** The stream or streams to be recorded. * *(dict) --* A structure that holds the settings for recording media. * **StreamArn** *(string) --* The ARN of the recording stream. * **FragmentSelector** *(dict) --* **[REQUIRED]** Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream. * **FragmentSelectorType** *(string) --* **[REQUIRED]** The origin of the timestamps to use, "Server" or "Producer". For more information, see StartSelectorType in the *Amazon Kinesis Video Streams Developer Guide*. * **TimestampRange** *(dict) --* **[REQUIRED]** The range of timestamps to return. * **StartTimestamp** *(datetime) --* **[REQUIRED]** The starting timestamp for the specified range. * **EndTimestamp** *(datetime) --* **[REQUIRED]** The ending timestamp for the specified range. * **S3RecordingSinkRuntimeConfiguration** (*dict*) -- The runtime configuration for the S3 recording sink. If specified, the settings in this structure override any settings in "S3RecordingSinkConfiguration". * **Destination** *(string) --* **[REQUIRED]** The URI of the S3 bucket used as the sink. * **RecordingFileFormat** *(string) --* **[REQUIRED]** The file format for the media files sent to the Amazon S3 bucket. * **Tags** (*list*) -- The tags assigned to the media insights pipeline. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. * **Value** *(string) --* **[REQUIRED]** The value half of a tag. * **ClientRequestToken** (*string*) -- The unique identifier for the media insights pipeline request. This field is autopopulated if not provided. 
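As a sketch of assembling the request above, the helper below builds the "KinesisVideoStreamSourceRuntimeConfiguration" argument from a list of stream ARNs. The two-channel AGENT/CUSTOMER layout and the 16,000 Hz default are illustrative assumptions, not requirements; match them to your actual streams.

```python
def kvs_source_runtime_config(stream_arns, sample_rate=16000):
    """Build a KinesisVideoStreamSourceRuntimeConfiguration dict for
    create_media_insights_pipeline.

    Assumes each stream carries two-channel PCM audio with the agent on
    channel 0 and the customer on channel 1 (an illustrative layout).
    """
    return {
        'Streams': [
            {
                'StreamArn': arn,
                'StreamChannelDefinition': {
                    'NumberOfChannels': 2,
                    'ChannelDefinitions': [
                        {'ChannelId': 0, 'ParticipantRole': 'AGENT'},
                        {'ChannelId': 1, 'ParticipantRole': 'CUSTOMER'},
                    ],
                },
            }
            for arn in stream_arns
        ],
        # Only signed 16-bit little-endian PCM is supported.
        'MediaEncoding': 'pcm',
        # Must match the audio; valid range is 8000-48000 Hz.
        'MediaSampleRate': sample_rate,
    }
```

The result is passed as `KinesisVideoStreamSourceRuntimeConfiguration=kvs_source_runtime_config([...])` alongside `MediaInsightsPipelineConfigurationArn`, with all ARNs being placeholders for your own resources.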
Return type: dict Returns: **Response Syntax** { 'MediaInsightsPipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'MediaInsightsPipelineConfigurationArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'KinesisVideoStreamSourceRuntimeConfiguration': { 'Streams': [ { 'StreamArn': 'string', 'FragmentNumber': 'string', 'StreamChannelDefinition': { 'NumberOfChannels': 123, 'ChannelDefinitions': [ { 'ChannelId': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER' }, ] } }, ], 'MediaEncoding': 'pcm', 'MediaSampleRate': 123 }, 'MediaInsightsRuntimeMetadata': { 'string': 'string' }, 'KinesisVideoStreamRecordingSourceRuntimeConfiguration': { 'Streams': [ { 'StreamArn': 'string' }, ], 'FragmentSelector': { 'FragmentSelectorType': 'ProducerTimestamp'|'ServerTimestamp', 'TimestampRange': { 'StartTimestamp': datetime(2015, 1, 1), 'EndTimestamp': datetime(2015, 1, 1) } } }, 'S3RecordingSinkRuntimeConfiguration': { 'Destination': 'string', 'RecordingFileFormat': 'Wav'|'Opus' }, 'CreatedTimestamp': datetime(2015, 1, 1), 'ElementStatuses': [ { 'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink', 'Status': 'NotStarted'|'NotSupported'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused' }, ] } } **Response Structure** * *(dict) --* * **MediaInsightsPipeline** *(dict) --* The media insights pipeline object. * **MediaPipelineId** *(string) --* The ID of a media insights pipeline. * **MediaPipelineArn** *(string) --* The ARN of a media insights pipeline. * **MediaInsightsPipelineConfigurationArn** *(string) --* The ARN of a media insight pipeline's configuration settings. * **Status** *(string) --* The status of a media insights pipeline. 
* **KinesisVideoStreamSourceRuntimeConfiguration** *(dict) --* The configuration settings for a Kinesis runtime video stream in a media insights pipeline. * **Streams** *(list) --* The streams in the source runtime configuration of a Kinesis video stream. * *(dict) --* The configuration settings for a stream. * **StreamArn** *(string) --* The ARN of the stream. * **FragmentNumber** *(string) --* The unique identifier of the fragment to begin processing. * **StreamChannelDefinition** *(dict) --* The streaming channel definition in the stream configuration. * **NumberOfChannels** *(integer) --* The number of channels in a streaming channel. * **ChannelDefinitions** *(list) --* The definitions of the channels in a streaming channel. * *(dict) --* Defines an audio channel in a Kinesis video stream. * **ChannelId** *(integer) --* The channel ID. * **ParticipantRole** *(string) --* Specifies whether the audio in a channel belongs to the "AGENT" or "CUSTOMER". * **MediaEncoding** *(string) --* Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV) For more information, see Media formats in the *Amazon Transcribe Developer Guide*. * **MediaSampleRate** *(integer) --* The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio. Valid Range: Minimum value of 8000. Maximum value of 48000. * **MediaInsightsRuntimeMetadata** *(dict) --* The runtime metadata of a media insights pipeline. * *(string) --* * *(string) --* * **KinesisVideoStreamRecordingSourceRuntimeConfiguration** *(dict) --* The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline. * **Streams** *(list) --* The stream or streams to be recorded. 
* *(dict) --* A structure that holds the settings for recording media. * **StreamArn** *(string) --* The ARN of the recording stream. * **FragmentSelector** *(dict) --* Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream. * **FragmentSelectorType** *(string) --* The origin of the timestamps to use, "Server" or "Producer". For more information, see StartSelectorType in the *Amazon Kinesis Video Streams Developer Guide*. * **TimestampRange** *(dict) --* The range of timestamps to return. * **StartTimestamp** *(datetime) --* The starting timestamp for the specified range. * **EndTimestamp** *(datetime) --* The ending timestamp for the specified range. * **S3RecordingSinkRuntimeConfiguration** *(dict) --* The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline. * **Destination** *(string) --* The URI of the S3 bucket used as the sink. * **RecordingFileFormat** *(string) --* The file format for the media files sent to the Amazon S3 bucket. * **CreatedTimestamp** *(datetime) --* The time at which the media insights pipeline was created. * **ElementStatuses** *(list) --* The statuses that the elements in a media insights pipeline can have during data processing. * *(dict) --* The status of the pipeline element. * **Type** *(string) --* The type of status. * **Status** *(string) --* The element's status. 
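Because each pipeline element reports its status individually in "ElementStatuses", a small helper can surface failures from the response structure documented above. This is a sketch against that documented shape; the function name is illustrative.

```python
def element_failures(media_insights_pipeline):
    """Return the element types whose status is 'Failed' from the
    MediaInsightsPipeline dict of a create_media_insights_pipeline
    response.

    An empty list means no element has failed; elements may still be
    'Initializing' or 'InProgress' rather than fully running.
    """
    return [
        element['Type']
        for element in media_insights_pipeline.get('ElementStatuses', [])
        if element['Status'] == 'Failed'
    ]
```

Typical use: `element_failures(response['MediaInsightsPipeline'])` right after the create call, or after re-fetching the pipeline with get_media_pipeline.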
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / list_tags_for_resource list_tags_for_resource ********************** ChimeSDKMediaPipelines.Client.list_tags_for_resource(**kwargs) Lists the tags available for a media pipeline. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( ResourceARN='string' ) Parameters: **ResourceARN** (*string*) -- **[REQUIRED]** The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's region, resource ID, and pipeline ID. Return type: dict Returns: **Response Syntax** { 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ] } **Response Structure** * *(dict) --* * **Tags** *(list) --* The tags associated with the specified media pipeline. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* The key half of a tag. * **Value** *(string) --* The value half of a tag. 
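The response returns tags as a list of Key/Value dicts, which is awkward for lookups; a one-line helper flattens it into a plain dict. A minimal sketch (the helper name is illustrative):

```python
def tags_to_dict(list_tags_response):
    """Flatten the Tags list of a list_tags_for_resource response
    ([{'Key': ..., 'Value': ...}, ...]) into a plain {key: value} dict."""
    return {
        tag['Key']: tag['Value']
        for tag in list_tags_response.get('Tags', [])
    }
```

With a real client: `tags = tags_to_dict(client.list_tags_for_resource(ResourceARN=pipeline_arn))`, where `pipeline_arn` is a placeholder for your pipeline's ARN.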
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / delete_media_insights_pipeline_configuration delete_media_insights_pipeline_configuration ******************************************** ChimeSDKMediaPipelines.Client.delete_media_insights_pipeline_configuration(**kwargs) Deletes the specified configuration settings. See also: AWS API Documentation **Request Syntax** response = client.delete_media_insights_pipeline_configuration( Identifier='string' ) Parameters: **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be deleted. Valid values include the name and ARN of the media insights pipeline configuration. 
Returns: None **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" get_media_insights_pipeline_configuration ***************************************** ChimeSDKMediaPipelines.Client.get_media_insights_pipeline_configuration(**kwargs) Gets the configuration settings for a media insights pipeline. See also: AWS API Documentation **Request Syntax** response = client.get_media_insights_pipeline_configuration( Identifier='string' ) Parameters: **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the requested resource. Valid values include the name and ARN of the media insights pipeline configuration. 
Return type: dict Returns: **Response Syntax** { 'MediaInsightsPipelineConfiguration': { 'MediaInsightsPipelineConfigurationName': 'string', 'MediaInsightsPipelineConfigurationArn': 'string', 'ResourceAccessRoleArn': 'string', 'RealTimeAlertConfiguration': { 'Disabled': True|False, 'Rules': [ { 'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection', 'KeywordMatchConfiguration': { 'RuleName': 'string', 'Keywords': [ 'string', ], 'Negate': True|False }, 'SentimentConfiguration': { 'RuleName': 'string', 'SentimentType': 'NEGATIVE', 'TimePeriod': 123 }, 'IssueDetectionConfiguration': { 'RuleName': 'string' } }, ] }, 'Elements': [ { 'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink', 'AmazonTranscribeCallAnalyticsProcessorConfiguration': { 'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'LanguageModelName': 'string', 'EnablePartialResultsStabilization': True|False, 'PartialResultsStability': 'high'|'medium'|'low', 'ContentIdentificationType': 'PII', 'ContentRedactionType': 'PII', 'PiiEntityTypes': 'string', 'FilterPartialResults': True|False, 'PostCallAnalyticsSettings': { 'OutputLocation': 'string', 'DataAccessRoleArn': 'string', 'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted', 'OutputEncryptionKMSKeyId': 'string' }, 'CallAnalyticsStreamCategories': [ 'string', ] }, 'AmazonTranscribeProcessorConfiguration': { 'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'ShowSpeakerLabel': True|False, 'EnablePartialResultsStabilization': True|False, 'PartialResultsStability': 'high'|'medium'|'low', 
'ContentIdentificationType': 'PII', 'ContentRedactionType': 'PII', 'PiiEntityTypes': 'string', 'LanguageModelName': 'string', 'FilterPartialResults': True|False, 'IdentifyLanguage': True|False, 'IdentifyMultipleLanguages': True|False, 'LanguageOptions': 'string', 'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR', 'VocabularyNames': 'string', 'VocabularyFilterNames': 'string' }, 'KinesisDataStreamSinkConfiguration': { 'InsightsTarget': 'string' }, 'S3RecordingSinkConfiguration': { 'Destination': 'string', 'RecordingFileFormat': 'Wav'|'Opus' }, 'VoiceAnalyticsProcessorConfiguration': { 'SpeakerSearchStatus': 'Enabled'|'Disabled', 'VoiceToneAnalysisStatus': 'Enabled'|'Disabled' }, 'LambdaFunctionSinkConfiguration': { 'InsightsTarget': 'string' }, 'SqsQueueSinkConfiguration': { 'InsightsTarget': 'string' }, 'SnsTopicSinkConfiguration': { 'InsightsTarget': 'string' }, 'VoiceEnhancementSinkConfiguration': { 'Disabled': True|False } }, ], 'MediaInsightsPipelineConfigurationId': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **MediaInsightsPipelineConfiguration** *(dict) --* The requested media insights pipeline configuration. * **MediaInsightsPipelineConfigurationName** *(string) --* The name of the configuration. * **MediaInsightsPipelineConfigurationArn** *(string) --* The ARN of the configuration. * **ResourceAccessRoleArn** *(string) --* The ARN of the role used by the service to access Amazon Web Services resources. * **RealTimeAlertConfiguration** *(dict) --* Lists the rules that trigger a real-time alert. * **Disabled** *(boolean) --* Turns off real-time alerts. * **Rules** *(list) --* The rules in the alert. Rules specify the words or phrases that you want to be notified about. * *(dict) --* Specifies the words or phrases that trigger an alert. * **Type** *(string) --* The type of alert rule. 
* **KeywordMatchConfiguration** *(dict) --* Specifies the settings for matching the keywords in a real-time alert rule. * **RuleName** *(string) --* The name of the keyword match rule. * **Keywords** *(list) --* The keywords or phrases that you want to match. * *(string) --* * **Negate** *(boolean) --* Matches keywords or phrases on their presence or absence. If set to "TRUE", the rule matches when all the specified keywords or phrases are absent. Default: "FALSE". * **SentimentConfiguration** *(dict) --* Specifies the settings for predicting sentiment in a real-time alert rule. * **RuleName** *(string) --* The name of the rule in the sentiment configuration. * **SentimentType** *(string) --* The type of sentiment, "POSITIVE", "NEGATIVE", or "NEUTRAL". * **TimePeriod** *(integer) --* Specifies the analysis interval. * **IssueDetectionConfiguration** *(dict) --* Specifies the issue detection settings for a real-time alert rule. * **RuleName** *(string) --* The name of the issue detection rule. * **Elements** *(list) --* The elements in the configuration. * *(dict) --* An element in a media insights pipeline configuration. * **Type** *(string) --* The element type. * **AmazonTranscribeCallAnalyticsProcessorConfiguration** *(dict) --* The analytics configuration settings for transcribing audio in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code in the configuration. * **VocabularyName** *(string) --* Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive. If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription. For more information, see Custom vocabularies in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. 
* **VocabularyFilterName** *(string) --* Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive. If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription. For more information, see Using vocabulary filtering with unwanted words in the *Amazon Transcribe Developer Guide*. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* Specifies how to apply a vocabulary filter to a transcript. To replace words with *******, choose "mask". To delete words, choose "remove". To flag words without changing them, choose "tag". * **LanguageModelName** *(string) --* Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* Specifies the level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. 
* **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in "PiiEntityTypes" is redacted upon complete transcription of an audio segment. You can’t set "ContentRedactionType" and "ContentIdentificationType" in the same request. If you do, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". Length Constraints: Minimum length of 1. Maximum length of 300. * **FilterPartialResults** *(boolean) --* If true, "UtteranceEvents" with "IsPartial: true" are filtered out of the insights target. 
* **PostCallAnalyticsSettings** *(dict) --* The settings for a post-call analysis task in an analytics configuration. * **OutputLocation** *(string) --* The URL of the Amazon S3 bucket that contains the post-call data. * **DataAccessRoleArn** *(string) --* The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the *Amazon Transcribe Developer Guide*. * **ContentRedactionOutput** *(string) --* The content redaction output settings for a post-call analysis task. * **OutputEncryptionKMSKeyId** *(string) --* The ID of the KMS (Key Management Service) key used to encrypt the output. * **CallAnalyticsStreamCategories** *(list) --* By default, all "CategoryEvents" are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target. * *(string) --* * **AmazonTranscribeProcessorConfiguration** *(dict) --* The transcription processor configuration settings in a media insights pipeline configuration element. * **LanguageCode** *(string) --* The language code that represents the language spoken in your audio. If you're unsure of the language spoken in your audio, consider using "IdentifyLanguage" to enable automatic language identification. For a list of languages that real-time Call Analytics supports, see the Supported languages table in the *Amazon Transcribe Developer Guide*. * **VocabularyName** *(string) --* The name of the custom vocabulary that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterName** *(string) --* The name of the custom vocabulary filter that you specified in your Call Analytics request. Length Constraints: Minimum length of 1. Maximum length of 200. * **VocabularyFilterMethod** *(string) --* The vocabulary filtering method used in your Call Analytics transcription. 
* **ShowSpeakerLabel** *(boolean) --* Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file. For more information, see Partitioning speakers (diarization) in the *Amazon Transcribe Developer Guide*. * **EnablePartialResultsStabilization** *(boolean) --* Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **PartialResultsStability** *(string) --* The level of stability to use when you enable partial results stabilization ( "EnablePartialResultsStabilization"). Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy. For more information, see Partial-result stabilization in the *Amazon Transcribe Developer Guide*. * **ContentIdentificationType** *(string) --* Labels all personally identifiable information (PII) identified in your transcript. Content identification is performed at the segment level; PII specified in "PiiEntityTypes" is flagged upon complete transcription of an audio segment. You can’t set "ContentIdentificationType" and "ContentRedactionType" in the same request. If you set both, your request returns a "BadRequestException". For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **ContentRedactionType** *(string) --* Redacts all personally identifiable information (PII) identified in your transcript. Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment. You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a "BadRequestException". 
For more information, see Redacting or identifying personally identifiable information in the *Amazon Transcribe Developer Guide*. * **PiiEntityTypes** *(string) --* The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select "ALL". To include "PiiEntityTypes" in your Call Analytics request, you must also include "ContentIdentificationType" or "ContentRedactionType", but you can't include both. Values must be comma-separated and can include: "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME", "PHONE", "PIN", "SSN", or "ALL". If you leave this parameter empty, the default behavior is equivalent to "ALL". * **LanguageModelName** *(string) --* The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive. The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch. For more information, see Custom language models in the *Amazon Transcribe Developer Guide*. * **FilterPartialResults** *(boolean) --* If true, "TranscriptEvents" with "IsPartial: true" are filtered out of the insights target. * **IdentifyLanguage** *(boolean) --* Turns language identification on or off. * **IdentifyMultipleLanguages** *(boolean) --* Turns language identification on or off for multiple languages. Note: Calls to this API must include a "LanguageCode", "IdentifyLanguage", or "IdentifyMultipleLanguages" parameter. If you include more than one of those parameters, your transcription job fails. * **LanguageOptions** *(string) --* The language options for the transcription, such as automatic language detection. 
* **PreferredLanguage** *(string) --* The preferred language for the transcription. * **VocabularyNames** *(string) --* The names of the custom vocabulary or vocabularies used during transcription. * **VocabularyFilterNames** *(string) --* The names of the custom vocabulary filter or filters used during transcription. * **KinesisDataStreamSinkConfiguration** *(dict) --* The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **S3RecordingSinkConfiguration** *(dict) --* The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element. * **Destination** *(string) --* The default URI of the Amazon S3 bucket used as the recording sink. * **RecordingFileFormat** *(string) --* The default file format for the media files sent to the Amazon S3 bucket. * **VoiceAnalyticsProcessorConfiguration** *(dict) --* The voice analytics configuration settings in a media insights pipeline configuration element. * **SpeakerSearchStatus** *(string) --* The status of the speaker search task. * **VoiceToneAnalysisStatus** *(string) --* The status of the voice tone analysis task. * **LambdaFunctionSinkConfiguration** *(dict) --* The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the sink. * **SqsQueueSinkConfiguration** *(dict) --* The configuration settings for an SQS queue sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SQS sink. * **SnsTopicSinkConfiguration** *(dict) --* The configuration settings for an SNS topic sink in a media insights pipeline configuration element. * **InsightsTarget** *(string) --* The ARN of the SNS sink. 
* **VoiceEnhancementSinkConfiguration** *(dict) --* The configuration settings for the voice enhancement sink in a media insights pipeline configuration element. * **Disabled** *(boolean) --* Disables the "VoiceEnhancementSinkConfiguration" element. * **MediaInsightsPipelineConfigurationId** *(string) --* The ID of the configuration. * **CreatedTimestamp** *(datetime) --* The time at which the configuration was created. * **UpdatedTimestamp** *(datetime) --* The time at which the configuration was last updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" untag_resource ************** ChimeSDKMediaPipelines.Client.untag_resource(**kwargs) Removes any tags from a media pipeline. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( ResourceARN='string', TagKeys=[ 'string', ] ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The ARN of the pipeline that you want to untag. * **TagKeys** (*list*) -- **[REQUIRED]** The key/value pairs in the tag that you want to remove. 
* *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" get_voice_tone_analysis_task **************************** ChimeSDKMediaPipelines.Client.get_voice_tone_analysis_task(**kwargs) Retrieves the details of a voice tone analysis task. See also: AWS API Documentation **Request Syntax** response = client.get_voice_tone_analysis_task( Identifier='string', VoiceToneAnalysisTaskId='string' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the requested resource. Valid values include the ID and ARN of the media insights pipeline. * **VoiceToneAnalysisTaskId** (*string*) -- **[REQUIRED]** The ID of the voice tone analysis task. Return type: dict Returns: **Response Syntax** { 'VoiceToneAnalysisTask': { 'VoiceToneAnalysisTaskId': 'string', 'VoiceToneAnalysisTaskStatus': 'NotStarted'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **VoiceToneAnalysisTask** *(dict) --* The details of the voice tone analysis task. * **VoiceToneAnalysisTaskId** *(string) --* The ID of the voice tone analysis task. * **VoiceToneAnalysisTaskStatus** *(string) --* The status of a voice tone analysis task. * **CreatedTimestamp** *(datetime) --* The time at which a voice tone analysis task was created. 
* **UpdatedTimestamp** *(datetime) --* The time at which a voice tone analysis task was updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" get_waiter ********** ChimeSDKMediaPipelines.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object. Return type: "botocore.waiter.Waiter" list_media_pipeline_kinesis_video_stream_pools ********************************************** ChimeSDKMediaPipelines.Client.list_media_pipeline_kinesis_video_stream_pools(**kwargs) Lists the video stream pools in the media pipeline. See also: AWS API Documentation **Request Syntax** response = client.list_media_pipeline_kinesis_video_stream_pools( NextToken='string', MaxResults=123 ) Parameters: * **NextToken** (*string*) -- The token used to return the next page of results. * **MaxResults** (*integer*) -- The maximum number of results to return in a single call. Return type: dict Returns: **Response Syntax** { 'KinesisVideoStreamPools': [ { 'PoolName': 'string', 'PoolId': 'string', 'PoolArn': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **KinesisVideoStreamPools** *(list) --* The list of video stream pools. * *(dict) --* A summary of the Kinesis video stream pool. 
* **PoolName** *(string) --* The name of the video stream pool. * **PoolId** *(string) --* The ID of the video stream pool. * **PoolArn** *(string) --* The ARN of the video stream pool. * **NextToken** *(string) --* The token used to return the next page of results. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" create_media_capture_pipeline ***************************** ChimeSDKMediaPipelines.Client.create_media_capture_pipeline(**kwargs) Creates a media pipeline. See also: AWS API Documentation **Request Syntax** response = client.create_media_capture_pipeline( SourceType='ChimeSdkMeeting', SourceArn='string', SinkType='S3Bucket', SinkArn='string', ClientRequestToken='string', ChimeSdkMeetingConfiguration={ 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } }, 'ArtifactsConfiguration': { 'Audio': { 'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo' }, 'Video': { 'State': 'Enabled'|'Disabled', 'MuxType': 'VideoOnly' }, 'Content': { 'State': 'Enabled'|'Disabled', 'MuxType': 'ContentOnly' }, 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 
'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } } } }, SseAwsKeyManagementParams={ 'AwsKmsKeyId': 'string', 'AwsKmsEncryptionContext': 'string' }, SinkIamRoleArn='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **SourceType** (*string*) -- **[REQUIRED]** Source type from which the media artifacts are captured. A Chime SDK Meeting is the only supported source. * **SourceArn** (*string*) -- **[REQUIRED]** ARN of the source from which the media artifacts are captured. * **SinkType** (*string*) -- **[REQUIRED]** Destination type to which the media artifacts are saved. You must use an S3 bucket. * **SinkArn** (*string*) -- **[REQUIRED]** The ARN of the sink type. * **ClientRequestToken** (*string*) -- The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media pipeline request. This field is autopopulated if not provided. * **ChimeSdkMeetingConfiguration** (*dict*) -- The configuration for a specified media pipeline. "SourceType" must be "ChimeSdkMeeting". * **SourceConfiguration** *(dict) --* The source configuration for a specified media pipeline. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. 
* *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **ArtifactsConfiguration** *(dict) --* The configuration for the artifacts in an Amazon Chime SDK meeting. * **Audio** *(dict) --* **[REQUIRED]** The configuration for the audio artifacts. * **MuxType** *(string) --* **[REQUIRED]** The MUX type of the audio artifact configuration object. * **Video** *(dict) --* **[REQUIRED]** The configuration for the video artifacts. * **State** *(string) --* **[REQUIRED]** Indicates whether the video artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the video artifact configuration object. * **Content** *(dict) --* **[REQUIRED]** The configuration for the content artifacts. * **State** *(string) --* **[REQUIRED]** Indicates whether the content artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the artifact configuration. * **CompositedVideo** *(dict) --* Enables video compositing. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. * **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* **[REQUIRED]** The "GridView" configuration setting. * **ContentShareLayout** *(string) --* **[REQUIRED]** Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. 
* **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. * **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. * **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SseAwsKeyManagementParams** (*dict*) -- An object that contains server-side encryption parameters to be used by the media capture pipeline. The parameters can also be used by a media concatenation pipeline that takes a media capture pipeline as a media source. * **AwsKmsKeyId** *(string) --* **[REQUIRED]** The KMS key you want to use to encrypt your media pipeline output. Decryption is required for the concatenation pipeline. If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways: * Use the KMS key ID itself. For example, "1234abcd-12ab-34cd-56ef-1234567890ab". * Use an alias for the KMS key ID. For example, "alias/ExampleAlias". 
* Use the Amazon Resource Name (ARN) for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways: * Use the ARN for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If you don't specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3). Note that the role specified in the "SinkIamRoleArn" request parameter must have permission to use the specified KMS key. * **AwsKmsEncryptionContext** *(string) --* A Base64-encoded string of UTF-8 encoded JSON containing the encryption context as non-secret key-value pairs, known as encryption context pairs, that provide an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS in the *Key Management Service Developer Guide*. * **SinkIamRoleArn** (*string*) -- The Amazon Resource Name (ARN) of the sink role to be used with "AwsKmsKeyId" in "SseAwsKeyManagementParams". It can only be used with the "S3Bucket" sink type. The role must belong to the caller's account and be able to act on behalf of the caller during the API call. All minimum policy permissions requirements for the caller to perform sink-related actions are the same for "SinkIamRoleArn". Additionally, the role must have permission to "kms:GenerateDataKey" using the KMS key supplied as "AwsKmsKeyId" in "SseAwsKeyManagementParams". If media concatenation will be required later, the role must also have permission to "kms:Decrypt" for the same KMS key. * **Tags** (*list*) -- The tag key-value pairs. 
* *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. * **Value** *(string) --* **[REQUIRED]** The value half of a tag. Return type: dict Returns: **Response Syntax** { 'MediaCapturePipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'SourceType': 'ChimeSdkMeeting', 'SourceArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'SinkType': 'S3Bucket', 'SinkArn': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1), 'ChimeSdkMeetingConfiguration': { 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } }, 'ArtifactsConfiguration': { 'Audio': { 'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo' }, 'Video': { 'State': 'Enabled'|'Disabled', 'MuxType': 'VideoOnly' }, 'Content': { 'State': 'Enabled'|'Disabled', 'MuxType': 'ContentOnly' }, 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } } } }, 
'SseAwsKeyManagementParams': { 'AwsKmsKeyId': 'string', 'AwsKmsEncryptionContext': 'string' }, 'SinkIamRoleArn': 'string' } } **Response Structure** * *(dict) --* * **MediaCapturePipeline** *(dict) --* A media pipeline object containing the ID, source type, source ARN, sink type, and sink ARN of the media pipeline. * **MediaPipelineId** *(string) --* The ID of a media pipeline. * **MediaPipelineArn** *(string) --* The ARN of the media capture pipeline. * **SourceType** *(string) --* Source type from which media artifacts are saved. You must use "ChimeSdkMeeting". * **SourceArn** *(string) --* ARN of the source from which the media artifacts are saved. * **Status** *(string) --* The status of the media pipeline. * **SinkType** *(string) --* Destination type to which the media artifacts are saved. You must use an S3 Bucket. * **SinkArn** *(string) --* ARN of the destination to which the media artifacts are saved. * **CreatedTimestamp** *(datetime) --* The time at which the pipeline was created, in ISO 8601 format. * **UpdatedTimestamp** *(datetime) --* The time at which the pipeline was updated, in ISO 8601 format. * **ChimeSdkMeetingConfiguration** *(dict) --* The configuration for a specified media pipeline. "SourceType" must be "ChimeSdkMeeting". * **SourceConfiguration** *(dict) --* The source configuration for a specified media pipeline. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. * *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **ArtifactsConfiguration** *(dict) --* The configuration for the artifacts in an Amazon Chime SDK meeting. * **Audio** *(dict) --* The configuration for the audio artifacts. * **MuxType** *(string) --* The MUX type of the audio artifact configuration object.
* **Video** *(dict) --* The configuration for the video artifacts. * **State** *(string) --* Indicates whether the video artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the video artifact configuration object. * **Content** *(dict) --* The configuration for the content artifacts. * **State** *(string) --* Indicates whether the content artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the artifact configuration. * **CompositedVideo** *(dict) --* Enables video compositing. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. * **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* The "GridView" configuration setting. * **ContentShareLayout** *(string) --* Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. * **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. 
* **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. * **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. * **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SseAwsKeyManagementParams** *(dict) --* An object that contains the server-side encryption parameters used by the media capture pipeline. The parameters can also be used by a media concatenation pipeline that takes a media capture pipeline as a media source. * **AwsKmsKeyId** *(string) --* The KMS key you want to use to encrypt your media pipeline output. Decryption is required for a concatenation pipeline. If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways: * Use the KMS key ID itself. For example, "1234abcd-12ab-34cd-56ef-1234567890ab". * Use an alias for the KMS key ID. For example, "alias/ExampleAlias". * Use the Amazon Resource Name (ARN) for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways: * Use the ARN for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias.
For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If you don't specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3). Note that the role specified in the "SinkIamRoleArn" request parameter must have permission to use the specified KMS key. * **AwsKmsEncryptionContext** *(string) --* Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption context as non-secret key-value pair known as encryption context pairs, that provides an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS in the *Key Management Service Developer Guide*. * **SinkIamRoleArn** *(string) --* The Amazon Resource Name (ARN) of the sink role to be used with "AwsKmsKeyId" in "SseAwsKeyManagementParams". **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientExcepti on" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededE xception" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureExceptio n" ChimeSDKMediaPipelines / Client / list_media_capture_pipelines list_media_capture_pipelines **************************** ChimeSDKMediaPipelines.Client.list_media_capture_pipelines(**kwargs) Returns a list of media pipelines. See also: AWS API Documentation **Request Syntax** response = client.list_media_capture_pipelines( NextToken='string', MaxResults=123 ) Parameters: * **NextToken** (*string*) -- The token used to retrieve the next page of results. * **MaxResults** (*integer*) -- The maximum number of results to return in a single call. Valid Range: 1 - 99. 
Return type: dict Returns: **Response Syntax** { 'MediaCapturePipelines': [ { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **MediaCapturePipelines** *(list) --* The media pipeline objects in the list. * *(dict) --* The summary data of a media capture pipeline. * **MediaPipelineId** *(string) --* The ID of the media pipeline in the summary. * **MediaPipelineArn** *(string) --* The ARN of the media pipeline in the summary. * **NextToken** *(string) --* The token used to retrieve the next page of results. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / delete_media_capture_pipeline delete_media_capture_pipeline ***************************** ChimeSDKMediaPipelines.Client.delete_media_capture_pipeline(**kwargs) Deletes the media pipeline. See also: AWS API Documentation **Request Syntax** response = client.delete_media_capture_pipeline( MediaPipelineId='string' ) Parameters: **MediaPipelineId** (*string*) -- **[REQUIRED]** The ID of the media pipeline being deleted.
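Because deleting a pipeline that no longer exists raises "NotFoundException", an idempotent wrapper can be sketched as below (the function name is illustrative):

```python
# Sketch only. Assumes `client` is a boto3 chime-sdk-media-pipelines client.

def delete_pipeline_if_exists(client, pipeline_id):
    """Delete a media capture pipeline; treat an already-missing pipeline as a no-op."""
    try:
        client.delete_media_capture_pipeline(MediaPipelineId=pipeline_id)
        return True   # the pipeline existed and deletion was requested
    except client.exceptions.NotFoundException:
        return False  # nothing to delete
```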
Returns: None **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / stop_voice_tone_analysis_task stop_voice_tone_analysis_task ***************************** ChimeSDKMediaPipelines.Client.stop_voice_tone_analysis_task(**kwargs) Stops a voice tone analysis task. See also: AWS API Documentation **Request Syntax** response = client.stop_voice_tone_analysis_task( Identifier='string', VoiceToneAnalysisTaskId='string' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline. * **VoiceToneAnalysisTaskId** (*string*) -- **[REQUIRED]** The ID of the voice tone analysis task. Returns: None **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / start_voice_tone_analysis_task start_voice_tone_analysis_task ****************************** ChimeSDKMediaPipelines.Client.start_voice_tone_analysis_task(**kwargs) Starts a voice tone analysis task.
For more information about voice tone analysis, see Using Amazon Chime SDK voice analytics in the *Amazon Chime SDK Developer Guide*. Warning: Before starting any voice tone analysis tasks, you must provide all notices and obtain all consents from the speaker as required under applicable privacy and biometrics laws, and as required under the AWS service terms for the Amazon Chime SDK. See also: AWS API Documentation **Request Syntax** response = client.start_voice_tone_analysis_task( Identifier='string', LanguageCode='en-US', KinesisVideoStreamSourceTaskConfiguration={ 'StreamArn': 'string', 'ChannelId': 123, 'FragmentNumber': 'string' }, ClientRequestToken='string' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline. * **LanguageCode** (*string*) -- **[REQUIRED]** The language code. * **KinesisVideoStreamSourceTaskConfiguration** (*dict*) -- The task configuration for the Kinesis video stream source of the media insights pipeline. * **StreamArn** *(string) --* **[REQUIRED]** The ARN of the stream. * **ChannelId** *(integer) --* **[REQUIRED]** The channel ID. * **FragmentNumber** *(string) --* The unique identifier of the fragment to begin processing. * **ClientRequestToken** (*string*) -- The unique identifier for the client request. Use a different token for different voice tone analysis tasks. This field is autopopulated if not provided. Return type: dict Returns: **Response Syntax** { 'VoiceToneAnalysisTask': { 'VoiceToneAnalysisTaskId': 'string', 'VoiceToneAnalysisTaskStatus': 'NotStarted'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **VoiceToneAnalysisTask** *(dict) --* The details of the voice tone analysis task. * **VoiceToneAnalysisTaskId** *(string) --* The ID of the voice tone analysis task. 
* **VoiceToneAnalysisTaskStatus** *(string) --* The status of a voice tone analysis task. * **CreatedTimestamp** *(datetime) --* The time at which a voice tone analysis task was created. * **UpdatedTimestamp** *(datetime) --* The time at which a voice tone analysis task was updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / close close ***** ChimeSDKMediaPipelines.Client.close() Closes underlying endpoint connections. ChimeSDKMediaPipelines / Client / list_media_insights_pipeline_configurations list_media_insights_pipeline_configurations ******************************************* ChimeSDKMediaPipelines.Client.list_media_insights_pipeline_configurations(**kwargs) Lists the available media insights pipeline configurations. See also: AWS API Documentation **Request Syntax** response = client.list_media_insights_pipeline_configurations( NextToken='string', MaxResults=123 ) Parameters: * **NextToken** (*string*) -- The token used to return the next page of results. * **MaxResults** (*integer*) -- The maximum number of results to return in a single call.
Return type: dict Returns: **Response Syntax** { 'MediaInsightsPipelineConfigurations': [ { 'MediaInsightsPipelineConfigurationName': 'string', 'MediaInsightsPipelineConfigurationId': 'string', 'MediaInsightsPipelineConfigurationArn': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **MediaInsightsPipelineConfigurations** *(list) --* The requested list of media insights pipeline configurations. * *(dict) --* A summary of the media insights pipeline configuration. * **MediaInsightsPipelineConfigurationName** *(string) --* The name of the media insights pipeline configuration. * **MediaInsightsPipelineConfigurationId** *(string) --* The ID of the media insights pipeline configuration. * **MediaInsightsPipelineConfigurationArn** *(string) --* The ARN of the media insights pipeline configuration. * **NextToken** *(string) --* The token used to return the next page of results. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / get_speaker_search_task get_speaker_search_task *********************** ChimeSDKMediaPipelines.Client.get_speaker_search_task(**kwargs) Retrieves the details of the specified speaker search task. See also: AWS API Documentation **Request Syntax** response = client.get_speaker_search_task( Identifier='string', SpeakerSearchTaskId='string' ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
* **SpeakerSearchTaskId** (*string*) -- **[REQUIRED]** The ID of the speaker search task. Return type: dict Returns: **Response Syntax** { 'SpeakerSearchTask': { 'SpeakerSearchTaskId': 'string', 'SpeakerSearchTaskStatus': 'NotStarted'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **SpeakerSearchTask** *(dict) --* The details of the speaker search task. * **SpeakerSearchTaskId** *(string) --* The speaker search task ID. * **SpeakerSearchTaskStatus** *(string) --* The status of the speaker search task. * **CreatedTimestamp** *(datetime) --* The time at which a speaker search task was created. * **UpdatedTimestamp** *(datetime) --* The time at which a speaker search task was updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / create_media_pipeline_kinesis_video_stream_pool create_media_pipeline_kinesis_video_stream_pool *********************************************** ChimeSDKMediaPipelines.Client.create_media_pipeline_kinesis_video_stream_pool(**kwargs) Creates an Amazon Kinesis Video Stream pool for use with media stream pipelines. Note: If a meeting uses an opt-in Region as its MediaRegion, the KVS stream must be in that same Region. For example, if a meeting uses the "af-south-1" Region, the KVS stream must also be in "af-south-1".
However, if the meeting uses a Region that AWS turns on by default, the KVS stream can be in any available Region, including an opt-in Region. For example, if the meeting uses "ca-central-1", the KVS stream can be in "eu-west-2", "us-east-1", "af-south-1", or any other Region that the Amazon Chime SDK supports. To learn which AWS Region a meeting uses, call the GetMeeting API and use the MediaRegion parameter from the response. For more information about opt-in Regions, refer to Available Regions in the *Amazon Chime SDK Developer Guide*, and Specify which AWS Regions your account can use in the *AWS Account Management Reference Guide*. See also: AWS API Documentation **Request Syntax** response = client.create_media_pipeline_kinesis_video_stream_pool( StreamConfiguration={ 'Region': 'string', 'DataRetentionInHours': 123 }, PoolName='string', ClientRequestToken='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **StreamConfiguration** (*dict*) -- **[REQUIRED]** The configuration settings for the stream. * **Region** *(string) --* **[REQUIRED]** The Amazon Web Services Region of the video stream. * **DataRetentionInHours** *(integer) --* The amount of time that data is retained. * **PoolName** (*string*) -- **[REQUIRED]** The name of the pool. * **ClientRequestToken** (*string*) -- The token assigned to the client making the request. This field is autopopulated if not provided. * **Tags** (*list*) -- The tags assigned to the stream pool. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. * **Value** *(string) --* **[REQUIRED]** The value half of a tag.
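A request following the parameter shapes above can be sketched as below; the pool name, Region, retention period, and tag are hypothetical values, and the helper function is not part of the SDK:

```python
# Sketch only; pool name, Region, and tag values are hypothetical.
# Assumes `client` is a boto3 chime-sdk-media-pipelines client.

def create_stream_pool(client, pool_name, region, retention_hours=24):
    """Create a Kinesis Video Stream pool and return its configuration."""
    response = client.create_media_pipeline_kinesis_video_stream_pool(
        StreamConfiguration={
            'Region': region,                        # Region of the video streams
            'DataRetentionInHours': retention_hours,
        },
        PoolName=pool_name,
        Tags=[{'Key': 'project', 'Value': 'demo'}],  # illustrative tag
    )
    return response['KinesisVideoStreamPoolConfiguration']
```

"ClientRequestToken" is omitted here; boto3 autopopulates it when not provided.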
Return type: dict Returns: **Response Syntax** { 'KinesisVideoStreamPoolConfiguration': { 'PoolArn': 'string', 'PoolName': 'string', 'PoolId': 'string', 'PoolStatus': 'CREATING'|'ACTIVE'|'UPDATING'|'DELETING'|'FAILED', 'PoolSize': 123, 'StreamConfiguration': { 'Region': 'string', 'DataRetentionInHours': 123 }, 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **KinesisVideoStreamPoolConfiguration** *(dict) --* The configuration for applying the streams to the pool. * **PoolArn** *(string) --* The ARN of the video stream pool configuration. * **PoolName** *(string) --* The name of the video stream pool configuration. * **PoolId** *(string) --* The ID of the video stream pool in the configuration. * **PoolStatus** *(string) --* The status of the video stream pool in the configuration. * **PoolSize** *(integer) --* The size of the video stream pool in the configuration. * **StreamConfiguration** *(dict) --* The Kinesis video stream pool configuration object. * **Region** *(string) --* The Amazon Web Services Region of the video stream. * **DataRetentionInHours** *(integer) --* The amount of time that data is retained. * **CreatedTimestamp** *(datetime) --* The time at which the configuration was created. * **UpdatedTimestamp** *(datetime) --* The time at which the configuration was updated. 
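Because the pool is created asynchronously ("PoolStatus" begins at "CREATING"), callers typically poll until it reaches "ACTIVE". A minimal polling sketch, assuming no built-in waiter exists for this resource:

```python
import time

# Sketch only. Assumes `client` is a boto3 chime-sdk-media-pipelines client.

def wait_for_pool_active(client, identifier, delay=5, max_attempts=60):
    """Poll PoolStatus until the pool is ACTIVE, failing fast on FAILED."""
    for _ in range(max_attempts):
        config = client.get_media_pipeline_kinesis_video_stream_pool(
            Identifier=identifier
        )['KinesisVideoStreamPoolConfiguration']
        status = config['PoolStatus']
        if status == 'ACTIVE':
            return config
        if status == 'FAILED':
            raise RuntimeError(f'pool {identifier} entered FAILED state')
        time.sleep(delay)
    raise TimeoutError(f'pool {identifier} still not ACTIVE after {max_attempts} attempts')
```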
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientExcepti on" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededE xception" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureExceptio n" ChimeSDKMediaPipelines / Client / list_media_pipelines list_media_pipelines ******************** ChimeSDKMediaPipelines.Client.list_media_pipelines(**kwargs) Returns a list of media pipelines. See also: AWS API Documentation **Request Syntax** response = client.list_media_pipelines( NextToken='string', MaxResults=123 ) Parameters: * **NextToken** (*string*) -- The token used to retrieve the next page of results. * **MaxResults** (*integer*) -- The maximum number of results to return in a single call. Valid Range: 1 - 99. Return type: dict Returns: **Response Syntax** { 'MediaPipelines': [ { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **MediaPipelines** *(list) --* The media pipeline objects in the list. * *(dict) --* The summary of the media pipeline. * **MediaPipelineId** *(string) --* The ID of the media pipeline in the summary. * **MediaPipelineArn** *(string) --* The ARN of the media pipeline in the summary. * **NextToken** *(string) --* The token used to retrieve the next page of results. 
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientExcepti on" * "ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededE xception" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureExceptio n" ChimeSDKMediaPipelines / Client / tag_resource tag_resource ************ ChimeSDKMediaPipelines.Client.tag_resource(**kwargs) The ARN of the media pipeline that you want to tag. Consists of the pipeline's endpoint region, resource ID, and pipeline ID. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( ResourceARN='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ] ) Parameters: * **ResourceARN** (*string*) -- **[REQUIRED]** The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's endpoint region, resource ID, and pipeline ID. * **Tags** (*list*) -- **[REQUIRED]** The tags associated with the specified media pipeline. * *(dict) --* A key/value pair that grants users access to meeting resources. * **Key** *(string) --* **[REQUIRED]** The key half of a tag. * **Value** *(string) --* **[REQUIRED]** The value half of a tag. 
Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / update_media_pipeline_kinesis_video_stream_pool update_media_pipeline_kinesis_video_stream_pool *********************************************** ChimeSDKMediaPipelines.Client.update_media_pipeline_kinesis_video_stream_pool(**kwargs) Updates an Amazon Kinesis Video Stream pool in a media pipeline. See also: AWS API Documentation **Request Syntax** response = client.update_media_pipeline_kinesis_video_stream_pool( Identifier='string', StreamConfiguration={ 'DataRetentionInHours': 123 } ) Parameters: * **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the requested resource. Valid values include the name and ARN of the media pipeline Kinesis Video Stream pool. * **StreamConfiguration** (*dict*) -- The configuration settings for the video stream. * **DataRetentionInHours** *(integer) --* The updated amount of time that data is retained.
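An update request following the parameters above can be sketched as below; the helper function is illustrative, not part of the SDK:

```python
# Sketch only. Assumes `client` is a boto3 chime-sdk-media-pipelines client.

def set_pool_retention(client, identifier, hours):
    """Update a pool's data retention and return the resulting pool status."""
    response = client.update_media_pipeline_kinesis_video_stream_pool(
        Identifier=identifier,
        StreamConfiguration={'DataRetentionInHours': hours},
    )
    return response['KinesisVideoStreamPoolConfiguration']['PoolStatus']
```

Note that the returned status is typically transitional (for example "UPDATING") rather than the final state.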
Return type: dict Returns: **Response Syntax** { 'KinesisVideoStreamPoolConfiguration': { 'PoolArn': 'string', 'PoolName': 'string', 'PoolId': 'string', 'PoolStatus': 'CREATING'|'ACTIVE'|'UPDATING'|'DELETING'|'FAILED', 'PoolSize': 123, 'StreamConfiguration': { 'Region': 'string', 'DataRetentionInHours': 123 }, 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **KinesisVideoStreamPoolConfiguration** *(dict) --* The video stream pool configuration object. * **PoolArn** *(string) --* The ARN of the video stream pool configuration. * **PoolName** *(string) --* The name of the video stream pool configuration. * **PoolId** *(string) --* The ID of the video stream pool in the configuration. * **PoolStatus** *(string) --* The status of the video stream pool in the configuration. * **PoolSize** *(integer) --* The size of the video stream pool in the configuration. * **StreamConfiguration** *(dict) --* The Kinesis video stream pool configuration object. * **Region** *(string) --* The Amazon Web Services Region of the video stream. * **DataRetentionInHours** *(integer) --* The amount of time that data is retained. * **CreatedTimestamp** *(datetime) --* The time at which the configuration was created. * **UpdatedTimestamp** *(datetime) --* The time at which the configuration was updated. 
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientExcepti on" * "ChimeSDKMediaPipelines.Client.exceptions.ConflictException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableExce ption" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureExceptio n" ChimeSDKMediaPipelines / Client / get_media_pipeline_kinesis_video_stream_pool get_media_pipeline_kinesis_video_stream_pool ******************************************** ChimeSDKMediaPipelines.Client.get_media_pipeline_kinesis_video_stream_pool(**kwargs) Gets an Kinesis video stream pool. See also: AWS API Documentation **Request Syntax** response = client.get_media_pipeline_kinesis_video_stream_pool( Identifier='string' ) Parameters: **Identifier** (*string*) -- **[REQUIRED]** The unique identifier of the requested resource. Valid values include the name and ARN of the media pipeline Kinesis Video Stream pool. Return type: dict Returns: **Response Syntax** { 'KinesisVideoStreamPoolConfiguration': { 'PoolArn': 'string', 'PoolName': 'string', 'PoolId': 'string', 'PoolStatus': 'CREATING'|'ACTIVE'|'UPDATING'|'DELETING'|'FAILED', 'PoolSize': 123, 'StreamConfiguration': { 'Region': 'string', 'DataRetentionInHours': 123 }, 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **KinesisVideoStreamPoolConfiguration** *(dict) --* The video stream pool configuration object. * **PoolArn** *(string) --* The ARN of the video stream pool configuration. * **PoolName** *(string) --* The name of the video stream pool configuration. * **PoolId** *(string) --* The ID of the video stream pool in the configuration. 
* **PoolStatus** *(string) --* The status of the video stream pool in the configuration. * **PoolSize** *(integer) --* The size of the video stream pool in the configuration. * **StreamConfiguration** *(dict) --* The Kinesis video stream pool configuration object. * **Region** *(string) --* The Amazon Web Services Region of the video stream. * **DataRetentionInHours** *(integer) --* The amount of time that data is retained. * **CreatedTimestamp** *(datetime) --* The time at which the configuration was created. * **UpdatedTimestamp** *(datetime) --* The time at which the configuration was updated. **Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException" ChimeSDKMediaPipelines / Client / get_media_pipeline get_media_pipeline ****************** ChimeSDKMediaPipelines.Client.get_media_pipeline(**kwargs) Gets an existing media pipeline. See also: AWS API Documentation **Request Syntax** response = client.get_media_pipeline( MediaPipelineId='string' ) Parameters: **MediaPipelineId** (*string*) -- **[REQUIRED]** The ID of the pipeline that you want to get.
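The response wraps a single "MediaPipeline" object that contains exactly one pipeline-type key. A small helper can report which kind was returned and its status; the function name and key probing are illustrative:

```python
# Sketch only. Assumes `client` is a boto3 chime-sdk-media-pipelines client.

def describe_pipeline(client, pipeline_id):
    """Return (kind, status) for whichever pipeline object the response contains."""
    pipeline = client.get_media_pipeline(MediaPipelineId=pipeline_id)['MediaPipeline']
    for kind in ('MediaCapturePipeline', 'MediaLiveConnectorPipeline',
                 'MediaConcatenationPipeline'):
        if kind in pipeline:
            # .get() hedges against pipeline kinds without a top-level Status field
            return kind, pipeline[kind].get('Status')
    return None, None
```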
Return type: dict Returns: **Response Syntax** { 'MediaPipeline': { 'MediaCapturePipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'SourceType': 'ChimeSdkMeeting', 'SourceArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'SinkType': 'S3Bucket', 'SinkArn': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1), 'ChimeSdkMeetingConfiguration': { 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } }, 'ArtifactsConfiguration': { 'Audio': { 'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo' }, 'Video': { 'State': 'Enabled'|'Disabled', 'MuxType': 'VideoOnly' }, 'Content': { 'State': 'Enabled'|'Disabled', 'MuxType': 'ContentOnly' }, 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } } } }, 'SseAwsKeyManagementParams': { 'AwsKmsKeyId': 'string', 'AwsKmsEncryptionContext': 'string' }, 'SinkIamRoleArn': 'string' }, 'MediaLiveConnectorPipeline': { 'Sources': [ { 'SourceType': 
'ChimeSdkMeeting', 'ChimeSdkMeetingLiveConnectorConfiguration': { 'Arn': 'string', 'MuxType': 'AudioWithCompositedVideo'|'AudioWithActiveSpeakerVideo', 'CompositedVideo': { 'Layout': 'GridView', 'Resolution': 'HD'|'FHD', 'GridViewConfiguration': { 'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly', 'PresenterOnlyConfiguration': { 'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'ActiveSpeakerOnlyConfiguration': { 'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight' }, 'HorizontalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Top'|'Bottom', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VerticalLayoutConfiguration': { 'TileOrder': 'JoinSequence'|'SpeakerSequence', 'TilePosition': 'Left'|'Right', 'TileCount': 123, 'TileAspectRatio': 'string' }, 'VideoAttribute': { 'CornerRadius': 123, 'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow', 'BorderThickness': 123 }, 'CanvasOrientation': 'Landscape'|'Portrait' } }, 'SourceConfiguration': { 'SelectedVideoStreams': { 'AttendeeIds': [ 'string', ], 'ExternalUserIds': [ 'string', ] } } } }, ], 'Sinks': [ { 'SinkType': 'RTMP', 'RTMPConfiguration': { 'Url': 'string', 'AudioChannels': 'Stereo'|'Mono', 'AudioSampleRate': 'string' } }, ], 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) }, 'MediaConcatenationPipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'Sources': [ { 'Type': 'MediaCapturePipeline', 'MediaCapturePipelineSourceConfiguration': { 'MediaPipelineArn': 'string', 'ChimeSdkMeetingConfiguration': { 'ArtifactsConfiguration': { 'Audio': { 'State': 'Enabled' }, 'Video': { 'State': 'Enabled'|'Disabled' }, 'Content': { 'State': 
'Enabled'|'Disabled' }, 'DataChannel': { 'State': 'Enabled'|'Disabled' }, 'TranscriptionMessages': { 'State': 'Enabled'|'Disabled' }, 'MeetingEvents': { 'State': 'Enabled'|'Disabled' }, 'CompositedVideo': { 'State': 'Enabled'|'Disabled' } } } } }, ], 'Sinks': [ { 'Type': 'S3Bucket', 'S3BucketSinkConfiguration': { 'Destination': 'string' } }, ], 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1) }, 'MediaInsightsPipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 'string', 'MediaInsightsPipelineConfigurationArn': 'string', 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'KinesisVideoStreamSourceRuntimeConfiguration': { 'Streams': [ { 'StreamArn': 'string', 'FragmentNumber': 'string', 'StreamChannelDefinition': { 'NumberOfChannels': 123, 'ChannelDefinitions': [ { 'ChannelId': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER' }, ] } }, ], 'MediaEncoding': 'pcm', 'MediaSampleRate': 123 }, 'MediaInsightsRuntimeMetadata': { 'string': 'string' }, 'KinesisVideoStreamRecordingSourceRuntimeConfiguration': { 'Streams': [ { 'StreamArn': 'string' }, ], 'FragmentSelector': { 'FragmentSelectorType': 'ProducerTimestamp'|'ServerTimestamp', 'TimestampRange': { 'StartTimestamp': datetime(2015, 1, 1), 'EndTimestamp': datetime(2015, 1, 1) } } }, 'S3RecordingSinkRuntimeConfiguration': { 'Destination': 'string', 'RecordingFileFormat': 'Wav'|'Opus' }, 'CreatedTimestamp': datetime(2015, 1, 1), 'ElementStatuses': [ { 'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink', 'Status': 'NotStarted'|'NotSupported'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused' }, ] }, 'MediaStreamPipeline': { 'MediaPipelineId': 'string', 'MediaPipelineArn': 
'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'UpdatedTimestamp': datetime(2015, 1, 1), 'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted', 'Sources': [ { 'SourceType': 'ChimeSdkMeeting', 'SourceArn': 'string' }, ], 'Sinks': [ { 'SinkArn': 'string', 'SinkType': 'KinesisVideoStreamPool', 'ReservedStreamCapacity': 123, 'MediaStreamType': 'MixedAudio'|'IndividualAudio' }, ] } } } **Response Structure** * *(dict) --* * **MediaPipeline** *(dict) --* The media pipeline object. * **MediaCapturePipeline** *(dict) --* A pipeline that enables users to capture audio and video. * **MediaPipelineId** *(string) --* The ID of a media pipeline. * **MediaPipelineArn** *(string) --* The ARN of the media capture pipeline * **SourceType** *(string) --* Source type from which media artifacts are saved. You must use "ChimeMeeting". * **SourceArn** *(string) --* ARN of the source from which the media artifacts are saved. * **Status** *(string) --* The status of the media pipeline. * **SinkType** *(string) --* Destination type to which the media artifacts are saved. You must use an S3 Bucket. * **SinkArn** *(string) --* ARN of the destination to which the media artifacts are saved. * **CreatedTimestamp** *(datetime) --* The time at which the pipeline was created, in ISO 8601 format. * **UpdatedTimestamp** *(datetime) --* The time at which the pipeline was updated, in ISO 8601 format. * **ChimeSdkMeetingConfiguration** *(dict) --* The configuration for a specified media pipeline. "SourceType" must be "ChimeSdkMeeting". * **SourceConfiguration** *(dict) --* The source configuration for a specified media pipeline. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. 
* *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **ArtifactsConfiguration** *(dict) --* The configuration for the artifacts in an Amazon Chime SDK meeting. * **Audio** *(dict) --* The configuration for the audio artifacts. * **MuxType** *(string) --* The MUX type of the audio artifact configuration object. * **Video** *(dict) --* The configuration for the video artifacts. * **State** *(string) --* Indicates whether the video artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the video artifact configuration object. * **Content** *(dict) --* The configuration for the content artifacts. * **State** *(string) --* Indicates whether the content artifact is enabled or disabled. * **MuxType** *(string) --* The MUX type of the artifact configuration. * **CompositedVideo** *(dict) --* Enables video compositing. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. * **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* The "GridView" configuration setting. * **ContentShareLayout** *(string) --* Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. * **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. 
* **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. * **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. * **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SseAwsKeyManagementParams** *(dict) --* An object that contains server-side encryption parameters to be used by the media capture pipeline. The parameters can also be used by a media concatenation pipeline that takes a media capture pipeline as a media source. * **AwsKmsKeyId** *(string) --* The KMS key you want to use to encrypt your media pipeline output. Decryption is required for the concatenation pipeline. If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways: * Use the KMS key ID itself. For example, "1234abcd-12ab-34cd-56ef-1234567890ab". * Use an alias for the KMS key ID. For example, "alias/ExampleAlias". * Use the Amazon Resource Name (ARN) for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. 
For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways: * Use the ARN for the KMS key ID. For example, "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab". * Use the ARN for the KMS key alias. For example, "arn:aws:kms:region:account-ID:alias/ExampleAlias". If you don't specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3). Note that the role specified in the "SinkIamRoleArn" request parameter must have permission to use the specified KMS key. * **AwsKmsEncryptionContext** *(string) --* Base64-encoded string of UTF-8 encoded JSON that contains the encryption context as non-secret key-value pairs (known as encryption context pairs), which provide an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS in the *Key Management Service Developer Guide*. * **SinkIamRoleArn** *(string) --* The Amazon Resource Name (ARN) of the sink role to be used with "AwsKmsKeyId" in "SseAwsKeyManagementParams". * **MediaLiveConnectorPipeline** *(dict) --* The connector pipeline of the media pipeline. * **Sources** *(list) --* The connector pipeline's data sources. * *(dict) --* The data source configuration object of a streaming media pipeline. * **SourceType** *(string) --* The source configuration's media source type. * **ChimeSdkMeetingLiveConnectorConfiguration** *(dict) --* The configuration settings of the connector pipeline. * **Arn** *(string) --* The configuration object's Chime SDK meeting ARN. * **MuxType** *(string) --* The configuration object's multiplex type. * **CompositedVideo** *(dict) --* The media pipeline's composited video. * **Layout** *(string) --* The layout setting, such as "GridView" in the configuration object. 
* **Resolution** *(string) --* The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080. * **GridViewConfiguration** *(dict) --* The "GridView" configuration setting. * **ContentShareLayout** *(string) --* Defines the layout of the video tiles when content sharing is enabled. * **PresenterOnlyConfiguration** *(dict) --* Defines the configuration options for a presenter only video tile. * **PresenterPosition** *(string) --* Defines the position of the presenter video tile. Default: "TopRight". * **ActiveSpeakerOnlyConfiguration** *(dict) --* The configuration settings for an "ActiveSpeakerOnly" video tile. * **ActiveSpeakerPosition** *(string) --* The position of the "ActiveSpeakerOnly" video tile. * **HorizontalLayoutConfiguration** *(dict) --* The configuration settings for a horizontal layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of horizontal tiles. * **TileCount** *(integer) --* The maximum number of video tiles to display. * **TileAspectRatio** *(string) --* Specifies the aspect ratio of all video tiles. * **VerticalLayoutConfiguration** *(dict) --* The configuration settings for a vertical layout. * **TileOrder** *(string) --* Sets the automatic ordering of the video tiles. * **TilePosition** *(string) --* Sets the position of vertical tiles. * **TileCount** *(integer) --* The maximum number of tiles to display. * **TileAspectRatio** *(string) --* Sets the aspect ratio of the video tiles, such as 16:9. * **VideoAttribute** *(dict) --* The attribute settings for the video tiles. * **CornerRadius** *(integer) --* Sets the corner radius of all video tiles. * **BorderColor** *(string) --* Defines the border color of all video tiles. * **HighlightColor** *(string) --* Defines the highlight color for the active video tile. * **BorderThickness** *(integer) --* Defines the border thickness for all video tiles. 
* **CanvasOrientation** *(string) --* The orientation setting, horizontal or vertical. * **SourceConfiguration** *(dict) --* The source configuration settings of the media pipeline's configuration object. * **SelectedVideoStreams** *(dict) --* The selected video streams for a specified media pipeline. The number of video streams can't exceed 25. * **AttendeeIds** *(list) --* The attendee IDs of the streams selected for a media pipeline. * *(string) --* * **ExternalUserIds** *(list) --* The external user IDs of the streams selected for a media pipeline. * *(string) --* * **Sinks** *(list) --* The connector pipeline's data sinks. * *(dict) --* The media pipeline's sink configuration settings. * **SinkType** *(string) --* The sink configuration's sink type. * **RTMPConfiguration** *(dict) --* The sink configuration's RTMP configuration settings. * **Url** *(string) --* The URL of the RTMP configuration. * **AudioChannels** *(string) --* The audio channels set for the RTMP configuration * **AudioSampleRate** *(string) --* The audio sample rate set for the RTMP configuration. Default: 48000. * **MediaPipelineId** *(string) --* The connector pipeline's ID. * **MediaPipelineArn** *(string) --* The connector pipeline's ARN. * **Status** *(string) --* The connector pipeline's status. * **CreatedTimestamp** *(datetime) --* The time at which the connector pipeline was created. * **UpdatedTimestamp** *(datetime) --* The time at which the connector pipeline was last updated. * **MediaConcatenationPipeline** *(dict) --* The media concatenation pipeline in a media pipeline. * **MediaPipelineId** *(string) --* The ID of the media pipeline being concatenated. * **MediaPipelineArn** *(string) --* The ARN of the media pipeline that you specify in the "SourceConfiguration" object. * **Sources** *(list) --* The data sources being concatenated. * *(dict) --* The source type and media pipeline configuration settings in a configuration object. 
* **Type** *(string) --* The type of concatenation source in a configuration object. * **MediaCapturePipelineSourceConfiguration** *(dict) --* The concatenation settings for the media pipeline in a configuration object. * **MediaPipelineArn** *(string) --* The media pipeline ARN in the configuration object of a media capture pipeline. * **ChimeSdkMeetingConfiguration** *(dict) --* The meeting configuration settings in a media capture pipeline configuration object. * **ArtifactsConfiguration** *(dict) --* The configuration for the artifacts in an Amazon Chime SDK meeting concatenation. * **Audio** *(dict) --* The configuration for the audio artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **Video** *(dict) --* The configuration for the video artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **Content** *(dict) --* The configuration for the content artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **DataChannel** *(dict) --* The configuration for the data channel artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **TranscriptionMessages** *(dict) --* The configuration for the transcription messages artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **MeetingEvents** *(dict) --* The configuration for the meeting events artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **CompositedVideo** *(dict) --* The configuration for the composited video artifacts concatenation. * **State** *(string) --* Enables or disables the configuration object. * **Sinks** *(list) --* The data sinks of the concatenation pipeline. * *(dict) --* The data sink of the configuration object. * **Type** *(string) --* The type of data sink in the configuration object. 
* **S3BucketSinkConfiguration** *(dict) --* The configuration settings for an Amazon S3 bucket sink. * **Destination** *(string) --* The destination URL of the S3 bucket. * **Status** *(string) --* The status of the concatenation pipeline. * **CreatedTimestamp** *(datetime) --* The time at which the concatenation pipeline was created. * **UpdatedTimestamp** *(datetime) --* The time at which the concatenation pipeline was last updated. * **MediaInsightsPipeline** *(dict) --* The media insights pipeline of a media pipeline. * **MediaPipelineId** *(string) --* The ID of a media insights pipeline. * **MediaPipelineArn** *(string) --* The ARN of a media insights pipeline. * **MediaInsightsPipelineConfigurationArn** *(string) --* The ARN of a media insight pipeline's configuration settings. * **Status** *(string) --* The status of a media insights pipeline. * **KinesisVideoStreamSourceRuntimeConfiguration** *(dict) --* The configuration settings for a Kinesis runtime video stream in a media insights pipeline. * **Streams** *(list) --* The streams in the source runtime configuration of a Kinesis video stream. * *(dict) --* The configuration settings for a stream. * **StreamArn** *(string) --* The ARN of the stream. * **FragmentNumber** *(string) --* The unique identifier of the fragment to begin processing. * **StreamChannelDefinition** *(dict) --* The streaming channel definition in the stream configuration. * **NumberOfChannels** *(integer) --* The number of channels in a streaming channel. * **ChannelDefinitions** *(list) --* The definitions of the channels in a streaming channel. * *(dict) --* Defines an audio channel in a Kinesis video stream. * **ChannelId** *(integer) --* The channel ID. * **ParticipantRole** *(string) --* Specifies whether the audio in a channel belongs to the "AGENT" or "CUSTOMER". * **MediaEncoding** *(string) --* Specifies the encoding of your input audio. 
Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV). For more information, see Media formats in the *Amazon Transcribe Developer Guide*. * **MediaSampleRate** *(integer) --* The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio. Valid Range: Minimum value of 8000. Maximum value of 48000. * **MediaInsightsRuntimeMetadata** *(dict) --* The runtime metadata of a media insights pipeline. * *(string) --* * *(string) --* * **KinesisVideoStreamRecordingSourceRuntimeConfiguration** *(dict) --* The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline. * **Streams** *(list) --* The stream or streams to be recorded. * *(dict) --* A structure that holds the settings for recording media. * **StreamArn** *(string) --* The ARN of the recording stream. * **FragmentSelector** *(dict) --* Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream. * **FragmentSelectorType** *(string) --* The origin of the timestamps to use, "Server" or "Producer". For more information, see StartSelectorType in the *Amazon Kinesis Video Streams Developer Guide*. * **TimestampRange** *(dict) --* The range of timestamps to return. * **StartTimestamp** *(datetime) --* The starting timestamp for the specified range. * **EndTimestamp** *(datetime) --* The ending timestamp for the specified range. * **S3RecordingSinkRuntimeConfiguration** *(dict) --* The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline. * **Destination** *(string) --* The URI of the S3 bucket used as the sink. * **RecordingFileFormat** *(string) --* The file format for the media files sent to the Amazon S3 bucket. 
* **CreatedTimestamp** *(datetime) --* The time at which the media insights pipeline was created. * **ElementStatuses** *(list) --* The statuses that the elements in a media insights pipeline can have during data processing. * *(dict) --* The status of the pipeline element. * **Type** *(string) --* The type of status. * **Status** *(string) --* The element's status. * **MediaStreamPipeline** *(dict) --* Designates a media pipeline as a media stream pipeline. * **MediaPipelineId** *(string) --* The ID of the media stream pipeline * **MediaPipelineArn** *(string) --* The ARN of the media stream pipeline. * **CreatedTimestamp** *(datetime) --* The time at which the media stream pipeline was created. * **UpdatedTimestamp** *(datetime) --* The time at which the media stream pipeline was updated. * **Status** *(string) --* The status of the media stream pipeline. * **Sources** *(list) --* The media stream pipeline's data sources. * *(dict) --* Structure that contains the settings for media stream sources. * **SourceType** *(string) --* The type of media stream source. * **SourceArn** *(string) --* The ARN of the meeting. * **Sinks** *(list) --* The media stream pipeline's data sinks. * *(dict) --* Structure that contains the settings for a media stream sink. * **SinkArn** *(string) --* The ARN of the Kinesis Video Stream pool returned by the CreateMediaPipelineKinesisVideoStreamPool API. * **SinkType** *(string) --* The media stream sink's type. * **ReservedStreamCapacity** *(integer) --* Specifies the number of streams that the sink can accept. * **MediaStreamType** *(string) --* The media stream sink's media stream type. 
**Exceptions** * "ChimeSDKMediaPipelines.Client.exceptions.BadRequestException" * "ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException" * "ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException" * "ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException" * "ChimeSDKMediaPipelines.Client.exceptions.NotFoundException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException" * "ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException"
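Per the response structure above, the "MediaPipeline" object in a "get_media_pipeline" response carries one key per pipeline type ("MediaCapturePipeline", "MediaLiveConnectorPipeline", and so on), so callers typically dispatch on whichever key is present. A minimal sketch of that dispatch; the helper function and the abbreviated sample response below are illustrative, not part of the API:

```python
# The pipeline-type keys documented in the get_media_pipeline response.
PIPELINE_TYPE_KEYS = (
    'MediaCapturePipeline',
    'MediaLiveConnectorPipeline',
    'MediaConcatenationPipeline',
    'MediaInsightsPipeline',
    'MediaStreamPipeline',
)

def pipeline_type_and_status(response):
    """Return (type_key, status) for the pipeline in a get_media_pipeline response."""
    media_pipeline = response['MediaPipeline']
    for key in PIPELINE_TYPE_KEYS:
        if key in media_pipeline:
            # Status values: 'Initializing', 'InProgress', 'Failed',
            # 'Stopping', 'Stopped', 'Paused', or 'NotStarted'.
            return key, media_pipeline[key].get('Status')
    raise ValueError('No recognized pipeline type in response')

# Abbreviated, hypothetical response for illustration only; a real call
# would be: response = client.get_media_pipeline(MediaPipelineId='...')
sample = {
    'MediaPipeline': {
        'MediaCapturePipeline': {
            'MediaPipelineId': 'pipeline-id',
            'SourceType': 'ChimeSdkMeeting',
            'SinkType': 'S3Bucket',
            'Status': 'InProgress',
        }
    }
}

print(pipeline_type_and_status(sample))  # ('MediaCapturePipeline', 'InProgress')
```

Dispatching on the present key rather than on a separate "type" field mirrors the response shape itself, so the helper keeps working for responses produced by any of the create_media_* APIs listed above.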