Rekognition *********** Client ====== class Rekognition.Client A low-level client representing Amazon Rekognition This is the API Reference for Amazon Rekognition Image, Amazon Rekognition Custom Labels, Amazon Rekognition Stored Video, Amazon Rekognition Streaming Video. It provides descriptions of actions, data types, common parameters, and common errors. **Amazon Rekognition Image** * AssociateFaces * CompareFaces * CreateCollection * CreateUser * DeleteCollection * DeleteFaces * DeleteUser * DescribeCollection * DetectFaces * DetectLabels * DetectModerationLabels * DetectProtectiveEquipment * DetectText * DisassociateFaces * GetCelebrityInfo * GetMediaAnalysisJob * IndexFaces * ListCollections * ListMediaAnalysisJobs * ListFaces * ListUsers * RecognizeCelebrities * SearchFaces * SearchFacesByImage * SearchUsers * SearchUsersByImage * StartMediaAnalysisJob **Amazon Rekognition Custom Labels** * CopyProjectVersion * CreateDataset * CreateProject * CreateProjectVersion * DeleteDataset * DeleteProject * DeleteProjectPolicy * DeleteProjectVersion * DescribeDataset * DescribeProjects * DescribeProjectVersions * DetectCustomLabels * DistributeDatasetEntries * ListDatasetEntries * ListDatasetLabels * ListProjectPolicies * PutProjectPolicy * StartProjectVersion * StopProjectVersion * UpdateDatasetEntries **Amazon Rekognition Video Stored Video** * GetCelebrityRecognition * GetContentModeration * GetFaceDetection * GetFaceSearch * GetLabelDetection * GetPersonTracking * GetSegmentDetection * GetTextDetection * StartCelebrityRecognition * StartContentModeration * StartFaceDetection * StartFaceSearch * StartLabelDetection * StartPersonTracking * StartSegmentDetection * StartTextDetection **Amazon Rekognition Video Streaming Video** * CreateStreamProcessor * DeleteStreamProcessor * DescribeStreamProcessor * ListStreamProcessors * StartStreamProcessor * StopStreamProcessor * UpdateStreamProcessor import boto3 client = boto3.client('rekognition') These are the available methods: * associate_faces * can_paginate * close * compare_faces * copy_project_version * create_collection * create_dataset * create_face_liveness_session * create_project * create_project_version * create_stream_processor * create_user * delete_collection * delete_dataset * delete_faces * delete_project * delete_project_policy * delete_project_version * delete_stream_processor * delete_user * describe_collection * describe_dataset * describe_project_versions * describe_projects * describe_stream_processor * detect_custom_labels * detect_faces * detect_labels * detect_moderation_labels * detect_protective_equipment * detect_text * disassociate_faces * distribute_dataset_entries * get_celebrity_info * get_celebrity_recognition * get_content_moderation * get_face_detection * get_face_liveness_session_results * get_face_search * get_label_detection * get_media_analysis_job * get_paginator * get_person_tracking * get_segment_detection * get_text_detection * get_waiter * index_faces * list_collections * list_dataset_entries * list_dataset_labels * list_faces * list_media_analysis_jobs * list_project_policies * list_stream_processors * list_tags_for_resource * list_users * put_project_policy * recognize_celebrities * search_faces * search_faces_by_image * search_users * search_users_by_image * start_celebrity_recognition * start_content_moderation * start_face_detection * start_face_search * start_label_detection * start_media_analysis_job * start_person_tracking * start_project_version * start_segment_detection * start_stream_processor *
start_text_detection * stop_project_version * stop_stream_processor * tag_resource * untag_resource * update_dataset_entries * update_stream_processor Paginators ========== Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide. The available paginators are: * DescribeProjectVersions * DescribeProjects * ListCollections * ListDatasetEntries * ListDatasetLabels * ListFaces * ListProjectPolicies * ListStreamProcessors * ListUsers Waiters ======= Waiters are available on a client instance via the "get_waiter" method. For more detailed instructions and examples on the usage of waiters, see the waiters user guide. The available waiters are: * ProjectVersionRunning * ProjectVersionTrainingCompleted Rekognition / Waiter / ProjectVersionRunning ProjectVersionRunning ********************* class Rekognition.Waiter.ProjectVersionRunning waiter = client.get_waiter('project_version_running') wait(**kwargs) Polls "Rekognition.Client.describe_project_versions()" every 30 seconds until a successful state is reached. An error is raised after 40 failed checks. See also: AWS API Documentation **Request Syntax** waiter.wait( ProjectArn='string', VersionNames=[ 'string', ], NextToken='string', MaxResults=123, WaiterConfig={ 'Delay': 123, 'MaxAttempts': 123 } ) Parameters: * **ProjectArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the project that contains the model/adapter you want to describe. * **VersionNames** (*list*) -- A list of model or project version names that you want to describe. You can add up to 10 model or project version names to the list. If you don't specify a value, all project version descriptions are returned. A version name is part of a project version ARN. For example, "my-model.2020-01-21T09.10.15" is the version name in the following ARN. "arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123". * *(string) --* * **NextToken** (*string*) -- If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results. * **MaxResults** (*integer*) -- The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100. * **WaiterConfig** (*dict*) -- A dictionary that provides parameters to control waiting behavior. * **Delay** *(integer) --* The amount of time in seconds to wait between attempts. Default: 30 * **MaxAttempts** *(integer) --* The maximum number of attempts to be made. Default: 40 Returns: None
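A minimal usage sketch for this waiter; the project ARN below is a placeholder to substitute with your own:

import boto3

client = boto3.client('rekognition')
waiter = client.get_waiter('project_version_running')

# Placeholder ARN -- substitute the ARN of your own project.
waiter.wait(
    ProjectArn='arn:aws:rekognition:us-east-1:123456789012:project/getting-started/1234567890123',
    WaiterConfig={'Delay': 30, 'MaxAttempts': 40}
)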
Rekognition / Waiter / ProjectVersionTrainingCompleted ProjectVersionTrainingCompleted ******************************* class Rekognition.Waiter.ProjectVersionTrainingCompleted waiter = client.get_waiter('project_version_training_completed') wait(**kwargs) Polls "Rekognition.Client.describe_project_versions()" every 120 seconds until a successful state is reached. An error is raised after 360 failed checks. See also: AWS API Documentation **Request Syntax** waiter.wait( ProjectArn='string', VersionNames=[ 'string', ], NextToken='string', MaxResults=123, WaiterConfig={ 'Delay': 123, 'MaxAttempts': 123 } ) Parameters: * **ProjectArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the project that contains the model/adapter you want to describe. * **VersionNames** (*list*) -- A list of model or project version names that you want to describe. You can add up to 10 model or project version names to the list. If you don't specify a value, all project version descriptions are returned. A version name is part of a project version ARN. For example, "my-model.2020-01-21T09.10.15" is the version name in the following ARN. "arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123". * *(string) --* * **NextToken** (*string*) -- If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results. * **MaxResults** (*integer*) -- The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100. * **WaiterConfig** (*dict*) -- A dictionary that provides parameters to control waiting behavior. * **Delay** *(integer) --* The amount of time in seconds to wait between attempts. Default: 120 * **MaxAttempts** *(integer) --* The maximum number of attempts to be made. Default: 360 Returns: None Rekognition / Paginator / ListDatasetLabels ListDatasetLabels ***************** class Rekognition.Paginator.ListDatasetLabels paginator = client.get_paginator('list_dataset_labels') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_dataset_labels()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( DatasetArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **DatasetArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset that you want to use. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'DatasetLabelDescriptions': [ { 'LabelName': 'string', 'LabelStats': { 'EntryCount': 123, 'BoundingBoxCount': 123 } }, ], } **Response Structure** * *(dict) --* * **DatasetLabelDescriptions** *(list) --* A list of the labels in the dataset. * *(dict) --* Describes a dataset label. For more information, see ListDatasetLabels. * **LabelName** *(string) --* The name of the label. * **LabelStats** *(dict) --* Statistics about the label. * **EntryCount** *(integer) --* The total number of images that use the label. * **BoundingBoxCount** *(integer) --* The total number of images that have the label assigned to a bounding box.
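A minimal sketch of this paginator, printing each label with its image count; the dataset ARN is a placeholder:

import boto3

client = boto3.client('rekognition')
paginator = client.get_paginator('list_dataset_labels')

# Placeholder ARN -- substitute your dataset's ARN.
pages = paginator.paginate(
    DatasetArn='arn:aws:rekognition:us-east-1:123456789012:project/my-project/dataset/train/1234567890123'
)
for page in pages:
    for label in page['DatasetLabelDescriptions']:
        print(label['LabelName'], label['LabelStats']['EntryCount'])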
Rekognition / Paginator / ListStreamProcessors ListStreamProcessors ******************** class Rekognition.Paginator.ListStreamProcessors paginator = client.get_paginator('list_stream_processors') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_stream_processors()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'StreamProcessors': [ { 'Name': 'string', 'Status': 'STOPPED'|'STARTING'|'RUNNING'|'FAILED'|'STOPPING'|'UPDATING' }, ] } **Response Structure** * *(dict) --* * **StreamProcessors** *(list) --* List of stream processors that you have created. * *(dict) --* An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for "CreateStreamProcessor" describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results. * **Name** *(string) --* Name of the Amazon Rekognition stream processor. * **Status** *(string) --* Current status of the Amazon Rekognition stream processor. Rekognition / Paginator / ListCollections ListCollections *************** class Rekognition.Paginator.ListCollections paginator = client.get_paginator('list_collections') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_collections()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'CollectionIds': [ 'string', ], 'FaceModelVersions': [ 'string', ] } **Response Structure** * *(dict) --* * **CollectionIds** *(list) --* An array of collection IDs. * *(string) --* * **FaceModelVersions** *(list) --* Version numbers of the face detection models associated with the collections in the array "CollectionIds". For example, the value of "FaceModelVersions[2]" is the version number for the face detection model used by the collection in "CollectionId[2]". * *(string) --*
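A minimal sketch of this paginator, iterating over every collection in the current region:

import boto3

client = boto3.client('rekognition')
paginator = client.get_paginator('list_collections')

# CollectionIds and FaceModelVersions are parallel arrays in each page.
for page in paginator.paginate():
    for collection_id, model_version in zip(page['CollectionIds'], page['FaceModelVersions']):
        print(collection_id, model_version)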
Rekognition / Paginator / DescribeProjectVersions DescribeProjectVersions *********************** class Rekognition.Paginator.DescribeProjectVersions paginator = client.get_paginator('describe_project_versions') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.describe_project_versions()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ProjectArn='string', VersionNames=[ 'string', ], PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ProjectArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the project that contains the model/adapter you want to describe. * **VersionNames** (*list*) -- A list of model or project version names that you want to describe. You can add up to 10 model or project version names to the list. If you don't specify a value, all project version descriptions are returned. A version name is part of a project version ARN. For example, "my-model.2020-01-21T09.10.15" is the version name in the following ARN. "arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123". * *(string) --* * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'ProjectVersionDescriptions': [ { 'ProjectVersionArn': 'string', 'CreationTimestamp': datetime(2015, 1, 1), 'MinInferenceUnits': 123, 'Status': 'TRAINING_IN_PROGRESS'|'TRAINING_COMPLETED'|'TRAINING_FAILED'|'STARTING'|'RUNNING'|'FAILED'|'STOPPING'|'STOPPED'|'DELETING'|'COPYING_IN_PROGRESS'|'COPYING_COMPLETED'|'COPYING_FAILED'|'DEPRECATED'|'EXPIRED', 'StatusMessage': 'string', 'BillableTrainingTimeInSeconds': 123, 'TrainingEndTimestamp': datetime(2015, 1, 1), 'OutputConfig': { 'S3Bucket': 'string', 'S3KeyPrefix': 'string' }, 'TrainingDataResult': { 'Input': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] }, 'Output': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] }, 'Validation': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] } }, 'TestingDataResult': { 'Input': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ], 'AutoCreate': True|False }, 'Output': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ], 'AutoCreate': True|False }, 'Validation': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] } }, 'EvaluationResult': { 'F1Score': ..., 'Summary': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, 'ManifestSummary': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, 'KmsKeyId': 'string', 'MaxInferenceUnits': 123,
'SourceProjectVersionArn': 'string', 'VersionDescription': 'string', 'Feature': 'CONTENT_MODERATION'|'CUSTOM_LABELS', 'BaseModelVersion': 'string', 'FeatureConfig': { 'ContentModeration': { 'ConfidenceThreshold': ... } } }, ], } **Response Structure** * *(dict) --* * **ProjectVersionDescriptions** *(list) --* A list of project version descriptions. The list is sorted by the creation date and time of the project versions, latest to earliest. * *(dict) --* A description of a version of an Amazon Rekognition project. * **ProjectVersionArn** *(string) --* The Amazon Resource Name (ARN) of the project version. * **CreationTimestamp** *(datetime) --* The Unix datetime for the date and time that training started. * **MinInferenceUnits** *(integer) --* The minimum number of inference units used by the model. Applies only to Custom Labels projects. For more information, see StartProjectVersion. * **Status** *(string) --* The current status of the model version. * **StatusMessage** *(string) --* A descriptive message for an error or warning that occurred. * **BillableTrainingTimeInSeconds** *(integer) --* The duration, in seconds, that you were billed for a successful training of the model version. This value is only returned if the model version has been successfully trained. * **TrainingEndTimestamp** *(datetime) --* The Unix date and time that training of the model ended. * **OutputConfig** *(dict) --* The location where training results are saved. * **S3Bucket** *(string) --* The S3 bucket where training output is placed. * **S3KeyPrefix** *(string) --* The prefix applied to the training output files. * **TrainingDataResult** *(dict) --* Contains information about the training results. * **Input** *(dict) --* The training data that you supplied. * **Assets** *(list) --* A manifest file that contains references to the training images and ground-truth annotations. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **Output** *(dict) --* Reference to images (assets) that were actually used during training with trained model predictions. * **Assets** *(list) --* A manifest file that contains references to the training images and ground-truth annotations. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **Validation** *(dict) --* A manifest that you supplied for training, with validation results for each line. * **Assets** *(list) --* The assets that comprise the validation data. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **TestingDataResult** *(dict) --* Contains information about the testing results. * **Input** *(dict) --* The testing dataset that was supplied for training. * **Assets** *(list) --* The assets used for testing. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **AutoCreate** *(boolean) --* If specified, Rekognition splits the training dataset to create a test dataset for the training job. * **Output** *(dict) --* The subset of the dataset that was actually tested. Some images (assets) might not be tested due to file formatting and other issues. * **Assets** *(list) --* The assets used for testing. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object.
For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **AutoCreate** *(boolean) --* If specified, Rekognition splits the training dataset to create a test dataset for the training job. * **Validation** *(dict) --* The location of the data validation manifest. The data validation manifest is created for the test dataset during model training. * **Assets** *(list) --* The assets that comprise the validation data. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **EvaluationResult** *(dict) --* The training results. "EvaluationResult" is only returned if training is successful. * **F1Score** *(float) --* The F1 score for the evaluation of all labels. The F1 score metric evaluates the overall precision and recall performance of the model as a single value. A higher value indicates better precision and recall performance. A lower score indicates that precision, recall, or both are performing poorly. * **Summary** *(dict) --* The S3 bucket that contains the training summary. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **ManifestSummary** *(dict) --* The location of the summary manifest. The summary manifest provides aggregate data validation results for the training and test datasets. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version.
* **KmsKeyId** *(string) --* The identifier for the AWS Key Management Service key (AWS KMS key) that was used to encrypt the model during training. * **MaxInferenceUnits** *(integer) --* The maximum number of inference units Amazon Rekognition uses to auto-scale the model. Applies only to Custom Labels projects. For more information, see StartProjectVersion. * **SourceProjectVersionArn** *(string) --* If the model version was copied from a different project, "SourceProjectVersionArn" contains the ARN of the source model version. * **VersionDescription** *(string) --* A user-provided description of the project version. * **Feature** *(string) --* The feature that was customized. * **BaseModelVersion** *(string) --* The base detection model version used to create the project version. * **FeatureConfig** *(dict) --* Feature specific configuration that was applied during training. * **ContentModeration** *(dict) --* Configuration options for Custom Moderation training. * **ConfidenceThreshold** *(float) --* The confidence level you plan to use to identify if unsafe content is present during inference. Rekognition / Paginator / ListUsers ListUsers ********* class Rekognition.Paginator.ListUsers paginator = client.get_paginator('list_users') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_users()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( CollectionId='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **CollectionId** (*string*) -- **[REQUIRED]** The ID of an existing collection. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'Users': [ { 'UserId': 'string', 'UserStatus': 'ACTIVE'|'UPDATING'|'CREATING'|'CREATED' }, ], } **Response Structure** * *(dict) --* * **Users** *(list) --* List of UserIDs associated with the specified collection. * *(dict) --* Metadata of the user stored in a collection. * **UserId** *(string) --* A provided ID for the User. Unique within the collection. * **UserStatus** *(string) --* Communicates if the UserID has been updated with the latest set of faces to be associated with the UserID. Rekognition / Paginator / ListFaces ListFaces ********* class Rekognition.Paginator.ListFaces paginator = client.get_paginator('list_faces') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_faces()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( CollectionId='string', UserId='string', FaceIds=[ 'string', ], PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **CollectionId** (*string*) -- **[REQUIRED]** ID of the collection from which to list the faces. * **UserId** (*string*) -- An array of user IDs to filter results with when listing faces in a collection. * **FaceIds** (*list*) -- An array of face IDs to filter results with when listing faces in a collection.
* *(string) --* * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'Faces': [ { 'FaceId': 'string', 'BoundingBox': { 'Width': ..., 'Height': ..., 'Left': ..., 'Top': ... }, 'ImageId': 'string', 'ExternalImageId': 'string', 'Confidence': ..., 'IndexFacesModelVersion': 'string', 'UserId': 'string' }, ], 'FaceModelVersion': 'string' } **Response Structure** * *(dict) --* * **Faces** *(list) --* An array of "Face" objects. * *(dict) --* Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned. * **FaceId** *(string) --* Unique identifier that Amazon Rekognition assigns to the face. * **BoundingBox** *(dict) --* Bounding box of the face. * **Width** *(float) --* Width of the bounding box as a ratio of the overall image width. * **Height** *(float) --* Height of the bounding box as a ratio of the overall image height. * **Left** *(float) --* Left coordinate of the bounding box as a ratio of overall image width. * **Top** *(float) --* Top coordinate of the bounding box as a ratio of overall image height. * **ImageId** *(string) --* Unique identifier that Amazon Rekognition assigns to the input image. * **ExternalImageId** *(string) --* Identifier that you assign to all the faces in the input image. * **Confidence** *(float) --* Confidence level that the bounding box contains a face (and not a different object such as a tree). * **IndexFacesModelVersion** *(string) --* The version of the face detect and storage model that was used when indexing the face vector. * **UserId** *(string) --* Unique identifier assigned to the user. * **FaceModelVersion** *(string) --* Version number of the face detection model associated with the input collection ( "CollectionId"). Rekognition / Paginator / ListProjectPolicies ListProjectPolicies ******************* class Rekognition.Paginator.ListProjectPolicies paginator = client.get_paginator('list_project_policies') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_project_policies()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ProjectArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ProjectArn** (*string*) -- **[REQUIRED]** The ARN of the project for which you want to list the project policies. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. 
Return type: dict Returns: **Response Syntax** { 'ProjectPolicies': [ { 'ProjectArn': 'string', 'PolicyName': 'string', 'PolicyRevisionId': 'string', 'PolicyDocument': 'string', 'CreationTimestamp': datetime(2015, 1, 1), 'LastUpdatedTimestamp': datetime(2015, 1, 1) }, ], } **Response Structure** * *(dict) --* * **ProjectPolicies** *(list) --* A list of project policies attached to the project. * *(dict) --* Describes a project policy in the response from ListProjectPolicies. * **ProjectArn** *(string) --* The Amazon Resource Name (ARN) of the project to which the project policy is attached. * **PolicyName** *(string) --* The name of the project policy. * **PolicyRevisionId** *(string) --* The revision ID of the project policy. * **PolicyDocument** *(string) --* The JSON document for the project policy. * **CreationTimestamp** *(datetime) --* The Unix datetime for the creation of the project policy. * **LastUpdatedTimestamp** *(datetime) --* The Unix datetime for when the project policy was last updated. Rekognition / Paginator / DescribeProjects DescribeProjects **************** class Rekognition.Paginator.DescribeProjects paginator = client.get_paginator('describe_projects') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.describe_projects()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( ProjectNames=[ 'string', ], Features=[ 'CONTENT_MODERATION'|'CUSTOM_LABELS', ], PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **ProjectNames** (*list*) -- A list of the projects that you want Rekognition to describe. If you don't specify a value, the response includes descriptions for all the projects in your AWS account. * *(string) --* * **Features** (*list*) -- Specifies the type of customization to filter projects by. If no value is specified, CUSTOM_LABELS is used as a default. * *(string) --* * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'ProjectDescriptions': [ { 'ProjectArn': 'string', 'CreationTimestamp': datetime(2015, 1, 1), 'Status': 'CREATING'|'CREATED'|'DELETING', 'Datasets': [ { 'CreationTimestamp': datetime(2015, 1, 1), 'DatasetType': 'TRAIN'|'TEST', 'DatasetArn': 'string', 'Status': 'CREATE_IN_PROGRESS'|'CREATE_COMPLETE'|'CREATE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED'|'DELETE_IN_PROGRESS', 'StatusMessage': 'string', 'StatusMessageCode': 'SUCCESS'|'SERVICE_ERROR'|'CLIENT_ERROR' }, ], 'Feature': 'CONTENT_MODERATION'|'CUSTOM_LABELS', 'AutoUpdate': 'ENABLED'|'DISABLED' }, ], } **Response Structure** * *(dict) --* * **ProjectDescriptions** *(list) --* A list of project descriptions. The list is sorted by the date and time the projects are created. * *(dict) --* A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects. * **ProjectArn** *(string) --* The Amazon Resource Name (ARN) of the project. 
* **CreationTimestamp** *(datetime) --* The Unix timestamp for the date and time that the project was created. * **Status** *(string) --* The current status of the project. * **Datasets** *(list) --* Information about the training and test datasets in the project. * *(dict) --* Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription. * **CreationTimestamp** *(datetime) --* The Unix timestamp for the date and time that the dataset was created. * **DatasetType** *(string) --* The type of the dataset. * **DatasetArn** *(string) --* The Amazon Resource Name (ARN) for the dataset. * **Status** *(string) --* The status for the dataset. * **StatusMessage** *(string) --* The status message for the dataset. * **StatusMessageCode** *(string) --* The status message code for the dataset operation. If a service error occurs, try the API call again later. If a client error occurs, check the input parameters to the dataset API call that failed. * **Feature** *(string) --* Specifies the project that is being customized. * **AutoUpdate** *(string) --* Indicates whether automatic retraining will be attempted for the versions of the project. Applies only to adapters. Rekognition / Paginator / ListDatasetEntries ListDatasetEntries ****************** class Rekognition.Paginator.ListDatasetEntries paginator = client.get_paginator('list_dataset_entries') paginate(**kwargs) Creates an iterator that will paginate through responses from "Rekognition.Client.list_dataset_entries()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( DatasetArn='string', ContainsLabels=[ 'string', ], Labeled=True|False, SourceRefContains='string', HasErrors=True|False, PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **DatasetArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) for the dataset that you want to use. * **ContainsLabels** (*list*) -- Specifies a label filter for the response. The response includes an entry only if one or more of the labels in "ContainsLabels" exist in the entry. * *(string) --* * **Labeled** (*boolean*) -- Specify "true" to get only the JSON Lines where the image is labeled. Specify "false" to get only the JSON Lines where the image isn't labeled. If you don't specify "Labeled", "ListDatasetEntries" returns JSON Lines for labeled and unlabeled images. * **SourceRefContains** (*string*) -- If specified, "ListDatasetEntries" only returns JSON Lines where the value of "SourceRefContains" is part of the "source-ref" field. The "source-ref" field contains the Amazon S3 location of the image. You can use "SourceRefContains" for tasks such as getting the JSON Line for a single image, or getting JSON Lines for all images within a specific folder. * **HasErrors** (*boolean*) -- Specifies an error filter for the response. Specify "True" to only include entries that have errors. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'DatasetEntries': [ 'string', ], } **Response Structure** * *(dict) --* * **DatasetEntries** *(list) --* A list of entries (images) in the dataset. * *(string) --*
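A minimal sketch of this paginator, printing the JSON Line of every labeled entry in a dataset; the dataset ARN is a placeholder:

import boto3

client = boto3.client('rekognition')
paginator = client.get_paginator('list_dataset_entries')

# Placeholder ARN -- substitute your dataset's ARN.
pages = paginator.paginate(
    DatasetArn='arn:aws:rekognition:us-east-1:123456789012:project/my-project/dataset/train/1234567890123',
    Labeled=True
)
for page in pages:
    for json_line in page['DatasetEntries']:
        print(json_line)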
Rekognition / Client / delete_collection delete_collection ***************** Rekognition.Client.delete_collection(**kwargs) Deletes the specified collection. Note that this operation removes all faces in the collection. For an example, see Deleting a collection. This operation requires permissions to perform the "rekognition:DeleteCollection" action. See also: AWS API Documentation **Request Syntax** response = client.delete_collection( CollectionId='string' ) Parameters: **CollectionId** (*string*) -- **[REQUIRED]** ID of the collection to delete. Return type: dict Returns: **Response Syntax** { 'StatusCode': 123 } **Response Structure** * *(dict) --* * **StatusCode** *(integer) --* HTTP status code that indicates the result of the operation. **Exceptions** * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.ResourceNotFoundException" **Examples** This operation deletes a Rekognition collection. response = client.delete_collection( CollectionId='myphotos', ) print(response) Expected Output: { 'StatusCode': 200, 'ResponseMetadata': { '...': '...', }, } Rekognition / Client / detect_moderation_labels detect_moderation_labels ************************ Rekognition.Client.detect_moderation_labels(**kwargs) Detects unsafe content in a specified JPEG or PNG format image. Use "DetectModerationLabels" to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content. To filter images, use the labels returned by "DetectModerationLabels" to determine which types of content are appropriate. For information about moderation labels, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide. You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file. You can specify an adapter to use when retrieving label predictions by providing a "ProjectVersionArn" to the "ProjectVersion" argument. See also: AWS API Documentation **Request Syntax** response = client.detect_moderation_labels( Image={ 'Bytes': b'bytes', 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, MinConfidence=..., HumanLoopConfig={ 'HumanLoopName': 'string', 'FlowDefinitionArn': 'string', 'DataAttributes': { 'ContentClassifiers': [ 'FreeOfPersonallyIdentifiableInformation'|'FreeOfAdultContent', ] } }, ProjectVersion='string' ) Parameters: * **Image** (*dict*) -- **[REQUIRED]** The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the "Bytes" field. For more information, see Images in the Amazon Rekognition developer guide. * **Bytes** *(bytes) --* Blob of image bytes up to 5 MBs.
Note that the maximum image size you can pass to "DetectCustomLabels" is 4MB. * **S3Object** *(dict) --* Identifies an S3 object as the image source. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **MinConfidence** (*float*) -- Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value. If you don't specify "MinConfidence", the operation returns labels with confidence values greater than or equal to 50 percent. * **HumanLoopConfig** (*dict*) -- Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to. * **HumanLoopName** *(string) --* **[REQUIRED]** The name of the human review used for this image. This should be kept unique within a region. * **FlowDefinitionArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the flow definition. You can create a flow definition by using the Amazon SageMaker CreateFlowDefinition operation. * **DataAttributes** *(dict) --* Sets attributes of the input data. * **ContentClassifiers** *(list) --* Sets whether the input image is free of personally identifiable information. * *(string) --* * **ProjectVersion** (*string*) -- Identifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter. Return type: dict Returns: **Response Syntax** { 'ModerationLabels': [ { 'Confidence': ..., 'Name': 'string', 'ParentName': 'string', 'TaxonomyLevel': 123 }, ], 'ModerationModelVersion': 'string', 'HumanLoopActivationOutput': { 'HumanLoopArn': 'string', 'HumanLoopActivationReasons': [ 'string', ], 'HumanLoopActivationConditionsEvaluationResults': 'string' }, 'ProjectVersion': 'string', 'ContentTypes': [ { 'Confidence': ..., 'Name': 'string' }, ] } **Response Structure** * *(dict) --* * **ModerationLabels** *(list) --* Array of detected Moderation labels. For video operations, this includes the time, in milliseconds from the start of the video, they were detected. * *(dict) --* Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide. * **Confidence** *(float) --* Specifies the confidence that Amazon Rekognition has that the label has been correctly identified. If you don't specify the "MinConfidence" parameter in the call to "DetectModerationLabels", the operation returns labels with a confidence value greater than or equal to 50 percent. * **Name** *(string) --* The label name for the type of unsafe content detected in the image. * **ParentName** *(string) --* The name for the parent label. Labels at the top level of the hierarchy have the parent label "". * **TaxonomyLevel** *(integer) --* The level of the moderation label with regard to its taxonomy, from 1 to 3. * **ModerationModelVersion** *(string) --* Version number of the base moderation detection model that was used to detect unsafe content. * **HumanLoopActivationOutput** *(dict) --* Shows the results of the human in the loop evaluation. * **HumanLoopArn** *(string) --* The Amazon Resource Name (ARN) of the HumanLoop created. * **HumanLoopActivationReasons** *(list) --* Shows if and why human review was needed. * *(string) --* * **HumanLoopActivationConditionsEvaluationResults** *(string) --* Shows the result of condition evaluations, including those conditions which activated a human review. * **ProjectVersion** *(string) --* Identifier of the custom adapter that was used during inference. If during inference the adapter was EXPIRED, then the parameter will not be returned, indicating that a base moderation detection project version was used. * **ContentTypes** *(list) --* A list of predicted results for the type of content an image contains. For example, the image content might be from animation, sports, or a video game. * *(dict) --* Contains information regarding the confidence and name of a detected content type. * **Confidence** *(float) --* The confidence level of the label given. * **Name** *(string) --* The name of the label. **Exceptions** * "Rekognition.Client.exceptions.InvalidS3ObjectException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.ImageTooLargeException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.InvalidImageFormatException" * "Rekognition.Client.exceptions.HumanLoopQuotaExceededException" * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ResourceNotReadyException"
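A minimal sketch of calling this operation against an image stored in Amazon S3; the bucket name and object key are placeholders:

import boto3

client = boto3.client('rekognition')

# 'amzn-s3-demo-bucket' and 'photo.jpg' are placeholders for your own bucket and key.
response = client.detect_moderation_labels(
    Image={'S3Object': {'Bucket': 'amzn-s3-demo-bucket', 'Name': 'photo.jpg'}},
    MinConfidence=60.0
)
for label in response['ModerationLabels']:
    print(label['Name'], label['ParentName'], label['Confidence'])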
Rekognition / Client / stop_stream_processor stop_stream_processor ********************* Rekognition.Client.stop_stream_processor(**kwargs) Stops a running stream processor that was created by CreateStreamProcessor. See also: AWS API Documentation **Request Syntax** response = client.stop_stream_processor( Name='string' ) Parameters: **Name** (*string*) -- **[REQUIRED]** The name of a stream processor created by CreateStreamProcessor. Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ResourceInUseException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" Rekognition / Client / get_paginator get_paginator ************* Rekognition.Client.get_paginator(operation_name) Create a paginator for an operation. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object.
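A minimal sketch that checks whether an operation is pageable before requesting a paginator; the collection ID is a placeholder:

import boto3

client = boto3.client('rekognition')

if client.can_paginate('list_faces'):
    paginator = client.get_paginator('list_faces')
    # 'myphotos' is a placeholder collection ID.
    for page in paginator.paginate(CollectionId='myphotos'):
        print(len(page['Faces']))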
Rekognition / Client / detect_custom_labels detect_custom_labels ******************** Rekognition.Client.detect_custom_labels(**kwargs) Note: This operation applies only to Amazon Rekognition Custom Labels. Detects custom labels in a supplied image by using an Amazon Rekognition Custom Labels model. You specify which version of the model to use by using the "ProjectVersionArn" input parameter. You pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file. For each object that the model version detects on an image, the API returns a ( "CustomLabel") object in an array ( "CustomLabels"). Each "CustomLabel" object provides the label name ( "Name"), the level of confidence that the image contains the object ( "Confidence"), and object location information, if it exists, for the label on the image ( "Geometry"). To filter labels that are returned, specify a value for "MinConfidence". "DetectCustomLabels" only returns labels with a confidence that's higher than the specified value. The value of "MinConfidence" maps to the assumed threshold values created during training. For more information, see *Assumed threshold* in the Amazon Rekognition Custom Labels Developer Guide. Amazon Rekognition Custom Labels metrics express an assumed threshold as a floating point value between 0-1. The range of "MinConfidence" normalizes the threshold value to a percentage value (0-100). Confidence responses from "DetectCustomLabels" are also returned as a percentage. You can use "MinConfidence" to change the precision and recall of your model. For more information, see *Analyzing an image* in the Amazon Rekognition Custom Labels Developer Guide. If you don't specify a value for "MinConfidence", "DetectCustomLabels" returns labels based on the assumed threshold of each label. This is a stateless API operation. That is, the operation does not persist any data. This operation requires permissions to perform the "rekognition:DetectCustomLabels" action. For more information, see *Analyzing an image* in the Amazon Rekognition Custom Labels Developer Guide. See also: AWS API Documentation **Request Syntax** response = client.detect_custom_labels( ProjectVersionArn='string', Image={ 'Bytes': b'bytes', 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, MaxResults=123, MinConfidence=... ) Parameters: * **ProjectVersionArn** (*string*) -- **[REQUIRED]** The ARN of the model version that you want to use. Only models associated with Custom Labels projects are accepted by the operation. If a provided ARN refers to a model version associated with a project for a different feature type, then an InvalidParameterException is returned. * **Image** (*dict*) -- **[REQUIRED]** Provides the input image either as bytes or an S3 object. You pass image bytes to an Amazon Rekognition API operation by using the "Bytes" property. For example, you would use the "Bytes" property to pass an image loaded from a local file system. Image bytes passed by using the "Bytes" property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide. You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the "S3Object" property. Images stored in an S3 bucket do not need to be base64-encoded.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bytes** *(bytes) --* Blob of image bytes up to 5 MBs. Note that the maximum image size you can pass to "DetectCustomLabels" is 4MB. * **S3Object** *(dict) --* Identifies an S3 object as the image source. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **MaxResults** (*integer*) -- Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest. * **MinConfidence** (*float*) -- Specifies the minimum confidence level for the labels to return. "DetectCustomLabels" doesn't return any labels with a confidence value that's lower than this specified value. If you specify a value of 0, "DetectCustomLabels" returns all labels, regardless of the assumed threshold applied to each label. If you don't specify a value for "MinConfidence", "DetectCustomLabels" returns labels based on the assumed threshold of each label. Return type: dict Returns: **Response Syntax** { 'CustomLabels': [ { 'Name': 'string', 'Confidence': ..., 'Geometry': { 'BoundingBox': { 'Width': ..., 'Height': ..., 'Left': ..., 'Top': ... }, 'Polygon': [ { 'X': ..., 'Y': ... }, ] } }, ] } **Response Structure** * *(dict) --* * **CustomLabels** *(list) --* An array of custom labels detected in the input image. * *(dict) --* A custom label detected in an image by a call to DetectCustomLabels. * **Name** *(string) --* The name of the custom label. * **Confidence** *(float) --* The confidence that the model has in the detection of the custom label. The range is 0-100. A higher value indicates a higher confidence. * **Geometry** *(dict) --* The location of the detected object on the image that corresponds to the custom label. Includes an axis aligned coarse bounding box surrounding the object and a finer grain polygon for more accurate spatial information. * **BoundingBox** *(dict) --* An axis-aligned coarse representation of the detected item's location on the image. * **Width** *(float) --* Width of the bounding box as a ratio of the overall image width. * **Height** *(float) --* Height of the bounding box as a ratio of the overall image height. * **Left** *(float) --* Left coordinate of the bounding box as a ratio of overall image width. * **Top** *(float) --* Top coordinate of the bounding box as a ratio of overall image height. * **Polygon** *(list) --* Within the bounding box, a fine-grained polygon around the detected item. * *(dict) --* The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image. An array of "Point" objects makes up a "Polygon". 
A "Polygon" is returned by DetectText and by DetectCustomLabels "Polygon" represents a fine- grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide. * **X** *(float) --* The value of the X coordinate for a point on a "Polygon". * **Y** *(float) --* The value of the Y coordinate for a point on a "Polygon". **Exceptions** * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ResourceNotReadyException" * "Rekognition.Client.exceptions.InvalidS3ObjectException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.ImageTooLargeException" * "Rekognition.Client.exceptions.LimitExceededException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededExce ption" * "Rekognition.Client.exceptions.InvalidImageFormatException" Rekognition / Client / search_faces search_faces ************ Rekognition.Client.search_faces(**kwargs) For a given input face ID, searches for matching faces in the collection the face belongs to. You get a face ID when you add a face to the collection using the IndexFaces operation. The operation compares the features of the input face with faces in the specified collection. Note: You can also search faces without indexing faces by using the "SearchFacesByImage" operation. The operation response returns an array of faces that match, ordered by similarity score with the highest similarity first. More specifically, it is an array of metadata for each face match that is found. Along with the metadata, the response also includes a "confidence" value for each face match, indicating the confidence that the specific face matches the input face. For an example, see Searching for a face using its face ID in the Amazon Rekognition Developer Guide. This operation requires permissions to perform the "rekognition:SearchFaces" action. See also: AWS API Documentation **Request Syntax** response = client.search_faces( CollectionId='string', FaceId='string', MaxFaces=123, FaceMatchThreshold=... ) Parameters: * **CollectionId** (*string*) -- **[REQUIRED]** ID of the collection the face belongs to. * **FaceId** (*string*) -- **[REQUIRED]** ID of a face to find matches for in the collection. * **MaxFaces** (*integer*) -- Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match. * **FaceMatchThreshold** (*float*) -- Optional value specifying the minimum confidence in the face match to return. For example, don't return any matches where confidence in matches is less than 70%. The default value is 80%. Return type: dict Returns: **Response Syntax** { 'SearchedFaceId': 'string', 'FaceMatches': [ { 'Similarity': ..., 'Face': { 'FaceId': 'string', 'BoundingBox': { 'Width': ..., 'Height': ..., 'Left': ..., 'Top': ... }, 'ImageId': 'string', 'ExternalImageId': 'string', 'Confidence': ..., 'IndexFacesModelVersion': 'string', 'UserId': 'string' } }, ], 'FaceModelVersion': 'string' } **Response Structure** * *(dict) --* * **SearchedFaceId** *(string) --* ID of the face that was searched for matches in a collection. * **FaceMatches** *(list) --* An array of faces that matched the input face, along with the confidence in the match. * *(dict) --* Provides face metadata. 
In addition, it also provides the confidence in the match of this face with the input face. * **Similarity** *(float) --* Confidence in the match of this face with the input face. * **Face** *(dict) --* Describes the face properties such as the bounding box, face ID, image ID of the source image, and external image ID that you assigned. * **FaceId** *(string) --* Unique identifier that Amazon Rekognition assigns to the face. * **BoundingBox** *(dict) --* Bounding box of the face. * **Width** *(float) --* Width of the bounding box as a ratio of the overall image width. * **Height** *(float) --* Height of the bounding box as a ratio of the overall image height. * **Left** *(float) --* Left coordinate of the bounding box as a ratio of overall image width. * **Top** *(float) --* Top coordinate of the bounding box as a ratio of overall image height. * **ImageId** *(string) --* Unique identifier that Amazon Rekognition assigns to the input image. * **ExternalImageId** *(string) --* Identifier that you assign to all the faces in the input image. * **Confidence** *(float) --* Confidence level that the bounding box contains a face (and not a different object such as a tree). * **IndexFacesModelVersion** *(string) --* The version of the face detect and storage model that was used when indexing the face vector. * **UserId** *(string) --* Unique identifier assigned to the user. * **FaceModelVersion** *(string) --* Version number of the face detection model associated with the input collection ( "CollectionId"). **Exceptions** * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.ResourceNotFoundException" **Examples** This operation searches for matching faces in the collection the supplied face belongs to. response = client.search_faces( CollectionId='myphotos', FaceId='70008e50-75e4-55d0-8e80-363fb73b3a14', FaceMatchThreshold=90, MaxFaces=10, ) print(response) Expected Output: { 'FaceMatches': [ { 'Face': { 'BoundingBox': { 'Height': 0.3259260058403015, 'Left': 0.5144439935684204, 'Top': 0.15111100673675537, 'Width': 0.24444399774074554, }, 'Confidence': 99.99949645996094, 'FaceId': '8be04dba-4e58-520d-850e-9eae4af70eb2', 'ImageId': '465f4e93-763e-51d0-b030-b9667a2d94b1', }, 'Similarity': 99.97222137451172, }, { 'Face': { 'BoundingBox': { 'Height': 0.16555599868297577, 'Left': 0.30963000655174255, 'Top': 0.7066670060157776, 'Width': 0.22074100375175476, }, 'Confidence': 100, 'FaceId': '29a75abe-397b-5101-ba4f-706783b2246c', 'ImageId': '147fdf82-7a71-52cf-819b-e786c7b9746e', }, 'Similarity': 97.04154968261719, }, { 'Face': { 'BoundingBox': { 'Height': 0.18888899683952332, 'Left': 0.3783380091190338, 'Top': 0.2355560064315796, 'Width': 0.25222599506378174, }, 'Confidence': 99.9999008178711, 'FaceId': '908544ad-edc3-59df-8faf-6a87cc256cf5', 'ImageId': '3c731605-d772-541a-a5e7-0375dbc68a07', }, 'Similarity': 95.94520568847656, }, ], 'SearchedFaceId': '70008e50-75e4-55d0-8e80-363fb73b3a14', 'ResponseMetadata': { '...': '...', }, } Rekognition / Client / start_project_version start_project_version ********************* Rekognition.Client.start_project_version(**kwargs) Note: This operation applies only to Amazon Rekognition Custom Labels. Starts the running of a model version. Starting a model takes a while to complete.
To check the current state of the model, use DescribeProjectVersions. Once the model is running, you can detect custom labels in new images by calling DetectCustomLabels. Note: You are charged for the amount of time that the model is running. To stop a running model, call StopProjectVersion. This operation requires permissions to perform the "rekognition:StartProjectVersion" action. See also: AWS API Documentation **Request Syntax** response = client.start_project_version( ProjectVersionArn='string', MinInferenceUnits=123, MaxInferenceUnits=123 ) Parameters: * **ProjectVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the model version that you want to start. * **MinInferenceUnits** (*integer*) -- **[REQUIRED]** The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use. * **MaxInferenceUnits** (*integer*) -- The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model. Return type: dict Returns: **Response Syntax** { 'Status': 'TRAINING_IN_PROGRESS'|'TRAINING_COMPLETED'|'TRAINING_FAILED'|'STARTING'|'RUNNING'|'FAILED'|'STOPPING'|'STOPPED'|'DELETING'|'COPYING_IN_PROGRESS'|'COPYING_COMPLETED'|'COPYING_FAILED'|'DEPRECATED'|'EXPIRED' } **Response Structure** * *(dict) --* * **Status** *(string) --* The current running status of the model. **Exceptions** * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ResourceInUseException" * "Rekognition.Client.exceptions.LimitExceededException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" Rekognition / Client / search_users search_users ************ Rekognition.Client.search_users(**kwargs) Searches for UserIDs within a collection based on a "FaceId" or "UserId". This API can be used to find the closest UserID (with the highest similarity) to associate a face. The request must be provided with either "FaceId" or "UserId". The operation returns an array of UserIDs that match the "FaceId" or "UserId", ordered by similarity score with the highest similarity first. See also: AWS API Documentation **Request Syntax** response = client.search_users( CollectionId='string', UserId='string', FaceId='string', UserMatchThreshold=..., MaxUsers=123 ) Parameters: * **CollectionId** (*string*) -- **[REQUIRED]** The ID of an existing collection containing the UserID, used with a UserId or FaceId. If a FaceId is provided, UserId isn’t required to be present in the Collection. * **UserId** (*string*) -- ID for the existing User. * **FaceId** (*string*) -- ID for the existing face. * **UserMatchThreshold** (*float*) -- Optional value that specifies the minimum confidence in the matched UserID to return. Default value of 80. * **MaxUsers** (*integer*) -- Maximum number of identities to return.
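For example, a minimal usage sketch (the collection ID and face ID below are placeholders, not values defined by this reference):

import boto3

client = boto3.client('rekognition')

# Search by a previously indexed face ID; you could pass UserId instead.
response = client.search_users(
    CollectionId='my-collection',
    FaceId='f5817d37-94f6-4335-bfee-6cf79a3d806e',
    UserMatchThreshold=80,
    MaxUsers=5,
)
for match in response['UserMatches']:
    print(match['User']['UserId'], match['Similarity'])

The matches print in descending order of similarity, because the service returns the highest-similarity UserIDs first.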
Return type: dict Returns: **Response Syntax** { 'UserMatches': [ { 'Similarity': ..., 'User': { 'UserId': 'string', 'UserStatus': 'ACTIVE'|'UPDATING'|'CREATING'|'CREATED' } }, ], 'FaceModelVersion': 'string', 'SearchedFace': { 'FaceId': 'string' }, 'SearchedUser': { 'UserId': 'string' } } **Response Structure** * *(dict) --* * **UserMatches** *(list) --* An array of UserMatch objects that matched the input face along with the confidence in the match. The array will be empty if there are no matches. * *(dict) --* Provides UserID metadata along with the confidence in the match of this UserID with the input face. * **Similarity** *(float) --* Confidence in the match of this UserID with the input face. * **User** *(dict) --* Describes the UserID metadata. * **UserId** *(string) --* A provided ID for the UserID. Unique within the collection. * **UserStatus** *(string) --* The status of the user matched to a provided FaceID. * **FaceModelVersion** *(string) --* Version number of the face detection model associated with the input CollectionId. * **SearchedFace** *(dict) --* Contains the ID of a face that was used to search for matches in a collection. * **FaceId** *(string) --* Unique identifier assigned to the face. * **SearchedUser** *(dict) --* Contains the ID of the UserID that was used to search for matches in a collection. * **UserId** *(string) --* A provided ID for the UserID. Unique within the collection. **Exceptions** * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" Rekognition / Client / delete_project_policy delete_project_policy ********************* Rekognition.Client.delete_project_policy(**kwargs) Note: This operation applies only to Amazon Rekognition Custom Labels. Deletes an existing project policy. To get a list of project policies attached to a project, call ListProjectPolicies. To attach a project policy to a project, call PutProjectPolicy. This operation requires permissions to perform the "rekognition:DeleteProjectPolicy" action. See also: AWS API Documentation **Request Syntax** response = client.delete_project_policy( ProjectArn='string', PolicyName='string', PolicyRevisionId='string' ) Parameters: * **ProjectArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the project that the project policy you want to delete is attached to. * **PolicyName** (*string*) -- **[REQUIRED]** The name of the policy that you want to delete. * **PolicyRevisionId** (*string*) -- The ID of the project policy revision that you want to delete.
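As a usage sketch, assuming a hypothetical project ARN and policy name, you can look up the current revision ID with ListProjectPolicies before deleting:

import boto3

client = boto3.client('rekognition')

project_arn = ('arn:aws:rekognition:us-east-1:111122223333:'
               'project/my-project/1234567890123')  # placeholder

# Find the policy's current revision ID, then delete that exact revision.
policies = client.list_project_policies(ProjectArn=project_arn)
for policy in policies['ProjectPolicies']:
    if policy['PolicyName'] == 'my-policy':  # placeholder name
        client.delete_project_policy(
            ProjectArn=project_arn,
            PolicyName='my-policy',
            PolicyRevisionId=policy['PolicyRevisionId'],
        )

Passing the revision ID ensures you delete the revision you inspected rather than a newer one.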
Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.InvalidPolicyRevisionIdException" Rekognition / Client / start_content_moderation start_content_moderation ************************ Rekognition.Client.start_content_moderation(**kwargs) Starts asynchronous detection of inappropriate, unwanted, or offensive content in a stored video. For a list of moderation labels in Amazon Rekognition, see Using the image and video moderation APIs. Amazon Rekognition Video can moderate content in a video stored in an Amazon S3 bucket. Use Video to specify the bucket name and the filename of the video. "StartContentModeration" returns a job identifier ( "JobId") which you use to get the results of the analysis. When content analysis is finished, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic that you specify in "NotificationChannel". To get the results of the content analysis, first check that the status value published to the Amazon SNS topic is "SUCCEEDED". If so, call GetContentModeration and pass the job identifier ( "JobId") from the initial call to "StartContentModeration". For more information, see Moderating content in the Amazon Rekognition Developer Guide. See also: AWS API Documentation **Request Syntax** response = client.start_content_moderation( Video={ 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, MinConfidence=..., ClientRequestToken='string', NotificationChannel={ 'SNSTopicArn': 'string', 'RoleArn': 'string' }, JobTag='string' ) Parameters: * **Video** (*dict*) -- **[REQUIRED]** The video in which you want to detect inappropriate, unwanted, or offensive content. The video must be stored in an Amazon S3 bucket. * **S3Object** *(dict) --* The Amazon S3 bucket name and file name for the video. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **MinConfidence** (*float*) -- Specifies the minimum confidence that Amazon Rekognition must have in order to return a moderated content label. Confidence represents how certain Amazon Rekognition is that the moderated content is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition doesn't return any moderated content labels with a confidence level lower than this specified value. If you don't specify "MinConfidence", "GetContentModeration" returns labels with confidence values greater than or equal to 50 percent. * **ClientRequestToken** (*string*) -- Idempotent token used to identify the start request. If you use the same token with multiple "StartContentModeration" requests, the same "JobId" is returned. Use "ClientRequestToken" to prevent the same job from being accidentally started more than once. * **NotificationChannel** (*dict*) -- The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the content analysis to.
The Amazon SNS topic must have a topic name that begins with *AmazonRekognition* if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. * **SNSTopicArn** *(string) --* **[REQUIRED]** The Amazon SNS topic to which Amazon Rekognition posts the completion status. * **RoleArn** *(string) --* **[REQUIRED]** The ARN of an IAM role that gives Amazon Rekognition publishing permissions to the Amazon SNS topic. * **JobTag** (*string*) -- An identifier you specify that's returned in the completion notification that's published to your Amazon Simple Notification Service topic. For example, you can use "JobTag" to group related jobs and identify them in the completion notification. Return type: dict Returns: **Response Syntax** { 'JobId': 'string' } **Response Structure** * *(dict) --* * **JobId** *(string) --* The identifier for the content analysis job. Use "JobId" to identify the job in a subsequent call to "GetContentModeration". **Exceptions** * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.IdempotentParameterMismatchException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.InvalidS3ObjectException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.VideoTooLargeException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.LimitExceededException" * "Rekognition.Client.exceptions.ThrottlingException" Rekognition / Client / can_paginate can_paginate ************ Rekognition.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. Rekognition / Client / create_dataset create_dataset ************** Rekognition.Client.create_dataset(**kwargs) Note: This operation applies only to Amazon Rekognition Custom Labels. Creates a new Amazon Rekognition Custom Labels dataset. You can create a dataset by using an Amazon Sagemaker format manifest file or by copying an existing Amazon Rekognition Custom Labels dataset. To create a training dataset for a project, specify "TRAIN" for the value of "DatasetType". To create the test dataset for a project, specify "TEST" for the value of "DatasetType". The response from "CreateDataset" is the Amazon Resource Name (ARN) for the dataset. Creating a dataset takes a while to complete. Use DescribeDataset to check the current status. The dataset was created successfully if the value of "Status" is "CREATE_COMPLETE". To check if any non-terminal errors occurred, call ListDatasetEntries and check for the presence of "errors" lists in the JSON Lines. Dataset creation fails if a terminal error occurs ( "Status" = "CREATE_FAILED"). Currently, you can't access the terminal error information. For more information, see Creating dataset in the *Amazon Rekognition Custom Labels Developer Guide*. This operation requires permissions to perform the "rekognition:CreateDataset" action. If you want to copy an existing dataset, you also require permission to perform the "rekognition:ListDatasetEntries" action.
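Before the formal request syntax below, a minimal sketch of creating a training dataset from a Ground Truth manifest (the project ARN, bucket, and key are placeholders):

import time
import boto3

client = boto3.client('rekognition')

response = client.create_dataset(
    ProjectArn=('arn:aws:rekognition:us-east-1:111122223333:'
                'project/my-project/1234567890123'),  # placeholder
    DatasetType='TRAIN',
    DatasetSource={
        'GroundTruthManifest': {
            'S3Object': {'Bucket': 'my-bucket', 'Name': 'manifests/train.manifest'}
        }
    },
)

# Creation is asynchronous; poll DescribeDataset until a terminal status.
while True:
    status = client.describe_dataset(
        DatasetArn=response['DatasetArn'])['DatasetDescription']['Status']
    if status in ('CREATE_COMPLETE', 'CREATE_FAILED'):
        break
    time.sleep(5)
print(status)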
See also: AWS API Documentation **Request Syntax** response = client.create_dataset( DatasetSource={ 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, 'DatasetArn': 'string' }, DatasetType='TRAIN'|'TEST', ProjectArn='string', Tags={ 'string': 'string' } ) Parameters: * **DatasetSource** (*dict*) -- The source files for the dataset. You can specify the ARN of an existing dataset or specify the Amazon S3 bucket location of an Amazon Sagemaker format manifest file. If you don't specify "datasetSource", an empty dataset is created. To add labeled images to the dataset, you can use the console or call UpdateDatasetEntries. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **DatasetArn** *(string) --* The ARN of an Amazon Rekognition Custom Labels dataset that you want to copy. * **DatasetType** (*string*) -- **[REQUIRED]** The type of the dataset. Specify "TRAIN" to create a training dataset. Specify "TEST" to create a test dataset. * **ProjectArn** (*string*) -- **[REQUIRED]** The ARN of the Amazon Rekognition Custom Labels project to which you want to assign the dataset. * **Tags** (*dict*) -- A set of tags (key-value pairs) that you want to attach to the dataset. * *(string) --* * *(string) --* Return type: dict Returns: **Response Syntax** { 'DatasetArn': 'string' } **Response Structure** * *(dict) --* * **DatasetArn** *(string) --* The ARN of the created Amazon Rekognition Custom Labels dataset. **Exceptions** * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.LimitExceededException" * "Rekognition.Client.exceptions.InvalidS3ObjectException" * "Rekognition.Client.exceptions.ResourceAlreadyExistsException" * "Rekognition.Client.exceptions.ResourceNotFoundException" Rekognition / Client / describe_project_versions describe_project_versions ************************* Rekognition.Client.describe_project_versions(**kwargs) Lists and describes the versions of an Amazon Rekognition project. You can specify up to 10 model or adapter versions in "ProjectVersionArns". If you don't specify a value, descriptions for all model/adapter versions in the project are returned. This operation requires permissions to perform the "rekognition:DescribeProjectVersions" action. See also: AWS API Documentation **Request Syntax** response = client.describe_project_versions( ProjectArn='string', VersionNames=[ 'string', ], NextToken='string', MaxResults=123 ) Parameters: * **ProjectArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the project that contains the model/adapter you want to describe.
* **VersionNames** (*list*) -- A list of model or project version names that you want to describe. You can add up to 10 model or project version names to the list. If you don't specify a value, all project version descriptions are returned. A version name is part of a project version ARN. For example, "my-model.2020-01-21T09.10.15" is the version name in the following ARN: "arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123". * *(string) --* * **NextToken** (*string*) -- If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results. * **MaxResults** (*integer*) -- The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100. Return type: dict Returns: **Response Syntax** { 'ProjectVersionDescriptions': [ { 'ProjectVersionArn': 'string', 'CreationTimestamp': datetime(2015, 1, 1), 'MinInferenceUnits': 123, 'Status': 'TRAINING_IN_PROGRESS'|'TRAINING_COMPLETED'|'TRAINING_FAILED'|'STARTING'|'RUNNING'|'FAILED'|'STOPPING'|'STOPPED'|'DELETING'|'COPYING_IN_PROGRESS'|'COPYING_COMPLETED'|'COPYING_FAILED'|'DEPRECATED'|'EXPIRED', 'StatusMessage': 'string', 'BillableTrainingTimeInSeconds': 123, 'TrainingEndTimestamp': datetime(2015, 1, 1), 'OutputConfig': { 'S3Bucket': 'string', 'S3KeyPrefix': 'string' }, 'TrainingDataResult': { 'Input': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] }, 'Output': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] }, 'Validation': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] } }, 'TestingDataResult': { 'Input': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ], 'AutoCreate': True|False }, 'Output': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ], 'AutoCreate': True|False }, 'Validation': { 'Assets': [ { 'GroundTruthManifest': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, ] } }, 'EvaluationResult': { 'F1Score': ..., 'Summary': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } } }, 'ManifestSummary': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, 'KmsKeyId': 'string', 'MaxInferenceUnits': 123, 'SourceProjectVersionArn': 'string', 'VersionDescription': 'string', 'Feature': 'CONTENT_MODERATION'|'CUSTOM_LABELS', 'BaseModelVersion': 'string', 'FeatureConfig': { 'ContentModeration': { 'ConfidenceThreshold': ... } } }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **ProjectVersionDescriptions** *(list) --* A list of project version descriptions. The list is sorted by the creation date and time of the project versions, latest to earliest. * *(dict) --* A description of an Amazon Rekognition project version. * **ProjectVersionArn** *(string) --* The Amazon Resource Name (ARN) of the project version. * **CreationTimestamp** *(datetime) --* The Unix datetime for the date and time that training started.
* **MinInferenceUnits** *(integer) --* The minimum number of inference units used by the model. Applies only to Custom Labels projects. For more information, see StartProjectVersion. * **Status** *(string) --* The current status of the model version. * **StatusMessage** *(string) --* A descriptive message for an error or warning that occurred. * **BillableTrainingTimeInSeconds** *(integer) --* The duration, in seconds, that you were billed for a successful training of the model version. This value is only returned if the model version has been successfully trained. * **TrainingEndTimestamp** *(datetime) --* The Unix date and time that training of the model ended. * **OutputConfig** *(dict) --* The location where training results are saved. * **S3Bucket** *(string) --* The S3 bucket where training output is placed. * **S3KeyPrefix** *(string) --* The prefix applied to the training output files. * **TrainingDataResult** *(dict) --* Contains information about the training results. * **Input** *(dict) --* The training data that you supplied. * **Assets** *(list) --* A manifest file that contains references to the training images and ground-truth annotations. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **Output** *(dict) --* Reference to images (assets) that were actually used during training with trained model predictions. * **Assets** *(list) --* A manifest file that contains references to the training images and ground-truth annotations. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **Validation** *(dict) --* A manifest that you supplied for training, with validation results for each line. * **Assets** *(list) --* The assets that comprise the validation data. * *(dict) --* Assets are the images that you use to train and evaluate a model version. 
Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **TestingDataResult** *(dict) --* Contains information about the testing results. * **Input** *(dict) --* The testing dataset that was supplied for training. * **Assets** *(list) --* The assets used for testing. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **AutoCreate** *(boolean) --* If specified, Rekognition splits training dataset to create a test dataset for the training job. * **Output** *(dict) --* The subset of the dataset that was actually tested. Some images (assets) might not be tested due to file formatting and other issues. * **Assets** *(list) --* The assets used for testing. * *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **AutoCreate** *(boolean) --* If specified, Rekognition splits training dataset to create a test dataset for the training job. * **Validation** *(dict) --* The location of the data validation manifest. The data validation manifest is created for the test dataset during model training. * **Assets** *(list) --* The assets that comprise the validation data. 
* *(dict) --* Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. * **GroundTruthManifest** *(dict) --* The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **EvaluationResult** *(dict) --* The training results. "EvaluationResult" is only returned if training is successful. * **F1Score** *(float) --* The F1 score for the evaluation of all labels. The F1 score metric evaluates the overall precision and recall performance of the model as a single value. A higher value indicates better precision and recall performance. A lower score indicates that precision, recall, or both are performing poorly. * **Summary** *(dict) --* The S3 bucket that contains the training summary. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **ManifestSummary** *(dict) --* The location of the summary manifest. The summary manifest provides aggregate data validation results for the training and test datasets. * **S3Object** *(dict) --* Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **KmsKeyId** *(string) --* The identifier for the AWS Key Management Service key (AWS KMS key) that was used to encrypt the model during training. * **MaxInferenceUnits** *(integer) --* The maximum number of inference units Amazon Rekognition uses to auto-scale the model. Applies only to Custom Labels projects. For more information, see StartProjectVersion. * **SourceProjectVersionArn** *(string) --* If the model version was copied from a different project, "SourceProjectVersionArn" contains the ARN of the source model version. * **VersionDescription** *(string) --* A user-provided description of the project version. * **Feature** *(string) --* The feature that was customized.
* **BaseModelVersion** *(string) --* The base detection model version used to create the project version. * **FeatureConfig** *(dict) --* Feature specific configuration that was applied during training. * **ContentModeration** *(dict) --* Configuration options for Custom Moderation training. * **ConfidenceThreshold** *(float) --* The confidence level you plan to use to identify if unsafe content is present during inference. * **NextToken** *(string) --* If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results. **Exceptions** * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.InvalidPaginationTokenException" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" Rekognition / Client / get_face_detection get_face_detection ****************** Rekognition.Client.get_face_detection(**kwargs) Gets face detection results for an Amazon Rekognition Video analysis started by StartFaceDetection. Face detection with Amazon Rekognition Video is an asynchronous operation. You start face detection by calling StartFaceDetection which returns a job identifier ( "JobId"). When the face detection operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to "StartFaceDetection". To get the results of the face detection operation, first check that the status value published to the Amazon SNS topic is "SUCCEEDED". If so, call GetFaceDetection and pass the job identifier ( "JobId") from the initial call to "StartFaceDetection". "GetFaceDetection" returns an array of detected faces ( "Faces") sorted by the time the faces were detected. Use the MaxResults parameter to limit the number of faces returned. If there are more results than specified in "MaxResults", the value of "NextToken" in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call "GetFaceDetection" and populate the "NextToken" request parameter with the token value returned from the previous call to "GetFaceDetection". Note that for the "GetFaceDetection" operation, the returned values for "FaceOccluded" and "EyeDirection" will always be "null". See also: AWS API Documentation **Request Syntax** response = client.get_face_detection( JobId='string', MaxResults=123, NextToken='string' ) Parameters: * **JobId** (*string*) -- **[REQUIRED]** Unique identifier for the face detection job. The "JobId" is returned from "StartFaceDetection". * **MaxResults** (*integer*) -- Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000. * **NextToken** (*string*) -- If the previous response was incomplete (because there are more faces to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
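As a sketch of the pagination loop described above (the job ID is a placeholder; in practice you would take it from StartFaceDetection and wait for the SUCCEEDED notification first):

import boto3

client = boto3.client('rekognition')

job_id = '1234567890abcdef'  # placeholder JobId from StartFaceDetection
faces = []
kwargs = {'JobId': job_id, 'MaxResults': 1000}
while True:
    response = client.get_face_detection(**kwargs)
    faces.extend(response.get('Faces', []))
    next_token = response.get('NextToken')
    if not next_token:
        break
    # Feed the token back in to fetch the next page of faces.
    kwargs['NextToken'] = next_token

print(len(faces), 'faces detected')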
Return type: dict Returns: **Response Syntax** { 'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED', 'StatusMessage': 'string', 'VideoMetadata': { 'Codec': 'string', 'DurationMillis': 123, 'Format': 'string', 'FrameRate': ..., 'FrameHeight': 123, 'FrameWidth': 123, 'ColorRange': 'FULL'|'LIMITED' }, 'NextToken': 'string', 'Faces': [ { 'Timestamp': 123, 'Face': { 'BoundingBox': { 'Width': ..., 'Height': ..., 'Left': ..., 'Top': ... }, 'AgeRange': { 'Low': 123, 'High': 123 }, 'Smile': { 'Value': True|False, 'Confidence': ... }, 'Eyeglasses': { 'Value': True|False, 'Confidence': ... }, 'Sunglasses': { 'Value': True|False, 'Confidence': ... }, 'Gender': { 'Value': 'Male'|'Female', 'Confidence': ... }, 'Beard': { 'Value': True|False, 'Confidence': ... }, 'Mustache': { 'Value': True|False, 'Confidence': ... }, 'EyesOpen': { 'Value': True|False, 'Confidence': ... }, 'MouthOpen': { 'Value': True|False, 'Confidence': ... }, 'Emotions': [ { 'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN'|'FEAR', 'Confidence': ... }, ], 'Landmarks': [ { 'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil'|'upperJawlineLeft'|'midJawlineLeft'|'chinBottom'|'midJawlineRight'|'upperJawlineRight', 'X': ..., 'Y': ... }, ], 'Pose': { 'Roll': ..., 'Yaw': ..., 'Pitch': ... }, 'Quality': { 'Brightness': ..., 'Sharpness': ... }, 'Confidence': ..., 'FaceOccluded': { 'Value': True|False, 'Confidence': ... }, 'EyeDirection': { 'Yaw': ..., 'Pitch': ..., 'Confidence': ... } } }, ], 'JobId': 'string', 'Video': { 'S3Object': { 'Bucket': 'string', 'Name': 'string', 'Version': 'string' } }, 'JobTag': 'string' } **Response Structure** * *(dict) --* * **JobStatus** *(string) --* The current status of the face detection job. * **StatusMessage** *(string) --* If the job fails, "StatusMessage" provides a descriptive error message. * **VideoMetadata** *(dict) --* Information about a video that Amazon Rekognition Video analyzed. "Videometadata" is returned in every page of paginated responses from a Amazon Rekognition video operation. * **Codec** *(string) --* Type of compression used in the analyzed video. * **DurationMillis** *(integer) --* Length of the video in milliseconds. * **Format** *(string) --* Format of the analyzed video. Possible values are MP4, MOV and AVI. * **FrameRate** *(float) --* Number of frames per second in the video. * **FrameHeight** *(integer) --* Vertical pixel dimension of the video. * **FrameWidth** *(integer) --* Horizontal pixel dimension of the video. * **ColorRange** *(string) --* A description of the range of luminance values in a video, either LIMITED (16 to 235) or FULL (0 to 255). * **NextToken** *(string) --* If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces. * **Faces** *(list) --* An array of faces detected in the video. Each element contains a detected face's details and the time, in milliseconds from the start of the video, the face was detected. * *(dict) --* Information about a face detected in a video analysis request and the time the face was detected in the video. * **Timestamp** *(integer) --* Time, in milliseconds from the start of the video, that the face was detected. 
Note that "Timestamp" is not guaranteed to be accurate to the individual frame where the face first appears. * **Face** *(dict) --* The face properties for the detected face. * **BoundingBox** *(dict) --* Bounding box of the face. Default attribute. * **Width** *(float) --* Width of the bounding box as a ratio of the overall image width. * **Height** *(float) --* Height of the bounding box as a ratio of the overall image height. * **Left** *(float) --* Left coordinate of the bounding box as a ratio of overall image width. * **Top** *(float) --* Top coordinate of the bounding box as a ratio of overall image height. * **AgeRange** *(dict) --* The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age. * **Low** *(integer) --* The lowest estimated age. * **High** *(integer) --* The highest estimated age. * **Smile** *(dict) --* Indicates whether or not the face is smiling, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the face is smiling or not. * **Confidence** *(float) --* Level of confidence in the determination. * **Eyeglasses** *(dict) --* Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the face is wearing eye glasses or not. * **Confidence** *(float) --* Level of confidence in the determination. * **Sunglasses** *(dict) --* Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the face is wearing sunglasses or not. * **Confidence** *(float) --* Level of confidence in the determination. * **Gender** *(dict) --* The predicted gender of a detected face. * **Value** *(string) --* The predicted gender of the face. * **Confidence** *(float) --* Level of confidence in the prediction. * **Beard** *(dict) --* Indicates whether or not the face has a beard, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the face has beard or not. * **Confidence** *(float) --* Level of confidence in the determination. * **Mustache** *(dict) --* Indicates whether or not the face has a mustache, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the face has mustache or not. * **Confidence** *(float) --* Level of confidence in the determination. * **EyesOpen** *(dict) --* Indicates whether or not the eyes on the face are open, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the eyes on the face are open. * **Confidence** *(float) --* Level of confidence in the determination. * **MouthOpen** *(dict) --* Indicates whether or not the mouth on the face is open, and the confidence level in the determination. * **Value** *(boolean) --* Boolean value that indicates whether the mouth on the face is open or not. * **Confidence** *(float) --* Level of confidence in the determination. * **Emotions** *(list) --* The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. 
For example, a person pretending to have a sad face might not be sad emotionally. * *(dict) --* The API returns a prediction of an emotion based on a person's facial expressions, along with the confidence level for the predicted emotion. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally. The API is not intended to be used, and you may not use it, in a manner that violates the EU Artificial Intelligence Act or any other applicable law. * **Type** *(string) --* Type of emotion detected. * **Confidence** *(float) --* Level of confidence in the determination. * **Landmarks** *(list) --* Indicates the location of landmarks on the face. Default attribute. * *(dict) --* Indicates the location of the landmark on the face. * **Type** *(string) --* Type of landmark. * **X** *(float) --* The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5. * **Y** *(float) --* The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25. * **Pose** *(dict) --* Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute. * **Roll** *(float) --* Value representing the face rotation on the roll axis. * **Yaw** *(float) --* Value representing the face rotation on the yaw axis. * **Pitch** *(float) --* Value representing the face rotation on the pitch axis. * **Quality** *(dict) --* Identifies image brightness and sharpness. Default attribute. * **Brightness** *(float) --* Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image. * **Sharpness** *(float) --* Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image. * **Confidence** *(float) --* Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute. * **FaceOccluded** *(dict) --* "FaceOccluded" should return "true" with a high confidence score if a detected face’s eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. "FaceOccluded" should return "false" with a high confidence score if common occurrences that do not impact face verification are detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and others. * **Value** *(boolean) --* True if a detected face’s eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. False if common occurrences that do not impact face verification are detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and others. * **Confidence** *(float) --* The confidence that the service has detected the presence of a face occlusion. * **EyeDirection** *(dict) --* Indicates the direction the eyes are gazing in, as defined by pitch and yaw. * **Yaw** *(float) --* Value representing eye direction on the yaw axis. 
* **Pitch** *(float) --* Value representing eye direction on the pitch axis. * **Confidence** *(float) --* The confidence that the service has in its predicted eye direction. * **JobId** *(string) --* Job identifier for the face detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartFaceDetection. * **Video** *(dict) --* Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use "Video" to specify a video for analysis. The supported file formats are .mp4, .mov and .avi. * **S3Object** *(dict) --* The Amazon S3 bucket name and file name for the video. * **Bucket** *(string) --* Name of the S3 bucket. * **Name** *(string) --* S3 object key name. * **Version** *(string) --* If the bucket is versioning enabled, you can specify the object version. * **JobTag** *(string) --* A job identifier specified in the call to StartFaceDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic. **Exceptions** * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.InvalidPaginationTokenException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ThrottlingException" Rekognition / Client / copy_project_version copy_project_version ******************** Rekognition.Client.copy_project_version(**kwargs) Note: This operation applies only to Amazon Rekognition Custom Labels. Copies a version of an Amazon Rekognition Custom Labels model from a source project to a destination project. The source and destination projects can be in different AWS accounts but must be in the same AWS Region. You can't copy a model to another AWS service. To copy a model version to a different AWS account, you need to create a resource-based policy known as a *project policy*. You attach the project policy to the source project by calling PutProjectPolicy. The project policy gives permission to copy the model version from a trusting AWS account to a trusted account. For more information about creating and attaching a project policy, see Attaching a project policy (SDK) in the *Amazon Rekognition Custom Labels Developer Guide*. If you are copying a model version to a project in the same AWS account, you don't need to create a project policy. Note: Copying project versions is supported only for Custom Labels models. To copy a model, the destination project, source project, and source model version must already exist. Copying a model version takes a while to complete. To get the current status, call DescribeProjectVersions and check the value of "Status" in the ProjectVersionDescription object. The copy operation has finished when the value of "Status" is "COPYING_COMPLETED". This operation requires permissions to perform the "rekognition:CopyProjectVersion" action. See also: AWS API Documentation **Request Syntax** response = client.copy_project_version( SourceProjectArn='string', SourceProjectVersionArn='string', DestinationProjectArn='string', VersionName='string', OutputConfig={ 'S3Bucket': 'string', 'S3KeyPrefix': 'string' }, Tags={ 'string': 'string' }, KmsKeyId='string' ) Parameters: * **SourceProjectArn** (*string*) -- **[REQUIRED]** The ARN of the source project in the trusting AWS account.
* **SourceProjectVersionArn** (*string*) -- **[REQUIRED]** The ARN of the model version in the source project that you want to copy to a destination project. * **DestinationProjectArn** (*string*) -- **[REQUIRED]** The ARN of the project in the trusted AWS account that you want to copy the model version to. * **VersionName** (*string*) -- **[REQUIRED]** A name for the version of the model that's copied to the destination project. * **OutputConfig** (*dict*) -- **[REQUIRED]** The S3 bucket and folder location where the training output for the source model version is placed. * **S3Bucket** *(string) --* The S3 bucket where training output is placed. * **S3KeyPrefix** *(string) --* The prefix applied to the training output files. * **Tags** (*dict*) -- The key-value tags to assign to the model version. * *(string) --* * *(string) --* * **KmsKeyId** (*string*) -- The identifier for your AWS Key Management Service key (AWS KMS key). You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of your KMS key, an alias for your KMS key, or an alias ARN. The key is used to encrypt training results and manifest files written to the output Amazon S3 bucket ( "OutputConfig"). If you choose to use your own KMS key, you need the following permissions on the KMS key. * kms:CreateGrant * kms:DescribeKey * kms:GenerateDataKey * kms:Decrypt If you don't specify a value for "KmsKeyId", images copied into the service are encrypted using a key that AWS owns and manages. Return type: dict Returns: **Response Syntax** { 'ProjectVersionArn': 'string' } **Response Structure** * *(dict) --* * **ProjectVersionArn** *(string) --* The ARN of the copied model version in the destination project. **Exceptions** * "Rekognition.Client.exceptions.AccessDeniedException" * "Rekognition.Client.exceptions.InternalServerError" * "Rekognition.Client.exceptions.InvalidParameterException" * "Rekognition.Client.exceptions.LimitExceededException" * "Rekognition.Client.exceptions.ResourceNotFoundException" * "Rekognition.Client.exceptions.ThrottlingException" * "Rekognition.Client.exceptions.ServiceQuotaExceededException" * "Rekognition.Client.exceptions.ProvisionedThroughputExceededException" * "Rekognition.Client.exceptions.ResourceInUseException" Rekognition / Client / delete_dataset delete_dataset ************** Rekognition.Client.delete_dataset(**kwargs) Note: This operation applies only to Amazon Rekognition Custom Labels. Deletes an existing Amazon Rekognition Custom Labels dataset. Deleting a dataset might take a while. Use DescribeDataset to check the current status. The dataset is still deleting if the value of "Status" is "DELETE_IN_PROGRESS". If you try to access the dataset after it is deleted, you get a "ResourceNotFoundException" exception. You can't delete a dataset while it is creating ( "Status" = "CREATE_IN_PROGRESS") or if the dataset is updating ( "Status" = "UPDATE_IN_PROGRESS"). This operation requires permissions to perform the "rekognition:DeleteDataset" action. See also: AWS API Documentation **Request Syntax** response = client.delete_dataset( DatasetArn='string' ) Parameters: **DatasetArn** (*string*) -- **[REQUIRED]** The ARN of the Amazon Rekognition Custom Labels dataset that you want to delete.
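A short sketch (the dataset ARN is a placeholder) that deletes a dataset and confirms the deletion, relying on the "ResourceNotFoundException" behavior described above:

import boto3

client = boto3.client('rekognition')

dataset_arn = ('arn:aws:rekognition:us-east-1:111122223333:'
               'project/my-project/dataset/train/1234567890123')  # placeholder

client.delete_dataset(DatasetArn=dataset_arn)

# Deletion is asynchronous; DescribeDataset reports DELETE_IN_PROGRESS until
# the dataset is gone, after which it raises ResourceNotFoundException.
try:
    desc = client.describe_dataset(DatasetArn=dataset_arn)
    print('Status:', desc['DatasetDescription']['Status'])
except client.exceptions.ResourceNotFoundException:
    print('Dataset deleted')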
put_project_policy
******************

Rekognition.Client.put_project_policy(**kwargs)

Note: This operation applies only to Amazon Rekognition Custom Labels.

Attaches a project policy to an Amazon Rekognition Custom Labels project in a trusting AWS account. A project policy specifies that a trusted AWS account can copy a model version from a trusting AWS account to a project in the trusted AWS account. To copy a model version you use the CopyProjectVersion operation. Only applies to Custom Labels projects.

For more information about the format of a project policy document, see Attaching a project policy (SDK) in the *Amazon Rekognition Custom Labels Developer Guide*.

The response from "PutProjectPolicy" is a revision ID for the project policy. You can attach multiple project policies to a project. You can also update an existing project policy by specifying the policy revision ID of the existing policy.

To remove a project policy from a project, call DeleteProjectPolicy. To get a list of project policies attached to a project, call ListProjectPolicies. You copy a model version by calling CopyProjectVersion.

This operation requires permissions to perform the "rekognition:PutProjectPolicy" action.

See also: AWS API Documentation

**Request Syntax**

   response = client.put_project_policy(
       ProjectArn='string',
       PolicyName='string',
       PolicyRevisionId='string',
       PolicyDocument='string'
   )

Parameters:
   * **ProjectArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the project that the project policy is attached to.

   * **PolicyName** (*string*) -- **[REQUIRED]** A name for the policy.

   * **PolicyRevisionId** (*string*) -- The revision ID for the project policy. Each time you modify a policy, Amazon Rekognition Custom Labels generates and assigns a new "PolicyRevisionId" and then deletes the previous version of the policy.

   * **PolicyDocument** (*string*) -- **[REQUIRED]** A resource policy to add to the model. The policy is a JSON structure that contains one or more statements that define the policy. The policy must follow the IAM syntax. For more information about the contents of a JSON policy document, see IAM JSON policy reference.

Return type: dict

Returns:
   **Response Syntax**

      {
          'PolicyRevisionId': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **PolicyRevisionId** *(string) --* The ID of the project policy.

**Exceptions**

* "Rekognition.Client.exceptions.AccessDeniedException"
* "Rekognition.Client.exceptions.InternalServerError"
* "Rekognition.Client.exceptions.InvalidParameterException"
* "Rekognition.Client.exceptions.InvalidPolicyRevisionIdException"
* "Rekognition.Client.exceptions.MalformedPolicyDocumentException"
* "Rekognition.Client.exceptions.ResourceNotFoundException"
* "Rekognition.Client.exceptions.ResourceAlreadyExistsException"
* "Rekognition.Client.exceptions.ThrottlingException"
* "Rekognition.Client.exceptions.ServiceQuotaExceededException"
* "Rekognition.Client.exceptions.ProvisionedThroughputExceededException"
* "Rekognition.Client.exceptions.LimitExceededException"
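**Example**

A minimal sketch of attaching a project policy that lets a trusted account copy one model version. The account IDs and ARNs are hypothetical placeholders, and the statement shape is one plausible IAM-syntax policy for the "rekognition:CopyProjectVersion" action, not the only valid form.

   import json
   import boto3

   client = boto3.client('rekognition')

   # Hypothetical ARNs and account IDs for illustration only.
   project_arn = 'arn:aws:rekognition:us-east-1:111111111111:project/source-project/1656557051929'
   version_arn = ('arn:aws:rekognition:us-east-1:111111111111:project/'
                  'source-project/version/source-project.2022-07-14T11.45.00/1657799100000')

   # One statement allowing the trusted account to copy this model version.
   policy_document = {
       'Version': '2012-10-17',
       'Statement': [{
           'Effect': 'Allow',
           'Principal': {'AWS': 'arn:aws:iam::222222222222:root'},
           'Action': 'rekognition:CopyProjectVersion',
           'Resource': version_arn,
       }],
   }

   response = client.put_project_policy(
       ProjectArn=project_arn,
       PolicyName='allow-copy-to-222222222222',
       PolicyDocument=json.dumps(policy_document),
   )
   print(response['PolicyRevisionId'])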
detect_labels
*************

Rekognition.Client.detect_labels(**kwargs)

Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.

For an example, see Analyzing images stored in an Amazon S3 bucket in the Amazon Rekognition Developer Guide.

You pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

**Optional Parameters**

You can specify one or both of the "GENERAL_LABELS" and "IMAGE_PROPERTIES" feature types when calling the DetectLabels API. Including "GENERAL_LABELS" will ensure the response includes the labels detected in the input image, while including "IMAGE_PROPERTIES" will ensure the response includes information about the image quality and color.
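**Example**

A minimal sketch of a DetectLabels call that requests both feature types and prints each detected label; the S3 bucket and object key are hypothetical placeholders.

   import boto3

   client = boto3.client('rekognition')

   # Hypothetical S3 bucket and key for illustration only.
   response = client.detect_labels(
       Image={'S3Object': {'Bucket': 'my-images-bucket', 'Name': 'photos/garden.jpg'}},
       Features=['GENERAL_LABELS', 'IMAGE_PROPERTIES'],
       MaxLabels=10,
       MinConfidence=80,
   )

   for label in response['Labels']:
       print(f"{label['Name']}: {label['Confidence']:.1f}%")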