Personalize
***********

Client
======

class Personalize.Client

   A low-level client representing Amazon Personalize

   Amazon Personalize is a machine learning service that makes it easy to add individualized recommendations to customers.

      import boto3

      client = boto3.client('personalize')

These are the available methods:

* can_paginate
* close
* create_batch_inference_job
* create_batch_segment_job
* create_campaign
* create_data_deletion_job
* create_dataset
* create_dataset_export_job
* create_dataset_group
* create_dataset_import_job
* create_event_tracker
* create_filter
* create_metric_attribution
* create_recommender
* create_schema
* create_solution
* create_solution_version
* delete_campaign
* delete_dataset
* delete_dataset_group
* delete_event_tracker
* delete_filter
* delete_metric_attribution
* delete_recommender
* delete_schema
* delete_solution
* describe_algorithm
* describe_batch_inference_job
* describe_batch_segment_job
* describe_campaign
* describe_data_deletion_job
* describe_dataset
* describe_dataset_export_job
* describe_dataset_group
* describe_dataset_import_job
* describe_event_tracker
* describe_feature_transformation
* describe_filter
* describe_metric_attribution
* describe_recipe
* describe_recommender
* describe_schema
* describe_solution
* describe_solution_version
* get_paginator
* get_solution_metrics
* get_waiter
* list_batch_inference_jobs
* list_batch_segment_jobs
* list_campaigns
* list_data_deletion_jobs
* list_dataset_export_jobs
* list_dataset_groups
* list_dataset_import_jobs
* list_datasets
* list_event_trackers
* list_filters
* list_metric_attribution_metrics
* list_metric_attributions
* list_recipes
* list_recommenders
* list_schemas
* list_solution_versions
* list_solutions
* list_tags_for_resource
* start_recommender
* stop_recommender
* stop_solution_version_creation
* tag_resource
* untag_resource
* update_campaign
* update_dataset
* update_metric_attribution
* update_recommender
* update_solution

Paginators
==========
Paginators are available on a client instance via the "get_paginator" method. For more detailed instructions and examples on the usage of paginators, see the paginators user guide.

The available paginators are:

* ListBatchInferenceJobs
* ListBatchSegmentJobs
* ListCampaigns
* ListDatasetExportJobs
* ListDatasetGroups
* ListDatasetImportJobs
* ListDatasets
* ListEventTrackers
* ListFilters
* ListMetricAttributionMetrics
* ListMetricAttributions
* ListRecipes
* ListRecommenders
* ListSchemas
* ListSolutionVersions
* ListSolutions

ListDatasetGroups
*****************

class Personalize.Paginator.ListDatasetGroups

   paginator = client.get_paginator('list_dataset_groups')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_dataset_groups()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

   * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

   * **PageSize** *(integer) --* The size of each page.

   * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'datasetGroups': [
              {
                  'name': 'string',
                  'datasetGroupArn': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'failureReason': 'string',
                  'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **datasetGroups** *(list) --* The list of your dataset groups.
       * *(dict) --* Provides a summary of the properties of a dataset group. For a complete listing, call the DescribeDatasetGroup API.

         * **name** *(string) --* The name of the dataset group.

         * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group.

         * **status** *(string) --* The status of the dataset group. A dataset group can be in one of the following states:

           * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

           * DELETE PENDING

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset group was created.

         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset group was last updated.

         * **failureReason** *(string) --* If creating a dataset group fails, the reason behind the failure.

         * **domain** *(string) --* The domain of a Domain dataset group.

     * **NextToken** *(string) --* A token to resume pagination.

ListRecipes
***********

class Personalize.Paginator.ListRecipes

   paginator = client.get_paginator('list_recipes')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_recipes()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       recipeProvider='SERVICE',
       domain='ECOMMERCE'|'VIDEO_ON_DEMAND',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **recipeProvider** (*string*) -- The default is "SERVICE".

   * **domain** (*string*) -- Filters returned recipes by domain for a Domain dataset group. Only recipes (Domain dataset group use cases) for this domain are included in the response. If you don't specify a domain, all recipes are returned.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return.
       If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'recipes': [
              {
                  'name': 'string',
                  'recipeArn': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **recipes** *(list) --* The list of available recipes.

       * *(dict) --* Provides a summary of the properties of a recipe. For a complete listing, call the DescribeRecipe API.

         * **name** *(string) --* The name of the recipe.

         * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe.

         * **status** *(string) --* The status of the recipe.

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the recipe was created.

         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the recipe was last updated.

         * **domain** *(string) --* The domain of the recipe (if the recipe is a Domain dataset group use case).

     * **NextToken** *(string) --* A token to resume pagination.

ListCampaigns
*************

class Personalize.Paginator.ListCampaigns

   paginator = client.get_paginator('list_campaigns')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_campaigns()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       solutionArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **solutionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution to list the campaigns for.
     When a solution is not specified, all the campaigns associated with the account are listed.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'campaigns': [
              {
                  'name': 'string',
                  'campaignArn': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'failureReason': 'string'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **campaigns** *(list) --* A list of the campaigns.

       * *(dict) --* Provides a summary of the properties of a campaign. For a complete listing, call the DescribeCampaign API.

         * **name** *(string) --* The name of the campaign.

         * **campaignArn** *(string) --* The Amazon Resource Name (ARN) of the campaign.

         * **status** *(string) --* The status of the campaign. A campaign can be in one of the following states:

           * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

           * DELETE PENDING > DELETE IN_PROGRESS

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the campaign was created.

         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the campaign was last updated.

         * **failureReason** *(string) --* If a campaign fails, the reason behind the failure.

     * **NextToken** *(string) --* A token to resume pagination.
ListEventTrackers
*****************

class Personalize.Paginator.ListEventTrackers

   paginator = client.get_paginator('list_event_trackers')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_event_trackers()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       datasetGroupArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **datasetGroupArn** (*string*) -- The ARN of a dataset group used to filter the response.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'eventTrackers': [
              {
                  'name': 'string',
                  'eventTrackerArn': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1)
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **eventTrackers** *(list) --* A list of event trackers.

       * *(dict) --* Provides a summary of the properties of an event tracker. For a complete listing, call the DescribeEventTracker API.

         * **name** *(string) --* The name of the event tracker.

         * **eventTrackerArn** *(string) --* The Amazon Resource Name (ARN) of the event tracker.

         * **status** *(string) --* The status of the event tracker.
           An event tracker can be in one of the following states:

           * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

           * DELETE PENDING > DELETE IN_PROGRESS

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the event tracker was created.

         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the event tracker was last updated.

     * **NextToken** *(string) --* A token to resume pagination.

ListMetricAttributionMetrics
****************************

class Personalize.Paginator.ListMetricAttributionMetrics

   paginator = client.get_paginator('list_metric_attribution_metrics')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_metric_attribution_metrics()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       metricAttributionArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **metricAttributionArn** (*string*) -- The Amazon Resource Name (ARN) of the metric attribution to retrieve attributes for.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'metrics': [
              {
                  'eventType': 'string',
                  'metricName': 'string',
                  'expression': 'string'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **metrics** *(list) --* The metrics for the specified metric attribution.
       * *(dict) --* Contains information on a metric that a metric attribution reports on. For more information, see Measuring impact of recommendations.

         * **eventType** *(string) --* The metric's event type.

         * **metricName** *(string) --* The metric's name. The name helps you identify the metric in Amazon CloudWatch or Amazon S3.

         * **expression** *(string) --* The attribute's expression. Available functions are "SUM()" or "SAMPLECOUNT()". For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example SUM(Items.PRICE).

     * **NextToken** *(string) --* A token to resume pagination.

ListBatchInferenceJobs
**********************

class Personalize.Paginator.ListBatchInferenceJobs

   paginator = client.get_paginator('list_batch_inference_jobs')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_batch_inference_jobs()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       solutionVersionArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **solutionVersionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution version from which the batch inference jobs were created.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.
Return type:
   dict

Returns:
   **Response Syntax**

      {
          'batchInferenceJobs': [
              {
                  'batchInferenceJobArn': 'string',
                  'jobName': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'failureReason': 'string',
                  'solutionVersionArn': 'string',
                  'batchInferenceJobMode': 'BATCH_INFERENCE'|'THEME_GENERATION'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **batchInferenceJobs** *(list) --* A list containing information on each job that is returned.

       * *(dict) --* A truncated version of the BatchInferenceJob. The ListBatchInferenceJobs operation returns a list of batch inference job summaries.

         * **batchInferenceJobArn** *(string) --* The Amazon Resource Name (ARN) of the batch inference job.

         * **jobName** *(string) --* The name of the batch inference job.

         * **status** *(string) --* The status of the batch inference job. The status is one of the following values:

           * PENDING

           * IN PROGRESS

           * ACTIVE

           * CREATE FAILED

         * **creationDateTime** *(datetime) --* The time at which the batch inference job was created.

         * **lastUpdatedDateTime** *(datetime) --* The time at which the batch inference job was last updated.

         * **failureReason** *(string) --* If the batch inference job failed, the reason for the failure.

         * **solutionVersionArn** *(string) --* The ARN of the solution version used by the batch inference job.

         * **batchInferenceJobMode** *(string) --* The job's mode.

     * **NextToken** *(string) --* A token to resume pagination.

ListFilters
***********

class Personalize.Paginator.ListFilters

   paginator = client.get_paginator('list_filters')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_filters()".
See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       datasetGroupArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **datasetGroupArn** (*string*) -- The ARN of the dataset group that contains the filters.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'Filters': [
              {
                  'name': 'string',
                  'filterArn': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'datasetGroupArn': 'string',
                  'failureReason': 'string',
                  'status': 'string'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **Filters** *(list) --* A list of returned filters.

       * *(dict) --* A short summary of a filter's attributes.

         * **name** *(string) --* The name of the filter.

         * **filterArn** *(string) --* The ARN of the filter.

         * **creationDateTime** *(datetime) --* The time at which the filter was created.

         * **lastUpdatedDateTime** *(datetime) --* The time at which the filter was last updated.

         * **datasetGroupArn** *(string) --* The ARN of the dataset group to which the filter belongs.

         * **failureReason** *(string) --* If the filter failed, the reason for the failure.

         * **status** *(string) --* The status of the filter.

     * **NextToken** *(string) --* A token to resume pagination.
ListSolutionVersions
********************

class Personalize.Paginator.ListSolutionVersions

   paginator = client.get_paginator('list_solution_versions')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_solution_versions()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       solutionArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **solutionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'solutionVersions': [
              {
                  'solutionVersionArn': 'string',
                  'status': 'string',
                  'trainingMode': 'FULL'|'UPDATE'|'AUTOTRAIN',
                  'trainingType': 'AUTOMATIC'|'MANUAL',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'failureReason': 'string'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **solutionVersions** *(list) --* A list of solution versions describing the version properties.

       * *(dict) --* Provides a summary of the properties of a solution version. For a complete listing, call the DescribeSolutionVersion API.

         * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version.

         * **status** *(string) --* The status of the solution version.
           A solution version can be in one of the following states:

           * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

         * **trainingMode** *(string) --* The scope of training to be performed when creating the solution version. A "FULL" training considers all of the data in your dataset group. An "UPDATE" processes only the data that has changed since the latest training. Only solution versions created with the User-Personalization recipe can use "UPDATE".

         * **trainingType** *(string) --* Whether the solution version was created automatically or manually.

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that this version of a solution was created.

         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution version was last updated.

         * **failureReason** *(string) --* If a solution version fails, the reason behind the failure.

     * **NextToken** *(string) --* A token to resume pagination.

ListBatchSegmentJobs
********************

class Personalize.Paginator.ListBatchSegmentJobs

   paginator = client.get_paginator('list_batch_segment_jobs')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_batch_segment_jobs()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       solutionVersionArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **solutionVersionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution version that the batch segment jobs used to generate batch segments.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.
     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'batchSegmentJobs': [
              {
                  'batchSegmentJobArn': 'string',
                  'jobName': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'failureReason': 'string',
                  'solutionVersionArn': 'string'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **batchSegmentJobs** *(list) --* A list containing information on each job that is returned.

       * *(dict) --* A truncated version of the BatchSegmentJob datatype. The ListBatchSegmentJobs operation returns a list of batch segment job summaries.

         * **batchSegmentJobArn** *(string) --* The Amazon Resource Name (ARN) of the batch segment job.

         * **jobName** *(string) --* The name of the batch segment job.

         * **status** *(string) --* The status of the batch segment job. The status is one of the following values:

           * PENDING

           * IN PROGRESS

           * ACTIVE

           * CREATE FAILED

         * **creationDateTime** *(datetime) --* The time at which the batch segment job was created.

         * **lastUpdatedDateTime** *(datetime) --* The time at which the batch segment job was last updated.

         * **failureReason** *(string) --* If the batch segment job failed, the reason for the failure.

         * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version used by the batch segment job to generate batch segments.

     * **NextToken** *(string) --* A token to resume pagination.

ListDatasetImportJobs
*********************

class Personalize.Paginator.ListDatasetImportJobs

   paginator = client.get_paginator('list_dataset_import_jobs')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_dataset_import_jobs()".
See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       datasetArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **datasetArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset to list the dataset import jobs for.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'datasetImportJobs': [
              {
                  'datasetImportJobArn': 'string',
                  'jobName': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'failureReason': 'string',
                  'importMode': 'FULL'|'INCREMENTAL'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **datasetImportJobs** *(list) --* The list of dataset import jobs.

       * *(dict) --* Provides a summary of the properties of a dataset import job. For a complete listing, call the DescribeDatasetImportJob API.

         * **datasetImportJobArn** *(string) --* The Amazon Resource Name (ARN) of the dataset import job.

         * **jobName** *(string) --* The name of the dataset import job.

         * **status** *(string) --* The status of the dataset import job. A dataset import job can be in one of the following states:

           * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset import job was created.
         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset import job status was last updated.

         * **failureReason** *(string) --* If a dataset import job fails, the reason behind the failure.

         * **importMode** *(string) --* The import mode the dataset import job used to update the data in the dataset. For more information, see Updating existing bulk data.

     * **NextToken** *(string) --* A token to resume pagination.

ListSolutions
*************

class Personalize.Paginator.ListSolutions

   paginator = client.get_paginator('list_solutions')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_solutions()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       datasetGroupArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset group.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'solutions': [
              {
                  'name': 'string',
                  'solutionArn': 'string',
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1),
                  'recipeArn': 'string'
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **solutions** *(list) --* A list of the current solutions.

       * *(dict) --* Provides a summary of the properties of a solution.
         For a complete listing, call the DescribeSolution API.

         * **name** *(string) --* The name of the solution.

         * **solutionArn** *(string) --* The Amazon Resource Name (ARN) of the solution.

         * **status** *(string) --* The status of the solution. A solution can be in one of the following states:

           * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED

           * DELETE PENDING > DELETE IN_PROGRESS

         * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the solution was created.

         * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution was last updated.

         * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe used by the solution.

     * **NextToken** *(string) --* A token to resume pagination.

ListRecommenders
****************

class Personalize.Paginator.ListRecommenders

   paginator = client.get_paginator('list_recommenders')

paginate(**kwargs)

Creates an iterator that will paginate through responses from "Personalize.Client.list_recommenders()".

See also: AWS API Documentation

**Request Syntax**

   response_iterator = paginator.paginate(
       datasetGroupArn='string',
       PaginationConfig={
           'MaxItems': 123,
           'PageSize': 123,
           'StartingToken': 'string'
       }
   )

Parameters:
   * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the Domain dataset group to list the recommenders for. When a Domain dataset group is not specified, all the recommenders associated with the account are listed.

   * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination.

     * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination.

     * **PageSize** *(integer) --* The size of each page.

     * **StartingToken** *(string) --* A token to specify where to start paginating.
       This is the "NextToken" from a previous response.

Return type:
   dict

Returns:
   **Response Syntax**

      {
          'recommenders': [
              {
                  'name': 'string',
                  'recommenderArn': 'string',
                  'datasetGroupArn': 'string',
                  'recipeArn': 'string',
                  'recommenderConfig': {
                      'itemExplorationConfig': {
                          'string': 'string'
                      },
                      'minRecommendationRequestsPerSecond': 123,
                      'trainingDataConfig': {
                          'excludedDatasetColumns': {
                              'string': [
                                  'string',
                              ]
                          }
                      },
                      'enableMetadataWithRecommendations': True|False
                  },
                  'status': 'string',
                  'creationDateTime': datetime(2015, 1, 1),
                  'lastUpdatedDateTime': datetime(2015, 1, 1)
              },
          ],
          'NextToken': 'string'
      }

   **Response Structure**

   * *(dict) --*

     * **recommenders** *(list) --* A list of the recommenders.

       * *(dict) --* Provides a summary of the properties of the recommender.

         * **name** *(string) --* The name of the recommender.

         * **recommenderArn** *(string) --* The Amazon Resource Name (ARN) of the recommender.

         * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the Domain dataset group that contains the recommender.

         * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe (Domain dataset group use case) that the recommender was created for.

         * **recommenderConfig** *(dict) --* The configuration details of the recommender.

           * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your recommenders generate personalized recommendations for a user (not popular items or similar items).

             * *(string) --*

               * *(string) --*

           * **minRecommendationRequestsPerSecond** *(integer) --* Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high "minRecommendationRequestsPerSecond" will increase your bill.
We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a domain recommender. * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the recommender. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a recommender, see Enabling metadata in recommendations for a recommender. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **status** *(string) --* The status of the recommender. A recommender can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the recommender was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix format) that the recommender was last updated. * **NextToken** *(string) --* A token to resume pagination. 
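Under the hood, "paginate()" drives a NextToken loop for you, and the "PaginationConfig" keys interact as described above: "PageSize" caps each page, "MaxItems" caps the total yielded across pages, and "StartingToken" resumes from an earlier "NextToken". The following stand-in is plain Python, not botocore's implementation; it only sketches those semantics over an in-memory list:

```python
# Stand-in for a boto3 paginator (NOT the botocore implementation).
# Illustrates how PageSize, MaxItems, and StartingToken interact,
# assuming the semantics described in the PaginationConfig docs above.
def paginate(records, PageSize=None, MaxItems=None, StartingToken=None):
    start = int(StartingToken) if StartingToken else 0
    size = PageSize or len(records) or 1
    yielded = 0
    pos = start
    while pos < len(records):
        page = records[pos:pos + size]          # PageSize caps each page
        if MaxItems is not None:
            page = page[:MaxItems - yielded]    # MaxItems caps the total
        yielded += len(page)
        pos += len(page)
        # NextToken marks where a later call can resume
        next_token = str(pos) if pos < len(records) else None
        yield {"recommenders": page, "NextToken": next_token}
        if MaxItems is not None and yielded >= MaxItems:
            return

pages = list(paginate([f"rec-{i}" for i in range(5)], PageSize=2, MaxItems=3))
# Two pages: ['rec-0', 'rec-1'] then ['rec-2'], last NextToken '3'.
```

Passing the final "NextToken" back in as "StartingToken" resumes exactly where the truncated run stopped, which is the pattern the real paginator expects.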
Personalize / Paginator / ListMetricAttributions ListMetricAttributions ********************** class Personalize.Paginator.ListMetricAttributions paginator = client.get_paginator('list_metric_attributions') paginate(**kwargs) Creates an iterator that will paginate through responses from "Personalize.Client.list_metric_attributions()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( datasetGroupArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **datasetGroupArn** (*string*) -- The metric attributions' dataset group Amazon Resource Name (ARN). * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'metricAttributions': [ { 'name': 'string', 'metricAttributionArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **metricAttributions** *(list) --* The list of metric attributions. * *(dict) --* Provides a summary of the properties of a metric attribution. For a complete listing, call the DescribeMetricAttribution API. * **name** *(string) --* The name of the metric attribution. * **metricAttributionArn** *(string) --* The metric attribution's Amazon Resource Name (ARN). * **status** *(string) --* The metric attribution's status.
* **creationDateTime** *(datetime) --* The metric attribution's creation date time. * **lastUpdatedDateTime** *(datetime) --* The metric attribution's last updated date time. * **failureReason** *(string) --* The metric attribution's failure reason. * **NextToken** *(string) --* A token to resume pagination. Personalize / Paginator / ListDatasets ListDatasets ************ class Personalize.Paginator.ListDatasets paginator = client.get_paginator('list_datasets') paginate(**kwargs) Creates an iterator that will paginate through responses from "Personalize.Client.list_datasets()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( datasetGroupArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset group that contains the datasets to list. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'datasets': [ { 'name': 'string', 'datasetArn': 'string', 'datasetType': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **datasets** *(list) --* An array of "Dataset" objects. Each object provides metadata information. * *(dict) --* Provides a summary of the properties of a dataset. For a complete listing, call the DescribeDataset API. 
* **name** *(string) --* The name of the dataset. * **datasetArn** *(string) --* The Amazon Resource Name (ARN) of the dataset. * **datasetType** *(string) --* The dataset type. One of the following values: * Interactions * Items * Users * Event-Interactions * **status** *(string) --* The status of the dataset. A dataset can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset was last updated. * **NextToken** *(string) --* A token to resume pagination. Personalize / Paginator / ListSchemas ListSchemas *********** class Personalize.Paginator.ListSchemas paginator = client.get_paginator('list_schemas') paginate(**kwargs) Creates an iterator that will paginate through responses from "Personalize.Client.list_schemas()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. * **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response.
Return type: dict Returns: **Response Syntax** { 'schemas': [ { 'name': 'string', 'schemaArn': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **schemas** *(list) --* A list of schemas. * *(dict) --* Provides a summary of the properties of a dataset schema. For a complete listing, call the DescribeSchema API. * **name** *(string) --* The name of the schema. * **schemaArn** *(string) --* The Amazon Resource Name (ARN) of the schema. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the schema was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the schema was last updated. * **domain** *(string) --* The domain of a schema that you created for a dataset in a Domain dataset group. * **NextToken** *(string) --* A token to resume pagination. Personalize / Paginator / ListDatasetExportJobs ListDatasetExportJobs ********************* class Personalize.Paginator.ListDatasetExportJobs paginator = client.get_paginator('list_dataset_export_jobs') paginate(**kwargs) Creates an iterator that will paginate through responses from "Personalize.Client.list_dataset_export_jobs()". See also: AWS API Documentation **Request Syntax** response_iterator = paginator.paginate( datasetArn='string', PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) Parameters: * **datasetArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset to list the dataset export jobs for. * **PaginationConfig** (*dict*) -- A dictionary that provides parameters to control pagination. * **MaxItems** *(integer) --* The total number of items to return. If the total number of items available is more than the value specified in max-items then a "NextToken" will be provided in the output that you can use to resume pagination. 
* **PageSize** *(integer) --* The size of each page. * **StartingToken** *(string) --* A token to specify where to start paginating. This is the "NextToken" from a previous response. Return type: dict Returns: **Response Syntax** { 'datasetExportJobs': [ { 'datasetExportJobArn': 'string', 'jobName': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'NextToken': 'string' } **Response Structure** * *(dict) --* * **datasetExportJobs** *(list) --* The list of dataset export jobs. * *(dict) --* Provides a summary of the properties of a dataset export job. For a complete listing, call the DescribeDatasetExportJob API. * **datasetExportJobArn** *(string) --* The Amazon Resource Name (ARN) of the dataset export job. * **jobName** *(string) --* The name of the dataset export job. * **status** *(string) --* The status of the dataset export job. A dataset export job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset export job was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset export job status was last updated. * **failureReason** *(string) --* If a dataset export job fails, the reason behind the failure. * **NextToken** *(string) --* A token to resume pagination. Personalize / Client / list_data_deletion_jobs list_data_deletion_jobs *********************** Personalize.Client.list_data_deletion_jobs(**kwargs) Returns a list of data deletion jobs for a dataset group ordered by creation time, with the most recent first. When a dataset group is not specified, all the data deletion jobs associated with the account are listed. The response provides the properties for each job, including the Amazon Resource Name (ARN). For more information on data deletion jobs, see Deleting users. 
See also: AWS API Documentation **Request Syntax** response = client.list_data_deletion_jobs( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset group to list data deletion jobs for. * **nextToken** (*string*) -- A token returned from the previous call to "ListDataDeletionJobs" for getting the next set of jobs (if they exist). * **maxResults** (*integer*) -- The maximum number of data deletion jobs to return. Return type: dict Returns: **Response Syntax** { 'dataDeletionJobs': [ { 'dataDeletionJobArn': 'string', 'datasetGroupArn': 'string', 'jobName': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **dataDeletionJobs** *(list) --* The list of data deletion jobs. * *(dict) --* Provides a summary of the properties of a data deletion job. For a complete listing, call the DescribeDataDeletionJob API operation. * **dataDeletionJobArn** *(string) --* The Amazon Resource Name (ARN) of the data deletion job. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group the job deleted records from. * **jobName** *(string) --* The name of the data deletion job. * **status** *(string) --* The status of the data deletion job. A data deletion job can have one of the following statuses: * PENDING > IN_PROGRESS > COMPLETED -or- FAILED * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the data deletion job. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) the data deletion job was last updated. * **failureReason** *(string) --* If a data deletion job fails, provides the reason why. * **nextToken** *(string) --* A token for getting the next set of data deletion jobs (if they exist). 
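The "nextToken" flow described above can also be driven by hand. In the sketch below, "StubPersonalize" is a hypothetical stand-in whose responses match the response syntax shown, so the loop runs without AWS credentials; with a real "boto3.client('personalize')" the loop body would be identical:

```python
# Hypothetical stub matching the list_data_deletion_jobs response shape.
# Stands in for a real boto3 Personalize client so the nextToken loop
# below is runnable anywhere; job names here are invented examples.
class StubPersonalize:
    _pages = [
        {"dataDeletionJobs": [{"jobName": "purge-2024", "status": "COMPLETED"}],
         "nextToken": "page-2"},
        {"dataDeletionJobs": [{"jobName": "purge-2025", "status": "PENDING"}]},
    ]

    def list_data_deletion_jobs(self, **kwargs):
        token = kwargs.get("nextToken")
        return self._pages[1] if token == "page-2" else self._pages[0]

def all_data_deletion_jobs(client, **params):
    """Follow nextToken until the service stops returning one."""
    jobs, token = [], None
    while True:
        kwargs = dict(params)
        if token:
            kwargs["nextToken"] = token
        resp = client.list_data_deletion_jobs(**kwargs)
        jobs.extend(resp.get("dataDeletionJobs", []))
        token = resp.get("nextToken")
        if not token:          # absent nextToken means no more pages
            return jobs

jobs = all_data_deletion_jobs(StubPersonalize())
# Collects both pages: purge-2024, then purge-2025.
```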
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / create_dataset_import_job create_dataset_import_job ************************* Personalize.Client.create_dataset_import_job(**kwargs) Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset. To allow Amazon Personalize to import the training data, you must specify an IAM service role that has permission to read from the data source, as Amazon Personalize makes a copy of your data and processes it internally. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources. If you already created a recommender or deployed a custom solution version with a campaign, how new bulk records influence recommendations depends on the domain use case or recipe that you use. For more information, see How new data influences real-time recommendations. Warning: By default, a dataset import job replaces any existing data in the dataset that you imported in bulk. To add new records without replacing existing data, specify INCREMENTAL for the import mode in the CreateDatasetImportJob operation. **Status** A dataset import job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED To get the status of the import job, call DescribeDatasetImportJob, providing the Amazon Resource Name (ARN) of the dataset import job. The dataset import is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a "failureReason" key, which describes why the job failed. Note: Importing takes time. You must wait until the status shows as ACTIVE before training a model using the dataset. 
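The note above implies a poll-until-ACTIVE loop before training. A minimal sketch, where "describe_status" is any zero-argument callable returning the job's current status string; in practice you would wrap "describe_dataset_import_job" and read the status from the response, and poll far less aggressively than the test delay here:

```python
import time

def wait_until_active(describe_status, delay=0.0, max_attempts=60):
    """Poll a status callable until ACTIVE; raise on CREATE FAILED.

    describe_status: zero-arg callable returning a status string, e.g. a
    wrapper that calls describe_dataset_import_job and returns
    response['datasetImportJob']['status'].  Illustrative sketch only;
    real imports take minutes, so use a delay of tens of seconds.
    """
    for _ in range(max_attempts):
        status = describe_status()
        if status == "ACTIVE":
            return status
        if status == "CREATE FAILED":
            raise RuntimeError("import job failed; check failureReason")
        time.sleep(delay)
    raise TimeoutError("job did not become ACTIVE in time")

# Simulated job lifecycle: pending, in progress, then active.
statuses = iter(["CREATE PENDING", "CREATE IN_PROGRESS", "ACTIVE"])
result = wait_until_active(lambda: next(statuses))
# result == "ACTIVE"
```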
**Related APIs** * ListDatasetImportJobs * DescribeDatasetImportJob See also: AWS API Documentation **Request Syntax** response = client.create_dataset_import_job( jobName='string', datasetArn='string', dataSource={ 'dataLocation': 'string' }, roleArn='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ], importMode='FULL'|'INCREMENTAL', publishAttributionMetricsToS3=True|False ) Parameters: * **jobName** (*string*) -- **[REQUIRED]** The name for the dataset import job. * **datasetArn** (*string*) -- **[REQUIRED]** The ARN of the dataset that receives the imported data. * **dataSource** (*dict*) -- **[REQUIRED]** The Amazon S3 bucket that contains the training data to import. * **dataLocation** *(string) --* For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: "s3://bucket-name/folder-name/fileName.csv" If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any sub folder. Use the following syntax with a "/" after the folder name: "s3://bucket-name/folder-name/" * **roleArn** (*string*) -- **[REQUIRED]** The ARN of the IAM role that has permissions to read from the Amazon S3 data source. * **tags** (*list*) -- A list of tags to apply to the dataset import job. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. 
* **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). * **importMode** (*string*) -- Specify how to add the new records to an existing dataset. The default import mode is "FULL". If you haven't imported bulk records into the dataset previously, you can only specify "FULL". * Specify "FULL" to overwrite all existing bulk data in your dataset. Data you imported individually is not replaced. * Specify "INCREMENTAL" to append the new records to the existing data in your dataset. Amazon Personalize replaces any record with the same ID with the new one. * **publishAttributionMetricsToS3** (*boolean*) -- If you created a metric attribution, specify whether to publish metrics for this import job to Amazon S3 Return type: dict Returns: **Response Syntax** { 'datasetImportJobArn': 'string' } **Response Structure** * *(dict) --* * **datasetImportJobArn** *(string) --* The ARN of the dataset import job. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / describe_recommender describe_recommender ******************** Personalize.Client.describe_recommender(**kwargs) Describes the given recommender, including its status. A recommender can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE * DELETE PENDING > DELETE IN_PROGRESS When the "status" is "CREATE FAILED", the response includes the "failureReason" key, which describes why. 
The "modelMetrics" key is null when the recommender is being created or deleted. For more information on recommenders, see CreateRecommender. See also: AWS API Documentation **Request Syntax** response = client.describe_recommender( recommenderArn='string' ) Parameters: **recommenderArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recommender to describe. Return type: dict Returns: **Response Syntax** { 'recommender': { 'recommenderArn': 'string', 'datasetGroupArn': 'string', 'name': 'string', 'recipeArn': 'string', 'recommenderConfig': { 'itemExplorationConfig': { 'string': 'string' }, 'minRecommendationRequestsPerSecond': 123, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'enableMetadataWithRecommendations': True|False }, 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'status': 'string', 'failureReason': 'string', 'latestRecommenderUpdate': { 'recommenderConfig': { 'itemExplorationConfig': { 'string': 'string' }, 'minRecommendationRequestsPerSecond': 123, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'enableMetadataWithRecommendations': True|False }, 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'status': 'string', 'failureReason': 'string' }, 'modelMetrics': { 'string': 123.0 } } } **Response Structure** * *(dict) --* * **recommender** *(dict) --* The properties of the recommender. * **recommenderArn** *(string) --* The Amazon Resource Name (ARN) of the recommender. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the Domain dataset group that contains the recommender. * **name** *(string) --* The name of the recommender. * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe (Domain dataset group use case) that the recommender was created for. * **recommenderConfig** *(dict) --* The configuration details of the recommender. 
* **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your recommenders generate personalized recommendations for a user (not popular items or similar items). * *(string) --* * *(string) --* * **minRecommendationRequestsPerSecond** *(integer) --* Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high "minRecommendationRequestsPerSecond" will increase your bill. We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a domain recommender. * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the recommender. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a recommender, see Enabling metadata in recommendations for a recommender. If you enable metadata in recommendations, you will incur additional costs. 
For more information, see Amazon Personalize pricing. * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the recommender was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix format) that the recommender was last updated. * **status** *(string) --* The status of the recommender. A recommender can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE * DELETE PENDING > DELETE IN_PROGRESS * **failureReason** *(string) --* If a recommender fails, the reason behind the failure. * **latestRecommenderUpdate** *(dict) --* Provides a summary of the latest updates to the recommender. * **recommenderConfig** *(dict) --* The configuration details of the recommender update. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your recommenders generate personalized recommendations for a user (not popular items or similar items). * *(string) --* * *(string) --* * **minRecommendationRequestsPerSecond** *(integer) --* Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high "minRecommendationRequestsPerSecond" will increase your bill. We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a domain recommender. * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. 
Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the recommender. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a recommender, see Enabling metadata in recommendations for a recommender. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the recommender update was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the recommender update was last updated. * **status** *(string) --* The status of the recommender update. A recommender update can be in one of the following states: CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **failureReason** *(string) --* If a recommender update fails, the reason behind the failure. * **modelMetrics** *(dict) --* Provides evaluation metrics that help you determine the performance of a recommender. For more information, see Evaluating a recommender. * *(string) --* * *(float) --* **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / get_paginator get_paginator ************* Personalize.Client.get_paginator(operation_name) Create a paginator for an operation. 
Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Raises: **OperationNotPageableError** -- Raised if the operation is not pageable. You can use the "client.can_paginate" method to check if an operation is pageable. Return type: "botocore.paginate.Paginator" Returns: A paginator object. Personalize / Client / list_batch_inference_jobs list_batch_inference_jobs ************************* Personalize.Client.list_batch_inference_jobs(**kwargs) Gets a list of the batch inference jobs that have been performed off of a solution version. See also: AWS API Documentation **Request Syntax** response = client.list_batch_inference_jobs( solutionVersionArn='string', nextToken='string', maxResults=123 ) Parameters: * **solutionVersionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution version from which the batch inference jobs were created. * **nextToken** (*string*) -- The token to request the next page of results. * **maxResults** (*integer*) -- The maximum number of batch inference job results to return in each page. The default value is 100. Return type: dict Returns: **Response Syntax** { 'batchInferenceJobs': [ { 'batchInferenceJobArn': 'string', 'jobName': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string', 'solutionVersionArn': 'string', 'batchInferenceJobMode': 'BATCH_INFERENCE'|'THEME_GENERATION' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **batchInferenceJobs** *(list) --* A list containing information on each job that is returned. * *(dict) --* A truncated version of the BatchInferenceJob. 
The ListBatchInferenceJobs operation returns a list of batch inference job summaries. * **batchInferenceJobArn** *(string) --* The Amazon Resource Name (ARN) of the batch inference job. * **jobName** *(string) --* The name of the batch inference job. * **status** *(string) --* The status of the batch inference job. The status is one of the following values: * PENDING * IN PROGRESS * ACTIVE * CREATE FAILED * **creationDateTime** *(datetime) --* The time at which the batch inference job was created. * **lastUpdatedDateTime** *(datetime) --* The time at which the batch inference job was last updated. * **failureReason** *(string) --* If the batch inference job failed, the reason for the failure. * **solutionVersionArn** *(string) --* The ARN of the solution version used by the batch inference job. * **batchInferenceJobMode** *(string) --* The job's mode. * **nextToken** *(string) --* The token to use to retrieve the next page of results. The value is "null" when there are no more results to return. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / list_recommenders list_recommenders ***************** Personalize.Client.list_recommenders(**kwargs) Returns a list of recommenders in a given Domain dataset group. When a Domain dataset group is not specified, all the recommenders associated with the account are listed. The response provides the properties for each recommender, including the Amazon Resource Name (ARN). For more information on recommenders, see CreateRecommender. See also: AWS API Documentation **Request Syntax** response = client.list_recommenders( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the Domain dataset group to list the recommenders for. When a Domain dataset group is not specified, all the recommenders associated with the account are listed. 
* **nextToken** (*string*) -- A token returned from the previous call to "ListRecommenders" for getting the next set of recommenders (if they exist). * **maxResults** (*integer*) -- The maximum number of recommenders to return. Return type: dict Returns: **Response Syntax** { 'recommenders': [ { 'name': 'string', 'recommenderArn': 'string', 'datasetGroupArn': 'string', 'recipeArn': 'string', 'recommenderConfig': { 'itemExplorationConfig': { 'string': 'string' }, 'minRecommendationRequestsPerSecond': 123, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'enableMetadataWithRecommendations': True|False }, 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **recommenders** *(list) --* A list of the recommenders. * *(dict) --* Provides a summary of the properties of the recommender. * **name** *(string) --* The name of the recommender. * **recommenderArn** *(string) --* The Amazon Resource Name (ARN) of the recommender. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the Domain dataset group that contains the recommender. * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe (Domain dataset group use case) that the recommender was created for. * **recommenderConfig** *(dict) --* The configuration details of the recommender. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your recommenders generate personalized recommendations for a user (not popular items or similar items). 
* *(string) --* * *(string) --* * **minRecommendationRequestsPerSecond** *(integer) --* Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high "minRecommendationRequestsPerSecond" will increase your bill. We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a domain recommender. * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the recommender. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a recommender, see Enabling metadata in recommendations for a recommender. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **status** *(string) --* The status of the recommender. 
A recommender can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the recommender was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix format) that the recommender was last updated. * **nextToken** *(string) --* A token for getting the next set of recommenders (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / list_filters list_filters ************ Personalize.Client.list_filters(**kwargs) Lists all filters that belong to a given dataset group. See also: AWS API Documentation **Request Syntax** response = client.list_filters( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The ARN of the dataset group that contains the filters. * **nextToken** (*string*) -- A token returned from the previous call to "ListFilters" for getting the next set of filters (if they exist). * **maxResults** (*integer*) -- The maximum number of filters to return. Return type: dict Returns: **Response Syntax** { 'Filters': [ { 'name': 'string', 'filterArn': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'datasetGroupArn': 'string', 'failureReason': 'string', 'status': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **Filters** *(list) --* A list of returned filters. * *(dict) --* A short summary of a filter's attributes. * **name** *(string) --* The name of the filter. * **filterArn** *(string) --* The ARN of the filter. * **creationDateTime** *(datetime) --* The time at which the filter was created. 
* **lastUpdatedDateTime** *(datetime) --* The time at which the filter was last updated. * **datasetGroupArn** *(string) --* The ARN of the dataset group to which the filter belongs. * **failureReason** *(string) --* If the filter failed, the reason for the failure. * **status** *(string) --* The status of the filter. * **nextToken** *(string) --* A token for getting the next set of filters (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / delete_metric_attribution delete_metric_attribution ************************* Personalize.Client.delete_metric_attribution(**kwargs) Deletes a metric attribution. See also: AWS API Documentation **Request Syntax** response = client.delete_metric_attribution( metricAttributionArn='string' ) Parameters: **metricAttributionArn** (*string*) -- **[REQUIRED]** The metric attribution's Amazon Resource Name (ARN). Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / describe_algorithm describe_algorithm ****************** Personalize.Client.describe_algorithm(**kwargs) Describes the given algorithm. See also: AWS API Documentation **Request Syntax** response = client.describe_algorithm( algorithmArn='string' ) Parameters: **algorithmArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the algorithm to describe. 
Return type: dict Returns: **Response Syntax** { 'algorithm': { 'name': 'string', 'algorithmArn': 'string', 'algorithmImage': { 'name': 'string', 'dockerURI': 'string' }, 'defaultHyperParameters': { 'string': 'string' }, 'defaultHyperParameterRanges': { 'integerHyperParameterRanges': [ { 'name': 'string', 'minValue': 123, 'maxValue': 123, 'isTunable': True|False }, ], 'continuousHyperParameterRanges': [ { 'name': 'string', 'minValue': 123.0, 'maxValue': 123.0, 'isTunable': True|False }, ], 'categoricalHyperParameterRanges': [ { 'name': 'string', 'values': [ 'string', ], 'isTunable': True|False }, ] }, 'defaultResourceConfig': { 'string': 'string' }, 'trainingInputMode': 'string', 'roleArn': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **algorithm** *(dict) --* A listing of the properties of the algorithm. * **name** *(string) --* The name of the algorithm. * **algorithmArn** *(string) --* The Amazon Resource Name (ARN) of the algorithm. * **algorithmImage** *(dict) --* The URI of the Docker container for the algorithm image. * **name** *(string) --* The name of the algorithm image. * **dockerURI** *(string) --* The URI of the Docker container for the algorithm image. * **defaultHyperParameters** *(dict) --* Specifies the default hyperparameters. * *(string) --* * *(string) --* * **defaultHyperParameterRanges** *(dict) --* Specifies the default hyperparameters, their ranges, and whether they are tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO). * **integerHyperParameterRanges** *(list) --* The integer-valued hyperparameters and their default ranges. * *(dict) --* Provides the name and default range of an integer-valued hyperparameter and whether the hyperparameter is tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO). 
* **name** *(string) --* The name of the hyperparameter. * **minValue** *(integer) --* The minimum allowable value for the hyperparameter. * **maxValue** *(integer) --* The maximum allowable value for the hyperparameter. * **isTunable** *(boolean) --* Indicates whether the hyperparameter is tunable. * **continuousHyperParameterRanges** *(list) --* The continuous hyperparameters and their default ranges. * *(dict) --* Provides the name and default range of a continuous hyperparameter and whether the hyperparameter is tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO). * **name** *(string) --* The name of the hyperparameter. * **minValue** *(float) --* The minimum allowable value for the hyperparameter. * **maxValue** *(float) --* The maximum allowable value for the hyperparameter. * **isTunable** *(boolean) --* Whether the hyperparameter is tunable. * **categoricalHyperParameterRanges** *(list) --* The categorical hyperparameters and their default ranges. * *(dict) --* Provides the name and default range of a categorical hyperparameter and whether the hyperparameter is tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO). * **name** *(string) --* The name of the hyperparameter. * **values** *(list) --* A list of the categories for the hyperparameter. * *(string) --* * **isTunable** *(boolean) --* Whether the hyperparameter is tunable. * **defaultResourceConfig** *(dict) --* Specifies the default maximum number of training jobs and parallel training jobs. * *(string) --* * *(string) --* * **trainingInputMode** *(string) --* The training input mode. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the role. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the algorithm was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the algorithm was last updated. 
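As a sketch of working with this response shape, the helper below collects the names of all tunable hyperparameters across the three range types in "defaultHyperParameterRanges". The function name is ours; the dictionary layout follows the Response Syntax above, and the "client" would be a "boto3.client('personalize')" instance.

```python
def tunable_hyperparameters(algorithm_description):
    """Collect the names of every tunable hyperparameter from a
    describe_algorithm response (integer, continuous, and categorical)."""
    ranges = algorithm_description["algorithm"]["defaultHyperParameterRanges"]
    names = []
    for key in ("integerHyperParameterRanges",
                "continuousHyperParameterRanges",
                "categoricalHyperParameterRanges"):
        for hp in ranges.get(key, []):
            if hp.get("isTunable"):
                names.append(hp["name"])
    return names

# Hypothetical usage against a live client:
# response = client.describe_algorithm(algorithmArn=algorithm_arn)
# print(tunable_hyperparameters(response))
```

Because the function takes the response dictionary directly, it can be exercised without AWS credentials.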
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / describe_event_tracker describe_event_tracker ********************** Personalize.Client.describe_event_tracker(**kwargs) Describes an event tracker. The response includes the "trackingId" and "status" of the event tracker. For more information on event trackers, see CreateEventTracker. See also: AWS API Documentation **Request Syntax** response = client.describe_event_tracker( eventTrackerArn='string' ) Parameters: **eventTrackerArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the event tracker to describe. Return type: dict Returns: **Response Syntax** { 'eventTracker': { 'name': 'string', 'eventTrackerArn': 'string', 'accountId': 'string', 'trackingId': 'string', 'datasetGroupArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **eventTracker** *(dict) --* An object that describes the event tracker. * **name** *(string) --* The name of the event tracker. * **eventTrackerArn** *(string) --* The ARN of the event tracker. * **accountId** *(string) --* The Amazon Web Services account that owns the event tracker. * **trackingId** *(string) --* The ID of the event tracker. Include this ID in requests to the PutEvents API. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group that receives the event data. * **status** *(string) --* The status of the event tracker. An event tracker can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the event tracker was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the event tracker was last updated. 
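Since the "trackingId" is what a PutEvents request needs, a small hedged sketch (the function name is ours) that fetches it and refuses to return the ID of a tracker that is not yet "ACTIVE":

```python
def get_tracking_id(client, event_tracker_arn):
    """Return the trackingId of an event tracker, raising if the
    tracker is not ACTIVE (per the state diagram above)."""
    tracker = client.describe_event_tracker(
        eventTrackerArn=event_tracker_arn)["eventTracker"]
    if tracker["status"] != "ACTIVE":
        raise RuntimeError(
            f"event tracker is {tracker['status']}, not ACTIVE")
    return tracker["trackingId"]
```

"client" is assumed to be a "boto3.client('personalize')" instance; the response keys come from the Response Syntax above.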
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / delete_recommender delete_recommender ****************** Personalize.Client.delete_recommender(**kwargs) Deactivates and removes a recommender. A deleted recommender can no longer be specified in a GetRecommendations request. See also: AWS API Documentation **Request Syntax** response = client.delete_recommender( recommenderArn='string' ) Parameters: **recommenderArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recommender to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / describe_dataset_export_job describe_dataset_export_job *************************** Personalize.Client.describe_dataset_export_job(**kwargs) Describes the dataset export job created by CreateDatasetExportJob, including the export job status. See also: AWS API Documentation **Request Syntax** response = client.describe_dataset_export_job( datasetExportJobArn='string' ) Parameters: **datasetExportJobArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset export job to describe. Return type: dict Returns: **Response Syntax** { 'datasetExportJob': { 'jobName': 'string', 'datasetExportJobArn': 'string', 'datasetArn': 'string', 'ingestionMode': 'BULK'|'PUT'|'ALL', 'roleArn': 'string', 'status': 'string', 'jobOutput': { 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' } }, 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' } } **Response Structure** * *(dict) --* * **datasetExportJob** *(dict) --* Information about the dataset export job, including the status. 
The status is one of the following values: * CREATE PENDING * CREATE IN_PROGRESS * ACTIVE * CREATE FAILED * **jobName** *(string) --* The name of the export job. * **datasetExportJobArn** *(string) --* The Amazon Resource Name (ARN) of the dataset export job. * **datasetArn** *(string) --* The Amazon Resource Name (ARN) of the dataset to export. * **ingestionMode** *(string) --* The data to export, based on how you imported the data. You can choose to export "BULK" data that you imported using a dataset import job, "PUT" data that you imported incrementally (using the console, PutEvents, PutUsers and PutItems operations), or "ALL" for both types. The default value is "PUT". * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket. * **status** *(string) --* The status of the dataset export job. A dataset export job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **jobOutput** *(dict) --* The path to the Amazon S3 bucket where the job's output is stored. For example: "s3://bucket-name/folder-name/" * **s3DataDestination** *(dict) --* The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the dataset export job. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) the status of the dataset export job was last updated. * **failureReason** *(string) --* If a dataset export job fails, provides the reason why. 
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / stop_recommender stop_recommender **************** Personalize.Client.stop_recommender(**kwargs) Stops a recommender that is ACTIVE. Stopping a recommender halts billing and automatic retraining for the recommender. See also: AWS API Documentation **Request Syntax** response = client.stop_recommender( recommenderArn='string' ) Parameters: **recommenderArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recommender to stop. Return type: dict Returns: **Response Syntax** { 'recommenderArn': 'string' } **Response Structure** * *(dict) --* * **recommenderArn** *(string) --* The Amazon Resource Name (ARN) of the recommender you stopped. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / create_recommender create_recommender ****************** Personalize.Client.create_recommender(**kwargs) Creates a recommender with the recipe (a Domain dataset group use case) you specify. You create recommenders for a Domain dataset group and specify the recommender's Amazon Resource Name (ARN) when you make a GetRecommendations request. **Minimum recommendation requests per second** Warning: A high "minRecommendationRequestsPerSecond" will increase your bill. We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. When you create a recommender, you can configure the recommender's minimum recommendation requests per second. The minimum recommendation requests per second ( "minRecommendationRequestsPerSecond") specifies the baseline recommendation request throughput provisioned by Amazon Personalize. 
The default minRecommendationRequestsPerSecond is "1". A recommendation request is a single "GetRecommendations" operation. Request throughput is measured in requests per second and Amazon Personalize uses your requests per second to derive your requests per hour and the price of your recommender usage. If your requests per second increases beyond "minRecommendationRequestsPerSecond", Amazon Personalize auto-scales the provisioned capacity up and down, but never below "minRecommendationRequestsPerSecond". There's a short time delay while the capacity is increased that might cause loss of requests. Your bill is the greater of either the minimum requests per hour (based on minRecommendationRequestsPerSecond) or the actual number of requests. The actual request throughput used is calculated as the average requests/second within a one-hour window. We recommend starting with the default "minRecommendationRequestsPerSecond", tracking your usage using Amazon CloudWatch metrics, and then increasing the "minRecommendationRequestsPerSecond" as necessary. **Status** A recommender can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE * DELETE PENDING > DELETE IN_PROGRESS To get the recommender status, call DescribeRecommender. Note: Wait until the "status" of the recommender is "ACTIVE" before asking the recommender for recommendations. 
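The waiting step above can be sketched as a simple poll on DescribeRecommender that returns once the recommender leaves the transient states in the diagram. The function name and poll interval are ours; the transient-state set is transcribed from the Status section.

```python
import time

# Transient states from the Status section; everything else
# (ACTIVE, INACTIVE, CREATE FAILED, ...) is treated as settled.
TRANSIENT = {"CREATE PENDING", "CREATE IN_PROGRESS",
             "START PENDING", "START IN_PROGRESS",
             "STOP PENDING", "STOP IN_PROGRESS"}

def wait_until_settled(client, recommender_arn, poll_seconds=120):
    """Poll describe_recommender until the recommender leaves its
    transient states, and return the final status string."""
    while True:
        status = client.describe_recommender(
            recommenderArn=recommender_arn)["recommender"]["status"]
        if status not in TRANSIENT:
            return status
        time.sleep(poll_seconds)
```

Only a status of "ACTIVE" means the recommender is ready for GetRecommendations requests; callers should still check the returned value.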
**Related APIs** * ListRecommenders * DescribeRecommender * UpdateRecommender * DeleteRecommender See also: AWS API Documentation **Request Syntax** response = client.create_recommender( name='string', datasetGroupArn='string', recipeArn='string', recommenderConfig={ 'itemExplorationConfig': { 'string': 'string' }, 'minRecommendationRequestsPerSecond': 123, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'enableMetadataWithRecommendations': True|False }, tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name of the recommender. * **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the destination domain dataset group for the recommender. * **recipeArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recipe that the recommender will use. For a recommender, a recipe is a Domain dataset group use case. Only Domain dataset group use cases can be used to create a recommender. For information about use cases see Choosing recommender use cases. * **recommenderConfig** (*dict*) -- The configuration details of the recommender. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your recommenders generate personalized recommendations for a user (not popular items or similar items). * *(string) --* * *(string) --* * **minRecommendationRequestsPerSecond** *(integer) --* Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high "minRecommendationRequestsPerSecond" will increase your bill. We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). 
Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a domain recommender. * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the recommender. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a recommender, see Enabling metadata in recommendations for a recommender. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **tags** (*list*) -- A list of tags to apply to the recommender. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). 
Return type: dict Returns: **Response Syntax** { 'recommenderArn': 'string' } **Response Structure** * *(dict) --* * **recommenderArn** *(string) --* The Amazon Resource Name (ARN) of the recommender. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / can_paginate can_paginate ************ Personalize.Client.can_paginate(operation_name) Check if an operation can be paginated. Parameters: **operation_name** (*string*) -- The operation name. This is the same name as the method name on the client. For example, if the method name is "create_foo", and you'd normally invoke the operation as "client.create_foo(**kwargs)", if the "create_foo" operation can be paginated, you can use the call "client.get_paginator("create_foo")". Returns: "True" if the operation can be paginated, "False" otherwise. Personalize / Client / create_dataset create_dataset ************** Personalize.Client.create_dataset(**kwargs) Creates an empty dataset and adds it to the specified dataset group. Use CreateDatasetImportJob to import your training data to a dataset. There are 5 types of datasets: * Item interactions * Items * Users * Action interactions * Actions Each dataset type has an associated schema with required field types. Only the "Item interactions" dataset is required in order to train a model (also referred to as creating a solution). A dataset can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS To get the status of the dataset, call DescribeDataset. 
**Related APIs** * CreateDatasetGroup * ListDatasets * DescribeDataset * DeleteDataset See also: AWS API Documentation **Request Syntax** response = client.create_dataset( name='string', schemaArn='string', datasetGroupArn='string', datasetType='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name for the dataset. * **schemaArn** (*string*) -- **[REQUIRED]** The ARN of the schema to associate with the dataset. The schema defines the dataset fields. * **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset group to add the dataset to. * **datasetType** (*string*) -- **[REQUIRED]** The type of dataset. One of the following (case insensitive) values: * Interactions * Items * Users * Actions * Action_Interactions * **tags** (*list*) -- A list of tags to apply to the dataset. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'datasetArn': 'string' } **Response Structure** * *(dict) --* * **datasetArn** *(string) --* The ARN of the dataset. 
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / describe_dataset_import_job describe_dataset_import_job *************************** Personalize.Client.describe_dataset_import_job(**kwargs) Describes the dataset import job created by CreateDatasetImportJob, including the import job status. See also: AWS API Documentation **Request Syntax** response = client.describe_dataset_import_job( datasetImportJobArn='string' ) Parameters: **datasetImportJobArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset import job to describe. Return type: dict Returns: **Response Syntax** { 'datasetImportJob': { 'jobName': 'string', 'datasetImportJobArn': 'string', 'datasetArn': 'string', 'dataSource': { 'dataLocation': 'string' }, 'roleArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string', 'importMode': 'FULL'|'INCREMENTAL', 'publishAttributionMetricsToS3': True|False } } **Response Structure** * *(dict) --* * **datasetImportJob** *(dict) --* Information about the dataset import job, including the status. The status is one of the following values: * CREATE PENDING * CREATE IN_PROGRESS * ACTIVE * CREATE FAILED * **jobName** *(string) --* The name of the import job. * **datasetImportJobArn** *(string) --* The ARN of the dataset import job. * **datasetArn** *(string) --* The Amazon Resource Name (ARN) of the dataset that receives the imported data. * **dataSource** *(dict) --* The Amazon S3 bucket that contains the training data to import. 
* **dataLocation** *(string) --* For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: "s3://bucket-name/folder-name/fileName.csv" If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any sub folder. Use the following syntax with a "/" after the folder name: "s3://bucket-name/folder-name/" * **roleArn** *(string) --* The ARN of the IAM role that has permissions to read from the Amazon S3 data source. * **status** *(string) --* The status of the dataset import job. A dataset import job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the dataset import job. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) the dataset was last updated. * **failureReason** *(string) --* If a dataset import job fails, provides the reason why. * **importMode** *(string) --* The import mode used by the dataset import job to import new records. * **publishAttributionMetricsToS3** *(boolean) --* Whether the job publishes metrics to Amazon S3 for a metric attribution. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / list_metric_attributions list_metric_attributions ************************ Personalize.Client.list_metric_attributions(**kwargs) Lists metric attributions. 
See also: AWS API Documentation **Request Syntax** response = client.list_metric_attributions( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The metric attributions' dataset group Amazon Resource Name (ARN). * **nextToken** (*string*) -- Specify the pagination token from a previous request to retrieve the next page of results. * **maxResults** (*integer*) -- The maximum number of metric attributions to return in one page of results. Return type: dict Returns: **Response Syntax** { 'metricAttributions': [ { 'name': 'string', 'metricAttributionArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **metricAttributions** *(list) --* The list of metric attributions. * *(dict) --* Provides a summary of the properties of a metric attribution. For a complete listing, call the DescribeMetricAttribution. * **name** *(string) --* The name of the metric attribution. * **metricAttributionArn** *(string) --* The metric attribution's Amazon Resource Name (ARN). * **status** *(string) --* The metric attribution's status. * **creationDateTime** *(datetime) --* The metric attribution's creation date time. * **lastUpdatedDateTime** *(datetime) --* The metric attribution's last updated date time. * **failureReason** *(string) --* The metric attribution's failure reason. * **nextToken** *(string) --* Specify the pagination token from a previous request to retrieve the next page of results. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / delete_solution delete_solution *************** Personalize.Client.delete_solution(**kwargs) Deletes all versions of a solution and the "Solution" object itself. 
Before deleting a solution, you must delete all campaigns based on the solution. To determine what campaigns are using the solution, call ListCampaigns and supply the Amazon Resource Name (ARN) of the solution. You can't delete a solution if an associated "SolutionVersion" is in the CREATE PENDING or IN PROGRESS state. For more information on solutions, see CreateSolution. See also: AWS API Documentation **Request Syntax** response = client.delete_solution( solutionArn='string' ) Parameters: **solutionArn** (*string*) -- **[REQUIRED]** The ARN of the solution to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / delete_dataset delete_dataset ************** Personalize.Client.delete_dataset(**kwargs) Deletes a dataset. You can't delete a dataset if an associated "DatasetImportJob" or "SolutionVersion" is in the CREATE PENDING or IN PROGRESS state. For more information about deleting datasets, see Deleting a dataset. See also: AWS API Documentation **Request Syntax** response = client.delete_dataset( datasetArn='string' ) Parameters: **datasetArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / list_solutions list_solutions ************** Personalize.Client.list_solutions(**kwargs) Returns a list of solutions in a given dataset group. When a dataset group is not specified, all the solutions associated with the account are listed. The response provides the properties for each solution, including the Amazon Resource Name (ARN). For more information on solutions, see CreateSolution. 
See also: AWS API Documentation **Request Syntax** response = client.list_solutions( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset group. * **nextToken** (*string*) -- A token returned from the previous call to "ListSolutions" for getting the next set of solutions (if they exist). * **maxResults** (*integer*) -- The maximum number of solutions to return. Return type: dict Returns: **Response Syntax** { 'solutions': [ { 'name': 'string', 'solutionArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'recipeArn': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **solutions** *(list) --* A list of the current solutions. * *(dict) --* Provides a summary of the properties of a solution. For a complete listing, call the DescribeSolution API. * **name** *(string) --* The name of the solution. * **solutionArn** *(string) --* The Amazon Resource Name (ARN) of the solution. * **status** *(string) --* The status of the solution. A solution can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the solution was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution was last updated. * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe used by the solution. * **nextToken** *(string) --* A token for getting the next set of solutions (if they exist). 
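The "nextToken" handling described above can be wrapped in a small helper. This is an illustrative sketch, not part of the API: "list_all_solutions" is a hypothetical name, and the client is passed in as a parameter so the loop itself carries no credentials or region configuration.

```python
def list_all_solutions(client, dataset_group_arn=None):
    """Collect every solution summary, following nextToken until it is absent."""
    kwargs = {'maxResults': 100}
    if dataset_group_arn:
        kwargs['datasetGroupArn'] = dataset_group_arn
    solutions = []
    while True:
        response = client.list_solutions(**kwargs)
        solutions.extend(response.get('solutions', []))
        token = response.get('nextToken')
        if not token:
            return solutions
        # Feed the token back in to fetch the next page.
        kwargs['nextToken'] = token

# Usage (requires AWS credentials and a configured region):
#   client = boto3.client('personalize')
#   for s in list_all_solutions(client):
#       print(s['solutionArn'], s['status'])
```

Note that a ListSolutions paginator is also available via "get_paginator", which performs this token loop for you.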
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / create_dataset_export_job create_dataset_export_job ************************* Personalize.Client.create_dataset_export_job(**kwargs) Creates a job that exports data from your dataset to an Amazon S3 bucket. To allow Amazon Personalize to export the training data, you must specify a service-linked IAM role that gives Amazon Personalize "PutObject" permissions for your Amazon S3 bucket. For information, see Exporting a dataset in the Amazon Personalize developer guide. **Status** A dataset export job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED To get the status of the export job, call DescribeDatasetExportJob, and specify the Amazon Resource Name (ARN) of the dataset export job. The dataset export is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a "failureReason" key, which describes why the job failed. See also: AWS API Documentation **Request Syntax** response = client.create_dataset_export_job( jobName='string', datasetArn='string', ingestionMode='BULK'|'PUT'|'ALL', roleArn='string', jobOutput={ 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' } }, tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **jobName** (*string*) -- **[REQUIRED]** The name for the dataset export job. * **datasetArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset that contains the data to export. * **ingestionMode** (*string*) -- The data to export, based on how you imported the data. You can choose to export only "BULK" data that you imported using a dataset import job, only "PUT" data that you imported incrementally (using the console, PutEvents, PutUsers and PutItems operations), or "ALL" for both types. The default value is "PUT". 
* **roleArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket. * **jobOutput** (*dict*) -- **[REQUIRED]** The path to the Amazon S3 bucket where the job's output is stored. * **s3DataDestination** *(dict) --* **[REQUIRED]** The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **tags** (*list*) -- A list of tags to apply to the dataset export job. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'datasetExportJobArn': 'string' } **Response Structure** * *(dict) --* * **datasetExportJobArn** *(string) --* The Amazon Resource Name (ARN) of the dataset export job. 
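Putting the status lifecycle above into code, a create-then-poll sketch might look like the following. "export_dataset" is a hypothetical helper, not an API name; the job name, ARNs, and S3 path are placeholders supplied by the caller, and the client is passed in.

```python
import time

def export_dataset(client, job_name, dataset_arn, role_arn, s3_path,
                   ingestion_mode='ALL', poll_seconds=60):
    """Start a dataset export job and block until it reaches a terminal status."""
    arn = client.create_dataset_export_job(
        jobName=job_name,
        datasetArn=dataset_arn,
        ingestionMode=ingestion_mode,   # 'BULK', 'PUT', or 'ALL'
        roleArn=role_arn,               # role with PutObject on the bucket
        jobOutput={'s3DataDestination': {'path': s3_path}},
    )['datasetExportJobArn']
    while True:
        job = client.describe_dataset_export_job(
            datasetExportJobArn=arn)['datasetExportJob']
        if job['status'] in ('ACTIVE', 'CREATE FAILED'):
            return arn, job['status']
        time.sleep(poll_seconds)
```

If the returned status is CREATE FAILED, describe the job again and inspect its "failureReason" field for the cause.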
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / describe_data_deletion_job describe_data_deletion_job ************************** Personalize.Client.describe_data_deletion_job(**kwargs) Describes the data deletion job created by CreateDataDeletionJob, including the job status. See also: AWS API Documentation **Request Syntax** response = client.describe_data_deletion_job( dataDeletionJobArn='string' ) Parameters: **dataDeletionJobArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the data deletion job. Return type: dict Returns: **Response Syntax** { 'dataDeletionJob': { 'jobName': 'string', 'dataDeletionJobArn': 'string', 'datasetGroupArn': 'string', 'dataSource': { 'dataLocation': 'string' }, 'roleArn': 'string', 'status': 'string', 'numDeleted': 123, 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' } } **Response Structure** * *(dict) --* * **dataDeletionJob** *(dict) --* Information about the data deletion job, including the status. The status is one of the following values: * PENDING * IN_PROGRESS * COMPLETED * FAILED * **jobName** *(string) --* The name of the data deletion job. * **dataDeletionJobArn** *(string) --* The Amazon Resource Name (ARN) of the data deletion job. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group the job deletes records from. * **dataSource** *(dict) --* Describes the data source that contains the data to upload to a dataset, or the list of records to delete from Amazon Personalize. 
* **dataLocation** *(string) --* For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: "s3://bucket-name/folder-name/fileName.csv" If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any sub folder. Use the following syntax with a "/" after the folder name: "s3://bucket-name/folder-name/" * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source. * **status** *(string) --* The status of the data deletion job. A data deletion job can have one of the following statuses: * PENDING > IN_PROGRESS > COMPLETED -or- FAILED * **numDeleted** *(integer) --* The number of records deleted by a COMPLETED job. * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the data deletion job. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) the data deletion job was last updated. * **failureReason** *(string) --* If a data deletion job fails, provides the reason why. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / update_metric_attribution update_metric_attribution ************************* Personalize.Client.update_metric_attribution(**kwargs) Updates a metric attribution. 
See also: AWS API Documentation **Request Syntax** response = client.update_metric_attribution( addMetrics=[ { 'eventType': 'string', 'metricName': 'string', 'expression': 'string' }, ], removeMetrics=[ 'string', ], metricsOutputConfig={ 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' }, 'roleArn': 'string' }, metricAttributionArn='string' ) Parameters: * **addMetrics** (*list*) -- Add new metric attributes to the metric attribution. * *(dict) --* Contains information on a metric that a metric attribution reports on. For more information, see Measuring impact of recommendations. * **eventType** *(string) --* **[REQUIRED]** The metric's event type. * **metricName** *(string) --* **[REQUIRED]** The metric's name. The name helps you identify the metric in Amazon CloudWatch or Amazon S3. * **expression** *(string) --* **[REQUIRED]** The attribute's expression. Available functions are "SUM()" or "SAMPLECOUNT()". For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example SUM(Items.PRICE). * **removeMetrics** (*list*) -- Remove metric attributes from the metric attribution. * *(string) --* * **metricsOutputConfig** (*dict*) -- An output config for the metric attribution. * **s3DataDestination** *(dict) --* The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **roleArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket and add metrics to Amazon CloudWatch. For more information, see Measuring impact of recommendations. * **metricAttributionArn** (*string*) -- The Amazon Resource Name (ARN) for the metric attribution to update. 
Return type: dict Returns: **Response Syntax** { 'metricAttributionArn': 'string' } **Response Structure** * *(dict) --* * **metricAttributionArn** *(string) --* The Amazon Resource Name (ARN) for the metric attribution that you updated. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" Personalize / Client / create_batch_inference_job create_batch_inference_job ************************** Personalize.Client.create_batch_inference_job(**kwargs) Generates batch recommendations based on a list of items or users stored in Amazon S3 and exports the recommendations to an Amazon S3 bucket. To generate batch recommendations, specify the ARN of a solution version and an Amazon S3 URI for the input and output data. For user personalization, popular items, and personalized ranking solutions, the batch inference job generates a list of recommended items for each user ID in the input file. For related items solutions, the job generates a list of recommended items for each item ID in the input file. For more information, see Creating a batch inference job. If you use the Similar-Items recipe, Amazon Personalize can add descriptive themes to batch recommendations. To generate themes, set the job's mode to "THEME_GENERATION" and specify the name of the field that contains item names in the input data. For more information about generating themes, see Batch recommendations with themes from Content Generator. You can't get batch recommendations with the Trending-Now or Next-Best-Action recipes. 
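The same create-then-poll pattern applies to batch inference jobs. The sketch below is illustrative only: "run_batch_inference" is a hypothetical helper, the ARNs and S3 URIs are placeholders, and it covers the default "BATCH_INFERENCE" mode rather than theme generation.

```python
import time

def run_batch_inference(client, job_name, solution_version_arn, role_arn,
                        input_path, output_path, num_results=25,
                        poll_seconds=60):
    """Create a batch inference job and wait for ACTIVE or CREATE FAILED."""
    arn = client.create_batch_inference_job(
        jobName=job_name,
        solutionVersionArn=solution_version_arn,
        roleArn=role_arn,   # must read the input bucket and write the output bucket
        numResults=num_results,
        jobInput={'s3DataSource': {'path': input_path}},        # JSON input file
        jobOutput={'s3DataDestination': {'path': output_path}},
    )['batchInferenceJobArn']
    while True:
        job = client.describe_batch_inference_job(
            batchInferenceJobArn=arn)['batchInferenceJob']
        if job['status'] in ('ACTIVE', 'CREATE FAILED'):
            return arn, job['status']
        time.sleep(poll_seconds)
```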
See also: AWS API Documentation **Request Syntax** response = client.create_batch_inference_job( jobName='string', solutionVersionArn='string', filterArn='string', numResults=123, jobInput={ 's3DataSource': { 'path': 'string', 'kmsKeyArn': 'string' } }, jobOutput={ 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' } }, roleArn='string', batchInferenceJobConfig={ 'itemExplorationConfig': { 'string': 'string' } }, tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ], batchInferenceJobMode='BATCH_INFERENCE'|'THEME_GENERATION', themeGenerationConfig={ 'fieldsForThemeGeneration': { 'itemName': 'string' } } ) Parameters: * **jobName** (*string*) -- **[REQUIRED]** The name of the batch inference job to create. * **solutionVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution version that will be used to generate the batch inference recommendations. * **filterArn** (*string*) -- The ARN of the filter to apply to the batch inference job. For more information on using filters, see Filtering batch recommendations. * **numResults** (*integer*) -- The number of recommendations to retrieve. * **jobInput** (*dict*) -- **[REQUIRED]** The Amazon S3 path that leads to the input file to base your recommendations on. The input material must be in JSON format. * **s3DataSource** *(dict) --* **[REQUIRED]** The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **jobOutput** (*dict*) -- **[REQUIRED]** The path to the Amazon S3 bucket where the job's output will be stored. 
* **s3DataDestination** *(dict) --* **[REQUIRED]** Information on the Amazon S3 bucket in which the batch inference job's output is stored. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **roleArn** (*string*) -- **[REQUIRED]** The ARN of the Amazon Identity and Access Management role that has permissions to read and write to your input and output Amazon S3 buckets respectively. * **batchInferenceJobConfig** (*dict*) -- The configuration details of a batch inference job. * **itemExplorationConfig** *(dict) --* A string to string map specifying the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. See User-Personalization. * *(string) --* * *(string) --* * **tags** (*list*) -- A list of tags to apply to the batch inference job. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). * **batchInferenceJobMode** (*string*) -- The mode of the batch inference job. To generate descriptive themes for groups of similar items, set the job mode to "THEME_GENERATION". If you don't want to generate themes, use the default "BATCH_INFERENCE". 
When you get batch recommendations with themes, you will incur additional costs. For more information, see Amazon Personalize pricing. * **themeGenerationConfig** (*dict*) -- For theme generation jobs, specify the name of the column in your Items dataset that contains each item's name. * **fieldsForThemeGeneration** *(dict) --* **[REQUIRED]** Fields used to generate descriptive themes for a batch inference job. * **itemName** *(string) --* **[REQUIRED]** The name of the Items dataset column that stores the name of each item in the dataset. Return type: dict Returns: **Response Syntax** { 'batchInferenceJobArn': 'string' } **Response Structure** * *(dict) --* * **batchInferenceJobArn** *(string) --* The ARN of the batch inference job. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / create_metric_attribution create_metric_attribution ************************* Personalize.Client.create_metric_attribution(**kwargs) Creates a metric attribution. A metric attribution creates reports on the data that you import into Amazon Personalize. Depending on how you imported the data, you can view reports in Amazon CloudWatch or Amazon S3. For more information, see Measuring impact of recommendations. See also: AWS API Documentation **Request Syntax** response = client.create_metric_attribution( name='string', datasetGroupArn='string', metrics=[ { 'eventType': 'string', 'metricName': 'string', 'expression': 'string' }, ], metricsOutputConfig={ 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' }, 'roleArn': 'string' } ) Parameters: * **name** (*string*) -- **[REQUIRED]** A name for the metric attribution. 
* **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the destination dataset group for the metric attribution. * **metrics** (*list*) -- **[REQUIRED]** A list of metric attributes for the metric attribution. Each metric attribute specifies an event type to track and a function. Available functions are "SUM()" or "SAMPLECOUNT()". For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example SUM(Items.PRICE). * *(dict) --* Contains information on a metric that a metric attribution reports on. For more information, see Measuring impact of recommendations. * **eventType** *(string) --* **[REQUIRED]** The metric's event type. * **metricName** *(string) --* **[REQUIRED]** The metric's name. The name helps you identify the metric in Amazon CloudWatch or Amazon S3. * **expression** *(string) --* **[REQUIRED]** The attribute's expression. Available functions are "SUM()" or "SAMPLECOUNT()". For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example SUM(Items.PRICE). * **metricsOutputConfig** (*dict*) -- **[REQUIRED]** The output configuration details for the metric attribution. * **s3DataDestination** *(dict) --* The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **roleArn** *(string) --* **[REQUIRED]** The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket and add metrics to Amazon CloudWatch. For more information, see Measuring impact of recommendations. 
Return type: dict Returns: **Response Syntax** { 'metricAttributionArn': 'string' } **Response Structure** * *(dict) --* * **metricAttributionArn** *(string) --* The Amazon Resource Name (ARN) for the new metric attribution. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.LimitExceededException" Personalize / Client / describe_campaign describe_campaign ***************** Personalize.Client.describe_campaign(**kwargs) Describes the given campaign, including its status. A campaign can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS When the "status" is "CREATE FAILED", the response includes the "failureReason" key, which describes why. For more information on campaigns, see CreateCampaign. See also: AWS API Documentation **Request Syntax** response = client.describe_campaign( campaignArn='string' ) Parameters: **campaignArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the campaign. 
Return type: dict Returns: **Response Syntax** { 'campaign': { 'name': 'string', 'campaignArn': 'string', 'solutionVersionArn': 'string', 'minProvisionedTPS': 123, 'campaignConfig': { 'itemExplorationConfig': { 'string': 'string' }, 'enableMetadataWithRecommendations': True|False, 'syncWithLatestSolutionVersion': True|False }, 'status': 'string', 'failureReason': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'latestCampaignUpdate': { 'solutionVersionArn': 'string', 'minProvisionedTPS': 123, 'campaignConfig': { 'itemExplorationConfig': { 'string': 'string' }, 'enableMetadataWithRecommendations': True|False, 'syncWithLatestSolutionVersion': True|False }, 'status': 'string', 'failureReason': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) } } } **Response Structure** * *(dict) --* * **campaign** *(dict) --* The properties of the campaign. * **name** *(string) --* The name of the campaign. * **campaignArn** *(string) --* The Amazon Resource Name (ARN) of the campaign. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version the campaign uses. * **minProvisionedTPS** *(integer) --* Specifies the requested minimum provisioned transactions (recommendations) per second. A high "minProvisionedTPS" will increase your bill. We recommend starting with 1 for "minProvisionedTPS" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minProvisionedTPS" as necessary. * **campaignConfig** *(dict) --* The configuration details of a campaign. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your solution uses the User-Personalization recipe. 
* *(string) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the campaign. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a campaign, see Enabling metadata in recommendations for a campaign. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **syncWithLatestSolutionVersion** *(boolean) --* Whether the campaign automatically updates to use the latest solution version (trained model) of a solution. If you specify "True", you must specify the ARN of your *solution* for the "SolutionVersionArn" parameter. It must be in "SolutionArn/$LATEST" format. The default is "False" and you must manually update the campaign to deploy the latest solution version. For more information about automatic campaign updates, see Enabling automatic campaign updates. * **status** *(string) --* The status of the campaign. A campaign can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **failureReason** *(string) --* If a campaign fails, the reason behind the failure. * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the campaign was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix format) that the campaign was last updated. * **latestCampaignUpdate** *(dict) --* Provides a summary of the properties of a campaign update. For a complete listing, call the DescribeCampaign API. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the deployed solution version. 
* **minProvisionedTPS** *(integer) --* Specifies the requested minimum provisioned transactions (recommendations) per second that Amazon Personalize will support. * **campaignConfig** *(dict) --* The configuration details of a campaign. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your solution uses the User-Personalization recipe. * *(string) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the campaign. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a campaign, see Enabling metadata in recommendations for a campaign. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **syncWithLatestSolutionVersion** *(boolean) --* Whether the campaign automatically updates to use the latest solution version (trained model) of a solution. If you specify "True", you must specify the ARN of your *solution* for the "SolutionVersionArn" parameter. It must be in "SolutionArn/$LATEST" format. The default is "False" and you must manually update the campaign to deploy the latest solution version. For more information about automatic campaign updates, see Enabling automatic campaign updates. * **status** *(string) --* The status of the campaign update. 
A campaign update can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **failureReason** *(string) --* If a campaign update fails, the reason behind the failure. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the campaign update was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the campaign update was last updated. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / update_campaign update_campaign *************** Personalize.Client.update_campaign(**kwargs) Updates a campaign to deploy a retrained solution version with an existing campaign, change your campaign's "minProvisionedTPS", or modify your campaign's configuration. For example, you can set "enableMetadataWithRecommendations" to true for an existing campaign. To update a campaign to start automatically using the latest solution version, specify the following: * For the "SolutionVersionArn" parameter, specify the Amazon Resource Name (ARN) of your solution in "SolutionArn/$LATEST" format. * In the "campaignConfig", set "syncWithLatestSolutionVersion" to "true". To update a campaign, the campaign status must be ACTIVE or CREATE FAILED. Check the campaign status using the DescribeCampaign operation. Note: You can still get recommendations from a campaign while an update is in progress. The campaign will use the previous solution version and campaign configuration to generate recommendations until the latest campaign update status is "Active". For more information about updating a campaign, including code samples, see Updating a campaign. For more information about campaigns, see Creating a campaign. 
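The two settings above (a "SolutionArn/$LATEST" version ARN plus "syncWithLatestSolutionVersion") go into a single UpdateCampaign call. A hedged sketch, with "enable_auto_sync" as a hypothetical helper name and the DescribeCampaign status check folded in; the ARNs are placeholders:

```python
def enable_auto_sync(client, campaign_arn, solution_arn):
    """Switch a campaign to automatic updates against its solution's latest version."""
    status = client.describe_campaign(
        campaignArn=campaign_arn)['campaign']['status']
    # Updates are only accepted in these two states.
    if status not in ('ACTIVE', 'CREATE FAILED'):
        raise RuntimeError(f'campaign must be ACTIVE or CREATE FAILED, got {status!r}')
    return client.update_campaign(
        campaignArn=campaign_arn,
        solutionVersionArn=f'{solution_arn}/$LATEST',  # solution ARN, not a version ARN
        campaignConfig={'syncWithLatestSolutionVersion': True},
    )['campaignArn']
```

The campaign keeps serving recommendations from its previous configuration until the latest campaign update reaches the Active status.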
See also: AWS API Documentation **Request Syntax** response = client.update_campaign( campaignArn='string', solutionVersionArn='string', minProvisionedTPS=123, campaignConfig={ 'itemExplorationConfig': { 'string': 'string' }, 'enableMetadataWithRecommendations': True|False, 'syncWithLatestSolutionVersion': True|False } ) Parameters: * **campaignArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the campaign. * **solutionVersionArn** (*string*) -- The Amazon Resource Name (ARN) of a new model to deploy. To specify the latest solution version of your solution, specify the ARN of your *solution* in "SolutionArn/$LATEST" format. You must use this format if you set "syncWithLatestSolutionVersion" to "True" in the CampaignConfig. To deploy a model that isn't the latest solution version of your solution, specify the ARN of the solution version. For more information about automatic campaign updates, see Enabling automatic campaign updates. * **minProvisionedTPS** (*integer*) -- Specifies the requested minimum provisioned transactions (recommendations) per second that Amazon Personalize will support. A high "minProvisionedTPS" will increase your bill. We recommend starting with 1 for "minProvisionedTPS" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minProvisionedTPS" as necessary. * **campaignConfig** (*dict*) -- The configuration details of a campaign. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your solution uses the User-Personalization recipe. * *(string) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the campaign. 
If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a campaign, see Enabling metadata in recommendations for a campaign. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. * **syncWithLatestSolutionVersion** *(boolean) --* Whether the campaign automatically updates to use the latest solution version (trained model) of a solution. If you specify "True", you must specify the ARN of your *solution* for the "SolutionVersionArn" parameter. It must be in "SolutionArn/$LATEST" format. The default is "False" and you must manually update the campaign to deploy the latest solution version. For more information about automatic campaign updates, see Enabling automatic campaign updates. Return type: dict Returns: **Response Syntax** { 'campaignArn': 'string' } **Response Structure** * *(dict) --* * **campaignArn** *(string) --* The same campaign ARN as given in the request. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / list_recipes list_recipes ************ Personalize.Client.list_recipes(**kwargs) Returns a list of available recipes. The response provides the properties for each recipe, including the recipe's Amazon Resource Name (ARN). See also: AWS API Documentation **Request Syntax** response = client.list_recipes( recipeProvider='SERVICE', nextToken='string', maxResults=123, domain='ECOMMERCE'|'VIDEO_ON_DEMAND' ) Parameters: * **recipeProvider** (*string*) -- The default is "SERVICE". * **nextToken** (*string*) -- A token returned from the previous call to "ListRecipes" for getting the next set of recipes (if they exist). 
* **maxResults** (*integer*) -- The maximum number of recipes to return. * **domain** (*string*) -- Filters returned recipes by domain for a Domain dataset group. Only recipes (Domain dataset group use cases) for this domain are included in the response. If you don't specify a domain, all recipes are returned. Return type: dict Returns: **Response Syntax** { 'recipes': [ { 'name': 'string', 'recipeArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **recipes** *(list) --* The list of available recipes. * *(dict) --* Provides a summary of the properties of a recipe. For a complete listing, call the DescribeRecipe API. * **name** *(string) --* The name of the recipe. * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe. * **status** *(string) --* The status of the recipe. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the recipe was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the recipe was last updated. * **domain** *(string) --* The domain of the recipe (if the recipe is a Domain dataset group use case). * **nextToken** *(string) --* A token for getting the next set of recipes. **Exceptions** * "Personalize.Client.exceptions.InvalidNextTokenException" * "Personalize.Client.exceptions.InvalidInputException" Personalize / Client / list_dataset_export_jobs list_dataset_export_jobs ************************ Personalize.Client.list_dataset_export_jobs(**kwargs) Returns a list of dataset export jobs that use the given dataset. When a dataset is not specified, all the dataset export jobs associated with the account are listed. The response provides the properties for each dataset export job, including the Amazon Resource Name (ARN). 
For more information on dataset export jobs, see CreateDatasetExportJob. For more information on datasets, see CreateDataset. See also: AWS API Documentation **Request Syntax** response = client.list_dataset_export_jobs( datasetArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset to list the dataset export jobs for. * **nextToken** (*string*) -- A token returned from the previous call to "ListDatasetExportJobs" for getting the next set of dataset export jobs (if they exist). * **maxResults** (*integer*) -- The maximum number of dataset export jobs to return. Return type: dict Returns: **Response Syntax** { 'datasetExportJobs': [ { 'datasetExportJobArn': 'string', 'jobName': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datasetExportJobs** *(list) --* The list of dataset export jobs. * *(dict) --* Provides a summary of the properties of a dataset export job. For a complete listing, call the DescribeDatasetExportJob API. * **datasetExportJobArn** *(string) --* The Amazon Resource Name (ARN) of the dataset export job. * **jobName** *(string) --* The name of the dataset export job. * **status** *(string) --* The status of the dataset export job. A dataset export job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset export job was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset export job status was last updated. * **failureReason** *(string) --* If a dataset export job fails, the reason behind the failure. * **nextToken** *(string) --* A token for getting the next set of dataset export jobs (if they exist). 
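The "nextToken" flow described above can be sketched as a small helper that keeps calling "list_dataset_export_jobs" until the token runs out. This is a sketch, not part of the official SDK: the helper name is an assumption, and the commented usage requires AWS credentials and a real dataset ARN.

```python
def collect_export_jobs(client, dataset_arn=None, max_results=100):
    """Follow nextToken across ListDatasetExportJobs responses and
    return every dataset export job summary in one list."""
    jobs = []
    kwargs = {'maxResults': max_results}
    if dataset_arn is not None:
        kwargs['datasetArn'] = dataset_arn
    while True:
        response = client.list_dataset_export_jobs(**kwargs)
        jobs.extend(response.get('datasetExportJobs', []))
        token = response.get('nextToken')
        if not token:
            return jobs
        # Pass the token from this response into the next call.
        kwargs['nextToken'] = token

# Typical use (sketch; needs AWS credentials):
# import boto3
# client = boto3.client('personalize')
# for job in collect_export_jobs(client):
#     print(job['jobName'], job['status'])
```

In practice the "ListDatasetExportJobs" paginator (listed in the paginators section above) does this token bookkeeping for you; the manual loop is shown only to make the "nextToken" contract concrete.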
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / update_dataset update_dataset ************** Personalize.Client.update_dataset(**kwargs) Updates a dataset to replace its schema with a new or existing one. For more information, see Replacing a dataset's schema. See also: AWS API Documentation **Request Syntax** response = client.update_dataset( datasetArn='string', schemaArn='string' ) Parameters: * **datasetArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset that you want to update. * **schemaArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the new schema you want to use. Return type: dict Returns: **Response Syntax** { 'datasetArn': 'string' } **Response Structure** * *(dict) --* * **datasetArn** *(string) --* The Amazon Resource Name (ARN) of the dataset you updated. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / list_tags_for_resource list_tags_for_resource ********************** Personalize.Client.list_tags_for_resource(**kwargs) Gets a list of tags attached to a resource. See also: AWS API Documentation **Request Syntax** response = client.list_tags_for_resource( resourceArn='string' ) Parameters: **resourceArn** (*string*) -- **[REQUIRED]** The resource's Amazon Resource Name (ARN). Return type: dict Returns: **Response Syntax** { 'tags': [ { 'tagKey': 'string', 'tagValue': 'string' }, ] } **Response Structure** * *(dict) --* * **tags** *(list) --* The resource's tags. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information, see Tagging Amazon Personalize resources. 
* **tagKey** *(string) --* One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / create_event_tracker create_event_tracker ******************** Personalize.Client.create_event_tracker(**kwargs) Creates an event tracker that you use when adding event data to a specified dataset group using the PutEvents API. Note: Only one event tracker can be associated with a dataset group. You will get an error if you call "CreateEventTracker" using the same dataset group as an existing event tracker. When you create an event tracker, the response includes a tracking ID, which you pass as a parameter when you use the PutEvents operation. Amazon Personalize then appends the event data to the Item interactions dataset of the dataset group you specify in your event tracker. The event tracker can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS To get the status of the event tracker, call DescribeEventTracker. Note: The event tracker must be in the ACTIVE state before using the tracking ID. **Related APIs** * ListEventTrackers * DescribeEventTracker * DeleteEventTracker See also: AWS API Documentation **Request Syntax** response = client.create_event_tracker( name='string', datasetGroupArn='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name for the event tracker. * **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset group that receives the event data. 
* **tags** (*list*) -- A list of tags to apply to the event tracker. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'eventTrackerArn': 'string', 'trackingId': 'string' } **Response Structure** * *(dict) --* * **eventTrackerArn** *(string) --* The ARN of the event tracker. * **trackingId** *(string) --* The ID of the event tracker. Include this ID in requests to the PutEvents API. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / untag_resource untag_resource ************** Personalize.Client.untag_resource(**kwargs) Removes the specified tags that are attached to a resource. For more information, see Removing tags from Amazon Personalize resources. See also: AWS API Documentation **Request Syntax** response = client.untag_resource( resourceArn='string', tagKeys=[ 'string', ] ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The resource's Amazon Resource Name (ARN). * **tagKeys** (*list*) -- **[REQUIRED]** The keys of the tags to be removed. 
* *(string) --* Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.TooManyTagKeysException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / stop_solution_version_creation stop_solution_version_creation ****************************** Personalize.Client.stop_solution_version_creation(**kwargs) Stops creating a solution version that is in a state of CREATE_PENDING or CREATE_IN_PROGRESS. Depending on the current state of the solution version, the solution version state changes as follows: * CREATE_PENDING > CREATE_STOPPED or * CREATE_IN_PROGRESS > CREATE_STOPPING > CREATE_STOPPED You are billed for all of the training completed up until you stop the solution version creation. You cannot resume creating a solution version once it has been stopped. See also: AWS API Documentation **Request Syntax** response = client.stop_solution_version_creation( solutionVersionArn='string' ) Parameters: **solutionVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution version you want to stop creating. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / update_recommender update_recommender ****************** Personalize.Client.update_recommender(**kwargs) Updates the recommender to modify the recommender configuration. If you update the recommender to modify the columns used in training, Amazon Personalize automatically starts a full retraining of the models backing your recommender. While the update completes, you can still get recommendations from the recommender. The recommender uses the previous configuration until the update completes. 
To track the status of this update, use the "latestRecommenderUpdate" returned in the DescribeRecommender operation. See also: AWS API Documentation **Request Syntax** response = client.update_recommender( recommenderArn='string', recommenderConfig={ 'itemExplorationConfig': { 'string': 'string' }, 'minRecommendationRequestsPerSecond': 123, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'enableMetadataWithRecommendations': True|False } ) Parameters: * **recommenderArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recommender to modify. * **recommenderConfig** (*dict*) -- **[REQUIRED]** The configuration details of the recommender. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your recommenders generate personalized recommendations for a user (not popular items or similar items). * *(string) --* * *(string) --* * **minRecommendationRequestsPerSecond** *(integer) --* Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high "minRecommendationRequestsPerSecond" will increase your bill. We recommend starting with 1 for "minRecommendationRequestsPerSecond" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minRecommendationRequestsPerSecond" as necessary. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a domain recommender. * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. 
For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the recommender. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a recommender, see Enabling metadata in recommendations for a recommender. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. Return type: dict Returns: **Response Syntax** { 'recommenderArn': 'string' } **Response Structure** * *(dict) --* * **recommenderArn** *(string) --* The same recommender Amazon Resource Name (ARN) as given in the request. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / describe_batch_segment_job describe_batch_segment_job ************************** Personalize.Client.describe_batch_segment_job(**kwargs) Gets the properties of a batch segment job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate segments. See also: AWS API Documentation **Request Syntax** response = client.describe_batch_segment_job( batchSegmentJobArn='string' ) Parameters: **batchSegmentJobArn** (*string*) -- **[REQUIRED]** The ARN of the batch segment job to describe. 
Return type: dict Returns: **Response Syntax** { 'batchSegmentJob': { 'jobName': 'string', 'batchSegmentJobArn': 'string', 'filterArn': 'string', 'failureReason': 'string', 'solutionVersionArn': 'string', 'numResults': 123, 'jobInput': { 's3DataSource': { 'path': 'string', 'kmsKeyArn': 'string' } }, 'jobOutput': { 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' } }, 'roleArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **batchSegmentJob** *(dict) --* Information on the specified batch segment job. * **jobName** *(string) --* The name of the batch segment job. * **batchSegmentJobArn** *(string) --* The Amazon Resource Name (ARN) of the batch segment job. * **filterArn** *(string) --* The ARN of the filter used on the batch segment job. * **failureReason** *(string) --* If the batch segment job failed, the reason for the failure. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version used by the batch segment job to generate batch segments. * **numResults** *(integer) --* The number of predicted users generated by the batch segment job for each line of input data. The maximum number of users per segment is 5 million. * **jobInput** *(dict) --* The Amazon S3 path that leads to the input data used to generate the batch segment job. * **s3DataSource** *(dict) --* The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **jobOutput** *(dict) --* The Amazon S3 bucket that contains the output data generated by the batch segment job. * **s3DataDestination** *(dict) --* The configuration details of an Amazon S3 input or output bucket. 
* **path** *(string) --* The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **roleArn** *(string) --* The ARN of the AWS Identity and Access Management (IAM) role that requested the batch segment job. * **status** *(string) --* The status of the batch segment job. The status is one of the following values: * PENDING * IN PROGRESS * ACTIVE * CREATE FAILED * **creationDateTime** *(datetime) --* The time at which the batch segment job was created. * **lastUpdatedDateTime** *(datetime) --* The time at which the batch segment job was last updated. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / create_campaign create_campaign *************** Personalize.Client.create_campaign(**kwargs) Warning: You incur campaign costs while the campaign is active. To avoid unnecessary costs, make sure to delete the campaign when you are finished. For information about campaign costs, see Amazon Personalize pricing. Creates a campaign that deploys a solution version. When a client calls the GetRecommendations and GetPersonalizedRanking APIs, a campaign is specified in the request. **Minimum Provisioned TPS and Auto-Scaling** Warning: A high "minProvisionedTPS" will increase your cost. We recommend starting with 1 for "minProvisionedTPS" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minProvisionedTPS" as necessary. When you create an Amazon Personalize campaign, you can specify the minimum provisioned transactions per second ("minProvisionedTPS") for the campaign. This is the baseline transaction throughput for the campaign provisioned by Amazon Personalize. It sets the minimum billing charge for the campaign while it is active. 
A transaction is a single "GetRecommendations" or "GetPersonalizedRanking" request. The default "minProvisionedTPS" is 1. If your TPS increases beyond the "minProvisionedTPS", Amazon Personalize auto-scales the provisioned capacity up and down, but never below "minProvisionedTPS". There's a short time delay while the capacity is increased that might cause loss of transactions. When your traffic reduces, capacity returns to the "minProvisionedTPS". You are charged for the minimum provisioned TPS or, if your requests exceed the "minProvisionedTPS", the actual TPS. The actual TPS is the total number of recommendation requests you make. We recommend starting with a low "minProvisionedTPS", tracking your usage using Amazon CloudWatch metrics, and then increasing the "minProvisionedTPS" as necessary. For more information about campaign costs, see Amazon Personalize pricing. **Status** A campaign can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS To get the campaign status, call DescribeCampaign. Note: Wait until the "status" of the campaign is "ACTIVE" before asking the campaign for recommendations. **Related APIs** * ListCampaigns * DescribeCampaign * UpdateCampaign * DeleteCampaign See also: AWS API Documentation **Request Syntax** response = client.create_campaign( name='string', solutionVersionArn='string', minProvisionedTPS=123, campaignConfig={ 'itemExplorationConfig': { 'string': 'string' }, 'enableMetadataWithRecommendations': True|False, 'syncWithLatestSolutionVersion': True|False }, tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** A name for the new campaign. The campaign name must be unique within your account. * **solutionVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the trained model to deploy with the campaign. 
To specify the latest solution version of your solution, specify the ARN of your *solution* in "SolutionArn/$LATEST" format. You must use this format if you set "syncWithLatestSolutionVersion" to "True" in the CampaignConfig. To deploy a model that isn't the latest solution version of your solution, specify the ARN of the solution version. For more information about automatic campaign updates, see Enabling automatic campaign updates. * **minProvisionedTPS** (*integer*) -- Specifies the requested minimum provisioned transactions (recommendations) per second that Amazon Personalize will support. A high "minProvisionedTPS" will increase your bill. We recommend starting with 1 for "minProvisionedTPS" (the default). Track your usage using Amazon CloudWatch metrics, and increase the "minProvisionedTPS" as necessary. * **campaignConfig** (*dict*) -- The configuration details of a campaign. * **itemExplorationConfig** *(dict) --* Specifies the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide "itemExplorationConfig" data only if your solution uses the User-Personalization recipe. * *(string) --* * *(string) --* * **enableMetadataWithRecommendations** *(boolean) --* Whether metadata with recommendations is enabled for the campaign. If enabled, you can specify the columns from your Items dataset in your request for recommendations. Amazon Personalize returns this data for each item in the recommendation response. For information about enabling metadata for a campaign, see Enabling metadata in recommendations for a campaign. If you enable metadata in recommendations, you will incur additional costs. For more information, see Amazon Personalize pricing. 
* **syncWithLatestSolutionVersion** *(boolean) --* Whether the campaign automatically updates to use the latest solution version (trained model) of a solution. If you specify "True", you must specify the ARN of your *solution* for the "SolutionVersionArn" parameter. It must be in "SolutionArn/$LATEST" format. The default is "False" and you must manually update the campaign to deploy the latest solution version. For more information about automatic campaign updates, see Enabling automatic campaign updates. * **tags** (*list*) -- A list of tags to apply to the campaign. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'campaignArn': 'string' } **Response Structure** * *(dict) --* * **campaignArn** *(string) --* The Amazon Resource Name (ARN) of the campaign. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / get_waiter get_waiter ********** Personalize.Client.get_waiter(waiter_name) Returns an object that can wait for some condition. Parameters: **waiter_name** (*str*) -- The name of the waiter to get. 
See the waiters section of the service docs for a list of available waiters. Returns: The specified waiter object. Return type: "botocore.waiter.Waiter" Personalize / Client / describe_metric_attribution describe_metric_attribution *************************** Personalize.Client.describe_metric_attribution(**kwargs) Describes a metric attribution. See also: AWS API Documentation **Request Syntax** response = client.describe_metric_attribution( metricAttributionArn='string' ) Parameters: **metricAttributionArn** (*string*) -- **[REQUIRED]** The metric attribution's Amazon Resource Name (ARN). Return type: dict Returns: **Response Syntax** { 'metricAttribution': { 'name': 'string', 'metricAttributionArn': 'string', 'datasetGroupArn': 'string', 'metricsOutputConfig': { 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' }, 'roleArn': 'string' }, 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' } } **Response Structure** * *(dict) --* * **metricAttribution** *(dict) --* The details of the metric attribution. * **name** *(string) --* The metric attribution's name. * **metricAttributionArn** *(string) --* The metric attribution's Amazon Resource Name (ARN). * **datasetGroupArn** *(string) --* The metric attribution's dataset group Amazon Resource Name (ARN). * **metricsOutputConfig** *(dict) --* The metric attribution's output configuration. * **s3DataDestination** *(dict) --* The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **roleArn** *(string) --* The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket and add metrics to Amazon CloudWatch. 
For more information, see Measuring impact of recommendations. * **status** *(string) --* The metric attribution's status. * **creationDateTime** *(datetime) --* The metric attribution's creation date time. * **lastUpdatedDateTime** *(datetime) --* The metric attribution's last updated date time. * **failureReason** *(string) --* The metric attribution's failure reason. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / describe_feature_transformation describe_feature_transformation ******************************* Personalize.Client.describe_feature_transformation(**kwargs) Describes the given feature transformation. See also: AWS API Documentation **Request Syntax** response = client.describe_feature_transformation( featureTransformationArn='string' ) Parameters: **featureTransformationArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the feature transformation to describe. Return type: dict Returns: **Response Syntax** { 'featureTransformation': { 'name': 'string', 'featureTransformationArn': 'string', 'defaultParameters': { 'string': 'string' }, 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'status': 'string' } } **Response Structure** * *(dict) --* * **featureTransformation** *(dict) --* A listing of the FeatureTransformation properties. * **name** *(string) --* The name of the feature transformation. * **featureTransformationArn** *(string) --* The Amazon Resource Name (ARN) of the FeatureTransformation object. * **defaultParameters** *(dict) --* Provides the default parameters for feature transformation. * *(string) --* * *(string) --* * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the feature transformation. * **lastUpdatedDateTime** *(datetime) --* The last update date and time (in Unix time) of the feature transformation. 
* **status** *(string) --* The status of the feature transformation. A feature transformation can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / delete_dataset_group delete_dataset_group ******************** Personalize.Client.delete_dataset_group(**kwargs) Deletes a dataset group. Before you delete a dataset group, you must delete the following: * All associated event trackers. * All associated solutions. * All datasets in the dataset group. See also: AWS API Documentation **Request Syntax** response = client.delete_dataset_group( datasetGroupArn='string' ) Parameters: **datasetGroupArn** (*string*) -- **[REQUIRED]** The ARN of the dataset group to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / update_solution update_solution *************** Personalize.Client.update_solution(**kwargs) Updates an Amazon Personalize solution to use a different automatic training configuration. When you update a solution, you can change whether the solution uses automatic training, and you can change the training frequency. For more information about updating a solution, see Updating a solution. A solution update can be in one of the following states: CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED To get the status of a solution update, call the DescribeSolution API operation and find the status in the "latestSolutionUpdate". 
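The status check just described, reading "latestSolutionUpdate" out of a DescribeSolution response, can be sketched as a small helper. The helper name is an assumption and the response shape follows the description above; the commented boto3 calls require AWS credentials and a real solution ARN.

```python
def solution_update_status(describe_solution_response):
    """Return the status of the most recent solution update, or None
    when the DescribeSolution response carries no latestSolutionUpdate."""
    update = describe_solution_response.get('solution', {}).get('latestSolutionUpdate')
    return update.get('status') if update else None

# Typical use (sketch; needs AWS credentials and a real solution ARN):
# import boto3
# client = boto3.client('personalize')
# resp = client.describe_solution(solutionArn='arn:aws:personalize:us-west-2:111122223333:solution/my-solution')
# print(solution_update_status(resp))
```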
See also: AWS API Documentation **Request Syntax** response = client.update_solution( solutionArn='string', performAutoTraining=True|False, solutionUpdateConfig={ 'autoTrainingConfig': { 'schedulingExpression': 'string' }, 'eventsConfig': { 'eventParametersList': [ { 'eventType': 'string', 'eventValueThreshold': 123.0, 'weight': 123.0 }, ] } } ) Parameters: * **solutionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution to update. * **performAutoTraining** (*boolean*) -- Whether the solution uses automatic training to create new solution versions (trained models). You can change the training frequency by specifying a "schedulingExpression" in the "AutoTrainingConfig" as part of solution configuration. If you turn on automatic training, the first automatic training starts within one hour after the solution update completes. If you manually create a solution version within the hour, the solution skips the first automatic training. For more information about automatic training, see Configuring automatic training. After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion API operation. * **solutionUpdateConfig** (*dict*) -- The new configuration details of the solution. * **autoTrainingConfig** *(dict) --* The automatic training configuration to use when "performAutoTraining" is true. * **schedulingExpression** *(string) --* Specifies how often to automatically train new solution versions. Specify a rate expression in rate(*value* *unit*) format. For value, specify a number between 1 and 30. For unit, specify "day" or "days". For example, to automatically create a new solution version every 5 days, specify "rate(5 days)". The default is every 7 days. For more information about auto training, see Creating and configuring a solution. 
* **eventsConfig** *(dict) --* Describes the configuration of an event, which includes a list of event parameters. You can specify up to 10 event parameters. Events are used in solution creation. * **eventParametersList** *(list) --* A list of event parameters, which includes event types and their event value thresholds and weights. * *(dict) --* Describes the parameters of events, which are used in solution creation. * **eventType** *(string) --* The name of the event type to be considered for solution creation. * **eventValueThreshold** *(float) --* The threshold of the event type. Only events with a value greater or equal to this threshold will be considered for solution creation. * **weight** *(float) --* The weight of the event type. A higher weight means higher importance of the event type for the created solution. Return type: dict Returns: **Response Syntax** { 'solutionArn': 'string' } **Response Structure** * *(dict) --* * **solutionArn** *(string) --* The same solution Amazon Resource Name (ARN) as given in the request. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.LimitExceededException" Personalize / Client / list_dataset_import_jobs list_dataset_import_jobs ************************ Personalize.Client.list_dataset_import_jobs(**kwargs) Returns a list of dataset import jobs that use the given dataset. When a dataset is not specified, all the dataset import jobs associated with the account are listed. The response provides the properties for each dataset import job, including the Amazon Resource Name (ARN). For more information on dataset import jobs, see CreateDatasetImportJob. For more information on datasets, see CreateDataset. 
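The "nextToken" flow used by this operation can be sketched with a small helper. The helper name is illustrative, and the client argument can be any object exposing "list_dataset_import_jobs"; the live boto3 call is shown commented out:

```python
def all_dataset_import_jobs(client, dataset_arn=None):
    """Collect every dataset import job summary by following nextToken
    until the service stops returning one."""
    kwargs = {"maxResults": 100}
    if dataset_arn is not None:
        kwargs["datasetArn"] = dataset_arn
    jobs = []
    while True:
        page = client.list_dataset_import_jobs(**kwargs)
        jobs.extend(page.get("datasetImportJobs", []))
        token = page.get("nextToken")
        if not token:
            return jobs
        kwargs["nextToken"] = token

# Live usage (requires AWS credentials):
#   import boto3
#   jobs = all_dataset_import_jobs(boto3.client("personalize"))
#   print([j["jobName"] for j in jobs])
```

In practice the built-in paginator ( "client.get_paginator('list_dataset_import_jobs')") performs this loop for you; the sketch just makes the token handling explicit.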
See also: AWS API Documentation **Request Syntax** response = client.list_dataset_import_jobs( datasetArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset to list the dataset import jobs for. * **nextToken** (*string*) -- A token returned from the previous call to "ListDatasetImportJobs" for getting the next set of dataset import jobs (if they exist). * **maxResults** (*integer*) -- The maximum number of dataset import jobs to return. Return type: dict Returns: **Response Syntax** { 'datasetImportJobs': [ { 'datasetImportJobArn': 'string', 'jobName': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string', 'importMode': 'FULL'|'INCREMENTAL' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datasetImportJobs** *(list) --* The list of dataset import jobs. * *(dict) --* Provides a summary of the properties of a dataset import job. For a complete listing, call the DescribeDatasetImportJob API. * **datasetImportJobArn** *(string) --* The Amazon Resource Name (ARN) of the dataset import job. * **jobName** *(string) --* The name of the dataset import job. * **status** *(string) --* The status of the dataset import job. A dataset import job can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset import job was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset import job status was last updated. * **failureReason** *(string) --* If a dataset import job fails, the reason behind the failure. * **importMode** *(string) --* The import mode the dataset import job used to update the data in the dataset. For more information see Updating existing bulk data. 
* **nextToken** *(string) --* A token for getting the next set of dataset import jobs (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / describe_solution describe_solution ***************** Personalize.Client.describe_solution(**kwargs) Describes a solution. For more information on solutions, see CreateSolution. See also: AWS API Documentation **Request Syntax** response = client.describe_solution( solutionArn='string' ) Parameters: **solutionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution to describe. Return type: dict Returns: **Response Syntax** { 'solution': { 'name': 'string', 'solutionArn': 'string', 'performHPO': True|False, 'performAutoML': True|False, 'performAutoTraining': True|False, 'recipeArn': 'string', 'datasetGroupArn': 'string', 'eventType': 'string', 'solutionConfig': { 'eventValueThreshold': 'string', 'hpoConfig': { 'hpoObjective': { 'type': 'string', 'metricName': 'string', 'metricRegex': 'string' }, 'hpoResourceConfig': { 'maxNumberOfTrainingJobs': 'string', 'maxParallelTrainingJobs': 'string' }, 'algorithmHyperParameterRanges': { 'integerHyperParameterRanges': [ { 'name': 'string', 'minValue': 123, 'maxValue': 123 }, ], 'continuousHyperParameterRanges': [ { 'name': 'string', 'minValue': 123.0, 'maxValue': 123.0 }, ], 'categoricalHyperParameterRanges': [ { 'name': 'string', 'values': [ 'string', ] }, ] } }, 'algorithmHyperParameters': { 'string': 'string' }, 'featureTransformationParameters': { 'string': 'string' }, 'autoMLConfig': { 'metricName': 'string', 'recipeList': [ 'string', ] }, 'eventsConfig': { 'eventParametersList': [ { 'eventType': 'string', 'eventValueThreshold': 123.0, 'weight': 123.0 }, ] }, 'optimizationObjective': { 'itemAttribute': 'string', 'objectiveSensitivity': 'LOW'|'MEDIUM'|'HIGH'|'OFF' }, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } 
}, 'autoTrainingConfig': { 'schedulingExpression': 'string' } }, 'autoMLResult': { 'bestRecipeArn': 'string' }, 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'latestSolutionVersion': { 'solutionVersionArn': 'string', 'status': 'string', 'trainingMode': 'FULL'|'UPDATE'|'AUTOTRAIN', 'trainingType': 'AUTOMATIC'|'MANUAL', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, 'latestSolutionUpdate': { 'solutionUpdateConfig': { 'autoTrainingConfig': { 'schedulingExpression': 'string' }, 'eventsConfig': { 'eventParametersList': [ { 'eventType': 'string', 'eventValueThreshold': 123.0, 'weight': 123.0 }, ] } }, 'status': 'string', 'performAutoTraining': True|False, 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' } } } **Response Structure** * *(dict) --* * **solution** *(dict) --* An object that describes the solution. * **name** *(string) --* The name of the solution. * **solutionArn** *(string) --* The ARN of the solution. * **performHPO** *(boolean) --* Whether to perform hyperparameter optimization (HPO) on the chosen recipe. The default is "false". * **performAutoML** *(boolean) --* Warning: We don't recommend enabling automated machine learning. Instead, match your use case to the available Amazon Personalize recipes. For more information, see Determining your use case. When true, Amazon Personalize performs a search for the best USER_PERSONALIZATION recipe from the list specified in the solution configuration ( "recipeArn" must not be specified). When false (the default), Amazon Personalize uses "recipeArn" for training. * **performAutoTraining** *(boolean) --* Specifies whether the solution automatically creates solution versions. The default is "True" and the solution automatically creates new solution versions every 7 days. 
For more information about auto training, see Creating and configuring a solution. * **recipeArn** *(string) --* The ARN of the recipe used to create the solution. This is required when "performAutoML" is false. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group that provides the training data. * **eventType** *(string) --* The event type (for example, 'click' or 'like') that is used for training the model. If no "eventType" is provided, Amazon Personalize uses all interactions for training with equal weight regardless of type. * **solutionConfig** *(dict) --* Describes the configuration properties for the solution. * **eventValueThreshold** *(string) --* Only events with a value greater than or equal to this threshold are used for training a model. * **hpoConfig** *(dict) --* Describes the properties for hyperparameter optimization (HPO). * **hpoObjective** *(dict) --* The metric to optimize during HPO. Note: Amazon Personalize doesn't support configuring the "hpoObjective" at this time. * **type** *(string) --* The type of the metric. Valid values are "Maximize" and "Minimize". * **metricName** *(string) --* The name of the metric. * **metricRegex** *(string) --* A regular expression for finding the metric in the training job logs. * **hpoResourceConfig** *(dict) --* Describes the resource configuration for HPO. * **maxNumberOfTrainingJobs** *(string) --* The maximum number of training jobs when you create a solution version. The maximum value for "maxNumberOfTrainingJobs" is "40". * **maxParallelTrainingJobs** *(string) --* The maximum number of parallel training jobs when you create a solution version. The maximum value for "maxParallelTrainingJobs" is "10". * **algorithmHyperParameterRanges** *(dict) --* The hyperparameters and their allowable ranges. * **integerHyperParameterRanges** *(list) --* The integer-valued hyperparameters and their ranges. 
* *(dict) --* Provides the name and range of an integer-valued hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **minValue** *(integer) --* The minimum allowable value for the hyperparameter. * **maxValue** *(integer) --* The maximum allowable value for the hyperparameter. * **continuousHyperParameterRanges** *(list) --* The continuous hyperparameters and their ranges. * *(dict) --* Provides the name and range of a continuous hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **minValue** *(float) --* The minimum allowable value for the hyperparameter. * **maxValue** *(float) --* The maximum allowable value for the hyperparameter. * **categoricalHyperParameterRanges** *(list) --* The categorical hyperparameters and their ranges. * *(dict) --* Provides the name and range of a categorical hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **values** *(list) --* A list of the categories for the hyperparameter. * *(string) --* * **algorithmHyperParameters** *(dict) --* Lists the algorithm hyperparameters and their values. * *(string) --* * *(string) --* * **featureTransformationParameters** *(dict) --* Lists the feature transformation parameters. * *(string) --* * *(string) --* * **autoMLConfig** *(dict) --* The AutoMLConfig object containing a list of recipes to search when AutoML is performed. * **metricName** *(string) --* The metric to optimize. * **recipeList** *(list) --* The list of candidate recipes. * *(string) --* * **eventsConfig** *(dict) --* Describes the configuration of an event, which includes a list of event parameters. You can specify up to 10 event parameters. Events are used in solution creation. * **eventParametersList** *(list) --* A list of event parameters, which includes event types and their event value thresholds and weights. * *(dict) --* Describes the parameters of events, which are used in solution creation. 
* **eventType** *(string) --* The name of the event type to be considered for solution creation. * **eventValueThreshold** *(float) --* The threshold of the event type. Only events with a value greater or equal to this threshold will be considered for solution creation. * **weight** *(float) --* The weight of the event type. A higher weight means higher importance of the event type for the created solution. * **optimizationObjective** *(dict) --* Describes the additional objective for the solution, such as maximizing streaming minutes or increasing revenue. For more information see Optimizing a solution. * **itemAttribute** *(string) --* The numerical metadata column in an Items dataset related to the optimization objective. For example, VIDEO_LENGTH (to maximize streaming minutes), or PRICE (to maximize revenue). * **objectiveSensitivity** *(string) --* Specifies how Amazon Personalize balances the importance of your optimization objective versus relevance. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a custom solution version (trained model). * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. * *(string) --* * *(list) --* * *(string) --* * **autoTrainingConfig** *(dict) --* Specifies the automatic training configuration to use. * **schedulingExpression** *(string) --* Specifies how often to automatically train new solution versions. Specify a rate expression in rate(*value* *unit*) format. For value, specify a number between 1 and 30. For unit, specify "day" or "days". 
For example, to automatically create a new solution version every 5 days, specify "rate(5 days)". The default is every 7 days. For more information about auto training, see Creating and configuring a solution. * **autoMLResult** *(dict) --* When "performAutoML" is true, specifies the best recipe found. * **bestRecipeArn** *(string) --* The Amazon Resource Name (ARN) of the best recipe. * **status** *(string) --* The status of the solution. A solution can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the solution. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution was last updated. * **latestSolutionVersion** *(dict) --* Describes the latest version of the solution, including the status and the ARN. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version. * **status** *(string) --* The status of the solution version. A solution version can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **trainingMode** *(string) --* The scope of training to be performed when creating the solution version. A "FULL" training considers all of the data in your dataset group. An "UPDATE" processes only the data that has changed since the latest training. Only solution versions created with the User- Personalization recipe can use "UPDATE". * **trainingType** *(string) --* Whether the solution version was created automatically or manually. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that this version of a solution was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution version was last updated. * **failureReason** *(string) --* If a solution version fails, the reason behind the failure. 
* **latestSolutionUpdate** *(dict) --* Provides a summary of the latest updates to the solution. * **solutionUpdateConfig** *(dict) --* The configuration details of the solution. * **autoTrainingConfig** *(dict) --* The automatic training configuration to use when "performAutoTraining" is true. * **schedulingExpression** *(string) --* Specifies how often to automatically train new solution versions. Specify a rate expression in rate(*value* *unit*) format. For value, specify a number between 1 and 30. For unit, specify "day" or "days". For example, to automatically create a new solution version every 5 days, specify "rate(5 days)". The default is every 7 days. For more information about auto training, see Creating and configuring a solution. * **eventsConfig** *(dict) --* Describes the configuration of an event, which includes a list of event parameters. You can specify up to 10 event parameters. Events are used in solution creation. * **eventParametersList** *(list) --* A list of event parameters, which includes event types and their event value thresholds and weights. * *(dict) --* Describes the parameters of events, which are used in solution creation. * **eventType** *(string) --* The name of the event type to be considered for solution creation. * **eventValueThreshold** *(float) --* The threshold of the event type. Only events with a value greater or equal to this threshold will be considered for solution creation. * **weight** *(float) --* The weight of the event type. A higher weight means higher importance of the event type for the created solution. * **status** *(string) --* The status of the solution update. A solution update can be in one of the following states: CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **performAutoTraining** *(boolean) --* Whether the solution automatically creates solution versions. * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the solution update was created. 
* **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution update was last updated. * **failureReason** *(string) --* If a solution update fails, the reason behind the failure. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / get_solution_metrics get_solution_metrics ******************** Personalize.Client.get_solution_metrics(**kwargs) Gets the metrics for the specified solution version. See also: AWS API Documentation **Request Syntax** response = client.get_solution_metrics( solutionVersionArn='string' ) Parameters: **solutionVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution version for which to get metrics. Return type: dict Returns: **Response Syntax** { 'solutionVersionArn': 'string', 'metrics': { 'string': 123.0 } } **Response Structure** * *(dict) --* * **solutionVersionArn** *(string) --* The same solution version ARN as specified in the request. * **metrics** *(dict) --* The metrics for the solution version. For more information, see Evaluating a solution version with metrics. * *(string) --* * *(float) --* **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / describe_dataset_group describe_dataset_group ********************** Personalize.Client.describe_dataset_group(**kwargs) Describes the given dataset group. For more information on dataset groups, see CreateDatasetGroup. See also: AWS API Documentation **Request Syntax** response = client.describe_dataset_group( datasetGroupArn='string' ) Parameters: **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset group to describe. 
Return type: dict Returns: **Response Syntax** { 'datasetGroup': { 'name': 'string', 'datasetGroupArn': 'string', 'status': 'string', 'roleArn': 'string', 'kmsKeyArn': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string', 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' } } **Response Structure** * *(dict) --* * **datasetGroup** *(dict) --* A listing of the dataset group's properties. * **name** *(string) --* The name of the dataset group. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group. * **status** *(string) --* The current status of the dataset group. A dataset group can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING * **roleArn** *(string) --* The ARN of the Identity and Access Management (IAM) role that has permissions to access the Key Management Service (KMS) key. Supplying an IAM role is only valid when also specifying a KMS key. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key used to encrypt the datasets. * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the dataset group. * **lastUpdatedDateTime** *(datetime) --* The last update date and time (in Unix time) of the dataset group. * **failureReason** *(string) --* If creating a dataset group fails, provides the reason why. * **domain** *(string) --* The domain of a Domain dataset group. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / describe_solution_version describe_solution_version ************************* Personalize.Client.describe_solution_version(**kwargs) Describes a specific version of a solution. 
For more information on solutions, see CreateSolution See also: AWS API Documentation **Request Syntax** response = client.describe_solution_version( solutionVersionArn='string' ) Parameters: **solutionVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution version. Return type: dict Returns: **Response Syntax** { 'solutionVersion': { 'name': 'string', 'solutionVersionArn': 'string', 'solutionArn': 'string', 'performHPO': True|False, 'performAutoML': True|False, 'recipeArn': 'string', 'eventType': 'string', 'datasetGroupArn': 'string', 'solutionConfig': { 'eventValueThreshold': 'string', 'hpoConfig': { 'hpoObjective': { 'type': 'string', 'metricName': 'string', 'metricRegex': 'string' }, 'hpoResourceConfig': { 'maxNumberOfTrainingJobs': 'string', 'maxParallelTrainingJobs': 'string' }, 'algorithmHyperParameterRanges': { 'integerHyperParameterRanges': [ { 'name': 'string', 'minValue': 123, 'maxValue': 123 }, ], 'continuousHyperParameterRanges': [ { 'name': 'string', 'minValue': 123.0, 'maxValue': 123.0 }, ], 'categoricalHyperParameterRanges': [ { 'name': 'string', 'values': [ 'string', ] }, ] } }, 'algorithmHyperParameters': { 'string': 'string' }, 'featureTransformationParameters': { 'string': 'string' }, 'autoMLConfig': { 'metricName': 'string', 'recipeList': [ 'string', ] }, 'eventsConfig': { 'eventParametersList': [ { 'eventType': 'string', 'eventValueThreshold': 123.0, 'weight': 123.0 }, ] }, 'optimizationObjective': { 'itemAttribute': 'string', 'objectiveSensitivity': 'LOW'|'MEDIUM'|'HIGH'|'OFF' }, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'autoTrainingConfig': { 'schedulingExpression': 'string' } }, 'trainingHours': 123.0, 'trainingMode': 'FULL'|'UPDATE'|'AUTOTRAIN', 'tunedHPOParams': { 'algorithmHyperParameters': { 'string': 'string' } }, 'status': 'string', 'failureReason': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'trainingType': 
'AUTOMATIC'|'MANUAL' } } **Response Structure** * *(dict) --* * **solutionVersion** *(dict) --* The solution version. * **name** *(string) --* The name of the solution version. * **solutionVersionArn** *(string) --* The ARN of the solution version. * **solutionArn** *(string) --* The ARN of the solution. * **performHPO** *(boolean) --* Whether to perform hyperparameter optimization (HPO) on the chosen recipe. The default is "false". * **performAutoML** *(boolean) --* When true, Amazon Personalize searches for the most optimal recipe according to the solution configuration. When false (the default), Amazon Personalize uses "recipeArn". * **recipeArn** *(string) --* The ARN of the recipe used in the solution. * **eventType** *(string) --* The event type (for example, 'click' or 'like') that is used for training the model. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group providing the training data. * **solutionConfig** *(dict) --* Describes the configuration properties for the solution. * **eventValueThreshold** *(string) --* Only events with a value greater than or equal to this threshold are used for training a model. * **hpoConfig** *(dict) --* Describes the properties for hyperparameter optimization (HPO). * **hpoObjective** *(dict) --* The metric to optimize during HPO. Note: Amazon Personalize doesn't support configuring the "hpoObjective" at this time. * **type** *(string) --* The type of the metric. Valid values are "Maximize" and "Minimize". * **metricName** *(string) --* The name of the metric. * **metricRegex** *(string) --* A regular expression for finding the metric in the training job logs. * **hpoResourceConfig** *(dict) --* Describes the resource configuration for HPO. * **maxNumberOfTrainingJobs** *(string) --* The maximum number of training jobs when you create a solution version. The maximum value for "maxNumberOfTrainingJobs" is "40". 
* **maxParallelTrainingJobs** *(string) --* The maximum number of parallel training jobs when you create a solution version. The maximum value for "maxParallelTrainingJobs" is "10". * **algorithmHyperParameterRanges** *(dict) --* The hyperparameters and their allowable ranges. * **integerHyperParameterRanges** *(list) --* The integer-valued hyperparameters and their ranges. * *(dict) --* Provides the name and range of an integer-valued hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **minValue** *(integer) --* The minimum allowable value for the hyperparameter. * **maxValue** *(integer) --* The maximum allowable value for the hyperparameter. * **continuousHyperParameterRanges** *(list) --* The continuous hyperparameters and their ranges. * *(dict) --* Provides the name and range of a continuous hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **minValue** *(float) --* The minimum allowable value for the hyperparameter. * **maxValue** *(float) --* The maximum allowable value for the hyperparameter. * **categoricalHyperParameterRanges** *(list) --* The categorical hyperparameters and their ranges. * *(dict) --* Provides the name and range of a categorical hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **values** *(list) --* A list of the categories for the hyperparameter. * *(string) --* * **algorithmHyperParameters** *(dict) --* Lists the algorithm hyperparameters and their values. * *(string) --* * *(string) --* * **featureTransformationParameters** *(dict) --* Lists the feature transformation parameters. * *(string) --* * *(string) --* * **autoMLConfig** *(dict) --* The AutoMLConfig object containing a list of recipes to search when AutoML is performed. * **metricName** *(string) --* The metric to optimize. * **recipeList** *(list) --* The list of candidate recipes. 
* *(string) --* * **eventsConfig** *(dict) --* Describes the configuration of an event, which includes a list of event parameters. You can specify up to 10 event parameters. Events are used in solution creation. * **eventParametersList** *(list) --* A list of event parameters, which includes event types and their event value thresholds and weights. * *(dict) --* Describes the parameters of events, which are used in solution creation. * **eventType** *(string) --* The name of the event type to be considered for solution creation. * **eventValueThreshold** *(float) --* The threshold of the event type. Only events with a value greater or equal to this threshold will be considered for solution creation. * **weight** *(float) --* The weight of the event type. A higher weight means higher importance of the event type for the created solution. * **optimizationObjective** *(dict) --* Describes the additional objective for the solution, such as maximizing streaming minutes or increasing revenue. For more information see Optimizing a solution. * **itemAttribute** *(string) --* The numerical metadata column in an Items dataset related to the optimization objective. For example, VIDEO_LENGTH (to maximize streaming minutes), or PRICE (to maximize revenue). * **objectiveSensitivity** *(string) --* Specifies how Amazon Personalize balances the importance of your optimization objective versus relevance. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a custom solution version (trained model). * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. 
* *(string) --* * *(list) --* * *(string) --* * **autoTrainingConfig** *(dict) --* Specifies the automatic training configuration to use. * **schedulingExpression** *(string) --* Specifies how often to automatically train new solution versions. Specify a rate expression in rate(*value* *unit*) format. For value, specify a number between 1 and 30. For unit, specify "day" or "days". For example, to automatically create a new solution version every 5 days, specify "rate(5 days)". The default is every 7 days. For more information about auto training, see Creating and configuring a solution. * **trainingHours** *(float) --* The time used to train the model. You are billed for the time it takes to train a model. This field is visible only after Amazon Personalize successfully trains a model. * **trainingMode** *(string) --* The scope of training to be performed when creating the solution version. A "FULL" training considers all of the data in your dataset group. An "UPDATE" processes only the data that has changed since the latest training. Only solution versions created with the User-Personalization recipe can use "UPDATE". * **tunedHPOParams** *(dict) --* If hyperparameter optimization was performed, contains the hyperparameter values of the best performing model. * **algorithmHyperParameters** *(dict) --* A list of the hyperparameter values of the best performing model. * *(string) --* * *(string) --* * **status** *(string) --* The status of the solution version. A solution version can be in one of the following states: * CREATE PENDING * CREATE IN_PROGRESS * ACTIVE * CREATE FAILED * CREATE STOPPING * CREATE STOPPED * **failureReason** *(string) --* If training a solution version fails, the reason for the failure. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that this version of the solution was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution was last updated. 
* **trainingType** *(string) --* Whether the solution version was created automatically or manually. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / list_datasets list_datasets ************* Personalize.Client.list_datasets(**kwargs) Returns the list of datasets contained in the given dataset group. The response provides the properties for each dataset, including the Amazon Resource Name (ARN). For more information on datasets, see CreateDataset. See also: AWS API Documentation **Request Syntax** response = client.list_datasets( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The Amazon Resource Name (ARN) of the dataset group that contains the datasets to list. * **nextToken** (*string*) -- A token returned from the previous call to "ListDatasets" for getting the next set of datasets (if they exist). * **maxResults** (*integer*) -- The maximum number of datasets to return. Return type: dict Returns: **Response Syntax** { 'datasets': [ { 'name': 'string', 'datasetArn': 'string', 'datasetType': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datasets** *(list) --* An array of "Dataset" objects. Each object provides metadata information. * *(dict) --* Provides a summary of the properties of a dataset. For a complete listing, call the DescribeDataset API. * **name** *(string) --* The name of the dataset. * **datasetArn** *(string) --* The Amazon Resource Name (ARN) of the dataset. * **datasetType** *(string) --* The dataset type. One of the following values: * Interactions * Items * Users * Event-Interactions * **status** *(string) --* The status of the dataset. 
A dataset can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset was last updated. * **nextToken** *(string) --* A token for getting the next set of datasets (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / create_schema create_schema ************* Personalize.Client.create_schema(**kwargs) Creates an Amazon Personalize schema from the specified schema string. The schema you create must be in Avro JSON format. Amazon Personalize recognizes three schema variants. Each schema is associated with a dataset type and has a set of required fields and keywords. If you are creating a schema for a dataset in a Domain dataset group, you provide the domain of the Domain dataset group. You specify a schema when you call CreateDataset. **Related APIs** * ListSchemas * DescribeSchema * DeleteSchema See also: AWS API Documentation **Request Syntax** response = client.create_schema( name='string', schema='string', domain='ECOMMERCE'|'VIDEO_ON_DEMAND' ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name for the schema. * **schema** (*string*) -- **[REQUIRED]** A schema in Avro JSON format. * **domain** (*string*) -- The domain for the schema. If you are creating a schema for a dataset in a Domain dataset group, specify the domain you chose when you created the Domain dataset group. Return type: dict Returns: **Response Syntax** { 'schemaArn': 'string' } **Response Structure** * *(dict) --* * **schemaArn** *(string) --* The Amazon Resource Name (ARN) of the created schema. 
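As a sketch of the call described above, the snippet below builds a minimal Avro JSON schema for an Interactions dataset (USER_ID, ITEM_ID, and TIMESTAMP are the required fields for that dataset type) and registers it with "create_schema". The schema name and namespace are illustrative.

```python
import json

# Minimal Avro schema for an Interactions dataset. USER_ID, ITEM_ID, and
# TIMESTAMP are the required fields for this dataset type.
INTERACTIONS_SCHEMA = {
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},
        {"name": "ITEM_ID", "type": "string"},
        {"name": "TIMESTAMP", "type": "long"},
    ],
    "version": "1.0",
}

def register_schema(client, name, schema=INTERACTIONS_SCHEMA):
    """Serialize the Avro schema to JSON and register it, returning the ARN."""
    response = client.create_schema(name=name, schema=json.dumps(schema))
    return response["schemaArn"]

# Usage (requires AWS credentials):
# client = boto3.client("personalize")
# schema_arn = register_schema(client, "my-interactions-schema")
```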
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" Personalize / Client / delete_schema delete_schema ************* Personalize.Client.delete_schema(**kwargs) Deletes a schema. Before deleting a schema, you must delete all datasets referencing the schema. For more information on schemas, see CreateSchema. See also: AWS API Documentation **Request Syntax** response = client.delete_schema( schemaArn='string' ) Parameters: **schemaArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the schema to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / list_dataset_groups list_dataset_groups ******************* Personalize.Client.list_dataset_groups(**kwargs) Returns a list of dataset groups. The response provides the properties for each dataset group, including the Amazon Resource Name (ARN). For more information on dataset groups, see CreateDatasetGroup. See also: AWS API Documentation **Request Syntax** response = client.list_dataset_groups( nextToken='string', maxResults=123 ) Parameters: * **nextToken** (*string*) -- A token returned from the previous call to "ListDatasetGroups" for getting the next set of dataset groups (if they exist). * **maxResults** (*integer*) -- The maximum number of dataset groups to return. Return type: dict Returns: **Response Syntax** { 'datasetGroups': [ { 'name': 'string', 'datasetGroupArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string', 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **datasetGroups** *(list) --* The list of your dataset groups. 
* *(dict) --* Provides a summary of the properties of a dataset group. For a complete listing, call the DescribeDatasetGroup API. * **name** *(string) --* The name of the dataset group. * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group. * **status** *(string) --* The status of the dataset group. A dataset group can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the dataset group was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the dataset group was last updated. * **failureReason** *(string) --* If creating a dataset group fails, the reason behind the failure. * **domain** *(string) --* The domain of a Domain dataset group. * **nextToken** *(string) --* A token for getting the next set of dataset groups (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / list_schemas list_schemas ************ Personalize.Client.list_schemas(**kwargs) Returns the list of schemas associated with the account. The response provides the properties for each schema, including the Amazon Resource Name (ARN). For more information on schemas, see CreateSchema. See also: AWS API Documentation **Request Syntax** response = client.list_schemas( nextToken='string', maxResults=123 ) Parameters: * **nextToken** (*string*) -- A token returned from the previous call to "ListSchemas" for getting the next set of schemas (if they exist). * **maxResults** (*integer*) -- The maximum number of schemas to return. 
Return type: dict Returns: **Response Syntax** { 'schemas': [ { 'name': 'string', 'schemaArn': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **schemas** *(list) --* A list of schemas. * *(dict) --* Provides a summary of the properties of a dataset schema. For a complete listing, call the DescribeSchema API. * **name** *(string) --* The name of the schema. * **schemaArn** *(string) --* The Amazon Resource Name (ARN) of the schema. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the schema was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the schema was last updated. * **domain** *(string) --* The domain of a schema that you created for a dataset in a Domain dataset group. * **nextToken** *(string) --* A token used to get the next set of schemas (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / list_solution_versions list_solution_versions ********************** Personalize.Client.list_solution_versions(**kwargs) Returns a list of solution versions for the given solution. When a solution is not specified, all the solution versions associated with the account are listed. The response provides the properties for each solution version, including the Amazon Resource Name (ARN). See also: AWS API Documentation **Request Syntax** response = client.list_solution_versions( solutionArn='string', nextToken='string', maxResults=123 ) Parameters: * **solutionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution. * **nextToken** (*string*) -- A token returned from the previous call to "ListSolutionVersions" for getting the next set of solution versions (if they exist). * **maxResults** (*integer*) -- The maximum number of solution versions to return. 
Return type: dict Returns: **Response Syntax** { 'solutionVersions': [ { 'solutionVersionArn': 'string', 'status': 'string', 'trainingMode': 'FULL'|'UPDATE'|'AUTOTRAIN', 'trainingType': 'AUTOMATIC'|'MANUAL', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **solutionVersions** *(list) --* A list of solution versions describing the version properties. * *(dict) --* Provides a summary of the properties of a solution version. For a complete listing, call the DescribeSolutionVersion API. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version. * **status** *(string) --* The status of the solution version. A solution version can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * **trainingMode** *(string) --* The scope of training to be performed when creating the solution version. A "FULL" training considers all of the data in your dataset group. An "UPDATE" processes only the data that has changed since the latest training. Only solution versions created with the User-Personalization recipe can use "UPDATE". * **trainingType** *(string) --* Whether the solution version was created automatically or manually. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that this version of a solution was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the solution version was last updated. * **failureReason** *(string) --* If a solution version fails, the reason behind the failure. * **nextToken** *(string) --* A token for getting the next set of solution versions (if they exist). 
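Because this response is paginated via "nextToken", the built-in "ListSolutionVersions" paginator is the simplest way to walk every page. A minimal sketch, with an illustrative helper that keeps only ACTIVE versions:

```python
def active_version_arns(versions):
    """Pick out the ARNs of solution versions whose status is ACTIVE."""
    return [
        v["solutionVersionArn"] for v in versions if v["status"] == "ACTIVE"
    ]

def list_active_versions(client, solution_arn):
    """Walk every page of ListSolutionVersions with the built-in paginator."""
    paginator = client.get_paginator("list_solution_versions")
    arns = []
    for page in paginator.paginate(solutionArn=solution_arn):
        arns.extend(active_version_arns(page["solutionVersions"]))
    return arns

# Usage (requires AWS credentials):
# client = boto3.client("personalize")
# arns = list_active_versions(client, solution_arn)
```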
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / create_solution create_solution *************** Personalize.Client.create_solution(**kwargs) Warning: By default, all new solutions use automatic training. With automatic training, you incur training costs while your solution is active. To avoid unnecessary costs, when you are finished you can update the solution to turn off automatic training. For information about training costs, see Amazon Personalize pricing. Creates the configuration for training a model (creating a solution version). This configuration includes the recipe to use for model training and optional training configuration, such as columns to use in training and feature transformation parameters. For more information about configuring a solution, see Creating and configuring a solution. By default, new solutions use automatic training to create solution versions every 7 days. You can change the training frequency. Automatic solution version creation starts within one hour after the solution is ACTIVE. If you manually create a solution version within the hour, the solution skips the first automatic training. For more information, see Configuring automatic training. To turn off automatic training, set "performAutoTraining" to false. If you turn off automatic training, you must manually create a solution version by calling the CreateSolutionVersion operation. After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion API operation. After training completes, you can evaluate model accuracy by calling GetSolutionMetrics. When you are satisfied with the solution version, you deploy it using CreateCampaign. The campaign provides recommendations to a client through the GetRecommendations API. 
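The evaluate-then-deploy flow just described can be sketched as follows. The metric key and the 0.25 threshold are illustrative assumptions, not AWS guidance; GetSolutionMetrics and CreateCampaign are the documented operations, and GetRecommendations lives on the separate "personalize-runtime" client.

```python
# Illustrative metric key; GetSolutionMetrics returns a dict of metric
# names to values.
NDCG_KEY = "normalized_discounted_cumulative_gain_at_25"

def good_enough(metrics, threshold=0.25):
    """Arbitrary example quality gate on offline metrics, not AWS guidance."""
    return metrics.get(NDCG_KEY, 0.0) >= threshold

def evaluate_and_deploy(client, solution_version_arn, campaign_name):
    """Fetch offline metrics and, if they pass the gate, deploy a campaign."""
    metrics = client.get_solution_metrics(
        solutionVersionArn=solution_version_arn
    )["metrics"]
    if not good_enough(metrics):
        raise ValueError(f"offline metrics below threshold: {metrics}")
    response = client.create_campaign(
        name=campaign_name,
        solutionVersionArn=solution_version_arn,
        minProvisionedTPS=1,
    )
    return response["campaignArn"]

# Recommendations are served by the separate runtime client once the
# campaign is ACTIVE:
# runtime = boto3.client("personalize-runtime")
# runtime.get_recommendations(campaignArn=campaign_arn, userId="user-1")
```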
Note: Amazon Personalize doesn't support configuring the "hpoObjective" for solution hyperparameter optimization at this time. **Status** A solution can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS To get the status of the solution, call DescribeSolution. If you use manual training, the status must be ACTIVE before you call "CreateSolutionVersion". **Related APIs** * UpdateSolution * ListSolutions * CreateSolutionVersion * DescribeSolution * DeleteSolution * ListSolutionVersions * DescribeSolutionVersion See also: AWS API Documentation **Request Syntax** response = client.create_solution( name='string', performHPO=True|False, performAutoML=True|False, performAutoTraining=True|False, recipeArn='string', datasetGroupArn='string', eventType='string', solutionConfig={ 'eventValueThreshold': 'string', 'hpoConfig': { 'hpoObjective': { 'type': 'string', 'metricName': 'string', 'metricRegex': 'string' }, 'hpoResourceConfig': { 'maxNumberOfTrainingJobs': 'string', 'maxParallelTrainingJobs': 'string' }, 'algorithmHyperParameterRanges': { 'integerHyperParameterRanges': [ { 'name': 'string', 'minValue': 123, 'maxValue': 123 }, ], 'continuousHyperParameterRanges': [ { 'name': 'string', 'minValue': 123.0, 'maxValue': 123.0 }, ], 'categoricalHyperParameterRanges': [ { 'name': 'string', 'values': [ 'string', ] }, ] } }, 'algorithmHyperParameters': { 'string': 'string' }, 'featureTransformationParameters': { 'string': 'string' }, 'autoMLConfig': { 'metricName': 'string', 'recipeList': [ 'string', ] }, 'eventsConfig': { 'eventParametersList': [ { 'eventType': 'string', 'eventValueThreshold': 123.0, 'weight': 123.0 }, ] }, 'optimizationObjective': { 'itemAttribute': 'string', 'objectiveSensitivity': 'LOW'|'MEDIUM'|'HIGH'|'OFF' }, 'trainingDataConfig': { 'excludedDatasetColumns': { 'string': [ 'string', ] } }, 'autoTrainingConfig': { 'schedulingExpression': 'string' } }, tags=[ { 'tagKey': 
'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name for the solution. * **performHPO** (*boolean*) -- Whether to perform hyperparameter optimization (HPO) on the specified or selected recipe. The default is "false". When performing AutoML, this parameter is always "true" and you should not set it to "false". * **performAutoML** (*boolean*) -- Warning: We don't recommend enabling automated machine learning. Instead, match your use case to the available Amazon Personalize recipes. For more information, see Choosing a recipe. Whether to perform automated machine learning (AutoML). The default is "false". For this case, you must specify "recipeArn". When set to "true", Amazon Personalize analyzes your training data and selects the optimal USER_PERSONALIZATION recipe and hyperparameters. In this case, you must omit "recipeArn". Amazon Personalize determines the optimal recipe by running tests with different values for the hyperparameters. AutoML lengthens the training process as compared to selecting a specific recipe. * **performAutoTraining** (*boolean*) -- Whether the solution uses automatic training to create new solution versions (trained models). The default is "True" and the solution automatically creates new solution versions every 7 days. You can change the training frequency by specifying a "schedulingExpression" in the "AutoTrainingConfig" as part of solution configuration. For more information about automatic training, see Configuring automatic training. Automatic solution version creation starts within one hour after the solution is ACTIVE. If you manually create a solution version within the hour, the solution skips the first automatic training. After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion. 
* **recipeArn** (*string*) -- The Amazon Resource Name (ARN) of the recipe to use for model training. This is required when "performAutoML" is false. For information about different Amazon Personalize recipes and their ARNs, see Choosing a recipe. * **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset group that provides the training data. * **eventType** (*string*) -- When you have multiple event types (using an "EVENT_TYPE" schema field), this parameter specifies which event type (for example, 'click' or 'like') is used for training the model. If you do not provide an "eventType", Amazon Personalize will use all interactions for training with equal weight regardless of type. * **solutionConfig** (*dict*) -- The configuration properties for the solution. When "performAutoML" is set to true, Amazon Personalize only evaluates the "autoMLConfig" section of the solution configuration. Note: Amazon Personalize doesn't support configuring the "hpoObjective" at this time. * **eventValueThreshold** *(string) --* Only events with a value greater than or equal to this threshold are used for training a model. * **hpoConfig** *(dict) --* Describes the properties for hyperparameter optimization (HPO). * **hpoObjective** *(dict) --* The metric to optimize during HPO. Note: Amazon Personalize doesn't support configuring the "hpoObjective" at this time. * **type** *(string) --* The type of the metric. Valid values are "Maximize" and "Minimize". * **metricName** *(string) --* The name of the metric. * **metricRegex** *(string) --* A regular expression for finding the metric in the training job logs. * **hpoResourceConfig** *(dict) --* Describes the resource configuration for HPO. * **maxNumberOfTrainingJobs** *(string) --* The maximum number of training jobs when you create a solution version. The maximum value for "maxNumberOfTrainingJobs" is "40". 
* **maxParallelTrainingJobs** *(string) --* The maximum number of parallel training jobs when you create a solution version. The maximum value for "maxParallelTrainingJobs" is "10". * **algorithmHyperParameterRanges** *(dict) --* The hyperparameters and their allowable ranges. * **integerHyperParameterRanges** *(list) --* The integer-valued hyperparameters and their ranges. * *(dict) --* Provides the name and range of an integer-valued hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **minValue** *(integer) --* The minimum allowable value for the hyperparameter. * **maxValue** *(integer) --* The maximum allowable value for the hyperparameter. * **continuousHyperParameterRanges** *(list) --* The continuous hyperparameters and their ranges. * *(dict) --* Provides the name and range of a continuous hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **minValue** *(float) --* The minimum allowable value for the hyperparameter. * **maxValue** *(float) --* The maximum allowable value for the hyperparameter. * **categoricalHyperParameterRanges** *(list) --* The categorical hyperparameters and their ranges. * *(dict) --* Provides the name and range of a categorical hyperparameter. * **name** *(string) --* The name of the hyperparameter. * **values** *(list) --* A list of the categories for the hyperparameter. * *(string) --* * **algorithmHyperParameters** *(dict) --* Lists the algorithm hyperparameters and their values. * *(string) --* * *(string) --* * **featureTransformationParameters** *(dict) --* Lists the feature transformation parameters. * *(string) --* * *(string) --* * **autoMLConfig** *(dict) --* The AutoMLConfig object containing a list of recipes to search when AutoML is performed. * **metricName** *(string) --* The metric to optimize. * **recipeList** *(list) --* The list of candidate recipes. 
* *(string) --* * **eventsConfig** *(dict) --* Describes the configuration of an event, which includes a list of event parameters. You can specify up to 10 event parameters. Events are used in solution creation. * **eventParametersList** *(list) --* A list of event parameters, which includes event types and their event value thresholds and weights. * *(dict) --* Describes the parameters of events, which are used in solution creation. * **eventType** *(string) --* The name of the event type to be considered for solution creation. * **eventValueThreshold** *(float) --* The threshold of the event type. Only events with a value greater or equal to this threshold will be considered for solution creation. * **weight** *(float) --* The weight of the event type. A higher weight means higher importance of the event type for the created solution. * **optimizationObjective** *(dict) --* Describes the additional objective for the solution, such as maximizing streaming minutes or increasing revenue. For more information see Optimizing a solution. * **itemAttribute** *(string) --* The numerical metadata column in an Items dataset related to the optimization objective. For example, VIDEO_LENGTH (to maximize streaming minutes), or PRICE (to maximize revenue). * **objectiveSensitivity** *(string) --* Specifies how Amazon Personalize balances the importance of your optimization objective versus relevance. * **trainingDataConfig** *(dict) --* Specifies the training data configuration to use when creating a custom solution version (trained model). * **excludedDatasetColumns** *(dict) --* Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering. 
* *(string) --* * *(list) --* * *(string) --* * **autoTrainingConfig** *(dict) --* Specifies the automatic training configuration to use. * **schedulingExpression** *(string) --* Specifies how often to automatically train new solution versions. Specify a rate expression in rate(*value* *unit*) format. For value, specify a number between 1 and 30. For unit, specify "day" or "days". For example, to automatically create a new solution version every 5 days, specify "rate(5 days)". The default is every 7 days. For more information about auto training, see Creating and configuring a solution. * **tags** (*list*) -- A list of tags to apply to the solution. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'solutionArn': 'string' } **Response Structure** * *(dict) --* * **solutionArn** *(string) --* The ARN of the solution. 
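The "autoTrainingConfig" scheduling rules above (a rate expression with a value between 1 and 30 and a unit of "day" or "days") can be sketched as a small helper plus a "create_solution" call. The solution name and tag values in any usage are illustrative.

```python
def rate_expression(days):
    """Build the rate(value unit) scheduling expression; the value must be
    between 1 and 30, and the unit is "day" or "days"."""
    if not 1 <= days <= 30:
        raise ValueError("auto-training frequency must be 1-30 days")
    return f"rate({days} {'day' if days == 1 else 'days'})"

def create_auto_training_solution(
    client, name, dataset_group_arn, recipe_arn, every_n_days=5
):
    """Create a solution that retrains automatically on the given schedule."""
    response = client.create_solution(
        name=name,
        datasetGroupArn=dataset_group_arn,
        recipeArn=recipe_arn,
        performAutoTraining=True,
        solutionConfig={
            "autoTrainingConfig": {
                "schedulingExpression": rate_expression(every_n_days)
            }
        },
    )
    return response["solutionArn"]
```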
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / create_data_deletion_job create_data_deletion_job ************************ Personalize.Client.create_data_deletion_job(**kwargs) Creates a batch job that deletes all references to specific users from an Amazon Personalize dataset group. You specify the users to delete in a CSV file of userIds in an Amazon S3 bucket. After a job completes, Amazon Personalize no longer trains on the users’ data and no longer considers the users when generating user segments. For more information about creating a data deletion job, see Deleting users. * Your input file must be a CSV file with a single USER_ID column that lists the user IDs. For more information about preparing the CSV file, see Preparing your data deletion file and uploading it to Amazon S3. * To give Amazon Personalize permission to access your input CSV file of userIds, you must specify an IAM service role that has permission to read from the data source. This role needs "GetObject" and "ListBucket" permissions for the bucket and its content. These permissions are the same as importing data. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources. After you create a job, it can take up to a day to delete all references to the users from datasets and models. Until the job completes, Amazon Personalize continues to use the data when training. If you use a User Segmentation recipe, the users might appear in user segments. 
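The requirements above (a single-column USER_ID CSV in S3, plus a role with read access) can be sketched as follows; the S3 path and role ARN are supplied by the caller.

```python
import csv
import io

def user_ids_csv(user_ids):
    """Render the single-column USER_ID CSV that the deletion job expects;
    upload the result to S3 before creating the job."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["USER_ID"])
    for uid in user_ids:
        writer.writerow([uid])
    return buf.getvalue()

def start_deletion_job(client, job_name, dataset_group_arn, s3_path, role_arn):
    """Kick off the deletion job; role_arn must allow GetObject and
    ListBucket on the bucket holding the CSV."""
    response = client.create_data_deletion_job(
        jobName=job_name,
        datasetGroupArn=dataset_group_arn,
        dataSource={"dataLocation": s3_path},
        roleArn=role_arn,
    )
    return response["dataDeletionJobArn"]
```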
**Status** A data deletion job can have one of the following statuses: * PENDING > IN_PROGRESS > COMPLETED -or- FAILED To get the status of the data deletion job, call the DescribeDataDeletionJob API operation and specify the Amazon Resource Name (ARN) of the job. If the status is FAILED, the response includes a "failureReason" key, which describes why the job failed. **Related APIs** * ListDataDeletionJobs * DescribeDataDeletionJob See also: AWS API Documentation **Request Syntax** response = client.create_data_deletion_job( jobName='string', datasetGroupArn='string', dataSource={ 'dataLocation': 'string' }, roleArn='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **jobName** (*string*) -- **[REQUIRED]** The name for the data deletion job. * **datasetGroupArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset group that has the datasets you want to delete records from. * **dataSource** (*dict*) -- **[REQUIRED]** The Amazon S3 bucket that contains the list of userIds of the users to delete. * **dataLocation** *(string) --* For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: "s3://bucket-name/folder-name/fileName.csv" If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a "/" after the folder name: "s3://bucket-name/folder-name/" * **roleArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source. * **tags** (*list*) -- A list of tags to apply to the data deletion job. 
* *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'dataDeletionJobArn': 'string' } **Response Structure** * *(dict) --* * **dataDeletionJobArn** *(string) --* The Amazon Resource Name (ARN) of the data deletion job. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / close close ***** Personalize.Client.close() Closes underlying endpoint connections. Personalize / Client / describe_filter describe_filter *************** Personalize.Client.describe_filter(**kwargs) Describes a filter's properties. See also: AWS API Documentation **Request Syntax** response = client.describe_filter( filterArn='string' ) Parameters: **filterArn** (*string*) -- **[REQUIRED]** The ARN of the filter to describe. Return type: dict Returns: **Response Syntax** { 'filter': { 'name': 'string', 'filterArn': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'datasetGroupArn': 'string', 'failureReason': 'string', 'filterExpression': 'string', 'status': 'string' } } **Response Structure** * *(dict) --* * **filter** *(dict) --* The filter's details. 
* **name** *(string) --* The name of the filter. * **filterArn** *(string) --* The ARN of the filter. * **creationDateTime** *(datetime) --* The time at which the filter was created. * **lastUpdatedDateTime** *(datetime) --* The time at which the filter was last updated. * **datasetGroupArn** *(string) --* The ARN of the dataset group to which the filter belongs. * **failureReason** *(string) --* If the filter failed, the reason for its failure. * **filterExpression** *(string) --* Specifies the type of item interactions to filter out of recommendation results. The filter expression must follow specific format rules. For information about filter expression structure and syntax, see Filter expressions. * **status** *(string) --* The status of the filter. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / create_solution_version create_solution_version *********************** Personalize.Client.create_solution_version(**kwargs) Trains or retrains an active solution in a Custom dataset group. A solution is created using the CreateSolution operation and must be in the ACTIVE state before calling "CreateSolutionVersion". A new version of the solution is created every time you call this operation. **Status** A solution version can be in one of the following states: * CREATE PENDING * CREATE IN_PROGRESS * ACTIVE * CREATE FAILED * CREATE STOPPING * CREATE STOPPED To get the status of the version, call DescribeSolutionVersion. Wait until the status shows as ACTIVE before calling "CreateCampaign". If the status shows as CREATE FAILED, the response includes a "failureReason" key, which describes why the job failed. 
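The advice above to wait until the status shows ACTIVE before calling "CreateCampaign" is typically implemented as a polling loop over DescribeSolutionVersion; a minimal sketch, with illustrative delay and attempt counts:

```python
import time

# Terminal states from the Status list above; anything else means
# training is still in progress.
TERMINAL_STATES = {"ACTIVE", "CREATE FAILED", "CREATE STOPPED"}

def is_terminal(status):
    """True once training has finished, successfully or not."""
    return status in TERMINAL_STATES

def wait_for_solution_version(client, arn, delay=60, max_attempts=120):
    """Poll DescribeSolutionVersion until training finishes; raise with the
    failureReason if it did not end up ACTIVE."""
    for _ in range(max_attempts):
        version = client.describe_solution_version(solutionVersionArn=arn)[
            "solutionVersion"
        ]
        if is_terminal(version["status"]):
            if version["status"] != "ACTIVE":
                raise RuntimeError(
                    f"{version['status']}: "
                    f"{version.get('failureReason', 'no reason given')}"
                )
            return version
        time.sleep(delay)
    raise TimeoutError("solution version did not finish training in time")
```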
**Related APIs** * ListSolutionVersions * DescribeSolutionVersion * ListSolutions * CreateSolution * DescribeSolution * DeleteSolution See also: AWS API Documentation **Request Syntax** response = client.create_solution_version( name='string', solutionArn='string', trainingMode='FULL'|'UPDATE'|'AUTOTRAIN', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- The name of the solution version. * **solutionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution containing the training configuration information. * **trainingMode** (*string*) -- The scope of training to be performed when creating the solution version. The default is "FULL". This creates a completely new model based on the entirety of the training data from the datasets in your dataset group. If you use User-Personalization, you can specify a training mode of "UPDATE". This updates the model to consider new items for recommendations. It is not a full retraining. You should still complete a full retraining weekly. If you specify "UPDATE", Amazon Personalize will stop automatic updates for the solution version. To resume updates, create a new solution with training mode set to "FULL" and deploy it in a campaign. For more information about automatic updates, see Automatic updates. The "UPDATE" option can only be used when you already have an active solution version created from the input solution using the "FULL" option and the input solution was trained with the User-Personalization recipe or the legacy HRNN-Coldstart recipe. * **tags** (*list*) -- A list of tags to apply to the solution version. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. 
A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'solutionVersionArn': 'string' } **Response Structure** * *(dict) --* * **solutionVersionArn** *(string) --* The ARN of the new solution version. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" Personalize / Client / describe_dataset describe_dataset **************** Personalize.Client.describe_dataset(**kwargs) Describes the given dataset. For more information on datasets, see CreateDataset. See also: AWS API Documentation **Request Syntax** response = client.describe_dataset( datasetArn='string' ) Parameters: **datasetArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the dataset to describe. Return type: dict Returns: **Response Syntax** { 'dataset': { 'name': 'string', 'datasetArn': 'string', 'datasetGroupArn': 'string', 'datasetType': 'string', 'schemaArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'latestDatasetUpdate': { 'schemaArn': 'string', 'status': 'string', 'failureReason': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) }, 'trackingId': 'string' } } **Response Structure** * *(dict) --* * **dataset** *(dict) --* A listing of the dataset's properties. * **name** *(string) --* The name of the dataset. * **datasetArn** *(string) --* The Amazon Resource Name (ARN) of the dataset that you want metadata for. 
* **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the dataset group. * **datasetType** *(string) --* One of the following values: * Interactions * Items * Users * Actions * Action_Interactions * **schemaArn** *(string) --* The ARN of the associated schema. * **status** *(string) --* The status of the dataset. A dataset can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the dataset. * **lastUpdatedDateTime** *(datetime) --* A time stamp that shows when the dataset was updated. * **latestDatasetUpdate** *(dict) --* Describes the latest update to the dataset. * **schemaArn** *(string) --* The Amazon Resource Name (ARN) of the schema that replaced the previous schema of the dataset. * **status** *(string) --* The status of the dataset update. * **failureReason** *(string) --* If updating a dataset fails, provides the reason why. * **creationDateTime** *(datetime) --* The creation date and time (in Unix time) of the dataset update. * **lastUpdatedDateTime** *(datetime) --* The last update date and time (in Unix time) of the dataset. * **trackingId** *(string) --* The ID of the event tracker for an Action interactions dataset. You specify the tracker's ID in the "PutActionInteractions" API operation. Amazon Personalize uses it to direct new data to the Action interactions dataset in your dataset group. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / start_recommender start_recommender ***************** Personalize.Client.start_recommender(**kwargs) Starts a recommender that is INACTIVE. Starting a recommender does not create any new models, but resumes billing and automatic retraining for the recommender. 
See also: AWS API Documentation **Request Syntax** response = client.start_recommender( recommenderArn='string' ) Parameters: **recommenderArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recommender to start. Return type: dict Returns: **Response Syntax** { 'recommenderArn': 'string' } **Response Structure** * *(dict) --* * **recommenderArn** *(string) --* The Amazon Resource Name (ARN) of the recommender you started. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / list_campaigns list_campaigns ************** Personalize.Client.list_campaigns(**kwargs) Returns a list of campaigns that use the given solution. When a solution is not specified, all the campaigns associated with the account are listed. The response provides the properties for each campaign, including the Amazon Resource Name (ARN). For more information on campaigns, see CreateCampaign. See also: AWS API Documentation **Request Syntax** response = client.list_campaigns( solutionArn='string', nextToken='string', maxResults=123 ) Parameters: * **solutionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution to list the campaigns for. When a solution is not specified, all the campaigns associated with the account are listed. * **nextToken** (*string*) -- A token returned from the previous call to ListCampaigns for getting the next set of campaigns (if they exist). * **maxResults** (*integer*) -- The maximum number of campaigns to return. Return type: dict Returns: **Response Syntax** { 'campaigns': [ { 'name': 'string', 'campaignArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **campaigns** *(list) --* A list of the campaigns. 
* *(dict) --* Provides a summary of the properties of a campaign. For a complete listing, call the DescribeCampaign API. * **name** *(string) --* The name of the campaign. * **campaignArn** *(string) --* The Amazon Resource Name (ARN) of the campaign. * **status** *(string) --* The status of the campaign. A campaign can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the campaign was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the campaign was last updated. * **failureReason** *(string) --* If a campaign fails, the reason behind the failure. * **nextToken** *(string) --* A token for getting the next set of campaigns (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / describe_schema describe_schema *************** Personalize.Client.describe_schema(**kwargs) Describes a schema. For more information on schemas, see CreateSchema. See also: AWS API Documentation **Request Syntax** response = client.describe_schema( schemaArn='string' ) Parameters: **schemaArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the schema to retrieve. Return type: dict Returns: **Response Syntax** { 'schema': { 'name': 'string', 'schemaArn': 'string', 'schema': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' } } **Response Structure** * *(dict) --* * **schema** *(dict) --* The requested schema. * **name** *(string) --* The name of the schema. * **schemaArn** *(string) --* The Amazon Resource Name (ARN) of the schema. * **schema** *(string) --* The schema. * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the schema was created. 
* **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the schema was last updated. * **domain** *(string) --* The domain of a schema that you created for a dataset in a Domain dataset group. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / create_dataset_group create_dataset_group ******************** Personalize.Client.create_dataset_group(**kwargs) Creates an empty dataset group. A dataset group is a container for Amazon Personalize resources. A dataset group can contain at most five datasets, one for each type of dataset: * Item interactions * Items * Users * Actions * Action interactions A dataset group can be a Domain dataset group, where you specify a domain and use pre-configured resources like recommenders, or a Custom dataset group, where you use custom resources, such as a solution with a solution version, that you deploy with a campaign. If you start with a Domain dataset group, you can still add custom resources such as solutions and solution versions trained with recipes for custom use cases and deployed with campaigns. A dataset group can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING To get the status of the dataset group, call DescribeDatasetGroup. If the status shows as CREATE FAILED, the response includes a "failureReason" key, which describes why the creation failed. Note: You must wait until the "status" of the dataset group is "ACTIVE" before adding a dataset to the group. You can specify a Key Management Service (KMS) key to encrypt the datasets in the group. If you specify a KMS key, you must also include an Identity and Access Management (IAM) role that has permission to access the key.
**APIs that require a dataset group ARN in the request** * CreateDataset * CreateEventTracker * CreateSolution **Related APIs** * ListDatasetGroups * DescribeDatasetGroup * DeleteDatasetGroup See also: AWS API Documentation **Request Syntax** response = client.create_dataset_group( name='string', roleArn='string', kmsKeyArn='string', domain='ECOMMERCE'|'VIDEO_ON_DEMAND', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name for the new dataset group. * **roleArn** (*string*) -- The ARN of the Identity and Access Management (IAM) role that has permissions to access the Key Management Service (KMS) key. Supplying an IAM role is only valid when also specifying a KMS key. * **kmsKeyArn** (*string*) -- The Amazon Resource Name (ARN) of a Key Management Service (KMS) key used to encrypt the datasets. * **domain** (*string*) -- The domain of the dataset group. Specify a domain to create a Domain dataset group. The domain you specify determines the default schemas for datasets and the use cases available for recommenders. If you don't specify a domain, you create a Custom dataset group with solution versions that you deploy with a campaign. * **tags** (*list*) -- A list of tags to apply to the dataset group. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). 
Return type: dict Returns: **Response Syntax** { 'datasetGroupArn': 'string', 'domain': 'ECOMMERCE'|'VIDEO_ON_DEMAND' } **Response Structure** * *(dict) --* * **datasetGroupArn** *(string) --* The Amazon Resource Name (ARN) of the new dataset group. * **domain** *(string) --* The domain for the new Domain dataset group. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / create_batch_segment_job create_batch_segment_job ************************ Personalize.Client.create_batch_segment_job(**kwargs) Creates a batch segment job. The operation can handle up to 50 million records and the input file must be in JSON format. For more information, see Getting batch recommendations and user segments. See also: AWS API Documentation **Request Syntax** response = client.create_batch_segment_job( jobName='string', solutionVersionArn='string', filterArn='string', numResults=123, jobInput={ 's3DataSource': { 'path': 'string', 'kmsKeyArn': 'string' } }, jobOutput={ 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' } }, roleArn='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **jobName** (*string*) -- **[REQUIRED]** The name of the batch segment job to create. * **solutionVersionArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the solution version you want the batch segment job to use to generate batch segments. * **filterArn** (*string*) -- The ARN of the filter to apply to the batch segment job. For more information on using filters, see Filtering batch recommendations. * **numResults** (*integer*) -- The number of predicted users generated by the batch segment job for each line of input data. The maximum number of users per segment is 5 million. 
* **jobInput** (*dict*) -- **[REQUIRED]** The Amazon S3 path for the input data used to generate the batch segment job. * **s3DataSource** *(dict) --* **[REQUIRED]** The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **jobOutput** (*dict*) -- **[REQUIRED]** The Amazon S3 path for the bucket where the job's output will be stored. * **s3DataDestination** *(dict) --* **[REQUIRED]** The configuration details of an Amazon S3 input or output bucket. * **path** *(string) --* **[REQUIRED]** The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **roleArn** (*string*) -- **[REQUIRED]** The ARN of the Amazon Identity and Access Management role that has permissions to read and write to your input and output Amazon S3 buckets respectively. * **tags** (*list*) -- A list of tags to apply to the batch segment job. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). 
Return type: dict Returns: **Response Syntax** { 'batchSegmentJobArn': 'string' } **Response Structure** * *(dict) --* * **batchSegmentJobArn** *(string) --* The ARN of the batch segment job. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / tag_resource tag_resource ************ Personalize.Client.tag_resource(**kwargs) Adds a list of tags to a resource. See also: AWS API Documentation **Request Syntax** response = client.tag_resource( resourceArn='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **resourceArn** (*string*) -- **[REQUIRED]** The resource's Amazon Resource Name (ARN). * **tags** (*list*) -- **[REQUIRED]** Tags to apply to the resource. For more information see Tagging Amazon Personalize resources. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. * **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
Return type: dict Returns: **Response Syntax** {} **Response Structure** * *(dict) --* **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.TooManyTagsException" * "Personalize.Client.exceptions.ResourceInUseException" * "Personalize.Client.exceptions.LimitExceededException" Personalize / Client / list_batch_segment_jobs list_batch_segment_jobs *********************** Personalize.Client.list_batch_segment_jobs(**kwargs) Gets a list of the batch segment jobs that have been created from a solution version that you specify. See also: AWS API Documentation **Request Syntax** response = client.list_batch_segment_jobs( solutionVersionArn='string', nextToken='string', maxResults=123 ) Parameters: * **solutionVersionArn** (*string*) -- The Amazon Resource Name (ARN) of the solution version that the batch segment jobs used to generate batch segments. * **nextToken** (*string*) -- The token to request the next page of results. * **maxResults** (*integer*) -- The maximum number of batch segment job results to return in each page. The default value is 100. Return type: dict Returns: **Response Syntax** { 'batchSegmentJobs': [ { 'batchSegmentJobArn': 'string', 'jobName': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1), 'failureReason': 'string', 'solutionVersionArn': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **batchSegmentJobs** *(list) --* A list containing information on each job that is returned. * *(dict) --* A truncated version of the BatchSegmentJob datatype. The ListBatchSegmentJobs operation returns a list of batch segment job summaries. * **batchSegmentJobArn** *(string) --* The Amazon Resource Name (ARN) of the batch segment job. * **jobName** *(string) --* The name of the batch segment job. * **status** *(string) --* The status of the batch segment job.
The status is one of the following values: * PENDING * IN PROGRESS * ACTIVE * CREATE FAILED * **creationDateTime** *(datetime) --* The time at which the batch segment job was created. * **lastUpdatedDateTime** *(datetime) --* The time at which the batch segment job was last updated. * **failureReason** *(string) --* If the batch segment job failed, the reason for the failure. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version used by the batch segment job to generate batch segments. * **nextToken** *(string) --* The token to use to retrieve the next page of results. The value is "null" when there are no more results to return. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / list_metric_attribution_metrics list_metric_attribution_metrics ******************************* Personalize.Client.list_metric_attribution_metrics(**kwargs) Lists the metrics for the metric attribution. See also: AWS API Documentation **Request Syntax** response = client.list_metric_attribution_metrics( metricAttributionArn='string', nextToken='string', maxResults=123 ) Parameters: * **metricAttributionArn** (*string*) -- The Amazon Resource Name (ARN) of the metric attribution to retrieve attributes for. * **nextToken** (*string*) -- Specify the pagination token from a previous request to retrieve the next page of results. * **maxResults** (*integer*) -- The maximum number of metrics to return in one page of results. Return type: dict Returns: **Response Syntax** { 'metrics': [ { 'eventType': 'string', 'metricName': 'string', 'expression': 'string' }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **metrics** *(list) --* The metrics for the specified metric attribution. * *(dict) --* Contains information on a metric that a metric attribution reports on. For more information, see Measuring impact of recommendations. 
* **eventType** *(string) --* The metric's event type. * **metricName** *(string) --* The metric's name. The name helps you identify the metric in Amazon CloudWatch or Amazon S3. * **expression** *(string) --* The attribute's expression. Available functions are "SUM()" or "SAMPLECOUNT()". For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example SUM(Items.PRICE). * **nextToken** *(string) --* Specify the pagination token from a previous "ListMetricAttributionMetricsResponse" request to retrieve the next page of results. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / create_filter create_filter ************* Personalize.Client.create_filter(**kwargs) Creates a recommendation filter. For more information, see Filtering recommendations and user segments. See also: AWS API Documentation **Request Syntax** response = client.create_filter( name='string', datasetGroupArn='string', filterExpression='string', tags=[ { 'tagKey': 'string', 'tagValue': 'string' }, ] ) Parameters: * **name** (*string*) -- **[REQUIRED]** The name of the filter to create. * **datasetGroupArn** (*string*) -- **[REQUIRED]** The ARN of the dataset group that the filter will belong to. * **filterExpression** (*string*) -- **[REQUIRED]** The filter expression defines which items are included or excluded from recommendations. Filter expression must follow specific format rules. For information about filter expression structure and syntax, see Filter expressions. * **tags** (*list*) -- A list of tags to apply to the filter. * *(dict) --* The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources. 
* **tagKey** *(string) --* **[REQUIRED]** One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values. * **tagValue** *(string) --* **[REQUIRED]** The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key). Return type: dict Returns: **Response Syntax** { 'filterArn': 'string' } **Response Structure** * *(dict) --* * **filterArn** *(string) --* The ARN of the new filter. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceAlreadyExistsException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.LimitExceededException" * "Personalize.Client.exceptions.TooManyTagsException" Personalize / Client / list_event_trackers list_event_trackers ******************* Personalize.Client.list_event_trackers(**kwargs) Returns the list of event trackers associated with the account. The response provides the properties for each event tracker, including the Amazon Resource Name (ARN) and tracking ID. For more information on event trackers, see CreateEventTracker. See also: AWS API Documentation **Request Syntax** response = client.list_event_trackers( datasetGroupArn='string', nextToken='string', maxResults=123 ) Parameters: * **datasetGroupArn** (*string*) -- The ARN of a dataset group used to filter the response. * **nextToken** (*string*) -- A token returned from the previous call to "ListEventTrackers" for getting the next set of event trackers (if they exist). * **maxResults** (*integer*) -- The maximum number of event trackers to return. 
Return type: dict Returns: **Response Syntax** { 'eventTrackers': [ { 'name': 'string', 'eventTrackerArn': 'string', 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) }, ], 'nextToken': 'string' } **Response Structure** * *(dict) --* * **eventTrackers** *(list) --* A list of event trackers. * *(dict) --* Provides a summary of the properties of an event tracker. For a complete listing, call the DescribeEventTracker API. * **name** *(string) --* The name of the event tracker. * **eventTrackerArn** *(string) --* The Amazon Resource Name (ARN) of the event tracker. * **status** *(string) --* The status of the event tracker. An event tracker can be in one of the following states: * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED * DELETE PENDING > DELETE IN_PROGRESS * **creationDateTime** *(datetime) --* The date and time (in Unix time) that the event tracker was created. * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix time) that the event tracker was last updated. * **nextToken** *(string) --* A token for getting the next set of event trackers (if they exist). **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.InvalidNextTokenException" Personalize / Client / delete_filter delete_filter ************* Personalize.Client.delete_filter(**kwargs) Deletes a filter. See also: AWS API Documentation **Request Syntax** response = client.delete_filter( filterArn='string' ) Parameters: **filterArn** (*string*) -- **[REQUIRED]** The ARN of the filter to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / describe_recipe describe_recipe *************** Personalize.Client.describe_recipe(**kwargs) Describes a recipe. 
A recipe contains three items: * An algorithm that trains a model. * Hyperparameters that govern the training. * Feature transformation information for modifying the input data before training. Amazon Personalize provides a set of predefined recipes. You specify a recipe when you create a solution with the CreateSolution API. "CreateSolution" trains a model by using the algorithm in the specified recipe and a training dataset. The solution, when deployed as a campaign, can provide recommendations using the GetRecommendations API. See also: AWS API Documentation **Request Syntax** response = client.describe_recipe( recipeArn='string' ) Parameters: **recipeArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the recipe to describe. Return type: dict Returns: **Response Syntax** { 'recipe': { 'name': 'string', 'recipeArn': 'string', 'algorithmArn': 'string', 'featureTransformationArn': 'string', 'status': 'string', 'description': 'string', 'creationDateTime': datetime(2015, 1, 1), 'recipeType': 'string', 'lastUpdatedDateTime': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **recipe** *(dict) --* An object that describes the recipe. * **name** *(string) --* The name of the recipe. * **recipeArn** *(string) --* The Amazon Resource Name (ARN) of the recipe. * **algorithmArn** *(string) --* The Amazon Resource Name (ARN) of the algorithm that Amazon Personalize uses to train the model. * **featureTransformationArn** *(string) --* The ARN of the FeatureTransformation object. * **status** *(string) --* The status of the recipe. * **description** *(string) --* The description of the recipe. * **creationDateTime** *(datetime) --* The date and time (in Unix format) that the recipe was created. * **recipeType** *(string) --* One of the following values: * PERSONALIZED_RANKING * RELATED_ITEMS * USER_PERSONALIZATION * **lastUpdatedDateTime** *(datetime) --* The date and time (in Unix format) that the recipe was last updated. 
**Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" Personalize / Client / delete_campaign delete_campaign *************** Personalize.Client.delete_campaign(**kwargs) Removes a campaign by deleting the solution deployment. The solution that the campaign is based on is not deleted and can be redeployed when needed. A deleted campaign can no longer be specified in a GetRecommendations request. For information on creating campaigns, see CreateCampaign. See also: AWS API Documentation **Request Syntax** response = client.delete_campaign( campaignArn='string' ) Parameters: **campaignArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the campaign to delete. Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / delete_event_tracker delete_event_tracker ******************** Personalize.Client.delete_event_tracker(**kwargs) Deletes the event tracker. Does not delete the dataset from the dataset group. For more information on event trackers, see CreateEventTracker. See also: AWS API Documentation **Request Syntax** response = client.delete_event_tracker( eventTrackerArn='string' ) Parameters: **eventTrackerArn** (*string*) -- **[REQUIRED]** The Amazon Resource Name (ARN) of the event tracker to delete. 
Returns: None **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException" * "Personalize.Client.exceptions.ResourceInUseException" Personalize / Client / describe_batch_inference_job describe_batch_inference_job **************************** Personalize.Client.describe_batch_inference_job(**kwargs) Gets the properties of a batch inference job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations. See also: AWS API Documentation **Request Syntax** response = client.describe_batch_inference_job( batchInferenceJobArn='string' ) Parameters: **batchInferenceJobArn** (*string*) -- **[REQUIRED]** The ARN of the batch inference job to describe. Return type: dict Returns: **Response Syntax** { 'batchInferenceJob': { 'jobName': 'string', 'batchInferenceJobArn': 'string', 'filterArn': 'string', 'failureReason': 'string', 'solutionVersionArn': 'string', 'numResults': 123, 'jobInput': { 's3DataSource': { 'path': 'string', 'kmsKeyArn': 'string' } }, 'jobOutput': { 's3DataDestination': { 'path': 'string', 'kmsKeyArn': 'string' } }, 'batchInferenceJobConfig': { 'itemExplorationConfig': { 'string': 'string' } }, 'roleArn': 'string', 'batchInferenceJobMode': 'BATCH_INFERENCE'|'THEME_GENERATION', 'themeGenerationConfig': { 'fieldsForThemeGeneration': { 'itemName': 'string' } }, 'status': 'string', 'creationDateTime': datetime(2015, 1, 1), 'lastUpdatedDateTime': datetime(2015, 1, 1) } } **Response Structure** * *(dict) --* * **batchInferenceJob** *(dict) --* Information on the specified batch inference job. * **jobName** *(string) --* The name of the batch inference job. * **batchInferenceJobArn** *(string) --* The Amazon Resource Name (ARN) of the batch inference job. * **filterArn** *(string) --* The ARN of the filter used on the batch inference job. 
* **failureReason** *(string) --* If the batch inference job failed, the reason for the failure. * **solutionVersionArn** *(string) --* The Amazon Resource Name (ARN) of the solution version from which the batch inference job was created. * **numResults** *(integer) --* The number of recommendations generated by the batch inference job. This number includes the error messages generated for failed input records. * **jobInput** *(dict) --* The Amazon S3 path that leads to the input data used to generate the batch inference job. * **s3DataSource** *(dict) --* The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling. * **path** *(string) --* The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **jobOutput** *(dict) --* The Amazon S3 bucket that contains the output data generated by the batch inference job. * **s3DataDestination** *(dict) --* Information on the Amazon S3 bucket in which the batch inference job's output is stored. * **path** *(string) --* The file path of the Amazon S3 bucket. * **kmsKeyArn** *(string) --* The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files. * **batchInferenceJobConfig** *(dict) --* A string to string map of the configuration details of a batch inference job. * **itemExplorationConfig** *(dict) --* A string to string map specifying the exploration configuration hyperparameters, including "explorationWeight" and "explorationItemAgeCutOff", you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. See User-Personalization. 
* *(string) --* * *(string) --* * **roleArn** *(string) --* The ARN of the Amazon Identity and Access Management (IAM) role that requested the batch inference job. * **batchInferenceJobMode** *(string) --* The job's mode. * **themeGenerationConfig** *(dict) --* The job's theme generation settings. * **fieldsForThemeGeneration** *(dict) --* Fields used to generate descriptive themes for a batch inference job. * **itemName** *(string) --* The name of the Items dataset column that stores the name of each item in the dataset. * **status** *(string) --* The status of the batch inference job. The status is one of the following values: * PENDING * IN PROGRESS * ACTIVE * CREATE FAILED * **creationDateTime** *(datetime) --* The time at which the batch inference job was created. * **lastUpdatedDateTime** *(datetime) --* The time at which the batch inference job was last updated. **Exceptions** * "Personalize.Client.exceptions.InvalidInputException" * "Personalize.Client.exceptions.ResourceNotFoundException"