View Source aws_sagemaker (aws v1.0.4)

Provides APIs for creating and managing SageMaker resources.

Other Resources:

  • SageMaker Developer Guide: https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html#first-time-user

  • Amazon Augmented AI Runtime API Reference: https://docs.aws.amazon.com/augmented-ai/2019-11-07/APIReference/Welcome.html

Summary

Functions

Creates an association between the source and the destination.

Adds or overwrites one or more tags for the specified SageMaker resource.

Associates a trial component with a trial.

This action batch describes a list of versioned model packages.

Creates an action.

Create a machine learning algorithm that you can use in SageMaker and list in the Amazon Web Services Marketplace.

Creates a running app for the specified UserProfile.

Creates a configuration for running a SageMaker image as a KernelGateway app.

Creates an artifact.

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job.

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job V2.

Creates a SageMaker HyperPod cluster.

Creates a Git repository as a resource in your SageMaker account.

Starts a model compilation job.

Creates a context.

Creates a definition for a job that monitors data quality and drift.

Creates a device fleet.

Creates a Domain.

Creates an edge deployment plan, consisting of multiple stages.

Creates a new stage in an existing edge deployment plan.

Starts a SageMaker Edge Manager model packaging job.

Creates an endpoint using the endpoint configuration specified in the request.

Creates an endpoint configuration that SageMaker hosting services uses to deploy models.

Creates a SageMaker experiment.

Create a new FeatureGroup.

Creates a flow definition.

Defines the settings you will use for the human review workflow user interface.

Starts a hyperparameter tuning job.

Creates a custom SageMaker image.

Creates a version of the SageMaker image specified by ImageName.

Creates an inference component, which is a SageMaker hosting object that you can use to deploy a model to an endpoint.

Creates an inference experiment using the configurations specified in the request.

Creates a job that uses workers to label the data objects in your input dataset.

Creates a model in SageMaker.

Creates the definition for a model bias job.

Creates an Amazon SageMaker Model Card.

Creates an Amazon SageMaker Model Card export job.
Creates the definition for a model explainability job.

Creates a model package that you can use to create SageMaker models or list on Amazon Web Services Marketplace, or a versioned model that is part of a model group.

Creates a definition for a job that monitors model quality and drift.

Creates a schedule that regularly starts Amazon SageMaker Processing Jobs to monitor the data captured for an Amazon SageMaker Endpoint.

Creates a SageMaker notebook instance.

Creates a lifecycle configuration that you can associate with a notebook instance.

Creates a pipeline using a JSON pipeline definition.

Creates a URL for a specified UserProfile in a Domain.

Returns a URL that you can use to connect to the Jupyter server from a notebook instance.

Creates a processing job.
Creates a machine learning (ML) project that can contain one or more templates that set up an ML pipeline from training to deploying an approved model.
Creates a space used for real time collaboration in a domain.
Creates a new Amazon SageMaker Studio Lifecycle Configuration.

Starts a model training job.

Starts a transform job.

Creates a SageMaker trial.

Creates a trial component, which is a stage of a machine learning trial.

Creates a user profile.

Use this operation to create a workforce.

Creates a new work team for labeling your data.

Deletes an action.
Removes the specified algorithm from your account.
Used to stop and delete an app.
Deletes an AppImageConfig.

Deletes an artifact.

Deletes an association.
Delete a SageMaker HyperPod cluster.
Deletes the specified Git repository from your account.

Deletes the specified compilation job.

Deletes a context.
Deletes a data quality monitoring job definition.

Used to delete a domain.

Deletes an edge deployment plan if (and only if) all the stages in the plan are inactive or there are no stages in the plan.
Delete a stage in an edge deployment plan if (and only if) the stage is inactive.

Deletes an endpoint.

Deletes an endpoint configuration.

Deletes a SageMaker experiment.

Delete the FeatureGroup and any data that was written to the OnlineStore of the FeatureGroup.

Deletes the specified flow definition.

Delete the contents of a hub.

Use this operation to delete a human task user interface (worker task template).

Deletes a hyperparameter tuning job.

Deletes a SageMaker image and all versions of the image.

Deletes a version of a SageMaker image.

Deletes an inference component.

Deletes an inference experiment.

Deletes a model.

Deletes an Amazon SageMaker model bias job definition.
Deletes an Amazon SageMaker Model Card.
Deletes an Amazon SageMaker model explainability job definition.

Deletes a model package.

Deletes the specified model group.
Deletes a model group resource policy.
Deletes the specified model quality monitoring job definition.

Deletes a monitoring schedule.

Deletes a SageMaker notebook instance.

Deletes a notebook instance lifecycle configuration.

Deletes a pipeline if there are no running instances of the pipeline.

Delete the specified project.
Used to delete a space.

Deletes the Amazon SageMaker Studio Lifecycle Configuration.

Deletes the specified tags from a SageMaker resource.

Deletes the specified trial.

Deletes the specified trial component.

Deletes a user profile.

Use this operation to delete a workforce.

Deletes an existing work team.

Deregisters the specified devices.

Describes an action.
Returns a description of the specified algorithm that is in your account.
Describes the app.
Describes an AppImageConfig.
Describes an artifact.

Returns information about an AutoML job created by calling CreateAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJob.html.

Returns information about an AutoML job created by calling CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJobV2.html or CreateAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJob.html.
Retrieves information about a SageMaker HyperPod cluster.
Retrieves information about an instance (also called a node) of a SageMaker HyperPod cluster.
Gets details about the specified Git repository.

Returns information about a model compilation job.

Describes a context.
Gets the details of a data quality monitoring job definition.
Describes the device.
A description of the fleet the device belongs to.
The description of the domain.
Describes an edge deployment plan with deployment status per stage.
A description of edge packaging jobs.
Returns the description of an endpoint.
Returns the description of an endpoint configuration created using the CreateEndpointConfig API.
Provides a list of an experiment's properties.

Use this operation to describe a FeatureGroup.

Shows the metadata for a feature within a feature group.
Returns information about the specified flow definition.

Describe a hub.

Describe the content of a hub.

Returns information about the requested human task user interface (worker task template).

Returns a description of a hyperparameter tuning job, depending on the fields selected.

Describes a SageMaker image.
Describes a version of a SageMaker image.
Returns information about an inference component.
Returns details about an inference experiment.

Provides the results of the Inference Recommender job.

Gets information about a labeling job.

Provides a list of properties for the requested lineage group.

Describes a model that you created using the CreateModel API.
Returns a description of a model bias job definition.
Describes the content, creation time, and security configuration of an Amazon SageMaker Model Card.
Describes an Amazon SageMaker Model Card export job.
Returns a description of a model explainability job definition.

Returns a description of the specified model package, which is used to create SageMaker models or list them on Amazon Web Services Marketplace.

Gets a description for the specified model group.
Returns a description of a model quality job definition.
Describes the schedule for a monitoring job.
Returns information about a notebook instance.

Returns a description of a notebook instance lifecycle configuration.

Describes the details of a pipeline.
Describes the details of an execution's pipeline definition.
Describes the details of a pipeline execution.
Returns a description of a processing job.
Describes the details of a project.
Describes the space.
Describes the Amazon SageMaker Studio Lifecycle Configuration.

Gets information about a work team provided by a vendor.

Returns information about a training job.

Returns information about a transform job.
Provides a list of a trial's properties.
Provides a list of a trial component's properties.

Describes a user profile.

Lists private workforce information, including workforce name, Amazon Resource Name (ARN), and, if applicable, allowed IP address ranges (CIDRs: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html).

Gets information about a specific work team.

Disables using Service Catalog in SageMaker.

Disassociates a trial component from a trial.

Enables using Service Catalog in SageMaker.

Gets the resource policy for the lineage group.

Gets a resource policy that manages access for a model group.

Gets the status of Service Catalog in SageMaker.

Starts an Amazon SageMaker Inference Recommender autoscaling recommendation job.

An auto-complete API for the search functionality in the SageMaker console.

Import hub content.

Lists the actions in your account and their properties.
Lists the machine learning algorithms that have been created.
Lists the aliases of a specified image or image version.

Lists the AppImageConfigs in your account and their properties.

Lists the artifacts in your account and their properties.
Lists the associations in your account and their properties.
Request a list of jobs.
List the candidates created for the job.
Retrieves the list of instances (also called nodes) in a SageMaker HyperPod cluster.
Retrieves the list of SageMaker HyperPod clusters.
Gets a list of the Git repositories in your account.

Lists model compilation jobs that satisfy various filters.

Lists the contexts in your account and their properties.
Lists the data quality job definitions in your account.
Returns a list of devices in the fleet.
A list of devices.
Lists the domains.
Lists all edge deployment plans.
Returns a list of edge packaging jobs.
Lists endpoint configurations.

Lists all the experiments in your account.

List FeatureGroups based on given filter and order.
Returns information about the flow definitions in your account.

List hub content versions.

List the contents of a hub.

List all existing hubs.

Returns information about the human task user interfaces in your account.
Gets a list of HyperParameterTuningJobSummary: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HyperParameterTuningJobSummary.html objects that describe the hyperparameter tuning jobs launched in your account.

Lists the versions of a specified image and their properties.

Lists the images in your account and their properties.

Lists the inference components in your account and their properties.
Returns the list of all inference experiments.

Returns a list of the subtasks for an Inference Recommender job.

Lists recommendation jobs that satisfy various filters.
Gets a list of labeling jobs.
Gets a list of labeling jobs assigned to a specified work team.

A list of lineage groups shared with your Amazon Web Services account.

Lists model bias job definitions that satisfy various filters.
List the export jobs for the Amazon SageMaker Model Card.
List existing versions of an Amazon SageMaker Model Card.
List existing model cards.
Lists model explainability job definitions that satisfy various filters.
Lists the domain, framework, task, and model name of standard machine learning models found in common model zoos.
Gets a list of the model groups in your Amazon Web Services account.
Lists the model packages that have been created.
Gets a list of model quality monitoring job definitions in your account.
Lists models created with the CreateModel API.
Gets a list of past alerts in a model monitoring schedule.
Gets the alerts for a single monitoring schedule.
Returns a list of all monitoring job executions.
Returns a list of all monitoring schedules.
Lists notebook instance lifecycle configurations created with the CreateNotebookInstanceLifecycleConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateNotebookInstanceLifecycleConfig.html API.
Returns a list of the SageMaker notebook instances in the requester's account in an Amazon Web Services Region.
Gets a list of PipelineExecutionStep objects.
Gets a list of the pipeline executions.
Gets a list of parameters for a pipeline execution.
Gets a list of pipelines.
Lists processing jobs that satisfy various filters.
Gets a list of the projects in an Amazon Web Services account.

Lists Amazon SageMaker Catalogs based on given filters and orders.

Lists devices allocated to the stage, containing detailed device information and deployment status.
Lists the Amazon SageMaker Studio Lifecycle Configurations in your Amazon Web Services Account.

Gets a list of the work teams that you are subscribed to in the Amazon Web Services Marketplace.

Returns the tags for the specified SageMaker resource.

Lists training jobs.

Gets a list of TrainingJobSummary: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobSummary.html objects that describe the training jobs that a hyperparameter tuning job launched.
Lists transform jobs.

Lists the trial components in your account.

Lists the trials in your account.

Lists user profiles.

Use this operation to list all private and vendor workforces in an Amazon Web Services Region.

Gets a list of private work teams that you have defined in a region.

Adds a resource policy to control access to a model group.

Use this action to inspect your lineage and discover relationships between entities.

Renders the UI template so that you can preview the worker's experience.
Retry the execution of the pipeline.

Finds SageMaker resources that match a search query.

Notifies the pipeline that the execution of a callback step failed, along with a message describing why.

Notifies the pipeline that the execution of a callback step succeeded and provides a list of the step's output parameters.

Starts a stage in an edge deployment plan.
Starts an inference experiment.

Starts a previously stopped monitoring schedule.

Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume.

Starts a pipeline execution.
A method for forcing a running job to shut down.

Stops a model compilation job.

Stops a stage in an edge deployment plan.
Request to stop an edge packaging job.

Stops a running hyperparameter tuning job and all running training jobs that the tuning job launched.

Stops an inference experiment.
Stops an Inference Recommender job.

Stops a running labeling job.

Stops a previously started monitoring schedule.

Terminates the ML compute instance.

Stops a pipeline execution.

Stops a processing job.

Stops a training job.

Stops a batch transform job.

Updates an action.
Updates the properties of an AppImageConfig.
Updates an artifact.
Updates a SageMaker HyperPod cluster.

Updates the platform software of a SageMaker HyperPod cluster for security patching.

Updates the specified Git repository with the specified values.
Updates a context.
Updates a fleet of devices.
Updates one or more devices in a fleet.
Updates the default settings for new user profiles in the domain.

Deploys the EndpointConfig specified in the request to a new fleet of instances.

Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint.

Adds, updates, or removes the description of an experiment.

Updates the feature group by either adding features or updating the online store configuration.

Updates the description and parameters of the feature group.

Updates the properties of a SageMaker image.

Updates the properties of a SageMaker image version.
Updates an inference component.
Updates the runtime settings for a model that is deployed with an inference component.

Updates an inference experiment that you created.

Update an Amazon SageMaker Model Card.

Updates a versioned model.
Update the parameters of a model monitor alert.
Updates a previously created schedule.

Updates a notebook instance.

Updates a notebook instance lifecycle configuration created with the CreateNotebookInstanceLifecycleConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateNotebookInstanceLifecycleConfig.html API.
Updates a pipeline.
Updates a pipeline execution.

Updates a machine learning (ML) project that is created from a template that sets up an ML pipeline from training to deploying an approved model.

Updates the settings of a space.
Update a model training job to request a new Debugger profiling configuration or to change warm pool retention length.
Updates the display name of a trial.
Updates one or more properties of a trial component.
Updates a user profile.

Use this operation to update your workforce.

Updates an existing work team with new member definitions or description.

Functions

Link to this function

add_association(Client, Input)

View Source

Creates an association between the source and the destination.

A source can be associated with multiple destinations, and a destination can be associated with multiple sources. An association is a lineage tracking entity. For more information, see Amazon SageMaker ML Lineage Tracking: https://docs.aws.amazon.com/sagemaker/latest/dg/lineage-tracking.html.
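
For illustration only, a minimal Erlang sketch of an add_association/2 call. The client setup via aws_client:make_client/3, the placeholder ARNs, and the {ok, Result, HttpMeta} success shape are assumptions about the aws-beam calling convention, not part of the AWS documentation above.

    %% Placeholder credentials, region, and ARNs (hypothetical values).
    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"SourceArn">>       => <<"arn:aws:sagemaker:us-east-1:111122223333:artifact/example-dataset">>,
        <<"DestinationArn">>  => <<"arn:aws:sagemaker:us-east-1:111122223333:action/example-training">>,
        <<"AssociationType">> => <<"ContributedTo">>
    },
    %% Success is assumed to return {ok, ResultMap, HttpMeta}; errors come back as {error, ...} tuples.
    {ok, _Result, _HttpMeta} = aws_sagemaker:add_association(Client, Input).
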
Link to this function

add_association(Client, Input, Options)

View Source

Link to this function

add_tags(Client, Input)

View Source

Adds or overwrites one or more tags for the specified SageMaker resource.

You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints.

Each tag consists of a key and an optional value. Tag keys must be unique per resource. For more information about tags, see Amazon Web Services Tagging Strategies: https://aws.amazon.com/answers/account-management/aws-tagging-strategies/.

Tags that you add to a hyperparameter tuning job by calling this API are also added to any training jobs that the hyperparameter tuning job launches after you call this API, but not to training jobs that the hyperparameter tuning job launched before you called this API. To make sure that the tags associated with a hyperparameter tuning job are also added to all training jobs that the hyperparameter tuning job launches, add the tags when you first create the tuning job by specifying them in the Tags parameter of CreateHyperParameterTuningJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateHyperParameterTuningJob.html

Tags that you add to a SageMaker Domain or User Profile by calling this API are also added to any Apps that the Domain or User Profile launches after you call this API, but not to Apps that the Domain or User Profile launched before you called this API. To make sure that the tags associated with a Domain or User Profile are also added to all Apps that the Domain or User Profile launches, add the tags when you first create the Domain or User Profile by specifying them in the Tags parameter of CreateDomain: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateDomain.html or CreateUserProfile: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateUserProfile.html.
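
For illustration, a short Erlang sketch of tagging a resource with add_tags/2; the endpoint ARN and tag values are hypothetical, and the client setup via aws_client:make_client/3 is an assumption about your environment.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-west-2">>),
    Input = #{
        <<"ResourceArn">> => <<"arn:aws:sagemaker:us-west-2:111122223333:endpoint/my-endpoint">>,
        <<"Tags">> => [
            #{<<"Key">> => <<"project">>, <<"Value">> => <<"churn-prediction">>},
            #{<<"Key">> => <<"owner">>,   <<"Value">> => <<"data-science">>}
        ]
    },
    %% The response is assumed to echo the applied tags on success.
    {ok, _Tags, _HttpMeta} = aws_sagemaker:add_tags(Client, Input).
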
Link to this function

add_tags(Client, Input, Options)

View Source
Link to this function

associate_trial_component(Client, Input)

View Source

Associates a trial component with a trial.

A trial component can be associated with multiple trials. To disassociate a trial component from a trial, call the DisassociateTrialComponent: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DisassociateTrialComponent.html API.
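
A minimal sketch of associating a trial component with a trial via associate_trial_component/2; the component and trial names are hypothetical, and the client setup and return shape are assumptions about the aws-beam convention.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"TrialComponentName">> => <<"my-training-run">>,
        <<"TrialName">>          => <<"my-trial">>
    },
    {ok, _Result, _HttpMeta} = aws_sagemaker:associate_trial_component(Client, Input).
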
Link to this function

associate_trial_component(Client, Input, Options)

View Source
Link to this function

batch_describe_model_package(Client, Input)

View Source
This action batch describes a list of versioned model packages.
Link to this function

batch_describe_model_package(Client, Input, Options)

View Source
Link to this function

create_action(Client, Input)

View Source

Creates an action.

An action is a lineage tracking entity that represents an action or activity. For example, a model deployment or an HPO job. Generally, an action involves at least one input or output artifact. For more information, see Amazon SageMaker ML Lineage Tracking: https://docs.aws.amazon.com/sagemaker/latest/dg/lineage-tracking.html.
Link to this function

create_action(Client, Input, Options)

View Source
Link to this function

create_algorithm(Client, Input)

View Source
Create a machine learning algorithm that you can use in SageMaker and list in the Amazon Web Services Marketplace.
Link to this function

create_algorithm(Client, Input, Options)

View Source
Link to this function

create_app(Client, Input)

View Source

Creates a running app for the specified UserProfile.

This operation is automatically invoked by Amazon SageMaker upon access to the associated Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously.
Link to this function

create_app(Client, Input, Options)

View Source
Link to this function

create_app_image_config(Client, Input)

View Source

Creates a configuration for running a SageMaker image as a KernelGateway app.

The configuration specifies the Amazon Elastic File System storage volume on the image, and a list of the kernels in the image.
Link to this function

create_app_image_config(Client, Input, Options)

View Source
Link to this function

create_artifact(Client, Input)

View Source

Creates an artifact.

An artifact is a lineage tracking entity that represents a URI addressable object or data. Some examples are the S3 URI of a dataset and the ECR registry path of an image. For more information, see Amazon SageMaker ML Lineage Tracking: https://docs.aws.amazon.com/sagemaker/latest/dg/lineage-tracking.html.
Link to this function

create_artifact(Client, Input, Options)

View Source
Link to this function

create_auto_ml_job(Client, Input)

View Source

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job.

We recommend using the new versions CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJobV2.html and DescribeAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJobV2.html, which offer backward compatibility.

CreateAutoMLJobV2 can manage tabular problem types identical to those of its previous version CreateAutoMLJob, as well as time-series forecasting, non-tabular problem types such as image or text classification, and text generation (LLMs fine-tuning).

Find guidelines about how to migrate a CreateAutoMLJob to CreateAutoMLJobV2 in Migrate a CreateAutoMLJob to CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development-create-experiment.html#autopilot-create-experiment-api-migrate-v1-v2.

You can find the best-performing model after you run an AutoML job by calling DescribeAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJobV2.html (recommended) or DescribeAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJob.html.
Link to this function

create_auto_ml_job(Client, Input, Options)

View Source
Link to this function

create_auto_ml_job_v2(Client, Input)

View Source

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job V2.

CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJobV2.html and DescribeAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJobV2.html are new versions of CreateAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJob.html and DescribeAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJob.html which offer backward compatibility.

CreateAutoMLJobV2 can manage tabular problem types identical to those of its previous version CreateAutoMLJob, as well as time-series forecasting, non-tabular problem types such as image or text classification, and text generation (LLMs fine-tuning).

Find guidelines about how to migrate a CreateAutoMLJob to CreateAutoMLJobV2 in Migrate a CreateAutoMLJob to CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development-create-experiment.html#autopilot-create-experiment-api-migrate-v1-v2.

For the list of available problem types supported by CreateAutoMLJobV2, see AutoMLProblemTypeConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AutoMLProblemTypeConfig.html.

You can find the best-performing model after you run an AutoML job V2 by calling DescribeAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAutoMLJobV2.html.
Link to this function

create_auto_ml_job_v2(Client, Input, Options)

View Source
Link to this function

create_cluster(Client, Input)

View Source

Creates a SageMaker HyperPod cluster.

SageMaker HyperPod is a capability of SageMaker for creating and managing persistent clusters for developing large machine learning models, such as large language models (LLMs) and diffusion models. To learn more, see Amazon SageMaker HyperPod: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod.html in the Amazon SageMaker Developer Guide.
Link to this function

create_cluster(Client, Input, Options)

View Source
Link to this function

create_code_repository(Client, Input)

View Source

Creates a Git repository as a resource in your SageMaker account.

You can associate the repository with notebook instances so that you can use Git source control for the notebooks you create. The Git repository is a resource in your SageMaker account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with.

The repository can be hosted either in Amazon Web Services CodeCommit: https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html or in any other Git repository.
Link to this function

create_code_repository(Client, Input, Options)

View Source
Link to this function

create_compilation_job(Client, Input)

View Source

Starts a model compilation job.

After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.

If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with Amazon Web Services IoT Greengrass. In that case, deploy them as an ML resource.

In the request body, you provide the following:

  • A name for the compilation job

  • Information about the input model artifacts

  • The output location for the compiled model and the device (target) that the model runs on

  • The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.

You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compiled job.

To stop a model compilation job, use StopCompilationJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_StopCompilationJob.html. To get information about a particular model compilation job, use DescribeCompilationJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeCompilationJob.html. To get information about multiple model compilation jobs, use ListCompilationJobs: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListCompilationJobs.html.
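
The request-body items listed above translate into an input map along the lines of this Erlang sketch; every name, ARN, S3 path, and target device here is a hypothetical placeholder, and the client setup and return shape are assumptions about the aws-beam convention.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"CompilationJobName">> => <<"my-compilation-job">>,
        <<"RoleArn">> => <<"arn:aws:iam::111122223333:role/SageMakerCompilationRole">>,
        <<"InputConfig">> => #{
            <<"S3Uri">>           => <<"s3://my-bucket/model/model.tar.gz">>,
            <<"DataInputConfig">> => <<"{\"data\": [1,3,224,224]}">>,
            <<"Framework">>       => <<"PYTORCH">>
        },
        <<"OutputConfig">> => #{
            <<"S3OutputLocation">> => <<"s3://my-bucket/compiled/">>,
            <<"TargetDevice">>     => <<"jetson_xavier">>
        },
        <<"StoppingCondition">> => #{<<"MaxRuntimeInSeconds">> => 900}
    },
    %% The response is expected to carry the CompilationJobArn mentioned above.
    {ok, #{<<"CompilationJobArn">> := _CompilationJobArn}, _HttpMeta} =
        aws_sagemaker:create_compilation_job(Client, Input).
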
Link to this function

create_compilation_job(Client, Input, Options)

View Source
Link to this function

create_context(Client, Input)

View Source

Creates a context.

A context is a lineage tracking entity that represents a logical grouping of other tracking or experiment entities. Some examples are an endpoint and a model package. For more information, see Amazon SageMaker ML Lineage Tracking: https://docs.aws.amazon.com/sagemaker/latest/dg/lineage-tracking.html.
Link to this function

create_context(Client, Input, Options)

View Source
Link to this function

create_data_quality_job_definition(Client, Input)

View Source

Creates a definition for a job that monitors data quality and drift.

For information about model monitor, see Amazon SageMaker Model Monitor: https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html.
Link to this function

create_data_quality_job_definition(Client, Input, Options)

View Source
Link to this function

create_device_fleet(Client, Input)

View Source
Creates a device fleet.
Link to this function

create_device_fleet(Client, Input, Options)

View Source
Link to this function

create_domain(Client, Input)

View Source

Creates a Domain.

A domain consists of an associated Amazon Elastic File System volume, a list of authorized users, and a variety of security, application, policy, and Amazon Virtual Private Cloud (VPC) configurations. Users within a domain can share notebook files and other artifacts with each other.

EFS storage

When a domain is created, an EFS volume is created for use by all of the users within the domain. Each user receives a private home directory within the EFS volume for notebooks, Git repositories, and data files.

SageMaker uses the Amazon Web Services Key Management Service (Amazon Web Services KMS) to encrypt the EFS volume attached to the domain with an Amazon Web Services managed key by default. For more control, you can specify a customer managed key. For more information, see Protect Data at Rest Using Encryption: https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest.html.

VPC configuration

All traffic between the domain and the Amazon EFS volume is through the specified VPC and subnets. For other traffic, you can specify the AppNetworkAccessType parameter. AppNetworkAccessType corresponds to the network access type that you choose when you onboard to the domain. The following options are available:

  • PublicInternetOnly - Non-EFS traffic goes through a VPC managed by Amazon SageMaker, which allows internet access. This is the default value.

  • VpcOnly - All traffic is through the specified VPC and subnets. Internet access is disabled by default. To allow internet access, you must specify a NAT gateway.

    When internet access is disabled, you won't be able to run an Amazon SageMaker Studio notebook or to train or host models unless your VPC has an interface endpoint to the SageMaker API and runtime or a NAT gateway and your security groups allow outbound connections.

NFS traffic over TCP on port 2049 needs to be allowed in both inbound and outbound rules in order to launch an Amazon SageMaker Studio app successfully.

For more information, see Connect Amazon SageMaker Studio Notebooks to Resources in a VPC: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-and-internet-access.html.
Link to this function

create_domain(Client, Input, Options)

View Source
Link to this function

create_edge_deployment_plan(Client, Input)

View Source

Creates an edge deployment plan, consisting of multiple stages.

Each stage may have a different deployment configuration and devices.
Link to this function

create_edge_deployment_plan(Client, Input, Options)

View Source
Link to this function

create_edge_deployment_stage(Client, Input)

View Source
Creates a new stage in an existing edge deployment plan.
Link to this function

create_edge_deployment_stage(Client, Input, Options)

View Source
Link to this function

create_edge_packaging_job(Client, Input)

View Source

Starts a SageMaker Edge Manager model packaging job.

Edge Manager will use the model artifacts from the Amazon Simple Storage Service bucket that you specify. After the model has been packaged, Amazon SageMaker saves the resulting artifacts to an S3 bucket that you specify.
Link to this function

create_edge_packaging_job(Client, Input, Options)

View Source
Link to this function

create_endpoint(Client, Input)

View Source

Creates an endpoint using the endpoint configuration specified in the request.

SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpointConfig.html API.

Use this API to deploy models using SageMaker hosting services.

You must not delete an EndpointConfig that is in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.

The endpoint name must be unique within an Amazon Web Services Region in your Amazon Web Services account.

When it receives the request, SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.

When you call CreateEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads : https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeEndpointConfig.html before calling CreateEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html to minimize the potential impact of a DynamoDB eventually consistent read.

When SageMaker receives the request, it sets the endpoint status to Creating. After it creates the endpoint, it sets the status to InService. SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeEndpoint.html API.

If any of the models hosted at this endpoint get model data from an Amazon S3 location, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provided. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html in the Amazon Web Services Identity and Access Management User Guide.

To add the IAM role policies for using this API operation, go to the IAM console: https://console.aws.amazon.com/iam/, and choose Roles in the left navigation pane. Search the IAM role that you want to grant access to use the CreateEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html and CreateEndpointConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpointConfig.html API operations, add the following policies to the role.

Option 1: For a full SageMaker access, search and attach the AmazonSageMakerFullAccess policy.

Option 2: For granting a limited access to an IAM role, paste the following Action elements manually into the JSON file of the IAM role:

"Action": ["sagemaker:CreateEndpoint", "sagemaker:CreateEndpointConfig"]

"Resource": [

"arn:aws:sagemaker:region:account-id:endpoint/endpointName"

"arn:aws:sagemaker:region:account-id:endpoint-config/endpointConfigName"

]

For more information, see SageMaker API Permissions: Actions, Permissions, and Resources Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html.
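
Putting the above together, a minimal Erlang sketch of the call itself; the endpoint and configuration names are hypothetical, and the client setup and return shape are assumptions about the aws-beam convention.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"eu-west-1">>),
    %% The endpoint configuration is assumed to exist already (see create_endpoint_config/2).
    {ok, #{<<"EndpointArn">> := _EndpointArn}, _HttpMeta} =
        aws_sagemaker:create_endpoint(Client, #{
            <<"EndpointName">>       => <<"my-endpoint">>,
            <<"EndpointConfigName">> => <<"my-endpoint-config">>
        }).
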
Link to this function

create_endpoint(Client, Input, Options)

View Source
Link to this function

create_endpoint_config(Client, Input)

View Source

Creates an endpoint configuration that SageMaker hosting services uses to deploy models.

In the configuration, you identify one or more models, created using the CreateModel API, to deploy and the resources that you want SageMaker to provision. Then you call the CreateEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html API.

Use this API if you want to use SageMaker hosting services to deploy models into production.

In the request, you define a ProductionVariant for each model that you want to deploy. Each ProductionVariant parameter also describes the resources that you want SageMaker to provision. This includes the number and type of ML compute instances to deploy.

If you are hosting multiple models, you also assign a VariantWeight to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B.

When you call CreateEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads : https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeEndpointConfig.html before calling CreateEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html to minimize the potential impact of a DynamoDB eventually consistent read.
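
The two-model weighting described above (weight 2 for model A, 1 for model B) would look roughly like the following Erlang sketch; the model names and instance type are hypothetical placeholders, and the client setup is assumed.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"eu-west-1">>),
    Input = #{
        <<"EndpointConfigName">> => <<"my-endpoint-config">>,
        <<"ProductionVariants">> => [
            #{<<"VariantName">>          => <<"variant-a">>,
              <<"ModelName">>            => <<"model-a">>,
              <<"InstanceType">>         => <<"ml.m5.large">>,
              <<"InitialInstanceCount">> => 1,
              <<"InitialVariantWeight">> => 2.0},
            #{<<"VariantName">>          => <<"variant-b">>,
              <<"ModelName">>            => <<"model-b">>,
              <<"InstanceType">>         => <<"ml.m5.large">>,
              <<"InitialInstanceCount">> => 1,
              <<"InitialVariantWeight">> => 1.0}
        ]
    },
    {ok, _Result, _HttpMeta} = aws_sagemaker:create_endpoint_config(Client, Input).
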
Link to this function

create_endpoint_config(Client, Input, Options)

View Source
Link to this function

create_experiment(Client, Input)

View Source

Creates a SageMaker experiment.

An experiment is a collection of trials that are observed, compared and evaluated as a group. A trial is a set of steps, called trial components, that produce a machine learning model.

In the Studio UI, trials are referred to as run groups and trial components are referred to as runs.

The goal of an experiment is to determine the components that produce the best model. Multiple trials are performed, each one isolating and measuring the impact of a change to one or more inputs, while keeping the remaining inputs constant.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to experiments, trials, trial components and then use the Search: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_Search.html API to search for the tags.

To add a description to an experiment, specify the optional Description parameter. To add a description later, or to change the description, call the UpdateExperiment: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateExperiment.html API.

To get a list of all your experiments, call the ListExperiments: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListExperiments.html API. To view an experiment's properties, call the DescribeExperiment: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeExperiment.html API. To get a list of all the trials associated with an experiment, call the ListTrials: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListTrials.html API. To create a trial call the CreateTrial: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrial.html API.
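
A brief Erlang sketch of creating an experiment with the optional Description parameter mentioned above; the experiment name and tag values are hypothetical, and the client setup and return shape are assumptions about the aws-beam convention.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"ExperimentName">> => <<"churn-experiment">>,
        <<"Description">>    => <<"Compare feature sets for churn prediction">>,
        <<"Tags">> => [#{<<"Key">> => <<"team">>, <<"Value">> => <<"data-science">>}]
    },
    {ok, #{<<"ExperimentArn">> := _Arn}, _HttpMeta} = aws_sagemaker:create_experiment(Client, Input).
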
Link to this function

create_experiment(Client, Input, Options)

View Source
Link to this function

create_feature_group(Client, Input)

View Source

Create a new FeatureGroup.

A FeatureGroup is a group of Features defined in the FeatureStore to describe a Record.

The FeatureGroup defines the schema and features contained in the FeatureGroup. A FeatureGroup definition is composed of a list of Features, a RecordIdentifierFeatureName, an EventTimeFeatureName and configurations for its OnlineStore and OfflineStore. Check Amazon Web Services service quotas: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html to see the FeatureGroups quota for your Amazon Web Services account.

Note that it can take approximately 10-15 minutes to provision an OnlineStoreFeatureGroup with the InMemoryStorageType.

You must include at least one of OnlineStoreConfig and OfflineStoreConfig to create a FeatureGroup.
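
An illustrative Erlang sketch of a FeatureGroup definition with the pieces named above (a RecordIdentifierFeatureName, an EventTimeFeatureName, feature definitions, and an OnlineStoreConfig to satisfy the "at least one of" requirement); all names are hypothetical, and the client setup is assumed.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"FeatureGroupName">>            => <<"customers">>,
        <<"RecordIdentifierFeatureName">> => <<"customer_id">>,
        <<"EventTimeFeatureName">>        => <<"event_time">>,
        <<"FeatureDefinitions">> => [
            #{<<"FeatureName">> => <<"customer_id">>,    <<"FeatureType">> => <<"String">>},
            #{<<"FeatureName">> => <<"event_time">>,     <<"FeatureType">> => <<"String">>},
            #{<<"FeatureName">> => <<"lifetime_value">>, <<"FeatureType">> => <<"Fractional">>}
        ],
        <<"OnlineStoreConfig">> => #{<<"EnableOnlineStore">> => true}
    },
    {ok, #{<<"FeatureGroupArn">> := _Arn}, _HttpMeta} = aws_sagemaker:create_feature_group(Client, Input).
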
Link to this function

create_feature_group(Client, Input, Options)

View Source
Link to this function

create_flow_definition(Client, Input)

View Source
Creates a flow definition.
Link to this function

create_flow_definition(Client, Input, Options)

View Source
Link to this function

create_hub(Client, Input)

View Source

Create a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

create_hub(Client, Input, Options)

View Source
Link to this function

create_human_task_ui(Client, Input)

View Source

Defines the settings you will use for the human review workflow user interface.

Reviewers will see a three-panel interface with an instruction area, the item to review, and an input area.
Link to this function

create_human_task_ui(Client, Input, Options)

View Source
Link to this function

create_hyper_parameter_tuning_job(Client, Input)

View Source

Starts a hyperparameter tuning job.

A hyperparameter tuning job finds the best version of a model by running many training jobs on your dataset using the algorithm you choose and values for hyperparameters within ranges that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by an objective metric that you choose.

A hyperparameter tuning job automatically creates Amazon SageMaker experiments, trials, and trial components for each training job that it runs. You can view these entities in Amazon SageMaker Studio. For more information, see View Experiments, Trials, and Trial Components: https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-view-compare.html#experiments-view.

Do not include any security-sensitive information including account access IDs, secrets, or tokens in any hyperparameter field. If the use of security-sensitive credentials is detected, SageMaker will reject your training job request and return an exception error.
Link to this function

create_hyper_parameter_tuning_job(Client, Input, Options)

View Source
Link to this function

create_image(Client, Input)

View Source

Creates a custom SageMaker image.

A SageMaker image is a set of image versions. Each image version represents a container image stored in Amazon ECR. For more information, see Bring your own SageMaker image: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi.html.
Link to this function

create_image(Client, Input, Options)

View Source
Link to this function

create_image_version(Client, Input)

View Source

Creates a version of the SageMaker image specified by ImageName.

The version represents the Amazon ECR container image specified by BaseImage.
Link to this function

create_image_version(Client, Input, Options)

View Source
Link to this function

create_inference_component(Client, Input)

View Source

Creates an inference component, which is a SageMaker hosting object that you can use to deploy a model to an endpoint.

In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
Link to this function

create_inference_component(Client, Input, Options)

View Source
Link to this function

create_inference_experiment(Client, Input)

View Source

Creates an inference experiment using the configurations specified in the request.

Use this API to set up and schedule an experiment to compare model variants on an Amazon SageMaker inference endpoint. For more information about inference experiments, see Shadow tests: https://docs.aws.amazon.com/sagemaker/latest/dg/shadow-tests.html.

Amazon SageMaker begins your experiment at the scheduled time and routes traffic to your endpoint's model variants based on your specified configuration.

While the experiment is in progress or after it has concluded, you can view metrics that compare your model variants. For more information, see View, monitor, and edit shadow tests: https://docs.aws.amazon.com/sagemaker/latest/dg/shadow-tests-view-monitor-edit.html.
Link to this function

create_inference_experiment(Client, Input, Options)

View Source
Link to this function

create_inference_recommendations_job(Client, Input)

View Source

Starts a recommendation job.

You can create either an instance recommendation or load test job.
Link to this function

create_inference_recommendations_job(Client, Input, Options)

View Source
Link to this function

create_labeling_job(Client, Input)

View Source

Creates a job that uses workers to label the data objects in your input dataset.

You can use the labeled data to train machine learning models.

You can select your workforce from one of three providers:

  • A private workforce that you create. It can include employees, contractors, and outside experts. Use a private workforce when you want the data to stay within your organization or when a specific set of skills is required.

  • One or more vendors that you select from the Amazon Web Services Marketplace. Vendors provide expertise in specific areas.

  • The Amazon Mechanical Turk workforce. This is the largest workforce, but it should only be used for public data or data that has been stripped of any personally identifiable information.

You can also use automated data labeling to reduce the number of data objects that need to be labeled by a human. Automated data labeling uses active learning to determine if a data object can be labeled by machine or if it needs to be sent to a human worker. For more information, see Using Automated Data Labeling: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-automated-labeling.html.

The data objects to be labeled are contained in an Amazon S3 bucket. You create a manifest file that describes the location of each object. For more information, see Using Input and Output Data: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-data.html.

The output can be used as the manifest file for another labeling job or as training data for your machine learning models.

You can use this operation to create a static labeling job or a streaming labeling job. A static labeling job stops if all data objects in the input manifest file identified in ManifestS3Uri have been labeled. A streaming labeling job runs perpetually until it is manually stopped, or remains idle for 10 days. You can send new data objects to an active (InProgress) streaming labeling job in real time. To learn how to create a static labeling job, see Create a Labeling Job (API) : https://docs.aws.amazon.com/sagemaker/latest/dg/sms-create-labeling-job-api.html in the Amazon SageMaker Developer Guide. To learn how to create a streaming labeling job, see Create a Streaming Labeling Job: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-streaming-create-job.html.
Link to this function

create_labeling_job(Client, Input, Options)

View Source
Link to this function

create_model(Client, Input)

View Source

Creates a model in SageMaker.

In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.

Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job.

To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.

For an example that calls this method when deploying a model to SageMaker hosting services, see Create a Model (Amazon Web Services SDK for Python (Boto 3)): https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html#realtime-endpoints-deployment-create-model.

To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location.

In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code accesses any other Amazon Web Services resources, you grant necessary permissions via this role.
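
A minimal Erlang sketch that names a model, points the primary container at an inference image and model artifacts, and supplies the execution role described above; the ECR image URI, S3 path, and role ARN are hypothetical placeholders, and the client setup is assumed.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"ModelName">> => <<"my-model">>,
        <<"PrimaryContainer">> => #{
            <<"Image">>        => <<"111122223333.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest">>,
            <<"ModelDataUrl">> => <<"s3://my-bucket/model/model.tar.gz">>,
            <<"Environment">>  => #{<<"LOG_LEVEL">> => <<"info">>}
        },
        <<"ExecutionRoleArn">> => <<"arn:aws:iam::111122223333:role/SageMakerExecutionRole">>
    },
    {ok, #{<<"ModelArn">> := _ModelArn}, _HttpMeta} = aws_sagemaker:create_model(Client, Input).
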
Link to this function

create_model(Client, Input, Options)

View Source
Link to this function

create_model_bias_job_definition(Client, Input)

View Source
Creates the definition for a model bias job.
Link to this function

create_model_bias_job_definition(Client, Input, Options)

View Source
Link to this function

create_model_card(Client, Input)

View Source

Creates an Amazon SageMaker Model Card.

For information about how to use model cards, see Amazon SageMaker Model Card: https://docs.aws.amazon.com/sagemaker/latest/dg/model-cards.html.
Link to this function

create_model_card(Client, Input, Options)

View Source
Link to this function

create_model_card_export_job(Client, Input)

View Source
Creates an Amazon SageMaker Model Card export job.
Link to this function

create_model_card_export_job(Client, Input, Options)

View Source
Link to this function

create_model_explainability_job_definition(Client, Input)

View Source
Creates the definition for a model explainability job.
Link to this function

create_model_explainability_job_definition(Client, Input, Options)

View Source
Link to this function

create_model_package(Client, Input)

View Source

Creates a model package that you can use to create SageMaker models or list on Amazon Web Services Marketplace, or a versioned model that is part of a model group.

Buyers can subscribe to model packages listed on Amazon Web Services Marketplace to create models in SageMaker.

To create a model package by specifying a Docker container that contains your inference code and the Amazon S3 location of your model artifacts, provide values for InferenceSpecification. To create a model from an algorithm resource that you created or subscribed to in Amazon Web Services Marketplace, provide a value for SourceAlgorithmSpecification.

There are two types of model packages:

Versioned - a model that is part of a model group in the model registry.

Unversioned - a model package that is not part of a model group.
Link to this function

create_model_package(Client, Input, Options)

View Source
Link to this function

create_model_package_group(Client, Input)

View Source

Creates a model group.

A model group contains a group of model versions.
Link to this function

create_model_package_group(Client, Input, Options)

View Source
Link to this function

create_model_quality_job_definition(Client, Input)

View Source

Creates a definition for a job that monitors model quality and drift.

For information about model monitor, see Amazon SageMaker Model Monitor: https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html.
Link to this function

create_model_quality_job_definition(Client, Input, Options)

View Source
Link to this function

create_monitoring_schedule(Client, Input)

View Source
Creates a schedule that regularly starts Amazon SageMaker Processing Jobs to monitor the data captured for an Amazon SageMaker Endpoint.
Link to this function

create_monitoring_schedule(Client, Input, Options)

View Source
Link to this function

create_notebook_instance(Client, Input)

View Source

Creates a SageMaker notebook instance.

A notebook instance is a machine learning (ML) compute instance that runs the Jupyter Notebook App.

In a CreateNotebookInstance request, specify the type of ML compute instance that you want to run. SageMaker launches the instance, installs common libraries that you can use to explore datasets for model training, and attaches an ML storage volume to the notebook instance.

SageMaker also provides a set of example notebooks. Each notebook demonstrates how to use SageMaker with a specific algorithm or with a machine learning framework.

After receiving the request, SageMaker does the following:

  1. Creates a network interface in the SageMaker VPC.

  2. (Optional) If you specified SubnetId, SageMaker creates a network interface in your own VPC, which is inferred from the subnet ID that you provide in the input. When creating this network interface, SageMaker attaches the security group that you specified in the request to the network interface that it creates in your VPC.

  3. Launches an EC2 instance of the type specified in the request in the SageMaker VPC. If you specified SubnetId of your VPC, SageMaker specifies both network interfaces when launching this instance. This enables inbound traffic from your own VPC to the notebook instance, assuming that the security groups allow it.

After creating the notebook instance, SageMaker returns its Amazon Resource Name (ARN). You can't change the name of a notebook instance after you create it.

After SageMaker creates the notebook instance, you can connect to the Jupyter server and work in Jupyter notebooks. For example, you can write code to explore a dataset that you can use for model training, train a model, host models by creating SageMaker endpoints, and validate hosted models.

For more information, see How It Works: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html.
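
For illustration, a short Erlang sketch of the request; the instance name, type, role ARN, and optional VPC settings are hypothetical placeholders, and the client setup and return shape are assumptions about the aws-beam convention.

    Client = aws_client:make_client(<<"ACCESS_KEY_ID">>, <<"SECRET_ACCESS_KEY">>, <<"us-east-1">>),
    Input = #{
        <<"NotebookInstanceName">> => <<"my-notebook">>,
        <<"InstanceType">>         => <<"ml.t3.medium">>,
        <<"RoleArn">>              => <<"arn:aws:iam::111122223333:role/SageMakerNotebookRole">>,
        <<"VolumeSizeInGB">>       => 20,
        %% Optional: place the instance in your own VPC, as described in step 2 above.
        <<"SubnetId">>             => <<"subnet-0abc1234">>,
        <<"SecurityGroupIds">>     => [<<"sg-0abc1234">>]
    },
    {ok, #{<<"NotebookInstanceArn">> := _Arn}, _HttpMeta} =
        aws_sagemaker:create_notebook_instance(Client, Input).
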
Link to this function

create_notebook_instance(Client, Input, Options)

View Source
Link to this function

create_notebook_instance_lifecycle_config(Client, Input)

View Source

Creates a lifecycle configuration that you can associate with a notebook instance.

A lifecycle configuration is a collection of shell scripts that run when you create or start a notebook instance.

Each lifecycle configuration script has a limit of 16384 characters.

The value of the $PATH environment variable that is available to both scripts is /sbin:bin:/usr/sbin:/usr/bin.

View Amazon CloudWatch Logs for notebook instance lifecycle configurations in log group /aws/sagemaker/NotebookInstances in log stream [notebook-instance-name]/[LifecycleConfigHook].

Lifecycle configuration scripts cannot run for longer than 5 minutes. If a script runs for longer than 5 minutes, it fails and the notebook instance is not created or started.

For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance: https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html.
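
As an illustrative sketch, the call below registers a single OnStart hook; the script content and configuration name are placeholders, hooks are passed as base64-encoded Content entries, and Client is an aws_client handle as in the earlier notebook instance example.

    %% Minimal sketch; the script below is illustrative only.
    %% Client is an aws_client handle (see aws_client:make_client/3).
    OnStart = base64:encode(<<"#!/bin/bash\nset -e\necho 'notebook started' >> /tmp/lifecycle.log\n">>),
    Input = #{
      <<"NotebookInstanceLifecycleConfigName">> => <<"my-lifecycle-config">>,
      <<"OnStart">> => [#{<<"Content">> => OnStart}]
    },
    {ok, _Result, _HttpResponse} =
        aws_sagemaker:create_notebook_instance_lifecycle_config(Client, Input).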
Link to this function

create_notebook_instance_lifecycle_config(Client, Input, Options)

View Source
Link to this function

create_pipeline(Client, Input)

View Source
Creates a pipeline using a JSON pipeline definition.
Link to this function

create_pipeline(Client, Input, Options)

View Source
Link to this function

create_presigned_domain_url(Client, Input)

View Source

Creates a URL for a specified UserProfile in a Domain.

When accessed in a web browser, the user will be automatically signed in to the domain, and granted access to all of the Apps and files associated with the Domain's Amazon Elastic File System volume. This operation can only be called when the authentication mode equals IAM.

The IAM role or user passed to this API defines the permissions to access the app. Once the presigned URL is created, no additional permission is required to access this URL. IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the app.

You can restrict access to this API and to the URL that it returns to a list of IP addresses, Amazon VPCs or Amazon VPC Endpoints that you specify. For more information, see Connect to Amazon SageMaker Studio Through an Interface VPC Endpoint: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-interface-endpoint.html .

The URL that you get from a call to CreatePresignedDomainUrl has a default timeout of 5 minutes. You can configure this value using ExpiresInSeconds. If you try to use the URL after the timeout limit expires, you are directed to the Amazon Web Services console sign-in page.
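
A minimal sketch of requesting a presigned URL for a user profile follows; the domain ID and profile name are placeholders, ExpiresInSeconds configures the URL timeout described above, and Client is an aws_client handle created with aws_client:make_client/3. The presigned URL is returned in the AuthorizedUrl field of the result.

    %% Minimal sketch; DomainId and UserProfileName are placeholders.
    Input = #{
      <<"DomainId">> => <<"d-exampledomainid">>,
      <<"UserProfileName">> => <<"my-user-profile">>,
      <<"ExpiresInSeconds">> => 300
    },
    {ok, #{<<"AuthorizedUrl">> := Url}, _HttpResponse} =
        aws_sagemaker:create_presigned_domain_url(Client, Input).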
Link to this function

create_presigned_domain_url(Client, Input, Options)

View Source
Link to this function

create_presigned_notebook_instance_url(Client, Input)

View Source

Returns a URL that you can use to connect to the Jupyter server from a notebook instance.

In the SageMaker console, when you choose Open next to a notebook instance, SageMaker opens a new tab showing the Jupyter server home page from the notebook instance. The console uses this API to get the URL and show the page.

The IAM role or user used to call this API defines the permissions to access the notebook instance. Once the presigned URL is created, no additional permission is required to access this URL. IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the notebook instance.

You can restrict access to this API and to the URL that it returns to a list of IP addresses that you specify. Use the NotIpAddress condition operator and the aws:SourceIP condition context key to specify the list of IP addresses that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address: https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_id-based-policy-examples.html#nbi-ip-filter.

The URL that you get from a call to CreatePresignedNotebookInstanceUrl: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreatePresignedNotebookInstanceUrl.html is valid only for 5 minutes. If you try to use the URL after the 5-minute limit expires, you are directed to the Amazon Web Services console sign-in page.
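
A similar minimal sketch for a notebook instance; the instance name is a placeholder, and SessionExpirationDurationInSeconds controls how long the Jupyter session stays valid, separately from the 5-minute URL validity noted above.

    %% Minimal sketch; the notebook instance name is a placeholder.
    %% Client is an aws_client handle (see aws_client:make_client/3).
    Input = #{
      <<"NotebookInstanceName">> => <<"my-notebook">>,
      <<"SessionExpirationDurationInSeconds">> => 1800
    },
    {ok, #{<<"AuthorizedUrl">> := Url}, _HttpResponse} =
        aws_sagemaker:create_presigned_notebook_instance_url(Client, Input).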
Link to this function

create_presigned_notebook_instance_url(Client, Input, Options)

View Source
Link to this function

create_processing_job(Client, Input)

View Source
Creates a processing job.
Link to this function

create_processing_job(Client, Input, Options)

View Source
Link to this function

create_project(Client, Input)

View Source
Creates a machine learning (ML) project that can contain one or more templates that set up an ML pipeline from training to deploying an approved model.
Link to this function

create_project(Client, Input, Options)

View Source
Link to this function

create_space(Client, Input)

View Source
Creates a space used for real time collaboration in a domain.
Link to this function

create_space(Client, Input, Options)

View Source
Link to this function

create_studio_lifecycle_config(Client, Input)

View Source
Creates a new Amazon SageMaker Studio Lifecycle Configuration.
Link to this function

create_studio_lifecycle_config(Client, Input, Options)

View Source
Link to this function

create_training_job(Client, Input)

View Source

Starts a model training job.

After training completes, SageMaker saves the resulting model artifacts to an Amazon S3 location that you specify.

If you choose to host your model using SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts in a machine learning service other than SageMaker, provided that you know how to use them for inference.

In the request body, you provide the following:

  • AlgorithmSpecification - Identifies the training algorithm to use.

  • HyperParameters - Specify these algorithm-specific parameters to enable the estimation of model parameters during training. Hyperparameters can be tuned to optimize this learning process. For a list of hyperparameters for each training algorithm provided by SageMaker, see Algorithms: https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html.

    Do not include any security-sensitive information, such as account access IDs, secrets, or tokens, in any hyperparameter field. If the use of security-sensitive credentials is detected, SageMaker rejects your training job request and returns an exception error.

  • InputDataConfig - Describes the input required by the training job and the Amazon S3, EFS, or FSx location where it is stored.

  • OutputDataConfig - Identifies the Amazon S3 bucket where you want SageMaker to save the results of model training.

  • ResourceConfig - Identifies the resources, ML compute instances, and ML storage volumes to deploy for model training. In distributed training, you specify more than one instance.

  • EnableManagedSpotTraining - Optimize the cost of training machine learning models by up to 80% by using Amazon EC2 Spot instances. For more information, see Managed Spot Training: https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html.

  • RoleArn - The Amazon Resource Name (ARN) that SageMaker assumes to perform tasks on your behalf during model training. You must grant this role the necessary permissions so that SageMaker can successfully complete model training.

  • StoppingCondition - To help cap training costs, use MaxRuntimeInSeconds to set a time limit for training. Use MaxWaitTimeInSeconds to specify how long a managed spot training job has to complete.

  • Environment - The environment variables to set in the Docker container.

  • RetryStrategy - The number of times to retry the job when the job fails due to an InternalServerError.

For more information about SageMaker, see How It Works: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html.
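
As a rough sketch of how the request fields listed above fit together when calling this operation from this module, the example below fills in only a minimal subset; the job name, training image URI, role ARN, and S3 locations are placeholders, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; all names, ARNs, and S3 URIs are placeholders.
    Input = #{
      <<"TrainingJobName">> => <<"my-training-job">>,
      <<"AlgorithmSpecification">> => #{
        <<"TrainingImage">> => <<"123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest">>,
        <<"TrainingInputMode">> => <<"File">>
      },
      <<"RoleArn">> => <<"arn:aws:iam::123456789012:role/SageMakerExecutionRole">>,
      <<"InputDataConfig">> => [#{
        <<"ChannelName">> => <<"train">>,
        <<"DataSource">> => #{<<"S3DataSource">> => #{
          <<"S3DataType">> => <<"S3Prefix">>,
          <<"S3Uri">> => <<"s3://my-bucket/train/">>
        }}
      }],
      <<"OutputDataConfig">> => #{<<"S3OutputPath">> => <<"s3://my-bucket/output/">>},
      <<"ResourceConfig">> => #{
        <<"InstanceType">> => <<"ml.m5.xlarge">>,
        <<"InstanceCount">> => 1,
        <<"VolumeSizeInGB">> => 50
      },
      <<"StoppingCondition">> => #{<<"MaxRuntimeInSeconds">> => 3600}
    },
    {ok, _Result, _HttpResponse} = aws_sagemaker:create_training_job(Client, Input).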
Link to this function

create_training_job(Client, Input, Options)

View Source
Link to this function

create_transform_job(Client, Input)

View Source

Starts a transform job.

A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.

To perform batch transformations, you create a transform job and use the data that you have readily available.

In the request body, you provide the following:

  • TransformJobName - Identifies the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

  • ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same Amazon Web Services Region and Amazon Web Services account. For information on creating a model, see CreateModel: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html.

  • TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.

  • TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

  • TransformResources - Identifies the ML compute instances for the transform job.

For more information about how batch transformation works, see Batch Transform: https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html.
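
A minimal sketch of the request body described above; the job name, model name, and S3 locations are placeholders, the model is assumed to already exist, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; names and S3 URIs are placeholders.
    Input = #{
      <<"TransformJobName">> => <<"my-transform-job">>,
      <<"ModelName">> => <<"my-model">>,
      <<"TransformInput">> => #{
        <<"DataSource">> => #{<<"S3DataSource">> => #{
          <<"S3DataType">> => <<"S3Prefix">>,
          <<"S3Uri">> => <<"s3://my-bucket/batch-input/">>
        }},
        <<"ContentType">> => <<"text/csv">>
      },
      <<"TransformOutput">> => #{<<"S3OutputPath">> => <<"s3://my-bucket/batch-output/">>},
      <<"TransformResources">> => #{
        <<"InstanceType">> => <<"ml.m5.xlarge">>,
        <<"InstanceCount">> => 1
      }
    },
    {ok, _Result, _HttpResponse} = aws_sagemaker:create_transform_job(Client, Input).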
Link to this function

create_transform_job(Client, Input, Options)

View Source
Link to this function

create_trial(Client, Input)

View Source

Creates a SageMaker trial.

A trial is a set of steps called trial components that produce a machine learning model. A trial is part of a single SageMaker experiment.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to a trial and then use the Search: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_Search.html API to search for the tags.

To get a list of all your trials, call the ListTrials: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListTrials.html API. To view a trial's properties, call the DescribeTrial: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeTrial.html API. To create a trial component, call the CreateTrialComponent: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrialComponent.html API.
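
A minimal sketch of creating a trial inside an existing experiment; the trial name, experiment name, and tag are placeholders, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; the experiment is assumed to exist already.
    Input = #{
      <<"TrialName">> => <<"my-trial">>,
      <<"ExperimentName">> => <<"my-experiment">>,
      <<"Tags">> => [#{<<"Key">> => <<"project">>, <<"Value">> => <<"demo">>}]
    },
    {ok, _Result, _HttpResponse} = aws_sagemaker:create_trial(Client, Input).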
Link to this function

create_trial(Client, Input, Options)

View Source
Link to this function

create_trial_component(Client, Input)

View Source

Creates a trial component, which is a stage of a machine learning trial.

A trial is composed of one or more trial components. A trial component can be used in multiple trials.

Trial components include pre-processing jobs, training jobs, and batch transform jobs.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to a trial component and then use the Search: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_Search.html API to search for the tags.
Link to this function

create_trial_component(Client, Input, Options)

View Source
Link to this function

create_user_profile(Client, Input)

View Source

Creates a user profile.

A user profile represents a single user within a domain, and is the main way to reference a "person" for the purposes of sharing, reporting, and other user-oriented features. This entity is created when a user onboards to a domain. If an administrator invites a person by email or imports them from IAM Identity Center, a user profile is automatically created. A user profile is the primary holder of settings for an individual user and has a reference to the user's private Amazon Elastic File System home directory.
Link to this function

create_user_profile(Client, Input, Options)

View Source
Link to this function

create_workforce(Client, Input)

View Source

Use this operation to create a workforce.

This operation will return an error if a workforce already exists in the Amazon Web Services Region that you specify. You can only create one workforce in each Amazon Web Services Region per Amazon Web Services account.

If you want to create a new workforce in an Amazon Web Services Region where a workforce already exists, use the DeleteWorkforce: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteWorkforce.html API operation to delete the existing workforce and then use CreateWorkforce to create a new workforce.

To create a private workforce using Amazon Cognito, you must specify a Cognito user pool in CognitoConfig. You can also create an Amazon Cognito workforce using the Amazon SageMaker console. For more information, see Create a Private Workforce (Amazon Cognito): https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-create-private.html.

To create a private workforce using your own OIDC Identity Provider (IdP), specify your IdP configuration in OidcConfig. Your OIDC IdP must support groups because groups are used by Ground Truth and Amazon A2I to create work teams. For more information, see Create a Private Workforce (OIDC IdP): https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-create-private-oidc.html.
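
A minimal sketch of creating a private workforce backed by Amazon Cognito; the workforce name, user pool ID, and app client ID are placeholders, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; the Cognito identifiers are placeholders.
    Input = #{
      <<"WorkforceName">> => <<"my-private-workforce">>,
      <<"CognitoConfig">> => #{
        <<"UserPool">> => <<"us-east-1_EXAMPLE">>,
        <<"ClientId">> => <<"example-app-client-id">>
      }
    },
    {ok, _Result, _HttpResponse} = aws_sagemaker:create_workforce(Client, Input).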
Link to this function

create_workforce(Client, Input, Options)

View Source
Link to this function

create_workteam(Client, Input)

View Source

Creates a new work team for labeling your data.

A work team is defined by one or more Amazon Cognito user pools. You must first create the user pools before you can create a work team.

You cannot create more than 25 work teams in an account and region.
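
A minimal sketch of creating a work team from an existing Cognito user pool group; the pool ID, app client ID, group name, and work team name are placeholders, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; the Cognito pool, client, and group are placeholders.
    Input = #{
      <<"WorkteamName">> => <<"my-workteam">>,
      <<"Description">> => <<"Labelers for the demo project">>,
      <<"MemberDefinitions">> => [#{
        <<"CognitoMemberDefinition">> => #{
          <<"UserPool">> => <<"us-east-1_EXAMPLE">>,
          <<"ClientId">> => <<"example-app-client-id">>,
          <<"UserGroup">> => <<"my-labelers">>
        }
      }]
    },
    {ok, _Result, _HttpResponse} = aws_sagemaker:create_workteam(Client, Input).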
Link to this function

create_workteam(Client, Input, Options)

View Source
Link to this function

delete_action(Client, Input)

View Source
Deletes an action.
Link to this function

delete_action(Client, Input, Options)

View Source
Link to this function

delete_algorithm(Client, Input)

View Source
Removes the specified algorithm from your account.
Link to this function

delete_algorithm(Client, Input, Options)

View Source
Link to this function

delete_app(Client, Input)

View Source
Used to stop and delete an app.
Link to this function

delete_app(Client, Input, Options)

View Source
Link to this function

delete_app_image_config(Client, Input)

View Source
Deletes an AppImageConfig.
Link to this function

delete_app_image_config(Client, Input, Options)

View Source
Link to this function

delete_artifact(Client, Input)

View Source

Deletes an artifact.

Either ArtifactArn or Source must be specified.
Link to this function

delete_artifact(Client, Input, Options)

View Source
Link to this function

delete_association(Client, Input)

View Source
Deletes an association.
Link to this function

delete_association(Client, Input, Options)

View Source
Link to this function

delete_cluster(Client, Input)

View Source
Delete a SageMaker HyperPod cluster.
Link to this function

delete_cluster(Client, Input, Options)

View Source
Link to this function

delete_code_repository(Client, Input)

View Source
Deletes the specified Git repository from your account.
Link to this function

delete_code_repository(Client, Input, Options)

View Source
Link to this function

delete_compilation_job(Client, Input)

View Source

Deletes the specified compilation job.

This action deletes only the compilation job resource in Amazon SageMaker. It doesn't delete other resources that are related to that job, such as the model artifacts that the job creates, the compilation logs in CloudWatch, the compiled model, or the IAM role.

You can delete a compilation job only if its current status is COMPLETED, FAILED, or STOPPED. If the job status is STARTING or INPROGRESS, stop the job, and then delete it after its status becomes STOPPED.
Link to this function

delete_compilation_job(Client, Input, Options)

View Source
Link to this function

delete_context(Client, Input)

View Source
Deletes a context.
Link to this function

delete_context(Client, Input, Options)

View Source
Link to this function

delete_data_quality_job_definition(Client, Input)

View Source
Deletes a data quality monitoring job definition.
Link to this function

delete_data_quality_job_definition(Client, Input, Options)

View Source
Link to this function

delete_device_fleet(Client, Input)

View Source
Deletes a fleet.
Link to this function

delete_device_fleet(Client, Input, Options)

View Source
Link to this function

delete_domain(Client, Input)

View Source

Used to delete a domain.

If you onboarded with IAM mode, you will need to delete your domain to onboard again using IAM Identity Center. Use with caution. All of the members of the domain will lose access to their EFS volume, including data, notebooks, and other artifacts.
Link to this function

delete_domain(Client, Input, Options)

View Source
Link to this function

delete_edge_deployment_plan(Client, Input)

View Source
Deletes an edge deployment plan if (and only if) all the stages in the plan are inactive or there are no stages in the plan.
Link to this function

delete_edge_deployment_plan(Client, Input, Options)

View Source
Link to this function

delete_edge_deployment_stage(Client, Input)

View Source
Delete a stage in an edge deployment plan if (and only if) the stage is inactive.
Link to this function

delete_edge_deployment_stage(Client, Input, Options)

View Source
Link to this function

delete_endpoint(Client, Input)

View Source

Deletes an endpoint.

SageMaker frees up all of the resources that were deployed when the endpoint was created.

SageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't need to use the RevokeGrant: http://docs.aws.amazon.com/kms/latest/APIReference/API_RevokeGrant.html API call.

When you delete your endpoint, SageMaker asynchronously deletes associated endpoint resources such as KMS key grants. You might still see these resources in your account for a few minutes after deleting your endpoint. Do not delete or revoke the permissions for your ExecutionRoleArn: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html#sagemaker-CreateModel-request-ExecutionRoleArn, otherwise SageMaker cannot delete these resources.
Link to this function

delete_endpoint(Client, Input, Options)

View Source
Link to this function

delete_endpoint_config(Client, Input)

View Source

Deletes an endpoint configuration.

The DeleteEndpointConfig API deletes only the specified configuration. It does not delete endpoints created using the configuration.

You must not delete an EndpointConfig in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. If you delete the EndpointConfig of an endpoint that is active or being created or updated, you may lose visibility into the instance type the endpoint is using. The endpoint must be deleted in order to stop incurring charges.
Link to this function

delete_endpoint_config(Client, Input, Options)

View Source
Link to this function

delete_experiment(Client, Input)

View Source

Deletes a SageMaker experiment.

All trials associated with the experiment must be deleted first. Use the ListTrials: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListTrials.html API to get a list of the trials associated with the experiment.
Link to this function

delete_experiment(Client, Input, Options)

View Source
Link to this function

delete_feature_group(Client, Input)

View Source

Delete the FeatureGroup and any data that was written to the OnlineStore of the FeatureGroup.

Data cannot be accessed from the OnlineStore immediately after DeleteFeatureGroup is called.

Data written into the OfflineStore will not be deleted. The Amazon Web Services Glue database and tables that are automatically created for your OfflineStore are not deleted.

Note that it can take approximately 10-15 minutes to delete an OnlineStore FeatureGroup with the InMemoryStorageType.
Link to this function

delete_feature_group(Client, Input, Options)

View Source
Link to this function

delete_flow_definition(Client, Input)

View Source
Deletes the specified flow definition.
Link to this function

delete_flow_definition(Client, Input, Options)

View Source
Link to this function

delete_hub(Client, Input)

View Source

Delete a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

delete_hub(Client, Input, Options)

View Source
Link to this function

delete_hub_content(Client, Input)

View Source

Delete the contents of a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

delete_hub_content(Client, Input, Options)

View Source
Link to this function

delete_human_task_ui(Client, Input)

View Source

Use this operation to delete a human task user interface (worker task template).

To see a list of human task user interfaces (work task templates) in your account, use ListHumanTaskUis: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListHumanTaskUis.html. When you delete a worker task template, it no longer appears when you call ListHumanTaskUis.
Link to this function

delete_human_task_ui(Client, Input, Options)

View Source
Link to this function

delete_hyper_parameter_tuning_job(Client, Input)

View Source

Deletes a hyperparameter tuning job.

The DeleteHyperParameterTuningJob API deletes only the tuning job entry that was created in SageMaker when you called the CreateHyperParameterTuningJob API. It does not delete training jobs, artifacts, or the IAM role that you specified when creating the model.
Link to this function

delete_hyper_parameter_tuning_job(Client, Input, Options)

View Source
Link to this function

delete_image(Client, Input)

View Source

Deletes a SageMaker image and all versions of the image.

The container images aren't deleted.
Link to this function

delete_image(Client, Input, Options)

View Source
Link to this function

delete_image_version(Client, Input)

View Source

Deletes a version of a SageMaker image.

The container image the version represents isn't deleted.
Link to this function

delete_image_version(Client, Input, Options)

View Source
Link to this function

delete_inference_component(Client, Input)

View Source
Deletes an inference component.
Link to this function

delete_inference_component(Client, Input, Options)

View Source
Link to this function

delete_inference_experiment(Client, Input)

View Source

Deletes an inference experiment.

This operation does not delete your endpoint, variants, or any underlying resources. This operation only deletes the metadata of your experiment.
Link to this function

delete_inference_experiment(Client, Input, Options)

View Source
Link to this function

delete_model(Client, Input)

View Source

Deletes a model.

The DeleteModel API deletes only the model entry that was created in SageMaker when you called the CreateModel API. It does not delete model artifacts, inference code, or the IAM role that you specified when creating the model.
Link to this function

delete_model(Client, Input, Options)

View Source
Link to this function

delete_model_bias_job_definition(Client, Input)

View Source
Deletes an Amazon SageMaker model bias job definition.
Link to this function

delete_model_bias_job_definition(Client, Input, Options)

View Source
Link to this function

delete_model_card(Client, Input)

View Source
Deletes an Amazon SageMaker Model Card.
Link to this function

delete_model_card(Client, Input, Options)

View Source
Link to this function

delete_model_explainability_job_definition(Client, Input)

View Source
Deletes an Amazon SageMaker model explainability job definition.
Link to this function

delete_model_explainability_job_definition(Client, Input, Options)

View Source
Link to this function

delete_model_package(Client, Input)

View Source

Deletes a model package.

A model package is used to create SageMaker models or list on Amazon Web Services Marketplace. Buyers can subscribe to model packages listed on Amazon Web Services Marketplace to create models in SageMaker.
Link to this function

delete_model_package(Client, Input, Options)

View Source
Link to this function

delete_model_package_group(Client, Input)

View Source
Deletes the specified model group.
Link to this function

delete_model_package_group(Client, Input, Options)

View Source
Link to this function

delete_model_package_group_policy(Client, Input)

View Source
Deletes a model group resource policy.
Link to this function

delete_model_package_group_policy(Client, Input, Options)

View Source
Link to this function

delete_model_quality_job_definition(Client, Input)

View Source
Deletes the specified model quality monitoring job definition.
Link to this function

delete_model_quality_job_definition(Client, Input, Options)

View Source
Link to this function

delete_monitoring_schedule(Client, Input)

View Source

Deletes a monitoring schedule.

Also stops the schedule if it had not already been stopped. This does not delete the job execution history of the monitoring schedule.
Link to this function

delete_monitoring_schedule(Client, Input, Options)

View Source
Link to this function

delete_notebook_instance(Client, Input)

View Source

Deletes a SageMaker notebook instance.

Before you can delete a notebook instance, you must call the StopNotebookInstance API.

When you delete a notebook instance, you lose all of your data. SageMaker removes the ML compute instance, and deletes the ML storage volume and the network interface associated with the notebook instance.
Link to this function

delete_notebook_instance(Client, Input, Options)

View Source
Link to this function

delete_notebook_instance_lifecycle_config(Client, Input)

View Source
Deletes a notebook instance lifecycle configuration.
Link to this function

delete_notebook_instance_lifecycle_config(Client, Input, Options)

View Source
Link to this function

delete_pipeline(Client, Input)

View Source

Deletes a pipeline if there are no running instances of the pipeline.

To delete a pipeline, you must stop all running instances of the pipeline using the StopPipelineExecution API. When you delete a pipeline, all instances of the pipeline are deleted.
Link to this function

delete_pipeline(Client, Input, Options)

View Source
Link to this function

delete_project(Client, Input)

View Source
Delete the specified project.
Link to this function

delete_project(Client, Input, Options)

View Source
Link to this function

delete_space(Client, Input)

View Source
Used to delete a space.
Link to this function

delete_space(Client, Input, Options)

View Source
Link to this function

delete_studio_lifecycle_config(Client, Input)

View Source

Deletes the Amazon SageMaker Studio Lifecycle Configuration.

In order to delete the Lifecycle Configuration, there must be no running apps using the Lifecycle Configuration. You must also remove the Lifecycle Configuration from UserSettings in all Domains and UserProfiles.
Link to this function

delete_studio_lifecycle_config(Client, Input, Options)

View Source
Link to this function

delete_tags(Client, Input)

View Source

Deletes the specified tags from a SageMaker resource.

To list a resource's tags, use the ListTags API.

When you call this API to delete tags from a hyperparameter tuning job, the deleted tags are not removed from training jobs that the hyperparameter tuning job launched before you called this API.

When you call this API to delete tags from a SageMaker Domain or User Profile, the deleted tags are not removed from Apps that the SageMaker Domain or User Profile launched before you called this API.
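
A minimal sketch of removing two tags from a training job; the resource ARN and tag keys are placeholders, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; the ARN and tag keys are placeholders.
    Input = #{
      <<"ResourceArn">> => <<"arn:aws:sagemaker:us-east-1:123456789012:training-job/my-training-job">>,
      <<"TagKeys">> => [<<"project">>, <<"owner">>]
    },
    {ok, _Result, _HttpResponse} = aws_sagemaker:delete_tags(Client, Input).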
Link to this function

delete_tags(Client, Input, Options)

View Source
Link to this function

delete_trial(Client, Input)

View Source

Deletes the specified trial.

All trial components that make up the trial must be deleted first. Use the DescribeTrialComponent: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeTrialComponent.html API to get the list of trial components.
Link to this function

delete_trial(Client, Input, Options)

View Source
Link to this function

delete_trial_component(Client, Input)

View Source

Deletes the specified trial component.

A trial component must be disassociated from all trials before the trial component can be deleted. To disassociate a trial component from a trial, call the DisassociateTrialComponent: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DisassociateTrialComponent.html API.
Link to this function

delete_trial_component(Client, Input, Options)

View Source
Link to this function

delete_user_profile(Client, Input)

View Source

Deletes a user profile.

When a user profile is deleted, the user loses access to their EFS volume, including data, notebooks, and other artifacts.
Link to this function

delete_user_profile(Client, Input, Options)

View Source
Link to this function

delete_workforce(Client, Input)

View Source

Use this operation to delete a workforce.

If you want to create a new workforce in an Amazon Web Services Region where a workforce already exists, use this operation to delete the existing workforce and then use CreateWorkforce: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateWorkforce.html to create a new workforce.

If a private workforce contains one or more work teams, you must use the DeleteWorkteam: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteWorkteam.html operation to delete all work teams before you delete the workforce. If you try to delete a workforce that contains one or more work teams, you will receive a ResourceInUse error.
Link to this function

delete_workforce(Client, Input, Options)

View Source
Link to this function

delete_workteam(Client, Input)

View Source

Deletes an existing work team.

This operation can't be undone.
Link to this function

delete_workteam(Client, Input, Options)

View Source
Link to this function

deregister_devices(Client, Input)

View Source

Deregisters the specified devices.

After you deregister a device, you will need to re-register it.
Link to this function

deregister_devices(Client, Input, Options)

View Source
Link to this function

describe_action(Client, Input)

View Source
Describes an action.
Link to this function

describe_action(Client, Input, Options)

View Source
Link to this function

describe_algorithm(Client, Input)

View Source
Returns a description of the specified algorithm that is in your account.
Link to this function

describe_algorithm(Client, Input, Options)

View Source
Link to this function

describe_app(Client, Input)

View Source
Describes the app.
Link to this function

describe_app(Client, Input, Options)

View Source
Link to this function

describe_app_image_config(Client, Input)

View Source
Describes an AppImageConfig.
Link to this function

describe_app_image_config(Client, Input, Options)

View Source
Link to this function

describe_artifact(Client, Input)

View Source
Describes an artifact.
Link to this function

describe_artifact(Client, Input, Options)

View Source
Link to this function

describe_auto_ml_job(Client, Input)

View Source

Returns information about an AutoML job created by calling CreateAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJob.html.

AutoML jobs created by calling CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJobV2.html cannot be described by DescribeAutoMLJob.
Link to this function

describe_auto_ml_job(Client, Input, Options)

View Source
Link to this function

describe_auto_ml_job_v2(Client, Input)

View Source
Returns information about an AutoML job created by calling CreateAutoMLJobV2: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJobV2.html or CreateAutoMLJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAutoMLJob.html.
Link to this function

describe_auto_ml_job_v2(Client, Input, Options)

View Source
Link to this function

describe_cluster(Client, Input)

View Source
Retrieves information about a SageMaker HyperPod cluster.
Link to this function

describe_cluster(Client, Input, Options)

View Source
Link to this function

describe_cluster_node(Client, Input)

View Source
Retrieves information about an instance (also referred to as a node) of a SageMaker HyperPod cluster.
Link to this function

describe_cluster_node(Client, Input, Options)

View Source
Link to this function

describe_code_repository(Client, Input)

View Source
Gets details about the specified Git repository.
Link to this function

describe_code_repository(Client, Input, Options)

View Source
Link to this function

describe_compilation_job(Client, Input)

View Source

Returns information about a model compilation job.

To create a model compilation job, use CreateCompilationJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateCompilationJob.html. To get information about multiple model compilation jobs, use ListCompilationJobs: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListCompilationJobs.html.
Link to this function

describe_compilation_job(Client, Input, Options)

View Source
Link to this function

describe_context(Client, Input)

View Source
Describes a context.
Link to this function

describe_context(Client, Input, Options)

View Source
Link to this function

describe_data_quality_job_definition(Client, Input)

View Source
Gets the details of a data quality monitoring job definition.
Link to this function

describe_data_quality_job_definition(Client, Input, Options)

View Source
Link to this function

describe_device(Client, Input)

View Source
Describes the device.
Link to this function

describe_device(Client, Input, Options)

View Source
Link to this function

describe_device_fleet(Client, Input)

View Source
A description of the fleet the device belongs to.
Link to this function

describe_device_fleet(Client, Input, Options)

View Source
Link to this function

describe_domain(Client, Input)

View Source
The description of the domain.
Link to this function

describe_domain(Client, Input, Options)

View Source
Link to this function

describe_edge_deployment_plan(Client, Input)

View Source
Describes an edge deployment plan with deployment status per stage.
Link to this function

describe_edge_deployment_plan(Client, Input, Options)

View Source
Link to this function

describe_edge_packaging_job(Client, Input)

View Source
A description of edge packaging jobs.
Link to this function

describe_edge_packaging_job(Client, Input, Options)

View Source
Link to this function

describe_endpoint(Client, Input)

View Source
Returns the description of an endpoint.
Link to this function

describe_endpoint(Client, Input, Options)

View Source
Link to this function

describe_endpoint_config(Client, Input)

View Source
Returns the description of an endpoint configuration created using the CreateEndpointConfig API.
Link to this function

describe_endpoint_config(Client, Input, Options)

View Source
Link to this function

describe_experiment(Client, Input)

View Source
Provides a list of an experiment's properties.
Link to this function

describe_experiment(Client, Input, Options)

View Source
Link to this function

describe_feature_group(Client, Input)

View Source

Use this operation to describe a FeatureGroup.

The response includes information on the creation time, FeatureGroup name, the unique identifier for each FeatureGroup, and more.
Link to this function

describe_feature_group(Client, Input, Options)

View Source
Link to this function

describe_feature_metadata(Client, Input)

View Source
Shows the metadata for a feature within a feature group.
Link to this function

describe_feature_metadata(Client, Input, Options)

View Source
Link to this function

describe_flow_definition(Client, Input)

View Source
Returns information about the specified flow definition.
Link to this function

describe_flow_definition(Client, Input, Options)

View Source
Link to this function

describe_hub(Client, Input)

View Source

Describe a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

describe_hub(Client, Input, Options)

View Source
Link to this function

describe_hub_content(Client, Input)

View Source

Describe the content of a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

describe_hub_content(Client, Input, Options)

View Source
Link to this function

describe_human_task_ui(Client, Input)

View Source
Returns information about the requested human task user interface (worker task template).
Link to this function

describe_human_task_ui(Client, Input, Options)

View Source
Link to this function

describe_hyper_parameter_tuning_job(Client, Input)

View Source

Returns a description of a hyperparameter tuning job, depending on the fields selected.

These fields can include the name, Amazon Resource Name (ARN), job status of your tuning job, and more.
Link to this function

describe_hyper_parameter_tuning_job(Client, Input, Options)

View Source
Link to this function

describe_image(Client, Input)

View Source
Describes a SageMaker image.
Link to this function

describe_image(Client, Input, Options)

View Source
Link to this function

describe_image_version(Client, Input)

View Source
Describes a version of a SageMaker image.
Link to this function

describe_image_version(Client, Input, Options)

View Source
Link to this function

describe_inference_component(Client, Input)

View Source
Returns information about an inference component.
Link to this function

describe_inference_component(Client, Input, Options)

View Source
Link to this function

describe_inference_experiment(Client, Input)

View Source
Returns details about an inference experiment.
Link to this function

describe_inference_experiment(Client, Input, Options)

View Source
Link to this function

describe_inference_recommendations_job(Client, Input)

View Source

Provides the results of the Inference Recommender job.

One or more recommendation jobs are returned.
Link to this function

describe_inference_recommendations_job(Client, Input, Options)

View Source
Link to this function

describe_labeling_job(Client, Input)

View Source
Gets information about a labeling job.
Link to this function

describe_labeling_job(Client, Input, Options)

View Source
Link to this function

describe_lineage_group(Client, Input)

View Source

Provides a list of properties for the requested lineage group.

For more information, see Cross-Account Lineage Tracking : https://docs.aws.amazon.com/sagemaker/latest/dg/xaccount-lineage-tracking.html in the Amazon SageMaker Developer Guide.
Link to this function

describe_lineage_group(Client, Input, Options)

View Source
Link to this function

describe_model(Client, Input)

View Source
Describes a model that you created using the CreateModel API.
Link to this function

describe_model(Client, Input, Options)

View Source
Link to this function

describe_model_bias_job_definition(Client, Input)

View Source
Returns a description of a model bias job definition.
Link to this function

describe_model_bias_job_definition(Client, Input, Options)

View Source
Link to this function

describe_model_card(Client, Input)

View Source
Describes the content, creation time, and security configuration of an Amazon SageMaker Model Card.
Link to this function

describe_model_card(Client, Input, Options)

View Source
Link to this function

describe_model_card_export_job(Client, Input)

View Source
Describes an Amazon SageMaker Model Card export job.
Link to this function

describe_model_card_export_job(Client, Input, Options)

View Source
Link to this function

describe_model_explainability_job_definition(Client, Input)

View Source
Returns a description of a model explainability job definition.
Link to this function

describe_model_explainability_job_definition(Client, Input, Options)

View Source
Link to this function

describe_model_package(Client, Input)

View Source

Returns a description of the specified model package, which is used to create SageMaker models or list them on Amazon Web Services Marketplace.

To create models in SageMaker, buyers can subscribe to model packages listed on Amazon Web Services Marketplace.
Link to this function

describe_model_package(Client, Input, Options)

View Source
Link to this function

describe_model_package_group(Client, Input)

View Source
Gets a description for the specified model group.
Link to this function

describe_model_package_group(Client, Input, Options)

View Source
Link to this function

describe_model_quality_job_definition(Client, Input)

View Source
Returns a description of a model quality job definition.
Link to this function

describe_model_quality_job_definition(Client, Input, Options)

View Source
Link to this function

describe_monitoring_schedule(Client, Input)

View Source
Describes the schedule for a monitoring job.
Link to this function

describe_monitoring_schedule(Client, Input, Options)

View Source
Link to this function

describe_notebook_instance(Client, Input)

View Source
Returns information about a notebook instance.
Link to this function

describe_notebook_instance(Client, Input, Options)

View Source
Link to this function

describe_notebook_instance_lifecycle_config(Client, Input)

View Source

Returns a description of a notebook instance lifecycle configuration.

For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance: https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html.
Link to this function

describe_notebook_instance_lifecycle_config(Client, Input, Options)

View Source
Link to this function

describe_pipeline(Client, Input)

View Source
Describes the details of a pipeline.
Link to this function

describe_pipeline(Client, Input, Options)

View Source
Link to this function

describe_pipeline_definition_for_execution(Client, Input)

View Source
Describes the details of an execution's pipeline definition.
Link to this function

describe_pipeline_definition_for_execution(Client, Input, Options)

View Source
Link to this function

describe_pipeline_execution(Client, Input)

View Source
Describes the details of a pipeline execution.
Link to this function

describe_pipeline_execution(Client, Input, Options)

View Source
Link to this function

describe_processing_job(Client, Input)

View Source
Returns a description of a processing job.
Link to this function

describe_processing_job(Client, Input, Options)

View Source
Link to this function

describe_project(Client, Input)

View Source
Describes the details of a project.
Link to this function

describe_project(Client, Input, Options)

View Source
Link to this function

describe_space(Client, Input)

View Source
Describes the space.
Link to this function

describe_space(Client, Input, Options)

View Source
Link to this function

describe_studio_lifecycle_config(Client, Input)

View Source
Describes the Amazon SageMaker Studio Lifecycle Configuration.
Link to this function

describe_studio_lifecycle_config(Client, Input, Options)

View Source
Link to this function

describe_subscribed_workteam(Client, Input)

View Source

Gets information about a work team provided by a vendor.

It returns details about the subscription with a vendor in the Amazon Web Services Marketplace.
Link to this function

describe_subscribed_workteam(Client, Input, Options)

View Source
Link to this function

describe_training_job(Client, Input)

View Source

Returns information about a training job.

Some of the attributes below only appear if the training job successfully starts. If the training job fails, TrainingJobStatus is Failed and, depending on the FailureReason, attributes like TrainingStartTime, TrainingTimeInSeconds, TrainingEndTime, and BillableTimeInSeconds may not be present in the response.
Link to this function

describe_training_job(Client, Input, Options)

View Source
Link to this function

describe_transform_job(Client, Input)

View Source
Returns information about a transform job.
Link to this function

describe_transform_job(Client, Input, Options)

View Source
Link to this function

describe_trial(Client, Input)

View Source
Provides a list of a trial's properties.
Link to this function

describe_trial(Client, Input, Options)

View Source
Link to this function

describe_trial_component(Client, Input)

View Source
Provides a list of a trial component's properties.
Link to this function

describe_trial_component(Client, Input, Options)

View Source
Link to this function

describe_user_profile(Client, Input)

View Source

Describes a user profile.

For more information, see CreateUserProfile.
Link to this function

describe_user_profile(Client, Input, Options)

View Source
Link to this function

describe_workforce(Client, Input)

View Source

Lists private workforce information, including workforce name, Amazon Resource Name (ARN), and, if applicable, allowed IP address ranges (CIDRs: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html).

Allowable IP address ranges are the IP addresses that workers can use to access tasks.

This operation applies only to private workforces.
Link to this function

describe_workforce(Client, Input, Options)

View Source
Link to this function

describe_workteam(Client, Input)

View Source

Gets information about a specific work team.

You can see information such as the create date, the last updated date, membership information, and the work team's Amazon Resource Name (ARN).
Link to this function

describe_workteam(Client, Input, Options)

View Source
Link to this function

disable_sagemaker_servicecatalog_portfolio(Client, Input)

View Source

Disables using Service Catalog in SageMaker.

Service Catalog is used to create SageMaker projects.
Link to this function

disable_sagemaker_servicecatalog_portfolio(Client, Input, Options)

View Source
Link to this function

disassociate_trial_component(Client, Input)

View Source

Disassociates a trial component from a trial.

This doesn't affect other trials the component is associated with. Before you can delete a component, you must disassociate the component from all trials it is associated with. To associate a trial component with a trial, call the AssociateTrialComponent: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AssociateTrialComponent.html API.

To get a list of the trials a component is associated with, use the Search: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_Search.html API. Specify ExperimentTrialComponent for the Resource parameter. The list appears in the response under Results.TrialComponent.Parents.
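
A minimal sketch of disassociating a component and then using the Search API, as described above, to check which trials it is still associated with; the trial and component names are placeholders, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; trial and trial component names are placeholders.
    {ok, _Result, _HttpResponse} = aws_sagemaker:disassociate_trial_component(Client, #{
      <<"TrialComponentName">> => <<"my-trial-component">>,
      <<"TrialName">> => <<"my-trial">>
    }),
    %% Look up the component's remaining parent trials via the Search API.
    {ok, SearchResult, _} = aws_sagemaker:search(Client, #{
      <<"Resource">> => <<"ExperimentTrialComponent">>,
      <<"SearchExpression">> => #{<<"Filters">> => [#{
        <<"Name">> => <<"TrialComponentName">>,
        <<"Operator">> => <<"Equals">>,
        <<"Value">> => <<"my-trial-component">>
      }]}
    }).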
Link to this function

disassociate_trial_component(Client, Input, Options)

View Source
Link to this function

enable_sagemaker_servicecatalog_portfolio(Client, Input)

View Source

Enables using Service Catalog in SageMaker.

Service Catalog is used to create SageMaker projects.
Link to this function

enable_sagemaker_servicecatalog_portfolio(Client, Input, Options)

View Source
Link to this function

get_device_fleet_report(Client, Input)

View Source
Describes a fleet.
Link to this function

get_device_fleet_report(Client, Input, Options)

View Source
Link to this function

get_lineage_group_policy(Client, Input)

View Source
The resource policy for the lineage group.
Link to this function

get_lineage_group_policy(Client, Input, Options)

View Source
Link to this function

get_model_package_group_policy(Client, Input)

View Source

Gets a resource policy that manages access for a model group.

For information about resource policies, see Identity-based policies and resource-based policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html in the Amazon Web Services Identity and Access Management User Guide.
Link to this function

get_model_package_group_policy(Client, Input, Options)

View Source
Link to this function

get_sagemaker_servicecatalog_portfolio_status(Client, Input)

View Source

Gets the status of Service Catalog in SageMaker.

Service Catalog is used to create SageMaker projects.
Link to this function

get_sagemaker_servicecatalog_portfolio_status(Client, Input, Options)

View Source
Link to this function

get_scaling_configuration_recommendation(Client, Input)

View Source

Starts an Amazon SageMaker Inference Recommender autoscaling recommendation job.

Returns recommendations for autoscaling policies that you can apply to your SageMaker endpoint.
Link to this function

get_scaling_configuration_recommendation(Client, Input, Options)

View Source
Link to this function

get_search_suggestions(Client, Input)

View Source

An auto-complete API for the search functionality in the SageMaker console.

It returns suggestions of possible matches for the property name to use in Search queries. Provides suggestions for HyperParameters, Tags, and Metrics.
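
A minimal sketch that asks for property-name suggestions for training jobs; the PropertyNameHint value is a placeholder prefix, and Client is an aws_client handle created with aws_client:make_client/3.

    %% Minimal sketch; the hint prefix is illustrative only.
    Input = #{
      <<"Resource">> => <<"TrainingJob">>,
      <<"SuggestionQuery">> => #{
        <<"PropertyNameQuery">> => #{<<"PropertyNameHint">> => <<"Training">>}
      }
    },
    {ok, Suggestions, _HttpResponse} = aws_sagemaker:get_search_suggestions(Client, Input).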
Link to this function

get_search_suggestions(Client, Input, Options)

View Source
Link to this function

import_hub_content(Client, Input)

View Source

Import hub content.

Hub APIs are only callable through SageMaker Studio.
Link to this function

import_hub_content(Client, Input, Options)

View Source
Link to this function

list_actions(Client, Input)

View Source
Lists the actions in your account and their properties.
Link to this function

list_actions(Client, Input, Options)

View Source
Link to this function

list_algorithms(Client, Input)

View Source
Lists the machine learning algorithms that have been created.
Link to this function

list_algorithms(Client, Input, Options)

View Source
Link to this function

list_aliases(Client, Input)

View Source
Lists the aliases of a specified image or image version.
Link to this function

list_aliases(Client, Input, Options)

View Source
Link to this function

list_app_image_configs(Client, Input)

View Source

Lists the AppImageConfigs in your account and their properties.

The list can be filtered by creation time or modified time, and whether the AppImageConfig name contains a specified string.
Link to this function

list_app_image_configs(Client, Input, Options)

View Source
Link to this function

list_apps(Client, Input)

View Source
Lists apps.
Link to this function

list_apps(Client, Input, Options)

View Source
Link to this function

list_artifacts(Client, Input)

View Source
Lists the artifacts in your account and their properties.
Link to this function

list_artifacts(Client, Input, Options)

View Source
Link to this function

list_associations(Client, Input)

View Source
Lists the associations in your account and their properties.
Link to this function

list_associations(Client, Input, Options)

View Source
Link to this function

list_auto_ml_jobs(Client, Input)

View Source
Request a list of jobs.
Link to this function

list_auto_ml_jobs(Client, Input, Options)

View Source
Link to this function

list_candidates_for_auto_ml_job(Client, Input)

View Source
List the candidates created for the job.
Link to this function

list_candidates_for_auto_ml_job(Client, Input, Options)

View Source
Link to this function

list_cluster_nodes(Client, Input)

View Source
Retrieves the list of instances (also referred to as nodes) in a SageMaker HyperPod cluster.
Link to this function

list_cluster_nodes(Client, Input, Options)

View Source
Link to this function

list_clusters(Client, Input)

View Source
Retrieves the list of SageMaker HyperPod clusters.
Link to this function

list_clusters(Client, Input, Options)

View Source
Link to this function

list_code_repositories(Client, Input)

View Source
Gets a list of the Git repositories in your account.
Link to this function

list_code_repositories(Client, Input, Options)

View Source
Link to this function

list_compilation_jobs(Client, Input)

View Source

Lists model compilation jobs that satisfy various filters.

To create a model compilation job, use CreateCompilationJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateCompilationJob.html. To get information about a particular model compilation job you have created, use DescribeCompilationJob: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeCompilationJob.html.
Link to this function

list_compilation_jobs(Client, Input, Options)

View Source
Link to this function

list_contexts(Client, Input)

View Source
Lists the contexts in your account and their properties.
Link to this function

list_contexts(Client, Input, Options)

View Source
Link to this function

list_data_quality_job_definitions(Client, Input)

View Source
Lists the data quality job definitions in your account.
Link to this function

list_data_quality_job_definitions(Client, Input, Options)

View Source
Link to this function

list_device_fleets(Client, Input)

View Source
Returns a list of devices in the fleet.
Link to this function

list_device_fleets(Client, Input, Options)

View Source
Link to this function

list_devices(Client, Input)

View Source
A list of devices.
Link to this function

list_devices(Client, Input, Options)

View Source
Link to this function

list_domains(Client, Input)

View Source
Lists the domains.
Link to this function

list_domains(Client, Input, Options)

View Source
Link to this function

list_edge_deployment_plans(Client, Input)

View Source
Lists all edge deployment plans.
Link to this function

list_edge_deployment_plans(Client, Input, Options)

View Source
Link to this function

list_edge_packaging_jobs(Client, Input)

View Source
Returns a list of edge packaging jobs.
Link to this function

list_edge_packaging_jobs(Client, Input, Options)

View Source
Link to this function

list_endpoint_configs(Client, Input)

View Source
Lists endpoint configurations.
Link to this function

list_endpoint_configs(Client, Input, Options)

View Source
Link to this function

list_endpoints(Client, Input)

View Source
Lists endpoints.
Link to this function

list_endpoints(Client, Input, Options)

View Source
Link to this function

list_experiments(Client, Input)

View Source

Lists all the experiments in your account.

The list can be filtered to show only experiments that were created in a specific time range. The list can be sorted by experiment name or creation time.
Link to this function

list_experiments(Client, Input, Options)

View Source
Link to this function

list_feature_groups(Client, Input)

View Source
List FeatureGroups based on the given filter and order.
Link to this function

list_feature_groups(Client, Input, Options)

View Source
Link to this function

list_flow_definitions(Client, Input)

View Source
Returns information about the flow definitions in your account.
Link to this function

list_flow_definitions(Client, Input, Options)

View Source
Link to this function

list_hub_content_versions(Client, Input)

View Source

List hub content versions.

Hub APIs are only callable through SageMaker Studio.
Link to this function

list_hub_content_versions(Client, Input, Options)

View Source
Link to this function

list_hub_contents(Client, Input)

View Source

List the contents of a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

list_hub_contents(Client, Input, Options)

View Source
Link to this function

list_hubs(Client, Input)

View Source

List all existing hubs.

Hub APIs are only callable through SageMaker Studio.
Link to this function

list_hubs(Client, Input, Options)

View Source
Link to this function

list_human_task_uis(Client, Input)

View Source
Returns information about the human task user interfaces in your account.
Link to this function

list_human_task_uis(Client, Input, Options)

View Source
Link to this function

list_hyper_parameter_tuning_jobs(Client, Input)

View Source
Gets a list of HyperParameterTuningJobSummary: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HyperParameterTuningJobSummary.html objects that describe the hyperparameter tuning jobs launched in your account.
Link to this function

list_hyper_parameter_tuning_jobs(Client, Input, Options)

View Source
Link to this function

list_image_versions(Client, Input)

View Source

Lists the versions of a specified image and their properties.

The list can be filtered by creation time or modified time.
Link to this function

list_image_versions(Client, Input, Options)

View Source
Link to this function

list_images(Client, Input)

View Source

Lists the images in your account and their properties.

The list can be filtered by creation time or modified time, and whether the image name contains a specified string.
Link to this function

list_images(Client, Input, Options)

View Source
Link to this function

list_inference_components(Client, Input)

View Source
Lists the inference components in your account and their properties.
Link to this function

list_inference_components(Client, Input, Options)

View Source
Link to this function

list_inference_experiments(Client, Input)

View Source
Returns the list of all inference experiments.
Link to this function

list_inference_experiments(Client, Input, Options)

View Source
Link to this function

list_inference_recommendations_job_steps(Client, Input)

View Source

Returns a list of the subtasks for an Inference Recommender job.

The supported subtasks are benchmarks, which evaluate the performance of your model on different instance types.
Link to this function

list_inference_recommendations_job_steps(Client, Input, Options)

View Source
Link to this function

list_inference_recommendations_jobs(Client, Input)

View Source
Lists recommendation jobs that satisfy various filters.
Link to this function

list_inference_recommendations_jobs(Client, Input, Options)

View Source
Link to this function

list_labeling_jobs(Client, Input)

View Source
Gets a list of labeling jobs.
Link to this function

list_labeling_jobs(Client, Input, Options)

View Source
Link to this function

list_labeling_jobs_for_workteam(Client, Input)

View Source
Gets a list of labeling jobs assigned to a specified work team.
Link to this function

list_labeling_jobs_for_workteam(Client, Input, Options)

View Source
Link to this function

list_lineage_groups(Client, Input)

View Source

Lists the lineage groups shared with your Amazon Web Services account.

For more information, see Cross-Account Lineage Tracking : https://docs.aws.amazon.com/sagemaker/latest/dg/xaccount-lineage-tracking.html in the Amazon SageMaker Developer Guide.
Link to this function

list_lineage_groups(Client, Input, Options)

View Source
Link to this function

list_model_bias_job_definitions(Client, Input)

View Source
Lists model bias job definitions that satisfy various filters.
Link to this function

list_model_bias_job_definitions(Client, Input, Options)

View Source
Link to this function

list_model_card_export_jobs(Client, Input)

View Source
List the export jobs for the Amazon SageMaker Model Card.
Link to this function

list_model_card_export_jobs(Client, Input, Options)

View Source
Link to this function

list_model_card_versions(Client, Input)

View Source
List existing versions of an Amazon SageMaker Model Card.
Link to this function

list_model_card_versions(Client, Input, Options)

View Source
Link to this function

list_model_cards(Client, Input)

View Source
List existing model cards.
Link to this function

list_model_cards(Client, Input, Options)

View Source
Link to this function

list_model_explainability_job_definitions(Client, Input)

View Source
Lists model explainability job definitions that satisfy various filters.
Link to this function

list_model_explainability_job_definitions(Client, Input, Options)

View Source
Link to this function

list_model_metadata(Client, Input)

View Source
Lists the domain, framework, task, and model name of standard machine learning models found in common model zoos.
Link to this function

list_model_metadata(Client, Input, Options)

View Source
Link to this function

list_model_package_groups(Client, Input)

View Source
Gets a list of the model groups in your Amazon Web Services account.
Link to this function

list_model_package_groups(Client, Input, Options)

View Source
Link to this function

list_model_packages(Client, Input)

View Source
Lists the model packages that have been created.
Link to this function

list_model_packages(Client, Input, Options)

View Source
Link to this function

list_model_quality_job_definitions(Client, Input)

View Source
Gets a list of model quality monitoring job definitions in your account.
Link to this function

list_model_quality_job_definitions(Client, Input, Options)

View Source
Link to this function

list_models(Client, Input)

View Source
Lists models created with the CreateModel API.
Link to this function

list_models(Client, Input, Options)

View Source
Link to this function

list_monitoring_alert_history(Client, Input)

View Source
Gets a list of past alerts in a model monitoring schedule.
Link to this function

list_monitoring_alert_history(Client, Input, Options)

View Source
Link to this function

list_monitoring_alerts(Client, Input)

View Source
Gets the alerts for a single monitoring schedule.
Link to this function

list_monitoring_alerts(Client, Input, Options)

View Source
Link to this function

list_monitoring_executions(Client, Input)

View Source
Returns a list of all monitoring job executions.
Link to this function

list_monitoring_executions(Client, Input, Options)

View Source
Link to this function

list_monitoring_schedules(Client, Input)

View Source
Returns a list of all monitoring schedules.
Link to this function

list_monitoring_schedules(Client, Input, Options)

View Source
Link to this function

list_notebook_instance_lifecycle_configs(Client, Input)

View Source
Lists notebook instance lifecycle configurations created with the CreateNotebookInstanceLifecycleConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateNotebookInstanceLifecycleConfig.html API.
Link to this function

list_notebook_instance_lifecycle_configs(Client, Input, Options)

View Source
Link to this function

list_notebook_instances(Client, Input)

View Source
Returns a list of the SageMaker notebook instances in the requester's account in an Amazon Web Services Region.
Link to this function

list_notebook_instances(Client, Input, Options)

View Source
Link to this function

list_pipeline_execution_steps(Client, Input)

View Source
Gets a list of PipelineExecutionStep objects.
Link to this function

list_pipeline_execution_steps(Client, Input, Options)

View Source
Link to this function

list_pipeline_executions(Client, Input)

View Source
Gets a list of the pipeline executions.
Link to this function

list_pipeline_executions(Client, Input, Options)

View Source
Link to this function

list_pipeline_parameters_for_execution(Client, Input)

View Source
Gets a list of parameters for a pipeline execution.
Link to this function

list_pipeline_parameters_for_execution(Client, Input, Options)

View Source
Link to this function

list_pipelines(Client, Input)

View Source
Gets a list of pipelines.
Link to this function

list_pipelines(Client, Input, Options)

View Source
Link to this function

list_processing_jobs(Client, Input)

View Source
Lists processing jobs that satisfy various filters.
Link to this function

list_processing_jobs(Client, Input, Options)

View Source
Link to this function

list_projects(Client, Input)

View Source
Gets a list of the projects in an Amazon Web Services account.
Link to this function

list_projects(Client, Input, Options)

View Source
Link to this function

list_resource_catalogs(Client, Input)

View Source

Lists Amazon SageMaker Catalogs based on given filters and orders.

The maximum number of ResourceCatalogs viewable is 1000.
Link to this function

list_resource_catalogs(Client, Input, Options)

View Source
Link to this function

list_spaces(Client, Input)

View Source
Lists spaces.
Link to this function

list_spaces(Client, Input, Options)

View Source
Link to this function

list_stage_devices(Client, Input)

View Source
Lists devices allocated to the stage, containing detailed device information and deployment status.
Link to this function

list_stage_devices(Client, Input, Options)

View Source
Link to this function

list_studio_lifecycle_configs(Client, Input)

View Source
Lists the Amazon SageMaker Studio Lifecycle Configurations in your Amazon Web Services Account.
Link to this function

list_studio_lifecycle_configs(Client, Input, Options)

View Source
Link to this function

list_subscribed_workteams(Client, Input)

View Source

Gets a list of the work teams that you are subscribed to in the Amazon Web Services Marketplace.

The list may be empty if no work team satisfies the filter specified in the NameContains parameter.
Link to this function

list_subscribed_workteams(Client, Input, Options)

View Source
Link to this function

list_tags(Client, Input)

View Source
Returns the tags for the specified SageMaker resource.
Link to this function

list_tags(Client, Input, Options)

View Source
Link to this function

list_training_jobs(Client, Input)

View Source

Lists training jobs.

When StatusEquals and MaxResults are set at the same time, the MaxResults number of training jobs is retrieved first, ignoring the StatusEquals parameter. That set is then filtered by the StatusEquals parameter and returned in the response.

For example, if ListTrainingJobs is invoked with the following parameters:

{ ... MaxResults: 100, StatusEquals: InProgress ... }

First, 100 training jobs with any status, including those other than InProgress, are selected (sorted by creation time, from the most recent to the oldest). Next, those with a status of InProgress are returned.

You can quickly test the API using the following Amazon Web Services CLI code.

aws sagemaker list-training-jobs --max-results 100 --status-equals InProgress
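
A minimal sketch of the same request through this module (Client and return shape as assumed in the list_experiments sketch above):

%% Erlang equivalent of the CLI call above.
Input = #{<<"MaxResults">> => 100, <<"StatusEquals">> => <<"InProgress">>},
{ok, #{<<"TrainingJobSummaries">> := Jobs}, _HttpResponse} =
    aws_sagemaker:list_training_jobs(Client, Input),
io:format("~p in-progress training jobs returned~n", [length(Jobs)]).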
Link to this function

list_training_jobs(Client, Input, Options)

View Source
Link to this function

list_training_jobs_for_hyper_parameter_tuning_job(Client, Input)

View Source
Gets a list of TrainingJobSummary: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobSummary.html objects that describe the training jobs that a hyperparameter tuning job launched.
Link to this function

list_training_jobs_for_hyper_parameter_tuning_job(Client, Input, Options)

View Source
Link to this function

list_transform_jobs(Client, Input)

View Source
Lists transform jobs.
Link to this function

list_transform_jobs(Client, Input, Options)

View Source
Link to this function

list_trial_components(Client, Input)

View Source

Lists the trial components in your account.

You can sort the list by trial component name or creation time. You can filter the list to show only components that were created in a specific time range. You can also filter on one of the following:

  • ExperimentName

  • SourceArn

  • TrialName
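
For example, a sketch that lists only the components of a single experiment (hypothetical experiment name; Client and return shape as assumed in the list_experiments sketch above):

Input = #{<<"ExperimentName">> => <<"my-experiment">>,
          <<"SortBy">> => <<"CreationTime">>,
          <<"SortOrder">> => <<"Descending">>},
{ok, #{<<"TrialComponentSummaries">> := Components}, _HttpResponse} =
    aws_sagemaker:list_trial_components(Client, Input),
io:format("~p trial components found~n", [length(Components)]).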

Link to this function

list_trial_components(Client, Input, Options)

View Source
Link to this function

list_trials(Client, Input)

View Source

Lists the trials in your account.

Specify an experiment name to limit the list to the trials that are part of that experiment. Specify a trial component name to limit the list to the trials that are associated with that trial component. The list can be filtered to show only trials that were created in a specific time range. The list can be sorted by trial name or creation time.
Link to this function

list_trials(Client, Input, Options)

View Source
Link to this function

list_user_profiles(Client, Input)

View Source
Lists user profiles.
Link to this function

list_user_profiles(Client, Input, Options)

View Source
Link to this function

list_workforces(Client, Input)

View Source

Use this operation to list all private and vendor workforces in an Amazon Web Services Region.

Note that you can only have one private workforce per Amazon Web Services Region.
Link to this function

list_workforces(Client, Input, Options)

View Source
Link to this function

list_workteams(Client, Input)

View Source

Gets a list of private work teams that you have defined in a region.

The list may be empty if no work team satisfies the filter specified in the NameContains parameter.
Link to this function

list_workteams(Client, Input, Options)

View Source
Link to this function

put_model_package_group_policy(Client, Input)

View Source

Adds a resource policy to control access to a model group.

For information about resource policies, see Identity-based policies and resource-based policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html in the Amazon Web Services Identity and Access Management User Guide.
Link to this function

put_model_package_group_policy(Client, Input, Options)

View Source
Link to this function

query_lineage(Client, Input)

View Source

Use this action to inspect your lineage and discover relationships between entities.

For more information, see Querying Lineage Entities: https://docs.aws.amazon.com/sagemaker/latest/dg/querying-lineage-entities.html in the Amazon SageMaker Developer Guide.
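
A minimal sketch, assuming an existing Client (see the list_experiments sketch) and a hypothetical artifact ARN; StartArns, Direction, and MaxDepth are QueryLineage request fields:

Input = #{<<"StartArns">> => [<<"arn:aws:sagemaker:us-east-1:123456789012:artifact/example">>],
          <<"Direction">> => <<"Descendants">>,
          <<"MaxDepth">> => 3},
{ok, Result, _HttpResponse} = aws_sagemaker:query_lineage(Client, Input),
%% Vertices/Edges describe the discovered lineage graph; default to [] if absent.
Vertices = maps:get(<<"Vertices">>, Result, []),
io:format("~p lineage vertices returned~n", [length(Vertices)]).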
Link to this function

query_lineage(Client, Input, Options)

View Source
Link to this function

register_devices(Client, Input)

View Source
Register devices.
Link to this function

register_devices(Client, Input, Options)

View Source
Link to this function

render_ui_template(Client, Input)

View Source
Renders the UI template so that you can preview the worker's experience.
Link to this function

render_ui_template(Client, Input, Options)

View Source
Link to this function

retry_pipeline_execution(Client, Input)

View Source
Retry the execution of the pipeline.
Link to this function

retry_pipeline_execution(Client, Input, Options)

View Source
Link to this function

search(Client, Input)

View Source

Finds SageMaker resources that match a search query.

Matching resources are returned as a list of SearchRecord objects in the response. You can sort the search results by any resource property in ascending or descending order.

You can query against the following value types: numeric, text, Boolean, and timestamp.

The Search API may provide access to otherwise restricted data. See Amazon SageMaker API Permissions: Actions, Permissions, and Resources Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html for more information.
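
A sketch of a Search request that finds completed training jobs, newest first (Client as assumed in the list_experiments sketch; filter values are illustrative):

Input = #{<<"Resource">> => <<"TrainingJob">>,
          <<"SearchExpression">> =>
              #{<<"Filters">> =>
                    [#{<<"Name">> => <<"TrainingJobStatus">>,
                       <<"Operator">> => <<"Equals">>,
                       <<"Value">> => <<"Completed">>}]},
          <<"SortBy">> => <<"CreationTime">>,
          <<"SortOrder">> => <<"Descending">>},
{ok, Result, _HttpResponse} = aws_sagemaker:search(Client, Input),
Records = maps:get(<<"Results">>, Result, []),
io:format("~p matching SearchRecord objects~n", [length(Records)]).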
Link to this function

search(Client, Input, Options)

View Source
Link to this function

send_pipeline_execution_step_failure(Client, Input)

View Source

Notifies the pipeline that the execution of a callback step failed, along with a message describing why.

When a callback step is run, the pipeline generates a callback token and includes the token in a message sent to Amazon Simple Queue Service (Amazon SQS).
Link to this function

send_pipeline_execution_step_failure(Client, Input, Options)

View Source
Link to this function

send_pipeline_execution_step_success(Client, Input)

View Source

Notifies the pipeline that the execution of a callback step succeeded and provides a list of the step's output parameters.

When a callback step is run, the pipeline generates a callback token and includes the token in a message sent to Amazon Simple Queue Service (Amazon SQS).
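
A minimal sketch of acknowledging a callback step, assuming the callback token has already been read from that SQS message (the bucket and parameter names are hypothetical); send_pipeline_execution_step_failure is called the same way, with a FailureReason instead of OutputParameters:

CallbackToken = <<"token-from-the-sqs-message">>,
Input = #{<<"CallbackToken">> => CallbackToken,
          <<"OutputParameters">> =>
              [#{<<"Name">> => <<"report_uri">>,
                 <<"Value">> => <<"s3://amzn-s3-demo-bucket/report.json">>}]},
{ok, _Result, _HttpResponse} = aws_sagemaker:send_pipeline_execution_step_success(Client, Input).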
Link to this function

send_pipeline_execution_step_success(Client, Input, Options)

View Source
Link to this function

start_edge_deployment_stage(Client, Input)

View Source
Starts a stage in an edge deployment plan.
Link to this function

start_edge_deployment_stage(Client, Input, Options)

View Source
Link to this function

start_inference_experiment(Client, Input)

View Source
Starts an inference experiment.
Link to this function

start_inference_experiment(Client, Input, Options)

View Source
Link to this function

start_monitoring_schedule(Client, Input)

View Source

Starts a previously stopped monitoring schedule.

By default, when you successfully create a new schedule, the status of a monitoring schedule is scheduled.
Link to this function

start_monitoring_schedule(Client, Input, Options)

View Source
Link to this function

start_notebook_instance(Client, Input)

View Source

Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume.

After configuring the notebook instance, SageMaker sets the notebook instance status to InService. A notebook instance's status must be InService before you can connect to your Jupyter notebook.
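
A sketch that starts an instance and polls describe_notebook_instance until it reaches InService (hypothetical instance name; the 15-second polling interval is arbitrary; Client as assumed earlier):

Name = <<"my-notebook-instance">>,
{ok, _, _} = aws_sagemaker:start_notebook_instance(Client, #{<<"NotebookInstanceName">> => Name}),
Wait = fun Loop() ->
    {ok, Desc, _} =
        aws_sagemaker:describe_notebook_instance(Client, #{<<"NotebookInstanceName">> => Name}),
    case maps:get(<<"NotebookInstanceStatus">>, Desc) of
        <<"InService">> -> ok;
        _Other -> timer:sleep(15000), Loop()
    end
end,
Wait().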
Link to this function

start_notebook_instance(Client, Input, Options)

View Source
Link to this function

start_pipeline_execution(Client, Input)

View Source
Starts a pipeline execution.
Link to this function

start_pipeline_execution(Client, Input, Options)

View Source
Link to this function

stop_auto_ml_job(Client, Input)

View Source
Forces a running AutoML job to shut down.
Link to this function

stop_auto_ml_job(Client, Input, Options)

View Source
Link to this function

stop_compilation_job(Client, Input)

View Source

Stops a model compilation job.

To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal. This gracefully shuts the job down. If the job hasn't stopped, it sends the SIGKILL signal.

When it receives a StopCompilationJob request, Amazon SageMaker changes the CompilationJobStatus of the job to Stopping. After Amazon SageMaker stops the job, it sets the CompilationJobStatus to Stopped.
Link to this function

stop_compilation_job(Client, Input, Options)

View Source
Link to this function

stop_edge_deployment_stage(Client, Input)

View Source
Stops a stage in an edge deployment plan.
Link to this function

stop_edge_deployment_stage(Client, Input, Options)

View Source
Link to this function

stop_edge_packaging_job(Client, Input)

View Source
Request to stop an edge packaging job.
Link to this function

stop_edge_packaging_job(Client, Input, Options)

View Source
Link to this function

stop_hyper_parameter_tuning_job(Client, Input)

View Source

Stops a running hyperparameter tuning job and all running training jobs that the tuning job launched.

All model artifacts output from the training jobs are stored in Amazon Simple Storage Service (Amazon S3). All data that the training jobs write to Amazon CloudWatch Logs are still available in CloudWatch. After the tuning job moves to the Stopped state, it releases all reserved resources for the tuning job.
Link to this function

stop_hyper_parameter_tuning_job(Client, Input, Options)

View Source
Link to this function

stop_inference_experiment(Client, Input)

View Source
Stops an inference experiment.
Link to this function

stop_inference_experiment(Client, Input, Options)

View Source
Link to this function

stop_inference_recommendations_job(Client, Input)

View Source
Stops an Inference Recommender job.
Link to this function

stop_inference_recommendations_job(Client, Input, Options)

View Source
Link to this function

stop_labeling_job(Client, Input)

View Source

Stops a running labeling job.

A job that is stopped cannot be restarted. Any results obtained before the job is stopped are placed in the Amazon S3 output bucket.
Link to this function

stop_labeling_job(Client, Input, Options)

View Source
Link to this function

stop_monitoring_schedule(Client, Input)

View Source
Stops a previously started monitoring schedule.
Link to this function

stop_monitoring_schedule(Client, Input, Options)

View Source
Link to this function

stop_notebook_instance(Client, Input)

View Source

Terminates the ML compute instance.

Before terminating the instance, SageMaker disconnects the ML storage volume from it. SageMaker preserves the ML storage volume. SageMaker stops charging you for the ML compute instance when you call StopNotebookInstance.

To access data on the ML storage volume for a notebook instance that has been terminated, call the StartNotebookInstance API. StartNotebookInstance launches another ML compute instance, configures it, and attaches the preserved ML storage volume so you can continue your work.
Link to this function

stop_notebook_instance(Client, Input, Options)

View Source
Link to this function

stop_pipeline_execution(Client, Input)

View Source

Stops a pipeline execution.

Callback Step

A pipeline execution won't stop while a callback step is running. When you call StopPipelineExecution on a pipeline execution with a running callback step, SageMaker Pipelines sends an additional Amazon SQS message to the specified SQS queue. The body of the SQS message contains a "Status" field which is set to "Stopping".

You should add logic to your Amazon SQS message consumer to take any needed action (for example, resource cleanup) upon receipt of the message followed by a call to SendPipelineExecutionStepSuccess or SendPipelineExecutionStepFailure.

Only when SageMaker Pipelines receives one of these calls will it stop the pipeline execution.

Lambda Step

A pipeline execution can't be stopped while a lambda step is running because the Lambda function invoked by the lambda step can't be stopped. If you attempt to stop the execution while the Lambda function is running, the pipeline waits for the Lambda function to finish or until the timeout is hit, whichever occurs first, and then stops. If the Lambda function finishes, the pipeline execution status is Stopped. If the timeout is hit the pipeline execution status is Failed.
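
A minimal sketch of stopping an execution (hypothetical ARN; ClientRequestToken is assumed here to be a caller-chosen unique idempotency string; Client as assumed earlier):

Input = #{<<"PipelineExecutionArn">> =>
              <<"arn:aws:sagemaker:us-east-1:123456789012:pipeline/my-pipeline/execution/abc123">>,
          <<"ClientRequestToken">> => <<"stop-my-pipeline-execution-2024-05-01-0001">>},
{ok, _Result, _HttpResponse} = aws_sagemaker:stop_pipeline_execution(Client, Input).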
Link to this function

stop_pipeline_execution(Client, Input, Options)

View Source
Link to this function

stop_processing_job(Client, Input)

View Source
Stops a processing job.
Link to this function

stop_processing_job(Client, Input, Options)

View Source
Link to this function

stop_training_job(Client, Input)

View Source

Stops a training job.

To stop a job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms might use this 120-second window to save the model artifacts, so the results of the training are not lost.

When it receives a StopTrainingJob request, SageMaker changes the status of the job to Stopping. After SageMaker stops the job, it sets the status to Stopped.
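
A minimal sketch (hypothetical job name; Client as assumed earlier):

JobName = <<"my-training-job">>,
{ok, _, _} = aws_sagemaker:stop_training_job(Client, #{<<"TrainingJobName">> => JobName}),
%% Right after the call the status is typically Stopping; it becomes Stopped once SageMaker finishes.
{ok, Desc, _} = aws_sagemaker:describe_training_job(Client, #{<<"TrainingJobName">> => JobName}),
io:format("status: ~s~n", [maps:get(<<"TrainingJobStatus">>, Desc)]).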
Link to this function

stop_training_job(Client, Input, Options)

View Source
Link to this function

stop_transform_job(Client, Input)

View Source

Stops a batch transform job.

When Amazon SageMaker receives a StopTransformJob request, the status of the job changes to Stopping. After Amazon SageMaker stops the job, the status is set to Stopped. When you stop a batch transform job before it is completed, Amazon SageMaker doesn't store the job's output in Amazon S3.
Link to this function

stop_transform_job(Client, Input, Options)

View Source
Link to this function

update_action(Client, Input)

View Source
Updates an action.
Link to this function

update_action(Client, Input, Options)

View Source
Link to this function

update_app_image_config(Client, Input)

View Source
Updates the properties of an AppImageConfig.
Link to this function

update_app_image_config(Client, Input, Options)

View Source
Link to this function

update_artifact(Client, Input)

View Source
Updates an artifact.
Link to this function

update_artifact(Client, Input, Options)

View Source
Link to this function

update_cluster(Client, Input)

View Source
Updates a SageMaker HyperPod cluster.
Link to this function

update_cluster(Client, Input, Options)

View Source
Link to this function

update_cluster_software(Client, Input)

View Source

Updates the platform software of a SageMaker HyperPod cluster for security patching.

To learn how to use this API, see Update the SageMaker HyperPod platform software of a cluster: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-operate.html#sagemaker-hyperpod-operate-cli-command-update-cluster-software.
Link to this function

update_cluster_software(Client, Input, Options)

View Source
Link to this function

update_code_repository(Client, Input)

View Source
Updates the specified Git repository with the specified values.
Link to this function

update_code_repository(Client, Input, Options)

View Source
Link to this function

update_context(Client, Input)

View Source
Updates a context.
Link to this function

update_context(Client, Input, Options)

View Source
Link to this function

update_device_fleet(Client, Input)

View Source
Updates a fleet of devices.
Link to this function

update_device_fleet(Client, Input, Options)

View Source
Link to this function

update_devices(Client, Input)

View Source
Updates one or more devices in a fleet.
Link to this function

update_devices(Client, Input, Options)

View Source
Link to this function

update_domain(Client, Input)

View Source
Updates the default settings for new user profiles in the domain.
Link to this function

update_domain(Client, Input, Options)

View Source
Link to this function

update_endpoint(Client, Input)

View Source

Deploys the EndpointConfig specified in the request to a new fleet of instances.

SageMaker shifts endpoint traffic to the new instances with the updated endpoint configuration and then deletes the old instances using the previous EndpointConfig (there is no availability loss). For more information about how to control the update and traffic shifting process, see Update models in production: https://docs.aws.amazon.com/sagemaker/latest/dg/deployment-guardrails.html.

When SageMaker receives the request, it sets the endpoint status to Updating. After updating the endpoint, it sets the status to InService. To check the status of an endpoint, use the DescribeEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeEndpoint.html API.

You must not delete an EndpointConfig in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.

If you delete the EndpointConfig of an endpoint that is active or being created or updated, you may lose visibility into the instance type the endpoint is using. The endpoint must be deleted in order to stop incurring charges.
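
A minimal sketch that points an existing endpoint at a new endpoint configuration (both names hypothetical; Client as assumed earlier):

%% The endpoint status moves to Updating; use describe_endpoint to watch for it returning to InService.
Input = #{<<"EndpointName">> => <<"my-endpoint">>,
          <<"EndpointConfigName">> => <<"my-endpoint-config-v2">>},
{ok, _Result, _HttpResponse} = aws_sagemaker:update_endpoint(Client, Input).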
Link to this function

update_endpoint(Client, Input, Options)

View Source
Link to this function

update_endpoint_weights_and_capacities(Client, Input)

View Source

Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint.

When it receives the request, SageMaker sets the endpoint status to Updating. After updating the endpoint, it sets the status to InService. To check the status of an endpoint, use the DescribeEndpoint: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeEndpoint.html API.
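
A sketch that rebalances traffic between two variants and resizes one of them (names and values hypothetical; Client as assumed earlier):

Input = #{<<"EndpointName">> => <<"my-endpoint">>,
          <<"DesiredWeightsAndCapacities">> =>
              [#{<<"VariantName">> => <<"variant-a">>, <<"DesiredWeight">> => 0.7},
               #{<<"VariantName">> => <<"variant-b">>, <<"DesiredWeight">> => 0.3,
                 <<"DesiredInstanceCount">> => 2}]},
{ok, _Result, _HttpResponse} = aws_sagemaker:update_endpoint_weights_and_capacities(Client, Input).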
Link to this function

update_endpoint_weights_and_capacities(Client, Input, Options)

View Source
Link to this function

update_experiment(Client, Input)

View Source

Adds, updates, or removes the description of an experiment.

Updates the display name of an experiment.
Link to this function

update_experiment(Client, Input, Options)

View Source
Link to this function

update_feature_group(Client, Input)

View Source

Updates the feature group by either adding features or updating the online store configuration.

Use only one of the following request parameters at a time when calling the UpdateFeatureGroup API.

You can add features for your feature group using the FeatureAdditions request parameter. Features cannot be removed from a feature group.

You can update the online store configuration by using the OnlineStoreConfig request parameter. If a TtlDuration is specified, the default TtlDuration applies for all records added to the feature group after the feature group is updated. If a record level TtlDuration exists from using the PutRecord API, the record level TtlDuration applies to that record instead of the default TtlDuration.
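
A minimal sketch that sets a default TTL on the online store (feature group name hypothetical; Client as assumed earlier); per the note above, a FeatureAdditions update would be sent in a separate call:

Input = #{<<"FeatureGroupName">> => <<"my-feature-group">>,
          <<"OnlineStoreConfig">> =>
              #{<<"TtlDuration">> => #{<<"Unit">> => <<"Days">>, <<"Value">> => 30}}},
{ok, _Result, _HttpResponse} = aws_sagemaker:update_feature_group(Client, Input).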
Link to this function

update_feature_group(Client, Input, Options)

View Source
Link to this function

update_feature_metadata(Client, Input)

View Source
Updates the description and parameters of the feature group.
Link to this function

update_feature_metadata(Client, Input, Options)

View Source
Link to this function

update_hub(Client, Input)

View Source

Update a hub.

Hub APIs are only callable through SageMaker Studio.
Link to this function

update_hub(Client, Input, Options)

View Source
Link to this function

update_image(Client, Input)

View Source

Updates the properties of a SageMaker image.

To change the image's tags, use the AddTags: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AddTags.html and DeleteTags: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteTags.html APIs.
Link to this function

update_image(Client, Input, Options)

View Source
Link to this function

update_image_version(Client, Input)

View Source
Updates the properties of a SageMaker image version.
Link to this function

update_image_version(Client, Input, Options)

View Source
Link to this function

update_inference_component(Client, Input)

View Source
Updates an inference component.
Link to this function

update_inference_component(Client, Input, Options)

View Source
Link to this function

update_inference_component_runtime_config(Client, Input)

View Source
Updates the runtime settings of a model that is deployed with an inference component.
Link to this function

update_inference_component_runtime_config(Client, Input, Options)

View Source
Link to this function

update_inference_experiment(Client, Input)

View Source

Updates an inference experiment that you created.

The status of the inference experiment has to be either Created or Running. For more information on the status of an inference experiment, see DescribeInferenceExperiment: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeInferenceExperiment.html.
Link to this function

update_inference_experiment(Client, Input, Options)

View Source
Link to this function

update_model_card(Client, Input)

View Source

Update an Amazon SageMaker Model Card.

You cannot update both model card content and model card status in a single call.
Link to this function

update_model_card(Client, Input, Options)

View Source
Link to this function

update_model_package(Client, Input)

View Source
Updates a versioned model.
Link to this function

update_model_package(Client, Input, Options)

View Source
Link to this function

update_monitoring_alert(Client, Input)

View Source
Update the parameters of a model monitor alert.
Link to this function

update_monitoring_alert(Client, Input, Options)

View Source
Link to this function

update_monitoring_schedule(Client, Input)

View Source
Updates a previously created schedule.
Link to this function

update_monitoring_schedule(Client, Input, Options)

View Source
Link to this function

update_notebook_instance(Client, Input)

View Source

Updates a notebook instance.

NotebookInstance updates include upgrading or downgrading the ML compute instance used for your notebook instance to accommodate changes in your workload requirements.
Link to this function

update_notebook_instance(Client, Input, Options)

View Source
Link to this function

update_notebook_instance_lifecycle_config(Client, Input)

View Source
Updates a notebook instance lifecycle configuration created with the CreateNotebookInstanceLifecycleConfig: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateNotebookInstanceLifecycleConfig.html API.
Link to this function

update_notebook_instance_lifecycle_config(Client, Input, Options)

View Source
Link to this function

update_pipeline(Client, Input)

View Source
Updates a pipeline.
Link to this function

update_pipeline(Client, Input, Options)

View Source
Link to this function

update_pipeline_execution(Client, Input)

View Source
Updates a pipeline execution.
Link to this function

update_pipeline_execution(Client, Input, Options)

View Source
Link to this function

update_project(Client, Input)

View Source

Updates a machine learning (ML) project that is created from a template that sets up an ML pipeline from training to deploying an approved model.

You must not update a project that is in use. If you update the ServiceCatalogProvisioningUpdateDetails of a project that is active or being created or updated, you may lose resources already created by the project.
Link to this function

update_project(Client, Input, Options)

View Source
Link to this function

update_space(Client, Input)

View Source
Updates the settings of a space.
Link to this function

update_space(Client, Input, Options)

View Source
Link to this function

update_training_job(Client, Input)

View Source
Update a model training job to request a new Debugger profiling configuration or to change warm pool retention length.
Link to this function

update_training_job(Client, Input, Options)

View Source
Link to this function

update_trial(Client, Input)

View Source
Updates the display name of a trial.
Link to this function

update_trial(Client, Input, Options)

View Source
Link to this function

update_trial_component(Client, Input)

View Source
Updates one or more properties of a trial component.
Link to this function

update_trial_component(Client, Input, Options)

View Source
Link to this function

update_user_profile(Client, Input)

View Source
Updates a user profile.
Link to this function

update_user_profile(Client, Input, Options)

View Source
Link to this function

update_workforce(Client, Input)

View Source

Use this operation to update your workforce.

You can use this operation to require that workers use specific IP addresses to work on tasks and to update your OpenID Connect (OIDC) Identity Provider (IdP) workforce configuration.

The worker portal is supported both in a VPC and over the public internet.

Use SourceIpConfig to restrict worker access to tasks to a specific range of IP addresses. You specify allowed IP addresses by creating a list of up to ten CIDRs: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html. By default, a workforce isn't restricted to specific IP addresses. If you specify a range of IP addresses, workers who attempt to access tasks using any IP address outside the specified range are denied and get a Not Found error message on the worker portal.

To restrict access for all workers on the public internet, add the SourceIpConfig CIDR value "10.0.0.0/16".

Amazon SageMaker does not support source IP restriction for worker portals in a VPC.

Use OidcConfig to update the configuration of a workforce created using your own OIDC IdP.

You can only update your OIDC IdP configuration when there are no work teams associated with your workforce. You can delete work teams using the DeleteWorkteam: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteWorkteam.html operation.

After restricting access to a range of IP addresses or updating your OIDC IdP configuration with this operation, you can view details about your updated workforce using the DescribeWorkforce: https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeWorkforce.html operation.

This operation only applies to private workforces.
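
A minimal sketch that restricts a private workforce to two allowed CIDR ranges (the workforce name "default" and the CIDRs are assumptions; Client as assumed earlier):

Input = #{<<"WorkforceName">> => <<"default">>,
          <<"SourceIpConfig">> =>
              #{<<"Cidrs">> => [<<"203.0.113.0/24">>, <<"198.51.100.0/24">>]}},
{ok, _Result, _HttpResponse} = aws_sagemaker:update_workforce(Client, Input).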
Link to this function

update_workforce(Client, Input, Options)

View Source
Link to this function

update_workteam(Client, Input)

View Source
Updates an existing work team with new member definitions or description.
Link to this function

update_workteam(Client, Input, Options)

View Source