AWS.BedrockRuntime (aws-elixir v1.0.4)
Describes the API operations for running inference using Amazon Bedrock models.
Summary
Functions
The action to apply a guardrail.
Sends messages to the specified Amazon Bedrock model.
Sends messages to the specified Amazon Bedrock model and returns the response in a stream.
Retrieves information about an asynchronous invocation.
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body, returning the response in a stream.
Lists asynchronous invocations.
Starts an asynchronous invocation.
Functions
apply_guardrail(client, guardrail_identifier, guardrail_version, input, options \\ [])
The action to apply a guardrail.
For troubleshooting some of the common errors you might encounter when using the ApplyGuardrail API, see Troubleshooting Amazon Bedrock API Error Codes in the Amazon Bedrock User Guide.
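As a rough sketch of what a call might look like from this module (the guardrail ID, version, credentials, and region are placeholders, and the `{:ok, result, http_response}` return shape follows the usual aws-elixir convention):

```elixir
# Create a client; credentials and region are placeholders.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# Assess user input against a (hypothetical) guardrail before sending it to a model.
input = %{
  "source" => "INPUT",
  "content" => [%{"text" => %{"text" => "User-provided text to assess"}}]
}

case AWS.BedrockRuntime.apply_guardrail(client, "gr-1234abcd", "1", input) do
  {:ok, result, _http_response} ->
    # "GUARDRAIL_INTERVENED" means the guardrail blocked or masked content.
    IO.inspect(result["action"])

  {:error, reason} ->
    IO.inspect(reason)
end
```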
converse(client, model_id, input, options \\ [])
Sends messages to the specified Amazon Bedrock model.
Converse provides a consistent interface that works with all models that support messages. This allows you to write code once and use it with different models. If a model has unique inference parameters, you can also pass those unique parameters to the model.
Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.
You can submit a prompt by including it in the messages field, specifying the modelId of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.
You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the promptVariables field. You can append more messages to the prompt by using the messages field. If you use a prompt from Prompt management, you can't include the following fields in the request: additionalModelRequestFields, inferenceConfig, system, or toolConfig. Instead, these fields must be defined through Prompt management. For more information, see Use a prompt from Prompt management.
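A sketch of the Prompt-management variant, assuming a hypothetical prompt version ARN and a variable named topic defined in that prompt (the ARN is passed where the model ID normally goes):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# Hypothetical prompt version ARN from Prompt management.
prompt_arn = "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:1"

# Only prompt variables are supplied; inferenceConfig, system, and toolConfig
# must already be defined on the prompt itself.
input = %{
  "promptVariables" => %{"topic" => %{"text" => "Elixir"}}
}

AWS.BedrockRuntime.converse(client, prompt_arn, input)
```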
For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide.
For example code, see Converse API examples in the Amazon Bedrock User Guide.
This operation requires permission for the bedrock:InvokeModel action.
To deny all inference access to resources that you specify in the modelId field, you need to deny access to the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions. Doing this also denies access to the resource through the base inference actions (InvokeModel and InvokeModelWithResponseStream). For more information, see Deny access for inference on specific models.
For troubleshooting some of the common errors you might encounter when using the Converse API, see Troubleshooting Amazon Bedrock API Error Codes in the Amazon Bedrock User Guide.
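A minimal Converse sketch, assuming a Claude model ID your account has access to (any messages-capable model ID works; the return shape follows the usual aws-elixir `{:ok, body, http_response}` convention):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "messages" => [
    %{"role" => "user", "content" => [%{"text" => "What is the capital of France?"}]}
  ],
  # Optional base inference parameters shared across models.
  "inferenceConfig" => %{"maxTokens" => 256, "temperature" => 0.5}
}

with {:ok, result, _http_response} <-
       AWS.BedrockRuntime.converse(client, "anthropic.claude-3-haiku-20240307-v1:0", input) do
  # The generated reply lives under output.message.content.
  get_in(result, ["output", "message", "content"])
end
```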
converse_stream(client, model_id, input, options \\ [])
Sends messages to the specified Amazon Bedrock model and returns the response in a stream.
ConverseStream provides a consistent API that works with all Amazon Bedrock models that support messages. This allows you to write code once and use it with different models. Should a model have unique inference parameters, you can also pass those unique parameters to the model.
To find out if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response.
The CLI doesn't support streaming operations in Amazon Bedrock, including ConverseStream.
Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.
You can submit a prompt by including it in the messages field, specifying the modelId of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.
You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the promptVariables field. You can append more messages to the prompt by using the messages field. If you use a prompt from Prompt management, you can't include the following fields in the request: additionalModelRequestFields, inferenceConfig, system, or toolConfig. Instead, these fields must be defined through Prompt management. For more information, see Use a prompt from Prompt management.
For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide.
For example code, see Conversation streaming example in the Amazon Bedrock User Guide.
This operation requires permission for the bedrock:InvokeModelWithResponseStream action.
To deny all inference access to resources that you specify in the modelId field, you need to deny access to the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions. Doing this also denies access to the resource through the base inference actions (InvokeModel and InvokeModelWithResponseStream). For more information, see Deny access for inference on specific models.
For troubleshooting some of the common errors you might encounter when using the ConverseStream API, see Troubleshooting Amazon Bedrock API Error Codes in the Amazon Bedrock User Guide.
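Calling it looks much like Converse; note that the response body is an event stream, and how aws-elixir surfaces streamed events can vary by version and HTTP client, so treat this as a sketch:

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "messages" => [
    %{"role" => "user", "content" => [%{"text" => "Write a haiku about rivers."}]}
  ]
}

# The third element carries the raw HTTP response; the streamed events arrive
# in its body as an event stream rather than a single JSON document.
{:ok, _body, http_response} =
  AWS.BedrockRuntime.converse_stream(client, "anthropic.claude-3-haiku-20240307-v1:0", input)
```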
get_async_invoke(client, invocation_arn, options \\ [])
Retrieves information about an asynchronous invocation.
invoke_model(client, model_id, input, options \\ [])
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.
You use model inference to generate text, images, and embeddings.
For example code, see Invoke model code examples in the Amazon Bedrock User Guide.
This operation requires permission for the bedrock:InvokeModel action.
To deny all inference access to resources that you specify in the modelId field, you need to deny access to the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions. Doing this also denies access to the resource through the Converse API actions (Converse and ConverseStream). For more information, see Deny access for inference on specific models.
For troubleshooting some of the common errors you might encounter when using the InvokeModel API, see Troubleshooting Amazon Bedrock API Error Codes in the Amazon Bedrock User Guide.
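Unlike Converse, InvokeModel takes a model-native request body. A sketch for an Anthropic-format body (the exact way the generated client wraps the raw body payload and content-type header may differ between aws-elixir versions, so check the function's typespec before relying on this shape):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# Model-native body: this shape is specific to Anthropic Claude models.
input = %{
  "Body" => %{
    "anthropic_version" => "bedrock-2023-05-31",
    "max_tokens" => 256,
    "messages" => [%{"role" => "user", "content" => "Hello"}]
  },
  "ContentType" => "application/json"
}

AWS.BedrockRuntime.invoke_model(client, "anthropic.claude-3-haiku-20240307-v1:0", input)
```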
invoke_model_with_response_stream(client, model_id, input, options \\ [])
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.
The response is returned in a stream.
To see if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response.
The CLI doesn't support streaming operations in Amazon Bedrock, including InvokeModelWithResponseStream.
For example code, see Invoke model with streaming code example in the Amazon Bedrock User Guide.
This operation requires permissions to perform the bedrock:InvokeModelWithResponseStream action.
To deny all inference access to resources that you specify in the modelId field, you need to deny access to the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions. Doing this also denies access to the resource through the Converse API actions (Converse and ConverseStream). For more information, see Deny access for inference on specific models.
For troubleshooting some of the common errors you might encounter when using the InvokeModelWithResponseStream API, see Troubleshooting Amazon Bedrock API Error Codes in the Amazon Bedrock User Guide.
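The call mirrors invoke_model; as with ConverseStream, the response arrives as an event stream, so this is only a sketch (the body shape is Anthropic-specific and the input wrapping may vary by aws-elixir version):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "Body" => %{
    "anthropic_version" => "bedrock-2023-05-31",
    "max_tokens" => 256,
    "messages" => [%{"role" => "user", "content" => "Stream me a story"}]
  },
  "ContentType" => "application/json"
}

# Streamed chunks arrive in the raw HTTP response body as an event stream.
{:ok, _body, http_response} =
  AWS.BedrockRuntime.invoke_model_with_response_stream(
    client,
    "anthropic.claude-3-haiku-20240307-v1:0",
    input
  )
```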
list_async_invokes(client, max_results \\ nil, next_token \\ nil, sort_by \\ nil, sort_order \\ nil, status_equals \\ nil, submit_time_after \\ nil, submit_time_before \\ nil, options \\ [])
Lists asynchronous invocations.
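Given the positional optional arguments in the signature above, a filtered listing might look like this (argument order per the signature; "InProgress" is assumed to be a valid value of the API's status enum):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# max_results = 10, skip next_token/sort_by/sort_order, filter on status.
{:ok, result, _http_response} =
  AWS.BedrockRuntime.list_async_invokes(client, 10, nil, nil, nil, "InProgress")

Enum.map(result["asyncInvokeSummaries"] || [], & &1["invocationArn"])
```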
start_async_invoke(client, input, options \\ [])
Starts an asynchronous invocation.
This operation requires permission for the bedrock:InvokeModel action.
To deny all inference access to resources that you specify in the modelId field, you need to deny access to the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions. Doing this also denies access to the resource through the Converse API actions (Converse and ConverseStream). For more information, see Deny access for inference on specific models.
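A sketch for starting an async invocation, assuming a video-generation model and a placeholder S3 output bucket (the model ID, bucket, and modelInput shape are illustrative, not prescribed by this module):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "modelId" => "amazon.nova-reel-v1:0",
  "modelInput" => %{
    "taskType" => "TEXT_VIDEO",
    "textToVideoParams" => %{"text" => "A sunrise over mountains"}
  },
  "outputDataConfig" => %{
    "s3OutputDataConfig" => %{"s3Uri" => "s3://amzn-s3-demo-bucket/output/"}
  }
}

{:ok, result, _http_response} = AWS.BedrockRuntime.start_async_invoke(client, input)

# Poll progress later with get_async_invoke/3 using this ARN.
result["invocationArn"]
```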