OpenTelemetry.SemConv.Incubating.GenAiAttributes (OpenTelemetry.SemConv v1.27.0)
OpenTelemetry Semantic Conventions for GenAI attributes.
Summary
Types
The name of the operation being performed.
The Generative AI product as identified by the client or server instrumentation.
The type of token being counted.
Functions
The full response received from the GenAI model.
The name of the operation being performed.
The full prompt sent to the GenAI model.
The frequency penalty setting for the GenAI request.
The maximum number of tokens the model generates for a request.
The name of the GenAI model a request is being made to.
The presence penalty setting for the GenAI request.
List of sequences that the model will use to stop generating further tokens.
The temperature setting for the GenAI request.
The top_k sampling setting for the GenAI request.
The top_p sampling setting for the GenAI request.
Array of reasons the model stopped generating tokens, corresponding to each generation received.
The unique identifier for the completion.
The name of the model that generated the response.
The Generative AI product as identified by the client or server instrumentation.
The type of token being counted.
The number of tokens used in the GenAI input (prompt).
The number of tokens used in the GenAI response (completion).
Types
@type gen_ai_operation_name_values() :: %{
chat: :chat,
text_completion: :text_completion
}
The name of the operation being performed.
Enum Values
:chat - Chat completion operation such as OpenAI Chat API
:text_completion - Text completions operation such as OpenAI Completions API (Legacy)
@type gen_ai_system_values() :: %{
openai: :openai,
vertex_ai: :vertex_ai,
anthropic: :anthropic,
cohere: :cohere
}
The Generative AI product as identified by the client or server instrumentation.
Enum Values
:openai - OpenAI
:vertex_ai - Vertex AI
:anthropic - Anthropic
:cohere - Cohere
@type gen_ai_token_type_values() :: %{input: :input, completion: :output}
The type of token being counted.
Enum Values
:input - Input tokens (prompt, input, etc.)
:completion - Output tokens (completion, response, etc.)
Functions
@spec gen_ai_completion() :: :"gen_ai.completion"
The full response received from the GenAI model.
Value type
Value must be of type atom() | String.t().
Notes
It's RECOMMENDED to format completions as a JSON string matching the OpenAI messages format.
Examples
["[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_completion()
:"gen_ai.completion"
?GEN_AI_COMPLETION.
'gen_ai.completion'
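The recommendation above can be sketched in plain Elixir. This is an illustrative example only: it hand-rolls the JSON string so the snippet has no dependencies; in real code a JSON library (e.g. Jason) should encode the messages before they are attached under the `:"gen_ai.completion"` key.

```elixir
# Hypothetical messages returned by a chat model.
messages = [%{role: "assistant", content: "The capital of France is Paris."}]

# Minimal hand-rolled JSON encoding, for illustration only —
# use a real JSON encoder in production code.
completion_json =
  messages
  |> Enum.map(fn %{role: r, content: c} ->
    ~s({"role": "#{r}", "content": "#{c}"})
  end)
  |> then(&("[" <> Enum.join(&1, ", ") <> "]"))

# The attribute key is the literal atom gen_ai_completion/0 returns.
attrs = %{:"gen_ai.completion" => completion_json}
```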
@spec gen_ai_operation_name() :: :"gen_ai.operation.name"
The name of the operation being performed.
Notes
If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_operation_name()
:"gen_ai.operation.name"
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_operation_name_values().chat
:chat
iex> %{OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_operation_name() => OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_operation_name_values().chat}
%{:"gen_ai.operation.name" => :chat}
?GEN_AI_OPERATION_NAME.
'gen_ai.operation.name'
?GEN_AI_OPERATION_NAME_VALUES_CHAT.
'chat'
#{?GEN_AI_OPERATION_NAME => ?GEN_AI_OPERATION_NAME_VALUES_CHAT}.
#{'gen_ai.operation.name' => 'chat'}
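In practice the operation name is recorded alongside other request attributes. A minimal sketch, written with the literal atom keys these functions return (an OpenTelemetry SDK would attach this map to a span; it is shown here as a plain map, and the model name and temperature are illustrative values):

```elixir
# Span attributes for a hypothetical chat request. The keys are the
# atoms returned by gen_ai_operation_name/0, gen_ai_request_model/0,
# and gen_ai_request_temperature/0.
attrs = %{
  :"gen_ai.operation.name" => :chat,
  :"gen_ai.request.model" => "gpt-4",
  :"gen_ai.request.temperature" => 0.0
}
```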
@spec gen_ai_operation_name_values() :: gen_ai_operation_name_values()
@spec gen_ai_prompt() :: :"gen_ai.prompt"
The full prompt sent to the GenAI model.
Value type
Value must be of type atom() | String.t().
Notes
It's RECOMMENDED to format prompts as a JSON string matching the OpenAI messages format.
Examples
["[{'role': 'user', 'content': 'What is the capital of France?'}]"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_prompt()
:"gen_ai.prompt"
?GEN_AI_PROMPT.
'gen_ai.prompt'
@spec gen_ai_request_frequency_penalty() :: :"gen_ai.request.frequency_penalty"
The frequency penalty setting for the GenAI request.
Value type
Value must be of type float().
Examples
[0.1]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_frequency_penalty()
:"gen_ai.request.frequency_penalty"
?GEN_AI_REQUEST_FREQUENCY_PENALTY.
'gen_ai.request.frequency_penalty'
@spec gen_ai_request_max_tokens() :: :"gen_ai.request.max_tokens"
The maximum number of tokens the model generates for a request.
Value type
Value must be of type integer().
Examples
[100]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_max_tokens()
:"gen_ai.request.max_tokens"
?GEN_AI_REQUEST_MAX_TOKENS.
'gen_ai.request.max_tokens'
@spec gen_ai_request_model() :: :"gen_ai.request.model"
The name of the GenAI model a request is being made to.
Value type
Value must be of type atom() | String.t().
Examples
["gpt-4"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_model()
:"gen_ai.request.model"
?GEN_AI_REQUEST_MODEL.
'gen_ai.request.model'
@spec gen_ai_request_presence_penalty() :: :"gen_ai.request.presence_penalty"
The presence penalty setting for the GenAI request.
Value type
Value must be of type float().
Examples
[0.1]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_presence_penalty()
:"gen_ai.request.presence_penalty"
?GEN_AI_REQUEST_PRESENCE_PENALTY.
'gen_ai.request.presence_penalty'
@spec gen_ai_request_stop_sequences() :: :"gen_ai.request.stop_sequences"
List of sequences that the model will use to stop generating further tokens.
Value type
Value must be of type [atom() | String.t()].
Examples
["forest", "lived"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_stop_sequences()
:"gen_ai.request.stop_sequences"
?GEN_AI_REQUEST_STOP_SEQUENCES.
'gen_ai.request.stop_sequences'
@spec gen_ai_request_temperature() :: :"gen_ai.request.temperature"
The temperature setting for the GenAI request.
Value type
Value must be of type float().
Examples
[0.0]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_temperature()
:"gen_ai.request.temperature"
?GEN_AI_REQUEST_TEMPERATURE.
'gen_ai.request.temperature'
@spec gen_ai_request_top_k() :: :"gen_ai.request.top_k"
The top_k sampling setting for the GenAI request.
Value type
Value must be of type float().
Examples
[1.0]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_top_k()
:"gen_ai.request.top_k"
?GEN_AI_REQUEST_TOP_K.
'gen_ai.request.top_k'
@spec gen_ai_request_top_p() :: :"gen_ai.request.top_p"
The top_p sampling setting for the GenAI request.
Value type
Value must be of type float().
Examples
[1.0]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_request_top_p()
:"gen_ai.request.top_p"
?GEN_AI_REQUEST_TOP_P.
'gen_ai.request.top_p'
@spec gen_ai_response_finish_reasons() :: :"gen_ai.response.finish_reasons"
Array of reasons the model stopped generating tokens, corresponding to each generation received.
Value type
Value must be of type [atom() | String.t()].
Examples
["stop"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_response_finish_reasons()
:"gen_ai.response.finish_reasons"
?GEN_AI_RESPONSE_FINISH_REASONS.
'gen_ai.response.finish_reasons'
@spec gen_ai_response_id() :: :"gen_ai.response.id"
The unique identifier for the completion.
Value type
Value must be of type atom() | String.t().
Examples
["chatcmpl-123"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_response_id()
:"gen_ai.response.id"
?GEN_AI_RESPONSE_ID.
'gen_ai.response.id'
@spec gen_ai_response_model() :: :"gen_ai.response.model"
The name of the model that generated the response.
Value type
Value must be of type atom() | String.t().
Examples
["gpt-4-0613"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_response_model()
:"gen_ai.response.model"
?GEN_AI_RESPONSE_MODEL.
'gen_ai.response.model'
@spec gen_ai_system() :: :"gen_ai.system"
The Generative AI product as identified by the client or server instrumentation.
Notes
The gen_ai.system describes a family of GenAI models, with the specific model identified by the gen_ai.request.model and gen_ai.response.model attributes.
The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the gen_ai.system is set to openai based on the instrumentation's best knowledge.
For a custom model, a custom friendly name SHOULD be used. If none of these options apply, the gen_ai.system SHOULD be set to _OTHER.
Examples
openai
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_system()
:"gen_ai.system"
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_system_values().openai
:openai
iex> %{OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_system() => OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_system_values().openai}
%{:"gen_ai.system" => :openai}
?GEN_AI_SYSTEM.
'gen_ai.system'
?GEN_AI_SYSTEM_VALUES_OPENAI.
'openai'
#{?GEN_AI_SYSTEM => ?GEN_AI_SYSTEM_VALUES_OPENAI}.
#{'gen_ai.system' => 'openai'}
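The fallback rule from the notes above can be sketched as a small dispatch function. This is an illustrative sketch, not library code: the client identifiers and the `{:custom, name}` shape are hypothetical; only the resulting atoms (the enum values and `:_OTHER`) come from the convention.

```elixir
# Pick a gen_ai.system value: a known enum value for recognized
# clients, a custom friendly name for a custom model, and :_OTHER
# when nothing else applies (per the notes above).
pick_system = fn
  :openai -> :openai
  :vertex_ai -> :vertex_ai
  :anthropic -> :anthropic
  :cohere -> :cohere
  {:custom, friendly_name} -> friendly_name
  _ -> :_OTHER
end
```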
@spec gen_ai_system_values() :: gen_ai_system_values()
@spec gen_ai_token_type() :: :"gen_ai.token.type"
The type of token being counted.
Examples
["input", "output"]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_token_type()
:"gen_ai.token.type"
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_token_type_values().input
:input
iex> %{OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_token_type() => OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_token_type_values().input}
%{:"gen_ai.token.type" => :input}
?GEN_AI_TOKEN_TYPE.
'gen_ai.token.type'
?GEN_AI_TOKEN_TYPE_VALUES_INPUT.
'input'
#{?GEN_AI_TOKEN_TYPE => ?GEN_AI_TOKEN_TYPE_VALUES_INPUT}.
#{'gen_ai.token.type' => 'input'}
@spec gen_ai_token_type_values() :: gen_ai_token_type_values()
@spec gen_ai_usage_completion_tokens() :: :"gen_ai.usage.completion_tokens"
Deprecated: Replaced by the gen_ai.usage.output_tokens attribute.
@spec gen_ai_usage_input_tokens() :: :"gen_ai.usage.input_tokens"
The number of tokens used in the GenAI input (prompt).
Value type
Value must be of type integer().
Examples
[100]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_usage_input_tokens()
:"gen_ai.usage.input_tokens"
?GEN_AI_USAGE_INPUT_TOKENS.
'gen_ai.usage.input_tokens'
@spec gen_ai_usage_output_tokens() :: :"gen_ai.usage.output_tokens"
The number of tokens used in the GenAI response (completion).
Value type
Value must be of type integer().
Examples
[180]
iex> OpenTelemetry.SemConv.Incubating.GenAiAttributes.gen_ai_usage_output_tokens()
:"gen_ai.usage.output_tokens"
?GEN_AI_USAGE_OUTPUT_TOKENS.
'gen_ai.usage.output_tokens'
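The two usage attributes are typically recorded together for a response. A minimal sketch using the literal atom keys that gen_ai_usage_input_tokens/0 and gen_ai_usage_output_tokens/0 return (the counts are illustrative, matching the Examples above):

```elixir
# Token usage for one hypothetical response, keyed by the atoms
# the usage attribute functions return.
usage = %{
  :"gen_ai.usage.input_tokens" => 100,
  :"gen_ai.usage.output_tokens" => 180
}

# Total tokens consumed by the request/response pair.
total = usage[:"gen_ai.usage.input_tokens"] + usage[:"gen_ai.usage.output_tokens"]
```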
@spec gen_ai_usage_prompt_tokens() :: :"gen_ai.usage.prompt_tokens"
Deprecated: Replaced by the gen_ai.usage.input_tokens attribute.