Langfuse.Generation (Langfuse v0.1.0)
A generation represents an LLM API call within a trace.
Generations are specialized observations for tracking model invocations. They capture model information, input prompts, output completions, token usage, costs, and can be linked to Langfuse prompts for version tracking.
Creating Generations
Generations are created as children of traces or spans:
trace = Langfuse.trace(name: "chat")
generation = Langfuse.Generation.new(trace,
name: "completion",
model: "gpt-4",
input: [%{role: "user", content: "Hello"}]
)

Recording Output and Usage
After receiving the model response, update the generation:
generation = Langfuse.Generation.update(generation,
output: %{role: "assistant", content: "Hi there!"},
usage: %{input: 10, output: 5, total: 15}
)
generation = Langfuse.Generation.end_generation(generation)

Linking Prompts
Track which prompt version was used:
{:ok, prompt} = Langfuse.Prompt.get("chat-template")
generation = Langfuse.Generation.new(trace,
name: "completion",
model: "gpt-4",
prompt_name: prompt.name,
prompt_version: prompt.version
)

Token Usage and Costs
The :usage option accepts a map with token counts and optional costs:
usage: %{
input: 150,
output: 50,
total: 200,
input_cost: 0.003,
output_cost: 0.006,
total_cost: 0.009
}
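Putting these pieces together, one full request/response cycle might look like the sketch below. `MyApp.LLM.chat/1` is a hypothetical model client standing in for your actual API wrapper; the Langfuse calls are the ones documented on this page.

```elixir
# Hypothetical end-to-end flow; MyApp.LLM.chat/1 is an assumed client.
trace = Langfuse.trace(name: "chat")
messages = [%{role: "user", content: "Hello"}]

generation =
  Langfuse.Generation.new(trace,
    name: "completion",
    model: "gpt-4",
    model_parameters: %{temperature: 0.7},
    input: messages
  )

# Call the model, then record what came back and close the generation.
{:ok, reply, usage} = MyApp.LLM.chat(messages)

generation
|> Langfuse.Generation.update(output: reply, usage: usage)
|> Langfuse.Generation.end_generation()
```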
Summary

Types

level() - Log level for the observation.
parent() - Valid parent types for a generation.
t() - A generation struct containing all generation attributes.
usage() - Token usage and cost information.

Functions

end_generation(generation) - Ends the generation by setting its end time to now.
format_usage(usage) - Formats usage data for the Langfuse API.
get_id(generation) - Returns the generation ID.
get_trace_id(generation) - Returns the trace ID that this generation belongs to.
new(parent, opts) - Creates a new generation and enqueues it for ingestion.
update(generation, opts) - Updates an existing generation and enqueues the update for ingestion.
Types
@type level() :: :debug | :default | :warning | :error
Log level for the observation.
@type parent() :: Langfuse.Trace.t() | Langfuse.Span.t() | t()
Valid parent types for a generation.
@type t() :: %Langfuse.Generation{
        completion_start_time: DateTime.t() | nil,
        end_time: DateTime.t() | nil,
        id: String.t(),
        input: term(),
        level: level() | nil,
        metadata: map() | nil,
        model: String.t() | nil,
        model_parameters: map() | nil,
        name: String.t(),
        output: term(),
        parent_observation_id: String.t() | nil,
        prompt_name: String.t() | nil,
        prompt_version: pos_integer() | nil,
        start_time: DateTime.t(),
        status_message: String.t() | nil,
        trace_id: String.t(),
        usage: usage() | nil,
        version: String.t() | nil
      }
A generation struct containing all generation attributes.
The :id is auto-generated if not provided. The :start_time defaults
to the current UTC time.
@type usage() :: %{
        optional(:input) => non_neg_integer(),
        optional(:output) => non_neg_integer(),
        optional(:total) => non_neg_integer(),
        optional(:unit) => String.t(),
        optional(:input_cost) => float(),
        optional(:output_cost) => float(),
        optional(:total_cost) => float(),
        optional(:prompt_tokens) => non_neg_integer(),
        optional(:completion_tokens) => non_neg_integer(),
        optional(:total_tokens) => non_neg_integer(),
        optional(atom()) => term()
      }
Token usage and cost information.
All fields are optional. Costs should be in USD.
Standard Fields
:input - Input token/unit count
:output - Output token/unit count
:total - Total token/unit count
:unit - Unit type (e.g., "TOKENS", "CHARACTERS")
:input_cost - Input cost in USD
:output_cost - Output cost in USD
:total_cost - Total cost in USD
OpenAI-Compatible Fields
:prompt_tokens - Prompt tokens (maps to input)
:completion_tokens - Completion tokens (maps to output)
:total_tokens - Total tokens (maps to total)
Extended Usage Details
Any additional keys are passed through as usage details, supporting
provider-specific fields like :cache_read_input_tokens or
:reasoning_tokens.
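For example, a usage map can mix the standard fields with provider-specific pass-through keys. The extra keys below are illustrative of what an Anthropic- or OpenAI-style response might report:

```elixir
usage: %{
  input: 1_200,
  output: 340,
  total: 1_540,
  unit: "TOKENS",
  # Non-standard keys are passed through as usage details:
  cache_read_input_tokens: 800,
  reasoning_tokens: 120
}
```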
Functions
end_generation(generation)

Ends the generation by setting its end time to now.
Examples
iex> trace = Langfuse.Trace.new(name: "test")
iex> gen = Langfuse.Generation.new(trace, name: "llm", model: "gpt-4")
iex> gen.end_time
nil
iex> gen = Langfuse.Generation.end_generation(gen)
iex> gen.end_time != nil
true
format_usage(usage)

Formats usage data for the Langfuse API.
Converts Elixir-style snake_case keys to camelCase and handles OpenAI-compatible field mappings.
Examples
iex> Langfuse.Generation.format_usage(nil)
nil
iex> Langfuse.Generation.format_usage(%{input: 10, output: 5})
%{input: 10, output: 5}
iex> Langfuse.Generation.format_usage(%{input: 100, output: 50, input_cost: 0.001, output_cost: 0.002})
%{input: 100, output: 50, inputCost: 0.001, outputCost: 0.002}
iex> Langfuse.Generation.format_usage(%{prompt_tokens: 100, completion_tokens: 50})
%{input: 100, output: 50}
iex> Langfuse.Generation.format_usage(%{cache_read_input_tokens: 50, reasoning_tokens: 100})
%{cacheReadInputTokens: 50, reasoningTokens: 100}
get_id(generation)

Returns the generation ID.
Examples
iex> trace = Langfuse.Trace.new(name: "test")
iex> gen = Langfuse.Generation.new(trace, name: "llm", id: "gen-123")
iex> Langfuse.Generation.get_id(gen)
"gen-123"
get_trace_id(generation)

Returns the trace ID that this generation belongs to.
Examples
iex> trace = Langfuse.Trace.new(name: "test", id: "trace-456")
iex> gen = Langfuse.Generation.new(trace, name: "llm")
iex> Langfuse.Generation.get_trace_id(gen)
"trace-456"
new(parent, opts)

Creates a new generation and enqueues it for ingestion.
The generation is created as a child of the given parent (trace, span, or another generation). It is immediately queued for asynchronous delivery to Langfuse.
Options
:name - Name of the generation (required)
:id - Custom generation ID. Uses secure random hex if not provided.
:model - Model identifier (e.g., "gpt-4", "claude-3-opus").
:model_parameters - Model parameters as a map (temperature, etc.).
:input - Input messages or prompt.
:output - Model response/completion.
:usage - Token usage map. See usage/0.
:metadata - Arbitrary metadata as a map.
:level - Log level: :debug, :default, :warning, or :error.
:status_message - Status description.
:prompt_name - Name of linked Langfuse prompt.
:prompt_version - Version of linked Langfuse prompt.
:start_time - Custom start time. Defaults to DateTime.utc_now/0.
:end_time - End time if already known.
:completion_start_time - When streaming response started.
:version - Application version string.
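A failed model call can also be recorded as a generation without output, using :level and :status_message to explain what happened. The error message below is illustrative:

```elixir
trace = Langfuse.Trace.new(name: "chat")

# Record a failed call: no output, but level and status_message capture why.
generation =
  Langfuse.Generation.new(trace,
    name: "completion",
    model: "gpt-4",
    input: [%{role: "user", content: "Hello"}],
    level: :error,
    status_message: "rate limited by provider (HTTP 429)"
  )
```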
Examples
iex> trace = Langfuse.Trace.new(name: "test", id: "trace-1")
iex> gen = Langfuse.Generation.new(trace, name: "llm", model: "gpt-4")
iex> gen.model
"gpt-4"
iex> gen.trace_id
"trace-1"
iex> trace = Langfuse.Trace.new(name: "test")
iex> gen = Langfuse.Generation.new(trace,
...> name: "completion",
...> model: "gpt-4",
...> model_parameters: %{temperature: 0.7},
...> input: [%{role: "user", content: "Hello"}]
...> )
iex> gen.model_parameters
%{temperature: 0.7}
update(generation, opts)

Updates an existing generation and enqueues the update for ingestion.
Commonly used to add output and usage after receiving the model response.
Options
:model - Updated model identifier.
:model_parameters - Updated model parameters.
:input - Updated input data.
:output - Model response/completion.
:usage - Token usage map.
:metadata - Updated metadata map.
:level - Updated log level.
:status_message - Updated status description.
:end_time - End time. Use end_generation/1 to set automatically.
:completion_start_time - When streaming started.
:version - Updated version string.
Examples
iex> trace = Langfuse.Trace.new(name: "test")
iex> gen = Langfuse.Generation.new(trace, name: "llm", model: "gpt-4")
iex> gen = Langfuse.Generation.update(gen,
...> output: %{content: "Hello!"},
...> usage: %{input: 10, output: 5, total: 15}
...> )
iex> gen.output
%{content: "Hello!"}