Braintrust.LangChainCallbacks (Braintrust v0.3.0)
LangChain callback handler for Braintrust observability.
Automatically logs LLM interactions to Braintrust when used with LangChain's LLMChain.
Usage
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message
alias Braintrust.LangChainCallbacks
{:ok, chain} =
%{llm: ChatOpenAI.new!(%{model: "gpt-4"})}
|> LLMChain.new!()
|> LLMChain.add_callback(LangChainCallbacks.handler(
project_id: "proj_xxx",
metadata: %{"environment" => "production"},
tags: ["chat"]
))
|> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()
Options
- :project_id - Braintrust project ID (required)
- :metadata - Additional metadata to attach to all spans (default: %{})
- :tags - Tags to attach to spans (default: [])
- :api_key - Override API key for logging requests
- :base_url - Override base URL for logging requests
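Taken together, a fully configured handler might look like the following sketch; the key, URL, metadata, and tag values here are placeholders, not required settings:

```elixir
handler =
  Braintrust.LangChainCallbacks.handler(
    # Required: the Braintrust project to log into
    project_id: "proj_xxx",
    # Optional: attached to every span this handler creates
    metadata: %{"environment" => "staging"},
    tags: ["chat", "experiment"],
    # Optional overrides applied only to this handler's logging requests
    api_key: System.fetch_env!("BRAINTRUST_API_KEY"),
    base_url: "https://api.example.com"
  )
```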
What Gets Logged
Each LLM interaction creates a log entry with:
- input - Messages in OpenAI format (enables "Try prompt" button in UI)
- output - Assistant response content
- metadata - Model name, provider, status, plus custom metadata
- metrics - Token usage (input_tokens, output_tokens, total_tokens)
- tags - Custom tags
- error - Error information if processing failed
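Conceptually, one logged interaction for the Usage example above resembles the map below. The field names follow the list above; the concrete values and exact wire format are illustrative assumptions, not the precise payload:

```elixir
%{
  # Messages in OpenAI format, so the UI can replay them via "Try prompt"
  input: [%{role: "user", content: "Hello!"}],
  # Assistant response content
  output: "Hi there! How can I help?",
  # Model/provider/status plus the custom metadata passed to the handler
  metadata: %{model: "gpt-4", provider: "openai", status: "ok", environment: "production"},
  # Token usage (example counts)
  metrics: %{input_tokens: 9, output_tokens: 12, total_tokens: 21},
  tags: ["chat"],
  # nil unless processing failed
  error: nil
}
```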
Optional Dependency
This module requires the langchain package. Add it to your dependencies:
{:langchain, "~> 0.4"}
Summary
Functions
Creates a callback handler map for use with LLMChain.
Creates a streaming-aware callback handler.
Types
handler_opts()
Functions
@spec handler(handler_opts()) :: map()
Creates a callback handler map for use with LLMChain.
Examples
handler = Braintrust.LangChainCallbacks.handler(project_id: "proj_xxx")
chain
|> LLMChain.add_callback(handler)
|> LLMChain.run()
@spec streaming_handler(handler_opts()) :: map()
Creates a streaming-aware callback handler.
In addition to the standard handler callbacks, this tracks:
- Time-to-first-token (TTFT) - Time from request start to first delta
- Streaming duration - Total time for all deltas
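As an illustration of the two metrics (not the handler's actual internals), both can be derived from three monotonic timestamps — request start, first delta, and last delta:

```elixir
# Hypothetical sketch of the timestamps a streaming handler could record
request_start = System.monotonic_time(:millisecond)

# ...streaming happens; the handler stamps the first and last deltas...
first_delta_at = request_start + 180   # placeholder offsets in ms
last_delta_at = request_start + 950

ttft_ms = first_delta_at - request_start     # time-to-first-token
stream_ms = last_delta_at - first_delta_at   # duration across all deltas
```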
Usage
handler = Braintrust.LangChainCallbacks.streaming_handler(
project_id: "proj_xxx",
tags: ["streaming"]
)
%{llm: ChatOpenAI.new!(%{model: "gpt-4", stream: true})}
|> LLMChain.new!()
|> LLMChain.add_callback(handler)
|> LLMChain.add_message(Message.new_user!("Hello!"))
|> LLMChain.run()