API Reference llm_composer v0.19.1
Modules
LlmComposer is responsible for interacting with a language model to perform chat-related operations,
such as running completions and generating responses.
Behaviour for implementing alternative cache modules.
Basic ETS cache.
Centralized cost information assembly module.
models.dev-specific pricing fetcher for OpenAI, Google, and Bedrock providers.
OpenRouter-specific pricing fetcher.
Centralized pricing retrieval and calculation module.
Struct for tracking costs and token usage of LLM API calls.
Defines the custom errors used by LlmComposer.
Defines a struct for representing a callable function within the context of a language model interaction.
Helper struct for function call actions.
Helpers for building assistant messages and tool-result messages when handling function (tool) calls returned by LLM providers.
Provides manual execution of function calls from LLM responses.
Provides helper functions for the LlmComposer module for handling language model responses.
Helper module for setting up the Tesla HTTP client and its options.
Normalized representation of a response coming from any provider.
Module that represents an arbitrary message for any LLM.
Behaviour for provider modules used by LlmComposer.
Protocol that turns provider-specific raw responses into LlmComposer.LlmResponse structs.
Behaviour for implementing provider routing strategies.
Simple provider router that implements exponential backoff for failed providers.
Protocol that normalizes provider stream payloads into LlmComposer.StreamChunk structs.
Provider implementation for Amazon Bedrock.
ExAws HTTP client for Bedrock using Mint by default, with optional Finch support.
Provider implementation for Google.
Provider implementation for Ollama.
Provider implementation for OpenAI chat completions API.
Provider implementation for OpenAI Responses API.
Provider implementation for OpenRouter.
Handles provider execution logic including fallback strategies, routing, and error handling for multiple provider configurations.
Defines the settings for configuring chat interactions with a language model.
Normalized representation of a streaming chunk emitted by any provider.