# llm_composer v0.19.4 - Table of Contents

## Pages

- [LlmComposer](readme.md)
- [LICENSE](license.md)
- Guides
  - [Providers](providers.md)
  - [Streaming](streaming.md)
  - [Cost Tracking](cost_tracking.md)
  - [Function Calls](function_calls.md)
  - [Provider Router](provider_router.md)
  - [Custom Providers](custom_provider.md)
  - [Configuration Reference](configuration.md)

## Modules

- Core
  - [LlmComposer](LlmComposer.md): `LlmComposer` is responsible for interacting with a language model to perform chat-related operations, such as running completions and generating responses.
  - [LlmComposer.Function](LlmComposer.Function.md): Defines a struct for representing a callable function within the context of a language model interaction.
  - [LlmComposer.LlmResponse](LlmComposer.LlmResponse.md): Normalized representation of a response coming from any provider.
  - [LlmComposer.Message](LlmComposer.Message.md): Module that represents an arbitrary message for any LLM.
  - [LlmComposer.Provider](LlmComposer.Provider.md): Behaviour for provider modules used by `LlmComposer`.
  - [LlmComposer.Settings](LlmComposer.Settings.md): Defines the settings for configuring chat interactions with a language model.
  - [LlmComposer.StreamChunk](LlmComposer.StreamChunk.md): Normalized representation of a streaming chunk emitted by any provider.
- Providers
  - [LlmComposer.Providers.Bedrock](LlmComposer.Providers.Bedrock.md): Provider implementation for Amazon Bedrock.
  - [LlmComposer.Providers.Bedrock.HttpClient](LlmComposer.Providers.Bedrock.HttpClient.md): ExAws HTTP client for Bedrock using Mint by default, with optional Finch support.
  - [LlmComposer.Providers.Google](LlmComposer.Providers.Google.md): Provider implementation for Google.
  - [LlmComposer.Providers.Ollama](LlmComposer.Providers.Ollama.md): Provider implementation for Ollama.
  - [LlmComposer.Providers.OpenAI](LlmComposer.Providers.OpenAI.md): Provider implementation for the OpenAI chat completions API.
  - [LlmComposer.Providers.OpenAIResponses](LlmComposer.Providers.OpenAIResponses.md): Provider implementation for the OpenAI Responses API.
  - [LlmComposer.Providers.OpenRouter](LlmComposer.Providers.OpenRouter.md): Provider implementation for OpenRouter.
- Response Parsing
  - [LlmComposer.ProviderResponse](LlmComposer.ProviderResponse.md): Protocol that turns provider-specific raw responses into `LlmComposer.LlmResponse` structs.
- Streaming
  - [LlmComposer.ProviderStreamChunk](LlmComposer.ProviderStreamChunk.md): Protocol that normalizes provider stream payloads into `LlmComposer.StreamChunk` structs.
- Function Calling
  - [LlmComposer.FunctionCall](LlmComposer.FunctionCall.md): Helper struct for function call actions.
  - [LlmComposer.FunctionCallHelpers](LlmComposer.FunctionCallHelpers.md): Helpers for building assistant messages and tool-result messages when handling function (tool) calls returned by LLM providers.
  - [LlmComposer.FunctionExecutor](LlmComposer.FunctionExecutor.md): Provides manual execution of function calls from LLM responses.
- Cost Tracking
  - [LlmComposer.Cost.CostAssembler](LlmComposer.Cost.CostAssembler.md): Centralized cost information assembly module.
  - [LlmComposer.Cost.Fetchers.ModelsDev](LlmComposer.Cost.Fetchers.ModelsDev.md): models.dev-specific pricing fetcher for OpenAI, Google, and Bedrock providers.
  - [LlmComposer.Cost.Fetchers.OpenRouter](LlmComposer.Cost.Fetchers.OpenRouter.md): OpenRouter-specific pricing fetcher.
  - [LlmComposer.Cost.Pricing](LlmComposer.Cost.Pricing.md): Centralized pricing retrieval and calculation module.
  - [LlmComposer.CostInfo](LlmComposer.CostInfo.md): Struct for tracking costs and token usage of LLM API calls.
- Routing
  - [LlmComposer.ProviderRouter](LlmComposer.ProviderRouter.md): Behaviour for implementing provider routing strategies.
  - [LlmComposer.ProviderRouter.Simple](LlmComposer.ProviderRouter.Simple.md): Simple provider router that implements exponential backoff for failed providers.
  - [LlmComposer.ProvidersRunner](LlmComposer.ProvidersRunner.md): Handles provider execution logic, including fallback strategies, routing, and error handling for multiple provider configurations.
- Cache
  - [LlmComposer.Cache.Behaviour](LlmComposer.Cache.Behaviour.md): Behaviour for plugging in alternative cache implementations.
  - [LlmComposer.Cache.Ets](LlmComposer.Cache.Ets.md): Basic ETS cache.
- Internals
  - [LlmComposer.Errors](LlmComposer.Errors.md): Defines custom errors used by `LlmComposer`.
  - [LlmComposer.Helpers](LlmComposer.Helpers.md): Provides helper functions for the `LlmComposer` module for handling language model responses.
  - [LlmComposer.HttpClient](LlmComposer.HttpClient.md): Helper module for setting up the Tesla HTTP client and its options.
- Exceptions
  - [LlmComposer.Errors.MissingKeyError](LlmComposer.Errors.MissingKeyError.md)