API Reference llm_composer v0.12.0
Modules
LlmComposer is responsible for interacting with a language model to perform chat-related operations,
such as running completions and executing functions based on the responses. The module provides
functionality to handle user messages, generate responses, and automatically execute functions as needed.
Cache behaviour that allows plugging in alternative cache implementations.
Basic ETS cache.
Centralized cost information assembly module.
models.dev-specific pricing fetcher for OpenAI and Google providers.
OpenRouter-specific pricing fetcher.
Centralized pricing retrieval and calculation module.
Struct for tracking costs and token usage of LLM API calls.
Defines the custom errors used across the library.
Defines a struct for representing a callable function within the context of a language model interaction.
Helper struct for function call actions.
Provides helper functions for the LlmComposer module, particularly for managing
function calls and handling language model responses.
Helper module for setting up the Tesla HTTP client and its options.
Module for parsing and conveniently handling LLM responses.
Module that represents an arbitrary message for any LLM.
Behaviour definition for LLM models.
Behaviour for implementing provider routing strategies.
Simple provider router that implements exponential backoff for failed providers.
Provider implementation for Amazon Bedrock.
Provider implementation for Google.
Provider implementation for Ollama.
Provider implementation for OpenAI.
Provider implementation for OpenRouter.
Handles provider execution logic including fallback strategies, routing, and error handling for multiple provider configurations.
Defines the settings for configuring chat interactions with a language model.
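To illustrate how the modules above fit together, here is a minimal usage sketch. The exact struct fields and function names shown (`LlmComposer.Settings` fields, `simple_chat/2`, the shape of the response) are assumptions inferred from the module summaries, not a verified API; consult the individual module docs for the authoritative signatures.

```elixir
# Hypothetical sketch -- field and function names are assumptions,
# not confirmed against the library's actual API.

# Configure chat interactions via the settings struct, picking one of
# the provider modules listed above.
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.OpenAI,
  provider_opts: [model: "gpt-4o-mini"],
  system_prompt: "You are a helpful assistant."
}

# Run a chat completion. Per the main module description, any functions
# attached to the interaction may be executed automatically based on
# the model's response.
{:ok, response} = LlmComposer.simple_chat(settings, "Hello!")
IO.inspect(response)
```

The provider-routing and fallback modules listed above suggest that multiple provider configurations can be supplied, with failed providers retried or skipped (e.g. via the exponential-backoff router), but the wiring for that is documented in those modules rather than shown here.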