API Reference llm_composer v0.13.0
Modules
LlmComposer is responsible for interacting with a language model to perform chat-related operations,
such as running completions and generating responses.
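A minimal usage sketch (hypothetical: the exact function and option names below, such as `simple_chat/2` and the `provider`/`provider_opts` keys, are assumptions and may differ from the current API; check the module docs for the authoritative signatures):

```elixir
# Configure the chat settings (field names assumed for illustration).
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.OpenAI,
  provider_opts: [model: "gpt-4o-mini"],
  system_prompt: "You are a helpful assistant."
}

# Run a simple completion against the configured provider.
{:ok, response} = LlmComposer.simple_chat(settings, "Hello!")
```

The settings struct is described further below under "Defines the settings for configuring chat interactions with a language model."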
Cache behaviour that allows plugging in alternative cache implementations.
Basic ETS cache.
Centralized cost information assembly module.
models.dev-specific pricing fetcher for OpenAI and Google providers.
OpenRouter-specific pricing fetcher.
Centralized pricing retrieval and calculation module.
Struct for tracking costs and token usage of LLM API calls.
Defines the custom errors used across the library.
Defines a struct for representing a callable function within the context of a language model interaction.
Helper struct for function call actions.
Helpers for building assistant messages and tool-result messages when handling function (tool) calls returned by LLM providers.
Provides manual execution of function calls from LLM responses.
Provides helper functions for the LlmComposer module for handling language model responses.
Helper module for setting up the Tesla HTTP client and its options.
Module to parse and conveniently handle LLM responses.
Module that represents an arbitrary message for any LLM.
Behaviour definition for LLM models.
Behaviour for implementing provider routing strategies.
Simple provider router that implements exponential backoff for failed providers.
Provider implementation for Amazon Bedrock.
Provider implementation for Google.
Provider implementation for Ollama.
Provider implementation for OpenAI.
Provider implementation for OpenRouter.
Handles provider execution logic including fallback strategies, routing, and error handling for multiple provider configurations.
Defines the settings for configuring chat interactions with a language model.