LlmComposer (llm_composer v0.8.0)
LlmComposer is responsible for interacting with a language model to perform chat-related operations, such as running completions and executing functions based on the responses. The module provides functionality to handle user messages, generate responses, and automatically execute functions as needed.
Example Usage
To use LlmComposer for a simple chat interaction with a language model, define a settings configuration and initiate a chat:
# Define the settings for your LlmComposer instance
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.OpenAI,
  provider_opts: [model: "gpt-4o-mini"],
  system_prompt: "You are a helpful assistant.",
  user_prompt_prefix: "",
  auto_exec_functions: false,
  functions: [],
  api_key: ""
}
# Initiate a simple chat interaction with the defined settings
{:ok, response} = LlmComposer.simple_chat(settings, "Hello, how are you?")
# Print the main response from the assistant
IO.inspect(response.main_response)
Output Example
Running this code might produce the following log and output:
16:41:07.594 [debug] input_tokens=18, output_tokens=9
%LlmComposer.Message{
  type: :assistant,
  content: "Hello! How can I assist you today?"
}
In this example, the simple_chat/2 function sends the user's message to the language model using the provided settings, and the response is displayed as the assistant's reply.
Summary
Functions
parse_stream_response(stream)
Processes a raw stream response and returns a parsed stream of message content.
run_completion(settings, messages, previous_response)
Runs the completion process by sending messages to the language model and handling the response.
simple_chat(settings, msg)
Initiates a simple chat interaction with the language model.
Types
@type messages() :: [LlmComposer.Message.t()]
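For illustration, a messages() value is simply a list of LlmComposer.Message structs, such as a short conversation history built from the :user and :assistant message types shown in the examples on this page:

messages = [
  %LlmComposer.Message{type: :user, content: "What is Elixir?"},
  %LlmComposer.Message{type: :assistant, content: "A functional language that runs on the Erlang VM."},
  %LlmComposer.Message{type: :user, content: "What is it commonly used for?"}
]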
Functions
@spec parse_stream_response(Enumerable.t()) :: Enumerable.t()
Processes a raw stream response and returns a parsed stream of message content.
Parameters
stream: The raw stream object from the LLM response.
Returns
- A stream that yields parsed content strings, filtering out "[DONE]" markers and decode errors.
Example
# Streaming has been tested with the Finch adapter; it may work with other adapters.
Application.put_env(:llm_composer, :tesla_adapter, {Tesla.Adapter.Finch, name: MyFinch})
{:ok, finch} = Finch.start_link(name: MyFinch)
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.Ollama,
  provider_opts: [model: "llama3.2"],
  stream_response: true
}

messages = [
  %LlmComposer.Message{type: :user, content: "Tell me a short story"}
]
{:ok, res} = LlmComposer.run_completion(settings, messages)
# Process the stream and print each parsed chunk
res.stream
|> LlmComposer.parse_stream_response()
|> Enum.each(fn parsed_data ->
  content = get_in(parsed_data, ["message", "content"])
  if content, do: IO.write(content)
end)
@spec run_completion(LlmComposer.Settings.t(), messages(), LlmComposer.LlmResponse.t() | nil) :: LlmComposer.Helpers.action_result()
Runs the completion process by sending messages to the language model and handling the response.
Parameters
settings: The settings for the language model, including prompts, model options, and functions.
messages: The list of messages to be sent to the language model.
previous_response (optional): The previous response object, if any, used for context.
Returns
- A tuple containing :ok with the response, or :error if the model call fails.
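Example
A minimal sketch of calling run_completion directly, reusing the Settings and Message structs shown above; the model name, system prompt, and message content here are placeholder values:

settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.OpenAI,
  provider_opts: [model: "gpt-4o-mini"],
  system_prompt: "You are a helpful assistant.",
  api_key: ""
}

messages = [
  %LlmComposer.Message{type: :user, content: "Summarize the plot of Hamlet in one sentence."}
]

# Send the messages to the model and inspect the assistant's reply
{:ok, response} = LlmComposer.run_completion(settings, messages)
IO.inspect(response.main_response)

# Per the @spec, a previous LlmComposer.LlmResponse can optionally be passed
# as a third argument to provide context from an earlier call.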
@spec simple_chat(LlmComposer.Settings.t(), String.t()) :: LlmComposer.Helpers.action_result()
Initiates a simple chat interaction with the language model.
Parameters
settings: The settings for the language model, including prompts and options.
msg: The user message to be sent to the language model.
Returns
- The result of the language model's response, which may include function executions if specified.