LlmComposer (llm_composer v0.19.1)

LlmComposer is responsible for interacting with a language model to perform chat-related operations, such as running completions and generating responses.

Example Usage

To use LlmComposer for creating a simple chat interaction with a language model, define a settings configuration and initiate a chat:

# Define the settings for your LlmComposer instance
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]}
  ],
  system_prompt: "You are a helpful assistant.",
  user_prompt_prefix: "",
  api_key: ""
}

# Initiate a simple chat interaction with the defined settings
{:ok, response} = LlmComposer.simple_chat(settings, "Hello, how are you?")

# Print the main response from the assistant
IO.inspect(response.main_response)

Output Example

Running this code might produce the following log and output:

16:41:07.594 [debug] input_tokens=18, output_tokens=9
%LlmComposer.Message{
  type: :assistant,
  content: "Hello! How can I assist you today?"
}

In this example, the simple_chat/2 function sends the user's message to the language model using the provided settings, and the response is displayed as the assistant's reply.

Summary

Functions

Parses a provider stream into normalized LlmComposer.StreamChunk structs.

Runs the completion process by sending messages to the language model and handling the response.

Initiates a simple chat interaction with the language model.

Functions

parse_stream_response(stream, provider, opts \\ [])

@spec parse_stream_response(Enumerable.t(), atom(), keyword()) :: Enumerable.t()

Parses a provider stream into normalized LlmComposer.StreamChunk structs.

Parameters

  • stream: The raw streaming enumerable produced by the provider response.
  • provider: The atom identifying the provider that produced the stream.
  • opts: Additional parsing options (currently unused).

Returns

  • A stream of %LlmComposer.StreamChunk{} values that include the original raw chunk, categorized event type, optional usage data, and normalized metadata.

Example

  {:ok, res} = LlmComposer.run_completion(settings, messages)

  res.stream
  |> LlmComposer.parse_stream_response(res.provider)
  |> Enum.each(fn chunk ->
    IO.write(chunk.text || "")
  end)

run_completion(settings, messages, previous_response \\ nil)

@spec run_completion(
  LlmComposer.Settings.t(),
  messages(),
  LlmComposer.LlmResponse.t() | nil
) ::
  {:ok, LlmComposer.LlmResponse.t()} | {:error, term()}

Runs the completion process by sending messages to the language model and handling the response.

Parameters

  • settings: The settings for the language model, including prompts and model options.
  • messages: The list of messages to be sent to the language model.
  • previous_response (optional): The previous response object, if any, used for context.

Returns

  • {:ok, response} with the LlmComposer.LlmResponse, or {:error, reason} if the model call fails.
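A minimal sketch of a multi-turn call. The %LlmComposer.Message{} fields (:type, :content) follow the struct shown in the output example above; the message history below is illustrative only:

```elixir
# Settings as in the example at the top of this page.
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]}
  ],
  system_prompt: "You are a helpful assistant."
}

# Build the conversation as a list of messages, ending with the
# user's latest turn.
messages = [
  %LlmComposer.Message{type: :user, content: "What is Elixir?"},
  %LlmComposer.Message{type: :assistant, content: "Elixir is a functional language on the BEAM."},
  %LlmComposer.Message{type: :user, content: "Who created it?"}
]

case LlmComposer.run_completion(settings, messages) do
  {:ok, response} -> IO.inspect(response.main_response)
  {:error, reason} -> IO.inspect(reason, label: "model call failed")
end
```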

simple_chat(settings, msg)

@spec simple_chat(LlmComposer.Settings.t(), String.t()) ::
  {:ok, LlmComposer.LlmResponse.t()} | {:error, term()}

Initiates a simple chat interaction with the language model.

Parameters

  • settings: The settings for the language model, including prompts and options.
  • msg: The user message to be sent to the language model.

Returns

  • {:ok, response} with the model's reply, or {:error, reason} on failure.
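Because simple_chat/2 returns a result tuple, callers can branch on failure explicitly. A brief sketch, assuming settings is the struct defined in the example at the top of this page (the .content field on the message follows the output example shown there):

```elixir
case LlmComposer.simple_chat(settings, "Summarize Elixir in one sentence.") do
  {:ok, response} ->
    # main_response is an %LlmComposer.Message{}; print its content.
    IO.puts(response.main_response.content)

  {:error, reason} ->
    IO.inspect(reason, label: "chat failed")
end
```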