ReqLLM.Context (ReqLLM v1.0.0-rc.5)


Context represents a conversation history as a collection of messages.

Provides canonical message constructor functions that can be imported for clean, readable message creation. Supports standard roles: :user, :assistant, :system, and :tool.

Example

import ReqLLM.Context

context = Context.new([
  system("You are a helpful assistant"),
  user("What's the weather like?"),
  assistant("I'll check that for you")
])

Context.validate!(context)

Summary

Functions

append(ctx, msg) — Append a message to the context.

assistant(content, meta \\ %{}) — Shortcut for an assistant message; accepts a string or content parts list.

assistant_tool_call(name, input, opts \\ []) — Build an assistant message with a tool call.

assistant_tool_calls(calls, meta \\ %{}) — Build an assistant message with multiple tool calls.

assistant_with_tools(text, tool_calls, meta \\ %{}) — Build an assistant message from collected text and tool calls.

build(role, content, meta \\ %{}) — Build a message from role and content parts (metadata optional).

concat(ctx, other) — Concatenate two contexts.

execute_and_append_tools(context, tool_calls, available_tools) — Execute a list of tool calls and append their results to the context.

from_json(json_string) — Deserialize a JSON string or decoded map back into a Context struct.

from_json!(input) — Bang version of from_json/1 that raises on error.

merge_response(context, response) — Merge the original context with a response, returning the response with an updated context.

new(list \\ []) — Create a new Context from a list of messages (defaults to empty).

normalize(prompt, opts \\ []) — Normalize any "prompt-ish" input into a validated ReqLLM.Context.

normalize!(prompt, opts \\ []) — Bang version of normalize/2 that raises on error.

prepend(ctx, msg) — Prepend a message to the context.

push_assistant(ctx, content, meta \\ %{}) — Append an assistant message to the context.

push_system(ctx, content, meta \\ %{}) — Prepend a system message to the context.

push_user(ctx, content, meta \\ %{}) — Append a user message to the context.

system(content, meta \\ %{}) — Shortcut for a system message; accepts a string or content parts list.

text(role, content, meta \\ %{}) — Build a text-only message for the given role.

to_list(context) — Return the underlying message list.

tool(content, meta \\ %{}) — Shortcut for a tool message; accepts a string or content parts list.

tool_result_message(tool_name, tool_call_id, output, meta \\ %{}) — Build a tool result message.

user(content, meta \\ %{}) — Shortcut for a user message; accepts a string or content parts list.

validate(context) — Validate context: ensures valid messages and at most one system message.

validate!(context) — Bang version of validate/1; raises ReqLLM.Error.Validation.Error on invalid context.

with_image(role, text, url, meta \\ %{}) — Build a message with text and an image URL for the given role.

wrap(ctx, model) — Wrap a context with a provider-specific tagged struct.

Types

t()

@type t() :: %ReqLLM.Context{messages: [ReqLLM.Message.t()]}

Functions

append(ctx, msg)

@spec append(t(), ReqLLM.Message.t()) :: t()
@spec append(t(), [ReqLLM.Message.t()]) :: t()

Append a message to the context.
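Examples

A minimal usage sketch (illustrative; uses the `user/2` and `assistant/2` constructors documented below):

```elixir
import ReqLLM.Context

ctx = ReqLLM.Context.new([user("Hello")])
ctx = ReqLLM.Context.append(ctx, assistant("Hi! How can I help?"))
# ctx.messages now holds the user and assistant messages in order
```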

assistant(content, meta \\ %{})

@spec assistant([ReqLLM.Message.ContentPart.t()] | String.t(), map()) ::
  ReqLLM.Message.t()

Shortcut for an assistant message; accepts a string or content parts list.

assistant_tool_call(name, input, opts \\ [])

@spec assistant_tool_call(String.t(), term(), keyword()) :: ReqLLM.Message.t()

Build an assistant message with a tool call.
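Examples

An illustrative sketch; the tool name and input map are hypothetical:

```elixir
msg = ReqLLM.Context.assistant_tool_call("get_weather", %{"city" => "Paris"})
# Returns a %ReqLLM.Message{role: :assistant, ...} carrying the tool call
```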

assistant_tool_calls(calls, meta \\ %{})

@spec assistant_tool_calls(
  [%{id: String.t(), name: String.t(), input: term()}],
  map()
) ::
  ReqLLM.Message.t()

Build an assistant message with multiple tool calls.

assistant_with_tools(text, tool_calls, meta \\ %{})

@spec assistant_with_tools(String.t(), [map()], map()) :: ReqLLM.Message.t()

Build an assistant message from collected text and tool calls.

Convenience function for creating assistant messages that may contain both text content and tool calls from streaming responses.

Parameters

  • text - Text content from the response
  • tool_calls - List of tool call maps with :id, :name, :arguments
  • meta - Optional metadata map

Returns

Assistant message with appropriate content parts.
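Examples

An illustrative sketch combining streamed text with collected tool calls; the tool-call map shape (`:id`, `:name`, `:arguments`) follows the parameter docs above:

```elixir
tool_calls = [%{id: "call_1", name: "get_weather", arguments: %{"city" => "Paris"}}]
msg = ReqLLM.Context.assistant_with_tools("Let me check the weather.", tool_calls)
ctx = ReqLLM.Context.append(ReqLLM.Context.new(), msg)
```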

build(role, content, meta \\ %{})

Build a message from role and content parts (metadata optional).

concat(ctx, other)

@spec concat(t(), t()) :: t()

Concatenate two contexts.
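Examples

A brief sketch (ordering — `ctx` messages followed by `other` messages — is an assumption consistent with the argument order):

```elixir
history = ReqLLM.Context.new([ReqLLM.Context.user("Hi")])
followup = ReqLLM.Context.new([ReqLLM.Context.user("One more thing...")])
combined = ReqLLM.Context.concat(history, followup)
```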

execute_and_append_tools(context, tool_calls, available_tools)

@spec execute_and_append_tools(t(), [map()], [ReqLLM.Tool.t()]) :: t()

Execute a list of tool calls and append their results to the context.

Takes a list of tool call maps (with :id, :name, :arguments keys) and a list of available tools, executes each call, and appends the results as tool messages.

Parameters

  • context - The context to append results to
  • tool_calls - List of tool call maps with :id, :name, :arguments
  • available_tools - List of ReqLLM.Tool structs to execute against

Returns

Updated context with tool result messages appended.

Examples

tool_calls = [%{id: "call_1", name: "calculator", arguments: %{"operation" => "add", "a" => 2, "b" => 3}}]
context = Context.execute_and_append_tools(context, tool_calls, tools)

from_json(json_string)

@spec from_json(String.t() | map()) :: {:ok, t()} | {:error, term()}

Deserialize a JSON string or decoded map back into a Context struct.

Takes either a JSON string or a map (from Jason.decode!/1) and reconstructs a proper Context struct by leveraging the existing normalize/2 function.

Examples

# From JSON string
context = Context.new([Context.user("Hello")])
json_string = Jason.encode!(context)
{:ok, restored_context} = Context.from_json(json_string)

# From already decoded map
decoded_map = Jason.decode!(json_string)
{:ok, restored_context} = Context.from_json(decoded_map)

from_json!(input)

@spec from_json!(String.t() | map()) :: t()

Bang version of from_json/1 that raises on error.

merge_response(context, response)

@spec merge_response(t(), ReqLLM.Response.t()) :: ReqLLM.Response.t()

Merges the original context with a response, returning the response with an updated context.

Takes a context and a response, then creates a new context containing the original messages plus the assistant response message.

Parameters

  • context - Original ReqLLM.Context
  • response - ReqLLM.Response containing the assistant message

Returns

  • Updated response with merged context

Examples

context = ReqLLM.Context.new([user("Hello")])
response = %ReqLLM.Response{message: assistant("Hi there!")}
updated_response = ReqLLM.Context.merge_response(context, response)
# updated_response.context now contains both user and assistant messages

new(list \\ [])

@spec new([ReqLLM.Message.t()]) :: t()

Create a new Context from a list of messages (defaults to empty).

normalize(prompt, opts \\ [])

@spec normalize(
  String.t()
  | ReqLLM.Message.t()
  | t()
  | map()
  | [String.t() | ReqLLM.Message.t() | t() | map()],
  keyword()
) :: {:ok, t()} | {:error, term()}

Normalize any "prompt-ish" input into a validated ReqLLM.Context.

Accepts various input types and converts them to a proper Context struct:

  • String: converts to user message
  • Message struct: wraps in Context
  • Context struct: passes through
  • List: processes each item and creates Context from all messages
  • Loose maps: converts to Message if they have role/content keys

Options

  • :system_prompt - String to add as system message if none exists
  • :validate - Boolean to run validation (default: true)
  • :convert_loose - Boolean to allow loose maps with role/content (default: true)

Examples

# String to user message
Context.normalize("Hello")
#=> {:ok, %Context{messages: [%Message{role: :user, content: [%ContentPart{text: "Hello"}]}]}}

# Add system prompt
Context.normalize("Hello", system_prompt: "You are helpful")
#=> {:ok, %Context{messages: [%Message{role: :system}, %Message{role: :user}]}}

# List of mixed types
Context.normalize([%Message{role: :system}, "Hello"])

normalize!(prompt, opts \\ [])

@spec normalize!(
  String.t()
  | ReqLLM.Message.t()
  | t()
  | map()
  | [String.t() | ReqLLM.Message.t() | t() | map()],
  keyword()
) :: t()

Bang version of normalize/2 that raises on error.

prepend(ctx, msg)

@spec prepend(t(), ReqLLM.Message.t()) :: t()

Prepend a message to the context.

push_assistant(ctx, content, meta \\ %{})

@spec push_assistant(t(), String.t() | [ReqLLM.Message.ContentPart.t()], map()) :: t()

Append an assistant message to the context.

push_system(ctx, content, meta \\ %{})

@spec push_system(t(), String.t() | [ReqLLM.Message.ContentPart.t()], map()) :: t()

Prepend a system message to the context.

push_user(ctx, content, meta \\ %{})

@spec push_user(t(), String.t() | [ReqLLM.Message.ContentPart.t()], map()) :: t()

Append a user message to the context.
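Examples

The `push_*` helpers pipe naturally; a sketch of building a conversation (note that `push_system/3` prepends, so the system message ends up first regardless of call order):

```elixir
ctx =
  ReqLLM.Context.new()
  |> ReqLLM.Context.push_user("Summarize this paragraph.")
  |> ReqLLM.Context.push_assistant("Sure - paste it in.")
  |> ReqLLM.Context.push_system("You are terse.")
```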

system(content, meta \\ %{})

Shortcut for a system message; accepts a string or content parts list.

text(role, content, meta \\ %{})

@spec text(atom(), String.t(), map()) :: ReqLLM.Message.t()

Build a text-only message for the given role.
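Examples

A one-line sketch; the metadata map is hypothetical, and the equivalence to `user/2` is an assumption based on the shortcut constructors above:

```elixir
msg = ReqLLM.Context.text(:user, "Hello", %{source: :cli})
# Presumably equivalent to ReqLLM.Context.user("Hello", %{source: :cli})
```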

to_list(context)

@spec to_list(t()) :: [ReqLLM.Message.t()]

Return the underlying message list.

tool(content, meta \\ %{})

Shortcut for a tool message; accepts a string or content parts list.

tool_result_message(tool_name, tool_call_id, output, meta \\ %{})

@spec tool_result_message(String.t(), String.t(), term(), map()) :: ReqLLM.Message.t()

Build a tool result message.
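Examples

A sketch of recording a tool result by hand (the id `"call_1"` mirrors the tool-call examples above; normally `execute_and_append_tools/3` does this for you):

```elixir
msg = ReqLLM.Context.tool_result_message("calculator", "call_1", %{result: 5})
ctx = ReqLLM.Context.append(ReqLLM.Context.new(), msg)
```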

user(content, meta \\ %{})

Shortcut for a user message; accepts a string or content parts list.

validate(context)

@spec validate(t()) :: {:ok, t()} | {:error, String.t()}

Validate context: ensures valid messages and at most one system message.

validate!(context)

@spec validate!(t()) :: t()

Bang version of validate/1; raises ReqLLM.Error.Validation.Error on invalid context.

with_image(role, text, url, meta \\ %{})

@spec with_image(atom(), String.t(), String.t(), map()) :: ReqLLM.Message.t()

Build a message with text and an image URL for the given role.
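Examples

An illustrative sketch with a hypothetical image URL:

```elixir
msg = ReqLLM.Context.with_image(:user, "What is in this picture?", "https://example.com/cat.png")
ctx = ReqLLM.Context.append(ReqLLM.Context.new(), msg)
```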

wrap(ctx, model)

@spec wrap(t(), ReqLLM.Model.t()) :: term()

Wrap a context with provider-specific tagged struct.

Takes a ReqLLM.Context and ReqLLM.Model and wraps the context in the appropriate provider-specific struct for encoding/decoding.

Parameters

  • ctx - The ReqLLM.Context to wrap
  • model - The ReqLLM.Model used to select the provider-specific struct

Returns

  • Provider-specific tagged struct ready for encoding

Examples

context = ReqLLM.Context.new([ReqLLM.Context.user("Hello")])
model = ReqLLM.Model.from("anthropic:claude-3-haiku-20240307")
tagged = ReqLLM.Context.wrap(context, model)
#=> %ReqLLM.Providers.Anthropic.Context{context: context}