OpEx.Chat (opex v0.2.4)
Chat interface with tool calling support and customizable hooks.
This module provides a flexible chat loop that can be customized via hooks for:
- Custom tool execution
- Message persistence
- Tool result handling
MCP tools are handled automatically via the provided MCP client PIDs.
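As a sketch of the overall flow: a session is created first, then the chat loop is run against it. The function names `new/1` and `chat/2` below are assumptions for illustration; the extracted docs only give the function summaries, not their signatures.

```elixir
# Hypothetical usage sketch; new/1 and chat/2 are assumed names
# based on the summaries "Creates a new chat session" and
# "Executes a chat conversation with tool calling support".
session = OpEx.Chat.new(mcp_clients: [], custom_tools: [])

{:ok, response} =
  OpEx.Chat.chat(session,
    model: "gpt-4o",
    messages: [%{role: "user", content: "Hello!"}]
  )
```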
Summary
Functions
Executes a chat conversation with tool calling support.
Gets all available tools (MCP + custom) for this chat session. Useful for debugging and testing.
Creates a new chat session.
Functions
Executes a chat conversation with tool calling support.
Options
- `:model` - Model to use (required)
- `:messages` - List of messages (required)
- `:system_prompt` - System prompt (optional)
- `:execute_tools` - Whether to execute tools automatically (default: `true`)
- `:context` - Arbitrary context passed to hooks (default: `%{}`)
- `:temperature` - Controls randomness (0.0-2.0; optional; defaults to the API default of 1.0)
- `:parallel_tool_calls` - Whether to allow parallel tool calls (boolean, optional)
Gets all available tools (MCP + custom) for this chat session. Useful for debugging and testing.
Creates a new chat session.
Options
- `:mcp_clients` - List of `{:ok, pid}` or `{:error, reason}` tuples for MCP clients
- `:custom_tools` - List of custom tool definitions in OpenAI format
- `:rejected_tools` - List of tool names to exclude from MCP tools
- `:custom_tool_executor` - Function `(tool_name, args, context) -> {:ok, result} | {:error, reason}`
- `:on_assistant_message` - Hook `(message, context) -> :ok | {:ok, context} | :stop | {:stop, context}`
- `:on_tool_result` - Hook `(tool_call_id, tool_name, result, context) -> :ok | {:ok, context} | :stop | {:stop, context}`. When this hook returns `:stop` or `{:stop, context}`, the tool execution loop stops immediately and no further LLM calls are made. The response will include `_metadata.stopped_by_hook = true`.
- `:tool_result_role` - Role to use for tool result messages (default: `"tool"`; can be `"user"` for MCP compatibility)
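A sketch of wiring up a custom tool and hooks, assuming `new/1` is the session constructor (the docs summarize it only as "Creates a new chat session"). The tool name `"add"` and the stop condition are illustrative:

```elixir
# Custom tool definition in OpenAI function-calling format.
add_tool = %{
  "type" => "function",
  "function" => %{
    "name" => "add",
    "description" => "Adds two numbers",
    "parameters" => %{
      "type" => "object",
      "properties" => %{
        "a" => %{"type" => "number"},
        "b" => %{"type" => "number"}
      }
    }
  }
}

# Executor matching the (tool_name, args, context) contract above.
executor = fn
  "add", %{"a" => a, "b" => b}, _context -> {:ok, a + b}
  name, _args, _context -> {:error, "unknown tool: #{name}"}
end

# Returning :stop here halts the tool loop immediately; the response
# then carries _metadata.stopped_by_hook = true.
on_tool_result = fn _id, tool_name, _result, context ->
  if tool_name == "dangerous_tool", do: :stop, else: {:ok, context}
end

session =
  OpEx.Chat.new(
    mcp_clients: [],
    custom_tools: [add_tool],
    custom_tool_executor: executor,
    on_tool_result: on_tool_result,
    tool_result_role: "tool"
  )
```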