Translates between provider-neutral tool structs and provider-specific wire formats.
Three operations, three providers:
- encode_definitions/2 — Converts [LlmToolkit.Tool.t()] into the provider's request payload format for tool declarations.
- decode_tool_calls/2 — Extracts [LlmToolkit.Tool.Call.t()] from a raw provider response body.
- encode_result/2 — Formats a LlmToolkit.Tool.Result.t() as the provider's expected tool-result message.
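A hedged end-to-end sketch of how the three functions fit together. The struct fields shown, the `run_tool/1` executor, and the unbound `response_body` are illustrative assumptions, not part of this module:

```elixir
tools = [
  %LlmToolkit.Tool{
    name: "get_weather",
    description: "Look up current weather for a city",
    parameters: %{
      "type" => "object",
      "properties" => %{"city" => %{"type" => "string"}},
      "required" => ["city"]
    },
    metadata: %{}
  }
]

# 1. Declare the tools in the outgoing request payload.
tool_payload = LlmCore.Tool.Codec.encode_definitions(tools, :openai)

# 2. response_body is the decoded JSON map returned by the provider's chat endpoint.
calls = LlmCore.Tool.Codec.decode_tool_calls(response_body, :openai)

# 3. Execute each call (run_tool/1 is a hypothetical executor) and encode
#    its result as a message for the next turn of the conversation.
result_messages =
  for call <- calls do
    call
    |> run_tool()
    |> LlmCore.Tool.Codec.encode_result(:openai)
  end
```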
Provider Format Differences
| Aspect | OpenAI | Anthropic | Ollama |
|---|---|---|---|
| Parameter schema key | parameters | input_schema | parameters |
| Definition wrapper | {type, function: {...}} | {name, description, input_schema} (flat) | {type, function: {...}} |
| Call arguments | JSON string | object | object |
| Call ID | present | present | absent |
| Result role | "tool" | "user" | "tool" |
| Result ID ref | tool_call_id | tool_use_id | (none) |
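The "Call arguments" and "Call ID" rows are the easiest to trip over. Illustrative payload fragments, matching the decode expectations documented below:

```elixir
# OpenAI: each tool_calls entry carries an id and a JSON-string "arguments".
openai_entry = %{
  "id" => "call_abc123",
  "function" => %{"name" => "get_weather", "arguments" => ~s({"city":"Lisbon"})}
}

# Anthropic: a "tool_use" content block with an already-decoded "input" object.
anthropic_block = %{
  "type" => "tool_use",
  "id" => "toolu_01",
  "name" => "get_weather",
  "input" => %{"city" => "Lisbon"}
}

# Ollama: object-style arguments like Anthropic, but with no call id at all.
ollama_entry = %{
  "function" => %{"name" => "get_weather", "arguments" => %{"city" => "Lisbon"}}
}
```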
Summary
Functions
Decodes tool call requests from a raw provider response body into
provider-neutral LlmToolkit.Tool.Call structs.
Encodes a list of provider-neutral tool definitions into the wire format expected by the given provider.
Encodes a tool result into the message format the provider expects when feeding results back into the conversation.
Types
Functions
@spec decode_tool_calls(map(), provider()) :: [LlmToolkit.Tool.Call.t()]
Decodes tool call requests from a raw provider response body into
provider-neutral LlmToolkit.Tool.Call structs.
OpenAI
Expects response_body["choices"][0]["message"]["tool_calls"] where each
entry has "id", "function" => %{"name" => ..., "arguments" => json_string}.
Anthropic
Expects response_body["content"] to contain blocks with
"type" => "tool_use", each having "id", "name", and "input" (object).
Ollama
Expects response_body["message"]["tool_calls"] where each entry has
"function" => %{"name" => ..., "arguments" => object}. No IDs.
Returns an empty list when no tool calls are found.
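A minimal sketch for the Anthropic shape. The response body is hand-built, and the exact fields on the returned LlmToolkit.Tool.Call structs should be checked against that struct's own docs:

```elixir
response_body = %{
  "content" => [
    %{"type" => "text", "text" => "Let me check the weather."},
    %{
      "type" => "tool_use",
      "id" => "toolu_01",
      "name" => "get_weather",
      "input" => %{"city" => "Lisbon"}
    }
  ]
}

# Only the "tool_use" block yields a call; the text block is ignored.
[call] = LlmCore.Tool.Codec.decode_tool_calls(response_body, :anthropic)
```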
@spec encode_definitions([LlmToolkit.Tool.t()], provider()) :: [map()]
Encodes a list of provider-neutral tool definitions into the wire format expected by the given provider.
Examples
iex> tool = %LlmToolkit.Tool{name: "ping", description: "Ping", parameters: %{"type" => "object"}, metadata: %{}}
iex> [encoded] = LlmCore.Tool.Codec.encode_definitions([tool], :openai)
iex> encoded["type"]
"function"
iex> encoded["function"]["name"]
"ping"
@spec encode_result(LlmToolkit.Tool.Result.t(), provider()) :: map()
Encodes a tool result into the message format the provider expects when feeding results back into the conversation.
OpenAI
%{"role" => "tool", "tool_call_id" => id, "content" => content}Anthropic
%{"role" => "user", "content" => [%{"type" => "tool_result", "tool_use_id" => id, "content" => content}]}Ollama
%{"role" => "tool", "content" => content}