ReqLLM.Context.Codec protocol (ReqLLM v1.0.0-rc.3)
Protocol for encoding canonical ReqLLM.Context structures to provider-specific request JSON.
This protocol handles the request encoding phase, converting ReqLLM contexts and models into the JSON format expected by each provider's API.
Default Implementation
The Map implementation provides a baseline OpenAI-compatible request format that works
for most providers including OpenAI, Groq, OpenRouter, and xAI:
ReqLLM.Context.Codec.encode_request(context, model)
#=> %{
# model: "gpt-4",
# messages: [%{role: "user", content: "Hello"}],
# stream: true,
# max_tokens: 1000,
# temperature: 0.7,
# tools: [%{type: "function", function: %{name: "...", ...}}]
# }
Provider-Specific Overrides
Providers that require different formats can implement their own protocol:
defimpl ReqLLM.Context.Codec, for: MyProvider.Context do
def encode_request(context, model) do
# Custom encoding logic for provider-specific format
end
end
Tool Encoding
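As a hypothetical illustration only (the MyProvider.Context struct, its field names, and the target wire format below are invented for this sketch, not part of ReqLLM), a provider that expects a top-level system string and a prompt list instead of OpenAI-style messages might implement the protocol like this:
defimpl ReqLLM.Context.Codec, for: MyProvider.Context do
  # Hypothetical encoding: assumes the context exposes a `messages` list
  # of maps with :role and :content, and the model exposes a `name` field.
  def encode_request(context, model) do
    # Split system messages out from the conversation turns
    {system, rest} = Enum.split_with(context.messages, &(&1.role == :system))

    %{
      model: model.name,
      system: system |> Enum.map(& &1.content) |> Enum.join("\n"),
      prompt: Enum.map(rest, &%{role: to_string(&1.role), text: &1.content})
    }
  end
end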
Tools are automatically converted to OpenAI function format using ReqLLM.Schema.to_openai_format/1,
which handles parameter schema conversion from keyword lists to JSON Schema.
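For intuition, a tool whose parameters are declared as a keyword list such as [location: [type: :string, required: true]] corresponds to the OpenAI function format roughly as follows (illustrative shape only; the tool name and description are placeholders, and ReqLLM.Schema.to_openai_format/1 performs the actual conversion):
# Illustrative approximation of the encoded tool entry:
%{
  type: "function",
  function: %{
    name: "get_weather",
    description: "...",
    parameters: %{
      type: "object",
      properties: %{location: %{type: "string"}},
      required: ["location"]
    }
  }
}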
Summary
Functions
Encode context and model to provider-specific request JSON.
Types
@type t() :: term()
All the types that implement this protocol.
Functions
@spec encode_request(ReqLLM.Context.t(), ReqLLM.Model.t()) :: term()
Encode context and model to provider-specific request JSON.