ReqLLM.Response.Codec protocol (ReqLLM v1.0.0-rc.3)
Protocol for decoding provider responses and SSE events to canonical ReqLLM structures.
This protocol handles both non-streaming response decoding and streaming SSE event processing, converting provider-specific formats to canonical ReqLLM structures.
Default Implementation
The Map implementation provides baseline OpenAI-compatible decoding for common providers
that use the ChatCompletions API format (OpenAI, Groq, OpenRouter, xAI):
# Non-streaming response decoding
ReqLLM.Response.Codec.decode_response(response_json, model)
#=> {:ok, %ReqLLM.Response{message: %ReqLLM.Message{...}, usage: %{...}}}
# Streaming SSE event decoding
ReqLLM.Response.Codec.decode_sse_event(sse_event, model)
#=> [%ReqLLM.StreamChunk{type: :content, text: "Hello"}]

Provider-Specific Overrides
Providers with unique response formats implement their own protocol:
defimpl ReqLLM.Response.Codec, for: MyProvider.Response do
  def decode_response(data, model) do
    # Custom decoding logic for provider-specific format
  end

  def decode_sse_event(event, model) do
    # Custom SSE event processing
  end
end

Response Pipeline
- Raw provider response → decode_response/2 → ReqLLM.Response struct
- SSE event → decode_sse_event/2 → list of ReqLLM.StreamChunk structs
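The streaming half of this pipeline can be sketched without the library itself. The example below is a minimal, hypothetical illustration of the baseline ChatCompletions-style SSE decoding: it uses plain maps in place of the real `ReqLLM.StreamChunk` struct, omits the `model` argument the protocol function takes, and the `"choices"`/`"delta"`/`"content"` field names are assumed from the OpenAI streaming format rather than taken from ReqLLM's source.

```elixir
defmodule SseDecodeSketch do
  # A delta carrying text becomes a single content chunk; the chunk shape
  # mirrors the %ReqLLM.StreamChunk{type: :content, text: ...} example above.
  def decode_sse_event(%{"choices" => [%{"delta" => %{"content" => text}} | _]})
      when is_binary(text) do
    [%{type: :content, text: text}]
  end

  # Deltas without content (role announcements, finish markers, keep-alives)
  # contribute no chunks to the stream.
  def decode_sse_event(_event), do: []
end
```

Returning a list (possibly empty) rather than a single chunk is what lets one SSE event map to zero, one, or several stream chunks.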
Summary
Functions
Decode provider response data with model context.
Decode SSE event data into StreamChunks with model context for streaming responses.
Types
@type t() :: term()
All the types that implement this protocol.
Functions
@spec decode_response(t(), ReqLLM.Model.t()) :: {:ok, ReqLLM.Response.t()} | {:error, term()}
Decode provider response data with model context.
@spec decode_sse_event(t(), ReqLLM.Model.t()) :: [ReqLLM.StreamChunk.t()]
Decode SSE event data into StreamChunks with model context for streaming responses.
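For the non-streaming side, a similarly hedged sketch shows the `{:ok, response} | {:error, term}` contract of decode_response/2 in isolation. Plain maps stand in for `ReqLLM.Response` and `ReqLLM.Message`, the `model` argument is omitted, and the `"choices"`/`"message"`/`"usage"` keys are assumptions based on the ChatCompletions body format, not ReqLLM's actual implementation.

```elixir
defmodule ResponseDecodeSketch do
  # Pull the first choice's message and the usage map out of an
  # OpenAI-style response body.
  def decode_response(%{"choices" => [%{"message" => message} | _]} = body) do
    {:ok, %{message: message, usage: Map.get(body, "usage", %{})}}
  end

  # Anything else is surfaced as an error tuple rather than raising,
  # matching the {:ok, _} | {:error, _} spec above.
  def decode_response(other), do: {:error, {:unexpected_response, other}}
end
```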