ReqLLM.Response.Codec protocol (ReqLLM v1.0.0-rc.3)


Protocol for decoding provider responses and SSE events to canonical ReqLLM structures.

This protocol handles both non-streaming response decoding and streaming SSE event processing, converting provider-specific formats to canonical ReqLLM structures.

Default Implementation

The Map implementation provides baseline OpenAI-compatible decoding for common providers that use the ChatCompletions API format (OpenAI, Groq, OpenRouter, xAI):

# Non-streaming response decoding
ReqLLM.Response.Codec.decode_response(response_json, model)
#=> {:ok, %ReqLLM.Response{message: %ReqLLM.Message{...}, usage: %{...}}}

# Streaming SSE event decoding
ReqLLM.Response.Codec.decode_sse_event(sse_event, model) 
#=> [%ReqLLM.StreamChunk{type: :content, text: "Hello"}]
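To make the baseline behavior concrete, here is a minimal, self-contained sketch (not the library source) of how an OpenAI-style ChatCompletions body maps onto the canonical message and usage fields. The module name and the simplified return shape are illustrative only:

```elixir
# Sketch of OpenAI-compatible response decoding. `CodecSketch` is a
# hypothetical stand-in; the real Map implementation returns full
# ReqLLM.Response/ReqLLM.Message structs rather than plain maps.
defmodule CodecSketch do
  # `body` is a decoded JSON map in ChatCompletions format.
  def decode_response(%{"choices" => [choice | _], "usage" => usage}) do
    %{"message" => %{"role" => role, "content" => content}} = choice

    {:ok,
     %{
       message: %{role: String.to_atom(role), content: content},
       usage: %{
         input_tokens: usage["prompt_tokens"],
         output_tokens: usage["completion_tokens"]
       }
     }}
  end

  def decode_response(_other), do: {:error, :invalid_response}
end

body = %{
  "choices" => [%{"message" => %{"role" => "assistant", "content" => "Hello!"}}],
  "usage" => %{"prompt_tokens" => 5, "completion_tokens" => 2}
}

{:ok, resp} = CodecSketch.decode_response(body)
#=> resp.message.content is "Hello!"
```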

Provider-Specific Overrides

Providers with unique response formats implement the protocol for their own response struct:

defimpl ReqLLM.Response.Codec, for: MyProvider.Response do
  def decode_response(data, model) do
    # Custom decoding logic for provider-specific format
  end

  def decode_sse_event(event, model) do
    # Custom SSE event processing
  end
end
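A runnable sketch of the pattern above, using a local stand-in protocol so the snippet works without the library (`Codec` substitutes for `ReqLLM.Response.Codec`, and `MyProvider`'s `"output_text"` field is a hypothetical provider format):

```elixir
# Stand-in protocol so this sketch is self-contained.
defprotocol Codec do
  def decode_response(data, model)
end

# A provider wraps its raw body in a struct so the protocol can
# dispatch on it.
defmodule MyProvider.Response do
  defstruct [:body]
end

defimpl Codec, for: MyProvider.Response do
  # Hypothetical format: content lives under "output_text" rather
  # than the ChatCompletions "choices" list.
  def decode_response(%MyProvider.Response{body: body}, _model) do
    {:ok, %{message: %{role: :assistant, content: body["output_text"]}}}
  end
end

resp = %MyProvider.Response{body: %{"output_text" => "Hi"}}
{:ok, decoded} = Codec.decode_response(resp, nil)
```

Dispatching on a provider-specific struct (rather than a bare map) is what lets the default Map implementation and provider overrides coexist.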

Response Pipeline

  1. Raw provider response → decode_response/2 → ReqLLM.Response struct
  2. SSE event → decode_sse_event/2 → list of StreamChunk structs

Summary

Types

t()

All the types that implement this protocol.

Functions

decode_response(data, model)

Decode provider response data with model context.

decode_sse_event(sse_event, model)

Decode SSE event data into StreamChunks with model context for streaming responses.

Types

t()

@type t() :: term()

All the types that implement this protocol.

Functions

decode_response(data, model)

@spec decode_response(t(), ReqLLM.Model.t()) ::
  {:ok, ReqLLM.Response.t()} | {:error, term()}

Decode provider response data with model context.

decode_sse_event(sse_event, model)

@spec decode_sse_event(t(), ReqLLM.Model.t()) :: [ReqLLM.StreamChunk.t()]

Decode SSE event data into StreamChunks with model context for streaming responses.
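A hedged sketch of what an SSE handler for OpenAI-style delta events might look like. `SSESketch`, the plain-map event shape, and the plain-map chunk are assumptions for illustration; the real implementation emits `ReqLLM.StreamChunk` structs:

```elixir
# Sketch: OpenAI-style SSE deltas become content chunks; events
# without a content delta (role preludes, finish events) yield [].
defmodule SSESketch do
  def decode_sse_event(%{data: %{"choices" => [%{"delta" => delta} | _]}}, _model) do
    case delta do
      %{"content" => text} when is_binary(text) ->
        [%{type: :content, text: text}]

      _ ->
        []
    end
  end

  # Anything unrecognized (e.g. the [DONE] sentinel) produces no chunks.
  def decode_sse_event(_event, _model), do: []
end

event = %{data: %{"choices" => [%{"delta" => %{"content" => "Hel"}}]}}
SSESketch.decode_sse_event(event, nil)
#=> [%{type: :content, text: "Hel"}]
```

Returning a list (possibly empty) per event lets the streaming pipeline flat-map events into chunks without special-casing keep-alives or terminators.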