DeepEvalEx.LLM.Adapter behaviour (DeepEvalEx v0.1.0)
Behaviour for LLM adapters.
Implement this behaviour to add support for a new LLM provider. DeepEvalEx uses adapters to abstract away the differences between LLM providers (OpenAI, Anthropic, Ollama, etc.).
Implementing a Custom Adapter

defmodule MyApp.CustomLLMAdapter do
  @behaviour DeepEvalEx.LLM.Adapter

  @impl true
  def generate(prompt, opts) do
    # Call your LLM API
    {:ok, "response text"}
  end

  @impl true
  def generate_with_schema(prompt, schema, opts) do
    # Call your LLM API with structured output
    {:ok, %{key: "value"}}
  end

  @impl true
  def model_name(opts), do: Keyword.get(opts, :model, "custom-model")

  @impl true
  def supports_structured_outputs?, do: true

  @impl true
  def supports_log_probs?, do: false

  @impl true
  def supports_multimodal?, do: false
end

Using a Custom Adapter

DeepEvalEx.evaluate(test_case, [metric],
  adapter: MyApp.CustomLLMAdapter,
  model: "custom-model-v2"
)
Summary
Callbacks

generate/2 - Generates a response from the LLM.
generate_with_schema/3 - Generates a structured response matching the given schema.
model_name/1 - Returns the model name/identifier.
supports_log_probs?/0 - Returns whether this adapter supports log probabilities.
supports_multimodal?/0 - Returns whether this adapter supports multimodal inputs (images).
supports_structured_outputs?/0 - Returns whether this adapter supports structured outputs.
Functions
Gets the default adapter based on configuration.
Generates a response using the default or specified adapter.
Generates a structured response using the default or specified adapter.
Gets the configured adapter module for a provider.
Callbacks
Generates a response from the LLM.
Parameters
prompt - The prompt to send to the LLM
opts - Options including:
  :model - Model name/identifier
  :temperature - Sampling temperature (0.0 - 2.0)
  :max_tokens - Maximum tokens in response
  :api_key - API key (if not configured globally)
Returns
{:ok, response} - The generated text response
{:error, reason} - Error tuple
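A minimal sketch of a generate/2 implementation that reads these options. The EchoAdapter module and its canned response are illustrative stand-ins for a real API call, not part of DeepEvalEx:

```elixir
defmodule EchoAdapter do
  # Illustrative adapter: echoes the prompt back instead of calling a real LLM API.
  def generate(prompt, opts) do
    model = Keyword.get(opts, :model, "echo-1")
    temperature = Keyword.get(opts, :temperature, 0.0)

    if temperature < 0.0 or temperature > 2.0 do
      # Surface bad options as an {:error, reason} tuple rather than raising.
      {:error, {:invalid_temperature, temperature}}
    else
      {:ok, "[#{model}] #{prompt}"}
    end
  end
end

{:ok, text} = EchoAdapter.generate("Hello", model: "echo-2", temperature: 0.7)
```

Returning tagged tuples (rather than raising) lets callers pattern-match on failures from any provider uniformly.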
Generates a structured response matching the given schema.
Uses the LLM's native structured output capability (JSON mode, function calling, or tool use) to ensure the response matches the expected schema.
Parameters
prompt - The prompt to send to the LLM
schema - An Ecto schema module or JSON schema map
opts - Same options as generate/2
Returns
{:ok, struct} - Parsed response matching the schema
{:error, reason} - Error tuple
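A sketch of the JSON-schema-map flavor of this callback. The SchemaStub module, its canned response, and the "required"-keys check are illustrative; a real adapter would pass the schema to the provider's structured-output API:

```elixir
defmodule SchemaStub do
  # Illustrative: returns a canned map and verifies it carries the
  # schema's required keys before handing it back as {:ok, map}.
  def generate_with_schema(_prompt, %{"required" => required}, _opts) do
    response = %{"verdict" => "yes", "reason" => "The claim is supported."}

    if Enum.all?(required, &Map.has_key?(response, &1)) do
      {:ok, response}
    else
      {:error, :missing_fields}
    end
  end
end

schema = %{"type" => "object", "required" => ["verdict", "reason"]}
{:ok, %{"verdict" => verdict}} = SchemaStub.generate_with_schema("Is the sky blue?", schema, [])
```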
Returns the model name/identifier.
@callback supports_log_probs?() :: boolean()
Returns whether this adapter supports log probabilities.
Log probs can be used for more accurate scoring in some metrics.
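A metric can branch on this capability at runtime. A sketch with two stub adapters (the strategy atoms are illustrative, not DeepEvalEx internals):

```elixir
defmodule WithLogProbs do
  def supports_log_probs?, do: true
end

defmodule WithoutLogProbs do
  def supports_log_probs?, do: false
end

# Pick a scoring strategy based on what the adapter can provide.
choose_strategy = fn adapter ->
  if adapter.supports_log_probs?(), do: :log_prob_weighted, else: :plain_text
end
```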
@callback supports_multimodal?() :: boolean()
Returns whether this adapter supports multimodal inputs (images).
@callback supports_structured_outputs?() :: boolean()
Returns whether this adapter supports structured outputs.
When true, generate_with_schema/3 uses native structured output
features. When false, it falls back to parsing JSON from the response.
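The fallback path typically has to fish a JSON object out of free-form completion text (models often wrap it in prose or markdown fences). A hand-rolled sketch of that extraction step, not DeepEvalEx's actual fallback code; decoding the extracted string is elided:

```elixir
defmodule JsonFallback do
  # Illustrative fallback: isolate the outermost {...} span from the raw
  # completion text before passing it to a JSON decoder.
  def extract_json(text) do
    case Regex.run(~r/\{.*\}/s, text) do
      [json] -> {:ok, json}
      nil -> {:error, :no_json_found}
    end
  end
end

raw = ~s(Sure! Here you go:\n```json\n{"verdict": "yes"}\n```)
{:ok, ~s({"verdict": "yes"})} = JsonFallback.extract_json(raw)
```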
Functions
Gets the default adapter based on configuration.
Generates a response using the default or specified adapter.
This is a convenience function that resolves the adapter and calls generate.
Options
:adapter - Adapter module or provider atom
:model - Model name
All other options are passed to the adapter.
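Since :adapter accepts either an adapter module or a provider atom, resolution might look like the sketch below. The provider table and the default of :openai are assumptions for illustration, not DeepEvalEx's documented behavior:

```elixir
defmodule ResolveSketch do
  # Illustrative resolution: a provider atom is looked up in a table,
  # anything else is treated as an adapter module and used directly.
  @providers %{
    openai: DeepEvalEx.LLM.Adapters.OpenAI,
    anthropic: DeepEvalEx.LLM.Adapters.Anthropic
  }

  def resolve(opts) do
    case Keyword.get(opts, :adapter) do
      nil ->
        # Assumed default provider for this sketch.
        Map.fetch!(@providers, :openai)

      provider when is_atom(provider) and is_map_key(@providers, provider) ->
        Map.fetch!(@providers, provider)

      module ->
        module
    end
  end
end
```

Note that module names are themselves atoms, so the is_map_key/2 guard is what distinguishes a known provider atom from an adapter module.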
Generates a structured response using the default or specified adapter.
Gets the configured adapter module for a provider.
Examples
DeepEvalEx.LLM.Adapter.get_adapter(:openai)
#=> DeepEvalEx.LLM.Adapters.OpenAI
DeepEvalEx.LLM.Adapter.get_adapter(:anthropic)
#=> DeepEvalEx.LLM.Adapters.Anthropic