GenAI.InferenceProviderBehaviour behaviour (GenAI Core v0.2.1)
Summary
Callbacks
Returns the config key under which the inference provider's application config is stored in the :genai entry
Obtain a map of effective settings: settings, model_settings, provider_settings, config_settings, etc.
Prepare the endpoint and HTTP method for the inference call
Prepare request headers
Prepare the request body to be passed to the inference call.
Build and run the inference thread
Build and run the inference thread in streaming mode
Types
completion()
@type completion() :: any()
context()
@type context() :: any()
headers()
@type headers() :: list()
messages()
@type messages() :: list()
method()
@type method() :: :get | :post | :put | :delete | :options | :patch
model()
@type model() :: any()
options()
@type options() :: any()
request_body()
@type request_body() :: any()
session()
@type session() :: any()
settings()
@type settings() :: map()
tools()
@type tools() :: list() | nil
uri()
@type uri() :: url()
url()
@type url() :: String.t()
Callbacks
chat(any, any, any, any, any, any, any)
config_key()
@callback config_key() :: atom()
Returns the config key under which the inference provider's application config is stored in the :genai entry
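As a sketch, a provider implementing this callback can simply return the atom naming its config block. The `MyProvider` module and `:my_provider` key are illustrative, not part of GenAI Core:

```elixir
defmodule MyProvider do
  @behaviour GenAI.InferenceProviderBehaviour

  @impl GenAI.InferenceProviderBehaviour
  def config_key(), do: :my_provider

  # GenAI Core can then read Application.get_env(:genai, :my_provider)
  # to locate this provider's configuration.
end
```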
effective_settings(model, session, context, options)
@callback effective_settings(model(), session(), context(), options()) :: {:ok, {settings(), session()}} | {:error, term()}
Obtain a map of effective settings: settings, model_settings, provider_settings, config_settings, etc.
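A minimal sketch of one way to build that map, merging application config with provider- and model-level settings. The merge order and the `provider_settings/0` and `model_settings/1` helpers are assumptions for illustration, not documented behaviour:

```elixir
@impl GenAI.InferenceProviderBehaviour
def effective_settings(model, session, _context, _options) do
  settings =
    Application.get_env(:genai, config_key(), [])
    |> Map.new()
    |> Map.merge(provider_settings())
    |> Map.merge(model_settings(model))

  {:ok, {settings, session}}
end
```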
endpoint(model, settings, session, context, options)
@callback endpoint(model(), settings(), session(), context(), options()) :: {:ok, {method(), uri()}} | {:ok, {{method(), uri()}, session()}} | {:error, term()}
Prepare the endpoint and HTTP method for the inference call
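A sketch of the endpoint callback, using the `{:ok, {{method, uri}, session}}` return shape. The `:api_base` setting and the URL path are placeholders, not a real provider endpoint:

```elixir
@impl GenAI.InferenceProviderBehaviour
def endpoint(_model, settings, session, _context, _options) do
  base = Map.get(settings, :api_base, "https://api.example.com")
  {:ok, {{:post, base <> "/v1/chat/completions"}, session}}
end
```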
headers(model, settings, session, context, options)
@callback headers(model(), settings(), session(), context(), options()) :: {:ok, headers()} | {:ok, {headers(), session()}} | {:error, term()}
Prepare request headers
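A sketch of a headers callback returning a typical JSON/bearer-token header list; the `:api_key` setting name is an assumption:

```elixir
@impl GenAI.InferenceProviderBehaviour
def headers(_model, settings, session, _context, _options) do
  headers = [
    {"content-type", "application/json"},
    {"authorization", "Bearer " <> Map.fetch!(settings, :api_key)}
  ]

  {:ok, {headers, session}}
end
```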
request_body(model, messages, tools, settings, session, context, options)
@callback request_body( model(), messages(), tools(), settings(), session(), context(), options() ) :: {:ok, request_body()} | {:ok, {request_body(), session()}} | {:error, term()}
Prepare the request body to be passed to the inference call.
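A sketch of building the body as a plain map; the body keys and the `maybe_put/3` helper are illustrative assumptions, not GenAI Core API:

```elixir
@impl GenAI.InferenceProviderBehaviour
def request_body(model, messages, tools, settings, session, _context, _options) do
  body =
    %{model: model, messages: messages}
    |> maybe_put(:tools, tools)
    |> Map.merge(Map.take(settings, [:temperature, :max_tokens]))

  {:ok, {body, session}}
end

# tools() may be nil, per its type; omit the key in that case.
defp maybe_put(map, _key, nil), do: map
defp maybe_put(map, key, value), do: Map.put(map, key, value)
```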
run(session, context, options)
@callback run(session(), context(), options()) :: {:ok, {completion(), session()}} | {:error, term()}
Build and run the inference thread
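A sketch of how run/3 might compose the other callbacks in a `with` chain. The `session.model` and `session.messages` fields and the `http_request/4` helper are assumptions (session() is opaque here, and the HTTP client is up to the provider); the actual orchestration inside GenAI Core may differ:

```elixir
@impl GenAI.InferenceProviderBehaviour
def run(session, context, options) do
  model = session.model

  with {:ok, {settings, session}} <- effective_settings(model, session, context, options),
       {:ok, {{method, url}, session}} <- endpoint(model, settings, session, context, options),
       {:ok, {headers, session}} <- headers(model, settings, session, context, options),
       {:ok, {body, session}} <-
         request_body(model, session.messages, nil, settings, session, context, options),
       {:ok, completion} <- http_request(method, url, headers, body) do
    {:ok, {completion, session}}
  end
end
```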
standardize_model(model)
stream(session, context, options)
@callback stream(session(), context(), options()) :: {:ok, {completion(), session()}} | {:error, term()} | :nyi
Build and run the inference thread in streaming mode
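Per the callback's return type, a provider that does not support streaming can return :nyi (not yet implemented):

```elixir
@impl GenAI.InferenceProviderBehaviour
def stream(_session, _context, _options), do: :nyi
```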