GenAI.Provider.Ollama (Noizu Labs, GenAI Wrapper v0.3.0)

Module for interacting with the Ollama API. Ollama provides local LLM inference with various open-source models.
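
Ollama must be running locally before any of the calls below will succeed. As a quick sanity check (a sketch, not part of this module's API: the default port 11434 and the /api/tags route are Ollama's own defaults, nothing this page specifies), you can ping the server from IEx with OTP's built-in :httpc client:

```elixir
# Connectivity check against a local Ollama instance. 11434 is Ollama's
# default port; adjust if your instance is configured differently.
:inets.start()

{:ok, {{_http_version, 200, _reason}, _headers, _body}} =
  :httpc.request(:get, {~c"http://localhost:11434/api/tags", []}, [], [])
```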

Summary

Functions

config_key()
Returns the config key under which this inference provider's application configuration is stored in the :genai application environment.

effective_settings(model, session, context, options \\ nil)
Obtains a map of the effective settings: settings, model_settings, provider_settings, config_settings, and so on.

endpoint(model, settings, session, context, options \\ nil)
Prepares the endpoint and HTTP method for the inference call.

models(settings \\ [])
Retrieves a list of models available on the local Ollama instance.

request_body(model, messages, tools, settings, session, context, options \\ nil)
Prepares the request body passed to the inference call.

run(session, context, options \\ nil)
Builds and runs an inference thread.

stream(session, context, options \\ nil)
Builds and runs an inference thread in streaming mode.

Functions

chat(messages, tools, settings)

chat(model, messages, tools, hyper_parameters, provider_settings \\ [], context \\ nil, options \\ nil)

Callback implementation for GenAI.InferenceProviderBehaviour.chat/7.
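
A hypothetical invocation is sketched below. The arity and defaults come from the signature above, but the message map shape, the :temperature hyperparameter name, and the {:ok, completion} return shape are assumptions, not confirmed by this page:

```elixir
# Hypothetical sketch: message shape, hyperparameter names, and the
# return shape are assumptions; only the signature is documented above.
messages = [%{role: :user, content: "Why is the sky blue?"}]

{:ok, completion} =
  GenAI.Provider.Ollama.chat("llama3", messages, [], temperature: 0.7)
```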

config_key()

Returns the config key under which this inference provider's application configuration is stored in the :genai application environment.
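
For illustration, a provider entry of roughly this shape might live in config/config.exs. The :ollama key and the :api_base setting name are assumptions; only the :genai application entry is documented here:

```elixir
# config/config.exs
# Hypothetical sketch: the :ollama key and setting names are assumed.
import Config

config :genai, :ollama,
  api_base: "http://localhost:11434"
```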

default_encoder()

effective_settings(model, session, context, options \\ nil)

Obtains a map of the effective settings: settings, model_settings, provider_settings, config_settings, and so on.
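
A minimal sketch of a call, assuming an {:ok, map} return shape (this page only states that the listed setting groups are merged into one map):

```elixir
# Hypothetical: return shape assumed; `model`, `session`, and `context`
# stand for values prepared elsewhere in a GenAI session.
{:ok, effective} =
  GenAI.Provider.Ollama.effective_settings(model, session, context)

IO.inspect(Map.keys(effective))
```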

endpoint(model, settings, session, context, options \\ nil)

Prepares the endpoint and HTTP method for the inference call.
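
The return shape sketched below is an assumption; the URL reflects Ollama's public chat route (POST /api/chat on port 11434), which is an Ollama fact rather than documented output of this function:

```elixir
# Illustrative only: the {method, url} shape is assumed; `model`,
# `settings`, `session`, and `context` are prepared elsewhere.
{:post, "http://localhost:11434/api/chat"} =
  GenAI.Provider.Ollama.endpoint(model, settings, session, context)
```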

headers(options)

headers(model, settings, session, context, options \\ nil)

Prepares the request headers for the inference call.
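
Since a local Ollama instance requires no auth token, the prepared headers are likely just a JSON content type; the value in the comment below is a guess, not a documented contract:

```elixir
# Hypothetical output sketch, not a documented contract:
#   [{"content-type", "application/json"}]
GenAI.Provider.Ollama.headers(model, settings, session, context)
```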

models(settings \\ [])

Retrieves a list of models available on the local Ollama instance.
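
Listing local models corresponds to Ollama's GET /api/tags endpoint (an Ollama fact); the {:ok, models} return shape in this sketch is an assumption:

```elixir
# Sketch, assuming an {:ok, models} tuple; each entry describes one model
# pulled into the local Ollama instance.
{:ok, models} = GenAI.Provider.Ollama.models()
Enum.each(models, &IO.inspect/1)
```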

request_body(model, messages, tools, settings, session, context, options \\ nil)

Prepares the request body passed to the inference call.
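
For orientation, Ollama's documented /api/chat wire payload looks like the map below; how this function represents the body internally (string versus atom keys, extra fields for tools and hyperparameters) is not specified on this page:

```elixir
# Ollama's documented /api/chat payload shape; the wrapper's exact
# internal representation is an assumption.
%{
  "model" => "llama3",
  "messages" => [%{"role" => "user", "content" => "Hello"}],
  "stream" => false
}
```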

run(session, context, options \\ nil)

Builds and runs an inference thread.
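
A minimal sketch, assuming an {:ok, result} return and a session built elsewhere (this page does not show session construction):

```elixir
# Hypothetical: `session` and `context` stand for a prepared inference
# thread and caller context; the return shape is assumed.
{:ok, result} = GenAI.Provider.Ollama.run(session, context)
```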

standardize_model(model)

Callback implementation for GenAI.InferenceProviderBehaviour.standardize_model/1.

stream(session, context, options \\ nil)

Builds and runs an inference thread in streaming mode.
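
The streaming variant mirrors run/3; the sketch below assumes chunks are delivered through a handler configured elsewhere, which is an assumption rather than a documented mechanism:

```elixir
# Hypothetical sketch: handler wiring and the return shape are assumptions.
{:ok, stream_state} = GenAI.Provider.Ollama.stream(session, context)
```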