Dagger.LLM (dagger v0.19.4)


Dagger.LLM

Summary

Functions

Create a branch in the LLM's history.

Returns the type of the current state.

Return the LLM's current environment.

Indicates whether there are any queued prompts or tool results to send to the model.

Return the LLM's message history.

Return the raw LLM message history as JSON.

A unique identifier for this LLM.

Return the last LLM reply from the history.

Submit the queued prompt, evaluate any tool calls, queue their results, and keep going until the model ends its turn.

Return the model used by the LLM.

Return the provider used by the LLM.

Submit the queued prompt or tool-call results, evaluate any tool calls, and queue their results.

Synchronize the LLM state.

Returns the token usage of the current state.

Print documentation for the available tools.

Return a new LLM with the specified function no longer exposed as a tool.

Allow the LLM to interact with an environment via MCP.

Add an external MCP server to the LLM.

Swap out the LLM's model.

Append a prompt to the LLM context.

Append the contents of a file to the LLM context.

Use a static set of tools for method calls, e.g. for MCP clients that do not support dynamic tool registration.

Add a system prompt to the LLM's environment.

Disable the default system prompt.

Types

t()

@type t() :: %Dagger.LLM{client: term(), query_builder: term()}

Functions

attempt(llm, number)

@spec attempt(t(), integer()) :: t()

Create a branch in the LLM's history.
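A branch lets you explore the same conversation state down more than one path. A minimal sketch, assuming `Dagger.connect/0` and `Dagger.Client.llm/1` from the wider SDK as the entry points (neither is documented on this page):

```elixir
{:ok, client} = Dagger.connect()

llm =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_prompt("Summarize the release notes.")

# Each attempt number yields an independent branch of the history,
# so prompts and replies on one branch are not visible on the other.
branch_a = Dagger.LLM.attempt(llm, 1)
branch_b = Dagger.LLM.attempt(llm, 2)
```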

bind_result(llm, name)

@spec bind_result(t(), String.t()) :: Dagger.Binding.t() | nil

Returns the type of the current state.

env(llm)

@spec env(t()) :: Dagger.Env.t()

Return the LLM's current environment.

has_prompt(llm)

@spec has_prompt(t()) :: {:ok, boolean()} | {:error, term()}

Indicates whether there are any queued prompts or tool results to send to the model.

history(llm)

@spec history(t()) :: {:ok, [String.t()]} | {:error, term()}

Return the LLM's message history.

history_json(llm)

@spec history_json(t()) :: {:ok, Dagger.JSON.t()} | {:error, term()}

Return the raw LLM message history as JSON.

id(llm)

@spec id(t()) :: {:ok, Dagger.LLMID.t()} | {:error, term()}

A unique identifier for this LLM.

last_reply(llm)

@spec last_reply(t()) :: {:ok, String.t()} | {:error, term()}

Return the last LLM reply from the history.

loop(llm)

@spec loop(t()) :: t()

Submit the queued prompt, evaluate any tool calls, queue their results, and keep going until the model ends its turn.
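The common workflow is to queue a prompt, run the agentic loop to completion, and read the final reply. A sketch under the same assumptions as above (`Dagger.connect/0` and `Dagger.Client.llm/1` come from the wider SDK):

```elixir
{:ok, client} = Dagger.connect()

# loop/1 drives the model until it ends its turn, resolving any
# tool calls along the way; last_reply/1 then reads the final answer.
{:ok, reply} =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_prompt("List three uses for a Makefile.")
  |> Dagger.LLM.loop()
  |> Dagger.LLM.last_reply()

IO.puts(reply)
```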

model(llm)

@spec model(t()) :: {:ok, String.t()} | {:error, term()}

Return the model used by the LLM.

provider(llm)

@spec provider(t()) :: {:ok, String.t()} | {:error, term()}

Return the provider used by the LLM.

step(llm)

@spec step(t()) :: {:ok, t()} | {:error, term()}

Submit the queued prompt or tool-call results, evaluate any tool calls, and queue their results.
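When you want control between model turns (e.g. to log or inspect intermediate state), `step/1` plus `has_prompt/1` can replace `loop/1`. A hypothetical sketch; the `Drive` module name is illustrative:

```elixir
defmodule Drive do
  # Advance one model turn at a time until nothing remains queued,
  # which is when loop/1 would also have stopped.
  def run(llm) do
    case Dagger.LLM.has_prompt(llm) do
      {:ok, true} ->
        {:ok, llm} = Dagger.LLM.step(llm)
        run(llm)

      {:ok, false} ->
        llm
    end
  end
end
```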

sync(llm)

@spec sync(t()) :: {:ok, t()} | {:error, term()}

Synchronize the LLM state.

token_usage(llm)

@spec token_usage(t()) :: Dagger.LLMTokenUsage.t()

Returns the token usage of the current state.

tools(llm)

@spec tools(t()) :: {:ok, String.t()} | {:error, term()}

Print documentation for the available tools.

with_blocked_function(llm, type_name, function)

@spec with_blocked_function(t(), String.t(), String.t()) :: t()

Return a new LLM with the specified function no longer exposed as a tool.

with_env(llm, env)

@spec with_env(t(), Dagger.Env.t()) :: t()

Allow the LLM to interact with an environment via MCP.
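An environment gives the model named inputs it can read via MCP. A sketch assuming `Dagger.Client.env/1` and `Dagger.Env.with_string_input/4` from the SDK's `Dagger.Env` module (not documented on this page); the input name and values are illustrative:

```elixir
{:ok, client} = Dagger.connect()

# Expose a named string input that the model can read as a tool.
env =
  client
  |> Dagger.Client.env()
  |> Dagger.Env.with_string_input("topic", "build caching", "the topic to write about")

llm =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_env(env)
  |> Dagger.LLM.with_prompt("Write one paragraph about the given topic.")
  |> Dagger.LLM.loop()
```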

with_mcp_server(llm, name, service)

@spec with_mcp_server(t(), String.t(), Dagger.Service.t()) :: t()

Add an external MCP server to the LLM.
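The server is a `Dagger.Service`, so one way to provide it is a container started as a service. A hypothetical sketch: the image name is made up, and `Dagger.Container.as_service/1` is assumed from the SDK's container API:

```elixir
{:ok, client} = Dagger.connect()

# Run a (hypothetical) MCP server image as a Dagger service.
server =
  client
  |> Dagger.Client.container()
  |> Dagger.Container.from("example/mcp-server:latest")
  |> Dagger.Container.as_service()

# Attach it under a name; its tools become available to the model.
llm =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_mcp_server("docs", server)
```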

with_model(llm, model)

@spec with_model(t(), String.t()) :: t()

Swap out the LLM's model.

with_prompt(llm, prompt)

@spec with_prompt(t(), String.t()) :: t()

Append a prompt to the LLM context.

with_prompt_file(llm, file)

@spec with_prompt_file(t(), Dagger.File.t()) :: t()

Append the contents of a file to the LLM context.

with_static_tools(llm)

@spec with_static_tools(t()) :: t()

Use a static set of tools for method calls, e.g. for MCP clients that do not support dynamic tool registration.

with_system_prompt(llm, prompt)

@spec with_system_prompt(t(), String.t()) :: t()

Add a system prompt to the LLM's environment.

without_default_system_prompt(llm)

@spec without_default_system_prompt(t()) :: t()

Disable the default system prompt.
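These two functions combine naturally when you want full control of the system prompt: drop the default, then supply your own. A sketch, again assuming `Dagger.connect/0` and `Dagger.Client.llm/1` from the wider SDK; the prompt text is illustrative:

```elixir
{:ok, client} = Dagger.connect()

llm =
  client
  |> Dagger.Client.llm()
  # Remove Dagger's built-in system prompt entirely...
  |> Dagger.LLM.without_default_system_prompt()
  # ...and replace it with a custom one.
  |> Dagger.LLM.with_system_prompt("You are a terse release-note writer.")
```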