Dagger.LLM (dagger v0.19.4)
Summary
Functions
Create a branch in the LLM's history.
Returns the type of the current state.
Return the LLM's current environment.
Indicates whether there are any queued prompts or tool results to send to the model.
Return the LLM message history.
Return the raw LLM message history as JSON.
A unique identifier for this LLM.
Return the last LLM reply from the history.
Submit the queued prompt, evaluate any tool calls, queue their results, and keep going until the model ends its turn.
Return the model used by the LLM.
Return the provider used by the LLM.
Submit the queued prompt or tool call results, evaluate any tool calls, and queue their results.
Synchronize the LLM state.
Returns the token usage of the current state.
Print documentation for available tools.
Return a new LLM with the specified function no longer exposed as a tool.
Allow the LLM to interact with an environment via MCP.
Add an external MCP server to the LLM.
Swap out the LLM model.
Append a prompt to the LLM context.
Append the contents of a file to the LLM context.
Use a static set of tools for method calls, e.g. for MCP clients that do not support dynamic tool registration.
Add a system prompt to the LLM's environment.
Disable the default system prompt.
Types
t()
Functions
Create a branch in the LLM's history.
bind_result/2
@spec bind_result(t(), String.t()) :: Dagger.Binding.t() | nil
Returns the type of the current state.

env/1
@spec env(t()) :: Dagger.Env.t()
Return the LLM's current environment.
Indicates whether there are any queued prompts or tool results to send to the model.
Return the LLM message history.
history_json/1
@spec history_json(t()) :: {:ok, Dagger.JSON.t()} | {:error, term()}
Return the raw LLM message history as JSON.

id/1
@spec id(t()) :: {:ok, Dagger.LLMID.t()} | {:error, term()}
A unique identifier for this LLM.
Return the last LLM reply from the history.
Submit the queued prompt, evaluate any tool calls, queue their results, and keep going until the model ends its turn.
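A round trip through the prompt loop can be sketched as follows. This is a hedged sketch, not a complete program: it assumes a reachable Dagger engine, an LLM provider configured out of band (e.g. an API key in the environment), and the pipe-friendly shapes suggested by the specs on this page; the prompt text is illustrative.

```elixir
# Sketch: queue a prompt, run the loop until the model ends its
# turn, then read the final reply from the history.
client = Dagger.connect!()

{:ok, reply} =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_prompt("Summarize this module in one sentence.")
  # loop/1 submits the prompt, evaluates any tool calls, queues
  # their results, and repeats until the model ends its turn.
  |> Dagger.LLM.loop()
  |> Dagger.LLM.last_reply()

IO.puts(reply)
Dagger.close(client)
```

Object-returning calls like `with_prompt` and `loop` pipe lazily; only leaf reads such as `last_reply` return an `{:ok, value}` tuple.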
Return the model used by the LLM.
Return the provider used by the LLM.
Submit the queued prompt or tool call results, evaluate any tool calls, and queue their results.
Synchronize the LLM state.
token_usage/1
@spec token_usage(t()) :: Dagger.LLMTokenUsage.t()
Returns the token usage of the current state.
Print documentation for available tools.
Return a new LLM with the specified function no longer exposed as a tool.
with_env/2
@spec with_env(t(), Dagger.Env.t()) :: t()
Allow the LLM to interact with an environment via MCP.
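Attaching a typed environment might look like the sketch below. It assumes `Dagger.Client.env/1` and `Dagger.Env.with_string_input/4` from the Env API; exact names and arities may differ by SDK version, and the input name, value, and prompt are illustrative.

```elixir
# Sketch: give the model a typed environment to interact with
# over MCP, then prompt against it.
client = Dagger.connect!()

env =
  client
  |> Dagger.Client.env()
  |> Dagger.Env.with_string_input("topic", "caching", "the subject to write about")

{:ok, reply} =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_env(env)
  |> Dagger.LLM.with_prompt("Write one sentence about the provided topic.")
  |> Dagger.LLM.loop()
  |> Dagger.LLM.last_reply()
```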
with_mcp_server/3
@spec with_mcp_server(t(), String.t(), Dagger.Service.t())  :: t()
Add an external MCP server to the LLM.
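Per the spec, the MCP server is passed as a `Dagger.Service.t()`, so one way to wire it up is to run the server as a containerized service. A hedged sketch, in which the image name, command, and server name are all illustrative placeholders:

```elixir
# Sketch: run an MCP server as a Dagger service and expose it
# to the LLM under a name.
client = Dagger.connect!()

mcp =
  client
  |> Dagger.Client.container()
  |> Dagger.Container.from("example/mcp-server:latest")
  |> Dagger.Container.with_exec(["mcp-server"])
  |> Dagger.Container.as_service()

llm =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.with_mcp_server("example", mcp)
```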
Swap out the LLM model.
Append a prompt to the LLM context.
with_prompt_file/2
@spec with_prompt_file(t(), Dagger.File.t()) :: t()
Append the contents of a file to the LLM context.
Use a static set of tools for method calls, e.g. for MCP clients that do not support dynamic tool registration.
Add a system prompt to the LLM's environment.
Disable the default system prompt.
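Combining the last two entries, replacing the default system prompt with a custom one could be sketched as below. The function names `without_default_system_prompt/1` and `with_system_prompt/2` are the presumed generated names for these two operations, and the prompt text is illustrative.

```elixir
# Sketch: drop the built-in system prompt and supply a custom one.
client = Dagger.connect!()

llm =
  client
  |> Dagger.Client.llm()
  |> Dagger.LLM.without_default_system_prompt()
  |> Dagger.LLM.with_system_prompt("You are a terse release-notes writer.")
```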