Magus.AgentChain (Magus v0.1.0)
Summary
Functions
Define a chain for agents.
Adds a JSON schema to the chain as the response format from the LLM.
Start a new AgentChain configuration.
Run the AgentChain.
Types
t()
Functions
Define a chain for agents.
This is an extension of the LangChain.Chains.LLMChain chain that provides a few helpers for the agent use case.
A stream_handler can be passed in when the AgentChain is created and is then used automatically when the chain is run. This is useful for the AgentExecutor to listen to tokens as they're returned by the LLM.
The AgentChain can also be configured with a JSON schema that is used when requesting content from the LLM and then used to validate that the LLM response conforms to the schema.
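A minimal usage sketch, based on the functions documented on this page. The user prompt is invented, and LangChain.Message.new_user!/1 is assumed from the LangChain library:

```elixir
alias Magus.AgentChain

# Build a chain, add a user message, and run it.
# run/1 returns {:ok, content, last_message} on success (see run/1 below).
{:ok, content, _last_message} =
  AgentChain.new!(verbose: true)
  |> AgentChain.add_message(LangChain.Message.new_user!("Summarize this ticket."))
  |> AgentChain.run()
```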
add_message(chain, message)
@spec add_message(t(), LangChain.Message.t()) :: t()
Adds a single message to the chain.
add_messages(chain, messages)
@spec add_messages(t(), [LangChain.Message.t()]) :: t()
Adds a list of messages to the chain.
add_tool(chain, tool)
@spec add_tool(t(), LangChain.Function.t()) :: t()
Adds a tool to the chain that the LLM can call.
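A sketch of adding a tool, assuming LangChain.Function.new!/1 from the LangChain library; the tool name and callback here are illustrative only:

```elixir
# A hypothetical tool the LLM can invoke during the run.
tool =
  LangChain.Function.new!(%{
    name: "get_utc_time",
    description: "Returns the current UTC time as an ISO-8601 string.",
    function: fn _args, _context -> DateTime.utc_now() |> DateTime.to_iso8601() end
  })

chain = Magus.AgentChain.new!() |> Magus.AgentChain.add_tool(tool)
```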
ask_for_json_response(chain, schema)
Adds a JSON schema to the chain as the response format from the LLM.
A message is added to the chain that tells the LLM to return the response content in JSON format. In the OpenAI case, the request will be made in JSON Mode.
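A sketch of requesting a JSON response. The schema map follows standard JSON Schema; the field names here are invented for illustration:

```elixir
# Ask the LLM to return an object with a required "title" and optional "tags".
schema = %{
  "type" => "object",
  "properties" => %{
    "title" => %{"type" => "string"},
    "tags" => %{"type" => "array", "items" => %{"type" => "string"}}
  },
  "required" => ["title"]
}

chain = Magus.AgentChain.new!() |> Magus.AgentChain.ask_for_json_response(schema)
```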
get_default_gemini_llm()
new!(opts \\ [])
Start a new AgentChain configuration.
Options
:verbose - Runs the LLM in verbose mode if set to true. Defaults to false.
:stream_handler - Handler that is called as the LLM returns messages.
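A sketch of the options above. The handler's arity and argument shape are assumptions; check the Magus source for the exact callback signature:

```elixir
# Print streamed data as it arrives (exact payload shape is an assumption).
handler = fn data -> IO.write(inspect(data)) end

chain = Magus.AgentChain.new!(verbose: false, stream_handler: handler)
```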
run(chain)
@spec run(t()) :: {:error, binary() | list()} | {:ok, any(), LangChain.Message.t()}
Run the AgentChain.
If a stream_handler was specified when the AgentChain was created, it will be called as the LLM returns tokens.
If a JSON response was requested with ask_for_json_response, the response will be validated against the schema and decoded to a struct.
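Matching on the result shapes given in the @spec above, assuming a chain built with the functions on this page:

```elixir
case Magus.AgentChain.run(chain) do
  {:ok, content, %LangChain.Message{} = last_message} ->
    # content is the decoded struct when a JSON schema was requested,
    # otherwise the raw response content.
    {content, last_message}

  {:error, reason} ->
    # reason is a binary or a list of errors, per the @spec.
    {:error, reason}
end
```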