AI.Accumulator (fnord v0.9.29)

When a file or other input is too large for the model's context window, this module can be used to process the input in chunks. It automatically modifies the supplied agent prompt to include instructions for accumulating a response across multiple chunks, sizing each chunk according to the context window (maximum context tokens) reported by the supplied model.

Note that while this module is built on AI.Completion, it does not share the same interface and cannot be used for long-running conversations, because it does not accept a list of messages as input.
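A typical call might look like the following sketch. This is illustrative only: the option names (:model, :prompt, :question, :input) are inferred from the t() struct fields listed below and from get_response/1 accepting a keyword list, and are assumptions rather than confirmed API.

```elixir
# Hypothetical usage sketch; option names are inferred from the
# %AI.Accumulator{} struct fields and may differ in practice.
large_input = File.read!("big_log.txt")

case AI.Accumulator.get_response(
       model: model,          # an AI.Model.t(), obtained however your app does
       prompt: "Summarize the errors found in the input.",
       question: "Which errors occur most frequently?",
       input: large_input     # assumed: the oversized input to be chunked
     ) do
  {:ok, %AI.Completion{} = completion} ->
    # success() is {:ok, AI.Completion.t()}; inspect the completion
    IO.inspect(completion)

  {:error, reason} ->
    # error() is {:error, binary()}
    IO.puts("accumulation failed: #{reason}")
end
```

Because the return type is `success() | error()`, callers should pattern-match both branches as shown.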

Summary

Types

error()

@type error() :: {:error, binary()}

response()

@type response() :: success() | error()

success()

@type success() :: {:ok, AI.Completion.t()}

t()

@type t() :: %AI.Accumulator{
  buffer: binary(),
  compact?: boolean(),
  completion_args: Keyword.t(),
  line_numbers: term(),
  model: AI.Model.t(),
  prompt: binary(),
  question: binary(),
  splitter: AI.Splitter.t(),
  toolbox: AI.Tools.toolbox() | nil
}

Functions

get_response(opts \\ [])

@spec get_response(Keyword.t()) :: response()

process_chunk(acc)
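process_chunk/1 appears to be the per-chunk step of the accumulation loop. A rough, hypothetical sketch of the fold it implies is below; the splitter API (AI.Splitter.next_chunk/1) and control flow are assumptions for illustration, not the actual implementation.

```elixir
# Hypothetical illustration only: fold each chunk through process_chunk/1,
# carrying the accumulated response in the %AI.Accumulator{} struct.
defp accumulate(%AI.Accumulator{splitter: splitter} = acc) do
  case AI.Splitter.next_chunk(splitter) do   # assumed splitter API
    :done ->
      # no chunks remain; the buffer holds the accumulated result
      {:ok, acc}

    {chunk, splitter} ->
      case process_chunk(%{acc | buffer: chunk, splitter: splitter}) do
        {:ok, acc} -> accumulate(acc)        # recurse on the next chunk
        {:error, _} = err -> err             # abort on the first failure
      end
  end
end
```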