API Reference fnord v0.8.82
Modules
When a file or other input is too large for the model's context window, this
module may be used to process the input in chunks. It automatically modifies
the supplied agent prompt to include instructions for accumulating a response
across multiple chunks, based on the context (max context window tokens) value
of the supplied model parameter.
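The chunk-accumulation loop described above can be sketched as a fold over the chunks, threading the partial response through each agent call. All names here are illustrative sketches, not fnord's actual API.

```elixir
defmodule ChunkedSketch do
  @moduledoc """
  Hypothetical sketch: processes oversized input by folding an agent
  function over its chunks, accumulating a response as it goes.
  """

  # agent_fun receives the current chunk and the response accumulated so
  # far, and returns the updated response.
  def process(chunks, agent_fun) do
    Enum.reduce(chunks, "", fn chunk, acc ->
      agent_fun.(chunk, acc)
    end)
  end
end
```

In the real module the agent call is an LLM completion whose prompt carries the accumulation instructions; here it is reduced to a plain function so the control flow is visible.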
Behaviour for AI agents that process instructions and return responses.
This module's purpose is to highlight the frustrations of working with LLMs.
This agent combines the reasoning features of the OpenAI o3-mini model with its own reasoning process to research and answer the input question.
This module provides an agent that summarizes files' contents in order to generate embeddings for the database and summaries for the user.
Agent that scores memories for a conversation.
This module sends a request to the model and handles the response. It is able to handle tool calls and responses.
Compacts conversation history for AI completion requests by summarizing the tool calls and assistant messages that follow each user message, while preserving user messages and special system messages.
Pure functions for memory matching logic.
Evaluation engine for memory-based automatic thoughts.
Coordinates the mini-agents that manage project research notes.
OpenAI's tokenizer uses regexes that are not compatible with Erlang's regex engine. There are a couple of modules available on Hex, but all of them require a working python installation, access to rustc, a number of external dependencies, and some env flags set to allow them to compile.
This module splits a string into chunks by token count, while accounting for any other data that accompanies it to the token-limited API endpoint.
This module defines the behaviour for tool calls. Defining a new tool
requires implementing the spec/0 and call/2 functions.
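A hypothetical implementation of that contract might look like the following; the callback typespecs and the tool's shape are assumptions for illustration, not fnord's exact definitions.

```elixir
defmodule ToolBehaviourSketch do
  @moduledoc "Assumed shape of the tool-call behaviour: spec/0 and call/2."

  # spec/0 describes the tool to the LLM; call/2 executes it.
  @callback spec() :: map()
  @callback call(args :: map(), ctx :: term()) ::
              {:ok, String.t()} | {:error, term()}
end

defmodule EchoTool do
  @moduledoc "Toy tool that echoes its input back."
  @behaviour ToolBehaviourSketch

  @impl true
  def spec do
    %{name: "echo", description: "Echoes the supplied text back verbatim."}
  end

  @impl true
  def call(%{"text" => text}, _ctx), do: {:ok, text}
  def call(_args, _ctx), do: {:error, :missing_text}
end
```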
Note: The current crop of LLMs appear to be extremely overfitted to a tool called "apply_patch" for making code changes. This module is me giving up on trying to prevent them from using the shell tool to call a non-existent apply_patch command and instead rolling with it.
!@#$%^&*()_+ agents and their %$#@ing parameter shenanigans.
Deterministic, language-agnostic whitespace fitting for file hunks.
Lists all available projects except for the current project.
Tool for managing learned memories - patterns from experience that fire automatic thoughts.
Explicit validation tool that wraps the heavy QA validator.
Tool to add a new task to a Services.Task list.
Tool to create a new Services.Task list.
Tool to push a new task to the front of a Services.Task list.
Tool to resolve a task as success or failure in a Services.Task list.
Tool to return a task list as a formatted, detailed string.
Behaviour for launching a browser (or equivalent) to open a URL.
Default OS-aware browser launcher.
Aggregator for MCP commands. Directly handles list, check, add, update, and remove operations, and delegates login and status commands to specialized submodules.
Formats MCP check results in a human-friendly format with checkmarks.
MCP OAuth2 login entrypoint under the config namespace.
Show MCP OAuth token status for a server under the config namespace.
Cross-process filesystem lock helpers for arbitrary files. Uses a lock dir with atomic stale lock takeover.
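The mkdir-based lock-dir scheme described above can be sketched as follows; the stale-lock threshold and return shapes are assumptions, and a real implementation would need the takeover step itself to be atomic.

```elixir
defmodule LockSketch do
  @moduledoc """
  Hypothetical sketch of a cross-process file lock: `mkdir` is atomic on
  POSIX filesystems, so creating the lock dir doubles as acquiring the lock.
  """

  # A lock older than this is considered abandoned by a dead process.
  @stale_ms 60_000

  def acquire(lock_dir) do
    case File.mkdir(lock_dir) do
      :ok -> :ok
      {:error, :eexist} -> maybe_takeover(lock_dir)
      {:error, reason} -> {:error, reason}
    end
  end

  def release(lock_dir), do: File.rm_rf!(lock_dir)

  defp maybe_takeover(lock_dir) do
    with {:ok, %File.Stat{mtime: mtime}} <- File.stat(lock_dir, time: :posix),
         true <- System.os_time(:millisecond) - mtime * 1000 > @stale_ms do
      # Stale: remove and retry once. The real module must make this
      # takeover atomic to avoid two processes both "winning" here.
      File.rm_rf!(lock_dir)
      acquire(lock_dir)
    else
      _ -> {:error, :locked}
    end
  end
end
```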
Fnord is a code search tool that uses OpenAI's embeddings API to index and search code files.
Frobs are external tool call integrations. They allow users to define external actions that can be executed by the LLM while researching the user's query.
One-time migration from per-frob registry.json files to settings.json frob arrays. After successful migration of a frob, its registry.json is deleted to prevent stale configuration.
Wrapper for direct git CLI calls. Provides helper functions for repo checks, formatted info messages, and listing ignored files in a given root.
Provides a per-process HTTP pool override mechanism for Hackney pools.
Auto-discovers MCP endpoint paths when the default path returns 404.
Behaviour for DI-friendly OAuth2/OIDC Authorization Code + PKCE flow.
Default OAuth2 Authorization Code + PKCE adapter.
Builds Authorization header for MCP transports.
If token is near expiry, attempts a refresh via Client and persists.
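The near-expiry check can be sketched like this; the token shape, clock-skew window, and refresh callback are assumptions for illustration rather than the module's actual interface.

```elixir
defmodule TokenRefreshSketch do
  @moduledoc """
  Hypothetical sketch: refresh a token when it is within a skew window of
  expiry, so requests never race the expiration boundary.
  """

  # Refresh this many seconds before the token actually expires.
  @skew_seconds 60

  def ensure_fresh(%{expires_at: exp} = token, refresh_fun) do
    if exp - System.os_time(:second) <= @skew_seconds do
      # Near or past expiry: delegate to the refresh callback, which in
      # the real module would call the OAuth client and persist the result.
      refresh_fun.(token)
    else
      {:ok, token}
    end
  end
end
```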
Pure OAuth2 + PKCE client implementation for MCP servers.
Minimal credentials store for OAuth2 tokens.
OAuth2 server discovery and automatic configuration. Implements RFC 8414 Authorization Server Metadata discovery.
Minimal loopback HTTP server for OAuth2 Authorization Code callback.
RFC 7591 Dynamic Client Registration for OAuth2. Allows automatic registration of native clients with OAuth providers.
Supervisor for MCP client instances for the current invocation.
Convert MCP server config into Hermes transport tuples and helpers for OAuth header injection.
A simple notification module that works on macOS and Linux.
Project resolution from the current working directory.
Semantic search over indexed conversations.
Minimal in-memory approvals gate for sensitive "finalize" steps (M4).
Pure helper to extract a stable prefix for shell approvals.
GenServer that manages backup file creation for file editing operations with dual counter system.
Queue for injecting user messages into a conversation mid-completion.
Background indexer for conversations.
Drop-in-ish replacement for Application env that shadows values down a process tree. Think: dynamic scope via process ancestry.
GenServer maintaining the in-memory graph of all loaded memories.
A service that manages a pool of AI agent names, batch-allocating them from the nomenclater for efficiency. Names can be checked out and optionally checked back in for reuse within the same session.
This module provides a mechanism to perform actions only once, using a unique key provided by the caller to determine whether the action has already been performed this session.
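A minimal once-per-session mechanism along these lines can be built on an Agent that holds the set of keys already seen; the names and return values here are illustrative, not the module's actual API.

```elixir
defmodule OnceSketch do
  @moduledoc """
  Hypothetical sketch: run a function at most once per session, keyed by a
  caller-supplied unique key.
  """
  use Agent

  def start_link(_opts) do
    Agent.start_link(fn -> MapSet.new() end, name: __MODULE__)
  end

  def run_once(key, fun) do
    # Atomically check-and-mark the key so concurrent callers cannot both
    # observe it as unseen.
    ran? =
      Agent.get_and_update(__MODULE__, fn seen ->
        {MapSet.member?(seen, key), MapSet.put(seen, key)}
      end)

    if not ran?, do: fun.()
    :ok
  end
end
```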
Provides functions to compile and match regular expression patterns for approvals.
Manage frob enablement in settings.json using approvals-style arrays.
Manage Hermes MCP server configuration under the "mcp_servers" key in settings.
Unix file store for memories with composite scope (global + per-project).
Handles children.log file operations for memories. Format: newline-delimited list of child slugs (one per line).
Handles heuristic.json file operations for memories. Contains: pattern_tokens (bag-of-words with frequencies)
Handles meta.json file operations for memories. Contains: id, slug, label, response_template, scope, parent_id, timestamps, counters, weight
Conversations are stored per project in the project's store dir, under
conversations/. Each file is mostly JSON, but with a timestamp prepended
to the JSON data, separated by a colon. This allows for easy sorting without
having to parse dozens or hundreds of messages for each file.
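Given that layout, a file can be split on the first colon only, since the JSON body may itself contain colons; the module and function names below are illustrative sketches.

```elixir
defmodule ConversationFileSketch do
  @moduledoc """
  Hypothetical sketch of reading the "<timestamp>:<json>" layout described
  above without parsing the JSON body.
  """

  # parts: 2 splits on the FIRST colon only, leaving the JSON intact.
  def parse(contents) do
    [ts, json] = String.split(contents, ":", parts: 2)
    {String.to_integer(ts), json}
  end
end
```

Sorting files then only requires comparing the integer prefixes, which is the cheap-sort property the layout is designed for.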
Manages semantic index data for conversations within a project.
Handles migration of project store entries from absolute-path-based IDs to relative-path-based IDs.
User interface functions for output, logging, and user interaction.
Formats output strings using an external command specified by the
FNORD_FORMATTER environment variable. If unset or empty, returns the
original string. On command failure or non-zero exit code, logs a warning and
returns the original string.
Behaviour for UI output operations.
Production implementation of UI.Output that uses UI.Queue and Owl.IO.
Priority queue for UI operations to ensure proper serialization of output and user interactions.
Human-friendly duration formatting utilities.