API Reference: fnord v0.9.22


Modules

When a file or other input is too large for the model's context window, this module may be used to process the file in chunks. It automatically modifies the supplied agent prompt to include instructions for accumulating a response across multiple chunks, based on the context value (maximum context-window tokens) supplied by the model parameter.

Behavior for AI agents that process instructions and return responses.

State management for code-oriented agents (planner, implementor, validator). Provides multi-turn conversation primitives and task management helpers.

This module's purpose is to highlight the frustrations of working with LLMs.

Behaviour and execution engine for composite agents - agents that orchestrate work across multiple completion turns, optionally with tool use, structured output, and sub-agent delegation.

This agent applies a multi-step reasoning process to research, debug, and code in response to the user's prompt.

Functions related to the Coordinator's edit mode behavior.

Frippery and furbelows for the Coordinator agent. This module contains functions that provide fluff and flavor to the Coordinator's interactions, like greeting the user colorfully and appending the MOTD to the response.

Integration code for the Coordinator and AI.Tools, AI.Completion, etc.

Functions related to the Coordinator's user-interrupt handling behavior.

Behaviors related to injecting subconscious intuition into the Coordinator's workflow.

Functions related to the Coordinator's memory behavior: prompt text, identity injection at session start, semantic memory recall, and end-of-session memory reflection.

Notes-specific behaviors for AI.Agent.Coordinator, including generating messages related to note-taking and management.

Task-specific behaviors for AI.Agent.Coordinator, including generating messages related to task management.

Test-mode-specific behaviors for AI.Agent.Coordinator, including generating messages related to testing. Test mode is a special case that lets the dev do manual "integration testing" to verify tool functionality and integration with the agent code.

This module provides an agent that summarizes files' contents in order to generate embeddings for the database and summaries for the user.

Agent that examines two same-scope long-term memories and decides whether to merge them into a single synthesized memory.

Agent that analyzes session-scoped memories and outputs a structured JSON response describing actions to take (add/replace/delete) and which session memories were processed.

Acceptance review agent - behavioral and product-level specialist. Evaluates code changes from the perspective of a user and product designer: behavioral delta, UX coherency, integration effects, and user assumptions. Reads the before-state via git show to establish the original behavior before evaluating changes. Produces structured JSON findings.

Comment narrative agent. Evaluates whether comments form a coherent outline of the code's behavior and purpose - treating the codebase as developer UX and the comments as the documentation layer. Produces structured JSON findings.

Triage agent for code reviews. Estimates change complexity, partitions large changes into right-sized review units, fans out scoped Reviewers in parallel, optionally runs an integration review for cross-component seams, and synthesizes a deduplicated final report.

Slop detection agent. Scans comments, docs, error messages, and UI strings for AI writing tells and anti-patterns. Binary findings - it's slop or it isn't. Produces structured JSON findings.

Pedantic review agent - mechanical correctness specialist. Reads every changed file and checks spelling, naming consistency, doc and comment accuracy, spec completeness, project guideline adherence, formatting, and stale artifacts. Produces structured JSON findings.

Master review agent. Coordinates a comprehensive post-implementation review by researching the change, dispatching five specialist reviewers in parallel, confirming their findings against the actual code, and producing a unified severity-grouped report.

State and data flow review agent - mid-level architecture specialist. Traces how data moves through the system, examines implicit state machines, verifies contracts between modules, and evaluates separation of concerns, error propagation, and testability. Produces structured JSON findings.

Generic agent implementation for executing a %Skills.Skill{}.

This module sends a request to the model and handles the response, including tool calls and tool-call results.

API endpoint abstraction.

Coordinates the mini-agents that manage project research notes.

OpenAI's tokenizer uses regexes that are not compatible with Erlang's regex engine. There are a couple of modules available on Hex, but all of them require a working Python installation, access to rustc, a number of external dependencies, and some env flags set to allow them to compile.

This module is used to split a string into chunks by the number of tokens, while accounting for other data that might be going with it to the API endpoint with the limited token count.
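Chunking by token budget rather than by byte count can be sketched roughly like this. This is an illustrative sketch only, not fnord's implementation; `count_tokens` is a hypothetical stand-in for the project's tokenizer, passed in as a function.

```elixir
# Sketch: accumulate lines into chunks, closing the current chunk when adding
# the next line would exceed the token budget. The tokenizer is injected as
# `count_tokens` because fnord's actual tokenizer is internal.
defmodule ChunkSketch do
  def split(text, budget, count_tokens) do
    text
    |> String.split("\n")
    |> Enum.reduce({[], [], 0}, fn line, {chunks, current, used} ->
      cost = count_tokens.(line)

      if used + cost > budget and current != [] do
        # Close the current chunk and start a new one with this line.
        {[Enum.reverse(current) | chunks], [line], cost}
      else
        {chunks, [line | current], used + cost}
      end
    end)
    |> then(fn {chunks, current, _used} ->
      [Enum.reverse(current) | chunks]
      |> Enum.reverse()
      |> Enum.map(&Enum.join(&1, "\n"))
    end)
  end
end
```

Note that a real implementation also has to reserve budget for the other data accompanying each chunk to the API endpoint, as the summary above describes.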

This module defines the behaviour for tool calls. Defining a new tool requires implementing the spec/0 and call/2 functions.
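A new tool implementation might look like the following minimal sketch. The behaviour module name and the exact return shapes of spec/0 and call/2 are assumptions for illustration; only the two required callbacks come from the summary above.

```elixir
# Hypothetical sketch of a tool module. The behaviour name (AI.Tools) and
# the callback return shapes are guesses, not fnord's documented contract.
defmodule AI.Tools.Example do
  @behaviour AI.Tools

  @impl true
  def spec do
    %{
      name: "example_tool",
      description: "Echoes its input back to the agent.",
      parameters: %{
        type: "object",
        properties: %{text: %{type: "string"}},
        required: ["text"]
      }
    }
  end

  @impl true
  def call(_agent, %{"text" => text}) do
    {:ok, "echo: " <> text}
  end
end
```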

Note: The current crop of LLMs appear to be extremely overfitted to a tool called "apply_patch" for making code changes. This module is me giving up on trying to prevent them from using the cmd_tool to call a non-existent apply_patch command and instead just rolling with it.

!@#$%^&*()_+ agents and their %$#@ing parameter shenanigans.

Deterministic, language-agnostic whitespace fitting for file hunks.

Provides file-level information queries for the AI agent. This module reads file contents using AI.Tools.get_file_contents/1 which is the canonical accessor for file content. A centralized cache (Services.FileCache) is used by that accessor to avoid unnecessary disk reads.

Lists all available projects except for the current project.

Long-term memory tool for project and global scopes. Used internally by the MemoryIndexer service to persist, update, recall, and delete memories that have been promoted from session scope. Not exposed in the coordinator's toolbox -- only the background indexer pipeline calls this.

JSON Schema validation and coercion for tool call arguments.

Tool entry point for the review agent pipeline. Delegates to AI.Agent.Review.Decomposer, which triages changes by complexity, partitions large diffs into focused review units, and fans out scoped Reviewers - each running five specialists (pedantic, acceptance, state flow, no-slop, breadcrumbs) - before synthesizing a deduplicated final report.

Execute an enabled Skill by name.

Save a new skill definition either into the current project's skills directory or into the user-global skills directory.

A tool that returns the fnord spec to the LLM to help it understand how to use the CLI and what commands are available. This allows the LLM to assist the user with questions about fnord's own capabilities.

Tool to add a new task to a Services.Task list.

Tool to create a new Services.Task list.

Tool to update the description of an existing Services.Task list.

Push one or more tasks to the front of an existing task list. Accepts list_id as a string or integer and normalizes it to a string.

Tool to resolve a task as success or failure in a Services.Task list.

Tool to return a task list as a formatted, detailed string.

Behaviour for launching a browser (or equivalent) to open a URL.

Default OS-aware browser launcher.

Cmd

Aggregator for MCP commands. Directly handles list, check, add, update, and remove operations, and delegates login and status commands to specialized submodules.

Formats MCP check results in a human-friendly format with checkmarks.

MCP OAuth2 login entrypoint under the config namespace.

Show MCP OAuth token status for a server under the config namespace.

Manages project validation rules stored in validation settings.

CLI management for Skills.

Cross-process filesystem lock helpers for arbitrary files. Uses a lock dir with atomic stale lock takeover.
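Lock directories work because directory creation is atomic: File.mkdir/1 either creates the directory or fails if it already exists, so the filesystem arbitrates which OS process wins. A sketch of the acquire step, with a hypothetical staleness threshold (the module, names, and threshold are illustrative, not fnord's actual implementation):

```elixir
# Sketch of lock-dir acquisition with stale-lock takeover. All names and the
# 60-second staleness threshold are illustrative assumptions.
defmodule LockSketch do
  @stale_after_s 60

  def acquire(path) do
    lock_dir = path <> ".lock"

    case File.mkdir(lock_dir) do
      :ok ->
        {:ok, lock_dir}

      {:error, :eexist} ->
        # Takeover: if the lock dir is older than the threshold, assume its
        # owner died, remove it, and attempt the atomic mkdir once more.
        with {:ok, %File.Stat{mtime: mtime}} <- File.stat(lock_dir, time: :posix),
             true <- System.os_time(:second) - mtime > @stale_after_s,
             _ <- File.rmdir(lock_dir),
             :ok <- File.mkdir(lock_dir) do
          {:ok, lock_dir}
        else
          _ -> {:error, :locked}
        end
    end
  end

  def release(lock_dir), do: File.rmdir(lock_dir)
end
```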

Fnord is a code search tool that uses OpenAI's embeddings API to index and search code files.

TOML parsing utilities.

Frobs are external tool call integrations. They allow users to define external actions that can be executed by the LLM while researching the user's query.

One-time migration from per-frob registry.json files to settings.json frob arrays. After successful migration of a frob, its registry.json is deleted to prevent stale configuration.

UI-driven prompting for frob parameters. This module uses UI to prompt the user for each property defined in a frob's spec.json and returns a map of coerced values. It relies on AI.Tools.Params for validation/coercion.

Wrapper for direct git CLI calls. Provides helper functions for repo checks, formatted info messages, and listing ignored files in a given root.

Provides a per-process HTTP pool override mechanism for Hackney pools.

This behaviour wraps the AI-powered operations used by Cmd.Index to allow overrides for testing. See impl/0.

Auto-discovers MCP endpoint paths when the default path returns 404.

Integration point for configuring Hermes MCP logging.

Behaviour for DI-friendly OAuth2/OIDC Authorization Code + PKCE flow.

Default OAuth2 Authorization Code + PKCE adapter.

Builds the Authorization header for MCP transports. If the token is near expiry, attempts a refresh via Client and persists the refreshed token.

Pure OAuth2 + PKCE client implementation for MCP servers.

Minimal credentials store for OAuth2 tokens.

OAuth2 server discovery and automatic configuration. Implements RFC 8414 Authorization Server Metadata discovery.

Minimal loopback HTTP server for OAuth2 Authorization Code callback.

RFC 7591 Dynamic Client Registration for OAuth2. Allows automatic registration of native clients with OAuth providers.

Creates a small executable wrapper for stdio MCP servers.

Supervisor for MCP client instances for the current invocation.

Converts MCP server config into Hermes transport tuples, with helpers for OAuth header injection.

Shared file-backed storage for long-term memories.

Global memory storage implementation for the Memory behaviour.

Helpers for repairing and migrating memory index statuses.

Helpers for presenting %Memory{} metadata to humans.

Project-level memory storage implementation for the Memory behaviour.

Defines long-term scope rules for memories.

A simple notification module that works on macOS and Linux.

Helpers for persisting raw assistant outputs for a project.

Centralized exact-match text replacement engine. Provides validated string substitution with hashline prefix detection, typography normalization, and ambiguity checks. Used by both the file_edit_tool (coordinator's direct path) and the Patcher agent (natural language instruction path).

Project resolution from the current working directory.

Thin wrapper around a JSON library.

Protocol for converting structs to JSON-safe plain maps. Implement this instead of @derive {Jason.Encoder, ...} to keep the JSON backend as an internal detail of SafeJson.
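A struct opting in might implement the protocol roughly as below. This is a hedged sketch: the callback name to_map/1 is a guess, and the Memory struct fields shown are assumed for illustration.

```elixir
# Hypothetical sketch: implement the SafeJson protocol for a struct instead
# of deriving Jason.Encoder. The callback name (to_map/1) and the Memory
# struct fields are assumptions, not fnord's documented API.
defimpl SafeJson, for: Memory do
  def to_map(%Memory{id: id, scope: scope, content: content}) do
    # Return a JSON-safe plain map; the JSON backend stays hidden in SafeJson.
    %{"id" => id, "scope" => scope, "content" => content}
  end
end
```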

Semantic search over indexed conversations.

Minimal in-memory approvals gate for sensitive "finalize" steps (M4).

Pure helper to extract a stable prefix for shell approvals.

GenServer that manages backup file creation for file-editing operations with a dual-counter system.

Session-local (per BEAM node) control plane for pausing background indexing on a per-model basis.

Queue for injecting user messages into a conversation mid-completion.

Background indexer for conversations.

GenServer-backed file cache used by AI tools and other services that need to read file contents from the project workspace.

Drop-in-ish replacement for Application env that shadows values down a process tree. Think: dynamic scope via process ancestry.

Background service that promotes session-scoped memories to long-term (project/global) storage. Independently scans conversations for unprocessed session memories, processes one conversation at a time via the Memory.Indexer agent, and applies the resulting actions.

A service that manages a pool of AI agent names, batch-allocating them from the nomenclater for efficiency. Names can be checked out and optionally checked back in for reuse within the same session.

This module provides a mechanism to perform actions only once, using a unique key provided by the caller to determine whether the action has already been performed this session.

Tracks nested skill execution depth per process tree.

Represents a task list with an identifier, optional description, and a list of tasks. Provides core operations for creating and manipulating task lists.

Utility functions for task management.

Singleton service for creating temporary files via Briefly.

Provides functions to compile and match regular expression patterns for approvals.

Manage frob enablement in settings.json using approvals-style arrays.

Manage Hermes MCP server configuration under the "mcp_servers" key in settings.

Manage skill enablement in settings.json.

Reads and normalizes project-scoped validation settings.

Skills are TOML-defined agent presets that can be executed by the coordinator.
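A skill file might look something like the following. Every key name here is an illustrative guess, not fnord's documented schema; the only fact taken from the summary above is that skills are TOML-defined agent presets.

```toml
# Hypothetical skill definition; all keys below are illustrative guesses.
name = "summarize_changelog"
description = "Summarize recent changes for release notes"
prompt = """
Read CHANGELOG.md and produce a one-paragraph summary of the latest release.
"""
```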

Load and validate skill definitions from TOML files.

Runtime helpers for executing skills.

A single skill definition loaded from a TOML file.

Encode a skill definition as TOML.

Module for recording and checking API usage data. Coordination across OS processes is handled for file reads and writes; it is up to the caller to ensure that requests are internally ordered for consistency.

Conversations are stored per project in the project's store dir, under conversations/. Each file is mostly JSON, but with a timestamp prepended to the JSON data, separated by a colon. This allows for easy sorting without having to parse dozens or hundreds of messages for each file.
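Given that layout, the sort key can be read without decoding the JSON body at all; a sketch (the real reader is internal to the Store, and the module name here is hypothetical):

```elixir
# Sketch: a conversation file looks like "1712345678:{...json...}". Splitting
# on the first colon yields the timestamp cheaply; JSON decoding is deferred
# until the message body is actually needed.
defmodule ConversationFileSketch do
  def timestamp(path) do
    [ts, _json] =
      path
      |> File.read!()
      |> String.split(":", parts: 2)

    String.to_integer(ts)
  end
end
```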

Helpers to best-effort heal and migrate conversation task-list statuses and shapes.

Manages semantic index data for conversations within a project.

Handles migration of project store entries from absolute-path-based IDs to relative-path-based IDs.

Façade over entry persistence using direct File operations and Entry submodules.

Behaviour definition for persisting project entries in the store.

Migrates legacy entry directories into the new files/ layout.

UI

User interface functions for output, logging, and user interaction.

Formats output strings using an external command specified by the FNORD_FORMATTER environment variable. If unset or empty, returns the original string. On command failure or non-zero exit code, logs a warning and returns the original string.

Behaviour for UI output operations.

Production implementation of UI.Output that uses UI.Queue and Owl.IO.

Priority queue for UI operations to ensure proper serialization of output and user interactions.

Optional transcript writer for --tee. When started, every UI output (Logger messages, stdout puts, direct stderr writes) is mirrored to a plain-text file with ANSI escape codes stripped.

Erlang :logger handler that mirrors log messages to the tee file.

Human-friendly duration formatting utilities.

Utilities for interpreting environment variables used in fnord.

Evaluates and runs project-scoped validation rules after code-modifying tool usage.