## v0.1.0

### execute_fn receives agent_id
- `Tool.execute_fn` type updated to `(agent_id, tool_call_id, args)` — every tool now receives the calling agent's id as the first argument.
- `ask_agent` drops the `own_id` closure capture — reads from `agent_id`.
- `spawn_agent` drops the `orchestrator_id` closure capture — reads from `agent_id`.
- `worker_tools/3` (was `/4`) and `orchestrator_tools/6` (was `/7`) — each lost one parameter as a result.
- `list_models` marks the caller's current model with `current: true` via a dynamic `Agent.get_state` lookup — works correctly when granted to workers.
- `AIBehaviour` — added `get_model/3` callback for base-url-aware lookups.
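A minimal sketch of the new `execute_fn` shape. The tool map fields and the `echo` tool here are illustrative stand-ins, not the library's real definitions:

```elixir
defmodule EchoTool do
  # Every execute_fn now takes the calling agent's id first, so tool
  # closures no longer need to capture own_id / orchestrator_id at
  # build time — the runtime supplies the caller's identity per call.
  def build do
    %{
      name: "echo",
      execute_fn: fn agent_id, tool_call_id, args ->
        "agent #{agent_id} (call #{tool_call_id}) says: #{args["text"]}"
      end
    }
  end
end
```

Because the id arrives per invocation, a single tool definition can be shared across agents instead of being rebuilt per agent.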
### Explicit agent targeting
- `ask_agent`, `delegate_task`, `destroy_agent`, `interrupt_agent` — replaced the three optional `type`/`name`/`id` fields with a required `identifier` string and a required `identifier_type` enum (`"type"`, `"name"`, `"id"`). The LLM can no longer omit all three and silently fail to target an agent.
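An illustrative resolver for the required pair; the function name and return shapes are assumptions, not the library's actual API:

```elixir
defmodule TargetResolver do
  @valid_types ~w(type name id)

  # Both fields must be present and identifier_type must be one of the
  # three enum values — anything else is a hard error rather than a
  # silent no-op.
  def resolve(%{"identifier" => ident, "identifier_type" => t})
      when t in @valid_types and is_binary(ident) and ident != "" do
    {:ok, {String.to_existing_atom(t), ident}}
  end

  def resolve(_args), do: {:error, "identifier and identifier_type are required"}
end
```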
### spawn_agent hardening
- `base_url` is now always required in `spawn_agent` (cloud providers may pass a placeholder; only ollama/llama_cpp use it).
- The `spawn_agent` execute_fn is refactored into focused helpers: `validate_base_url`, `resolve_spawn_model`, `build_spawn_start_opts`, `filter_granted`.
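A minimal sketch of the required-`base_url` rule. The helper name matches the changelog, but its body here is an assumption about behaviour:

```elixir
defmodule SpawnChecks do
  # Only local providers (ollama, llama_cpp) actually dial base_url;
  # cloud providers may pass a placeholder, but the field itself can
  # no longer be omitted or empty.
  def validate_base_url(args) do
    case args do
      %{"base_url" => url} when is_binary(url) and url != "" -> {:ok, url}
      _ -> {:error, "base_url is required"}
    end
  end
end
```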
### Tool output truncation
- Tool results are now capped at 2,000 lines or 50 KB (whichever is reached first) before being stored in the session. Outputs that exceed either limit are truncated and suffixed with `\n[output truncated]`. Both limits are always enforced — line truncation is applied first, then byte truncation on the result.
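The described cap can be sketched as a two-stage pipeline — lines first, then bytes on the result. The limits mirror the changelog; the module and function names are hypothetical:

```elixir
defmodule Truncate do
  @max_lines 2_000
  @max_bytes 50 * 1024
  @suffix "\n[output truncated]"

  # Apply the line cap first, then the byte cap on whatever remains,
  # so both limits are always enforced.
  def cap(output) do
    output
    |> cap_lines()
    |> cap_bytes()
  end

  defp cap_lines(output) do
    lines = String.split(output, "\n")

    if length(lines) > @max_lines do
      (lines |> Enum.take(@max_lines) |> Enum.join("\n")) <> @suffix
    else
      output
    end
  end

  # Note: binary_part/3 may split a multi-byte UTF-8 character; a real
  # implementation would cut on a character boundary.
  defp cap_bytes(output) when byte_size(output) > @max_bytes do
    binary_part(output, 0, @max_bytes) <> @suffix
  end

  defp cap_bytes(output), do: output
end
```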
### Compactor fixes
- `estimate_tokens` now counts `{:tool_call, id, name, args}` content parts (previously ignored, causing systematic underestimates).
- `compact_local` filters all `{:custom, :summary}` messages from `old` before calling `summarize/2` — only messages since the last checkpoint are summarised, preventing the previous checkpoint from bloating the request.
- `format_history` strips thinking blocks and truncates tool results to 2,000 chars — keeps the summarisation input small without losing signal.
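A toy character-based estimator showing why counting `{:tool_call, ...}` parts matters — large tool arguments contribute real context size. The divisor and part shapes are illustrative assumptions:

```elixir
defmodule Estimate do
  @chars_per_token 4

  def tokens(parts) do
    parts
    |> Enum.map(&part_chars/1)
    |> Enum.sum()
    |> div(@chars_per_token)
  end

  defp part_chars({:text, text}), do: String.length(text)

  # Previously ignored: tool calls carry a name plus serialised args,
  # which can dominate the context when arguments are large.
  defp part_chars({:tool_call, _id, name, args}),
    do: String.length(name) + String.length(inspect(args))
end
```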
### Queued message follow-up fix
- A user message sent while the orchestrator is executing tools now correctly triggers a dedicated follow-up turn after all tools complete. Previously, `do_run_llm` called during tool continuation advanced `stream_start` past the queued message, so `maybe_turn_start` found no pending input.
### Runtime model switching
- `Agent.change_model/2` — replaces the model in the agent's GenServer state for subsequent LLM turns without affecting the current conversation history or status.
### AGENTS.md prepending for all agents
- `Tools.prepend_agents_md/2` is now public — walks up from `cwd` to the nearest `.git` root, reads `AGENTS.md` if found, and prepends its content to the given system prompt. Returns the prompt unchanged when no file is found or `cwd` is empty.
- `orchestrator_tools/7` — added `cwd` parameter (default `""`); passed into the `spawn_agent` closure so dynamically spawned workers inherit the same project context.
- `spawn_agent` tool — prepends `AGENTS.md` to the worker's system prompt before starting the agent process; `cwd` is stored in the new agent's state.
- `Agent.t` — added `cwd: String.t()` field (default `""`); set from start opts.
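The walk-up behaviour can be sketched like this — a simplified stand-in for `Tools.prepend_agents_md/2`, not its actual source:

```elixir
defmodule AgentsMd do
  # Empty cwd: nothing to search, prompt passes through unchanged.
  def prepend(prompt, ""), do: prompt

  def prepend(prompt, cwd) do
    case find_git_root(Path.expand(cwd)) do
      nil ->
        prompt

      root ->
        case File.read(Path.join(root, "AGENTS.md")) do
          {:ok, content} -> content <> "\n\n" <> prompt
          {:error, _} -> prompt
        end
    end
  end

  # Walk parent directories until a .git dir is found or the
  # filesystem root is reached.
  defp find_git_root(dir) do
    cond do
      File.dir?(Path.join(dir, ".git")) -> dir
      dir == Path.dirname(dir) -> nil
      true -> find_git_root(Path.dirname(dir))
    end
  end
end
```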
### Skills — explicit load_skill / list_skills tools
- `Skill.load_skill_tool/1` — builds a `load_skill` tool as a closure over the skill pool; automatically injected by `AgentSpec.to_start_opts/2` for every agent when `skill_pool:` is non-empty. No TEAM.json declaration needed.
- `Skill.list_skills_tool/1` — builds a `list_skills` tool returning all available skill names and descriptions. Opt-in: add `"list_skills"` to an agent's TEAM.json `"tools"` array to enable autonomous skill discovery.
- `Skill.system_prompt_section/1` updated: no longer includes file paths or the resources dir; instructs agents to use `load_skill` instead of `read`.
- `AgentSpec.resolve_tools/2` updated: automatically appends `load_skill_tool` when `skill_pool:` is non-empty, regardless of `spec.skills`.
### Inter-agent tools — deadlock detection + improvements
- `ask_agent/2` — now accepts `own_id` for deadlock detection; before blocking, registers `{:waiting, own_id} → target_id` in `Planck.Agent.Registry` (auto-cleared on task exit) and checks for a circular wait chain; returns a clear error instead of deadlocking if a cycle is detected.
- `worker_tools/4` — added `own_id` parameter (passed to `ask_agent` for cycle detection); callers must now supply the agent's own id.
- `orchestrator_tools/6` — added `grantable_skills` parameter so skills can be granted to dynamically spawned workers via `spawn_agent`.
- `spawn_agent` — spawned workers now receive a `sender` identity so the orchestrator knows which worker replied via `send_response`.
- `list_team/1` — added `verbose: boolean` parameter; verbose mode includes tool names and model for each team member.
- `list_models/1` — output now includes `base_url` for each model so the LLM can pass the correct base_url when calling `spawn_agent`.
- Agent `init` broadcasts `:worker_spawned` on the session PubSub topic when a worker with a `delegator_id` starts, enabling UIs to refresh the agent list.
- Non-blocking tool execution: `handle_continue({:execute_tools})` now spawns each tool as a supervised fire-and-forget task; results are collected via `handle_info({:tool_done})`; the GenServer loop stays free for abort/prompt during tool execution.
- `abort/1` changed from cast to call; blocks until the agent is idle, closing the race condition between abort and subsequent prompt/rewind calls.
- `cost: float()` added to agent state; accumulated from model rates on each `:done` event; persisted to session metadata; broadcast in `:usage_delta`.
- `Message.estimate_tokens/1` — public character-based token estimator.
- `Agent.estimate_tokens/1` — public API that computes current context size.
- `running_tools`/`tool_results_acc` added to agent state for non-blocking tool tracking.
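The cycle check above amounts to walking the waits-for chain before blocking. A toy version over a plain map (the real code registers `{:waiting, own_id} → target_id` in `Planck.Agent.Registry`, but the walk is the same idea; names are illustrative):

```elixir
defmodule WaitGraph do
  # Returns true if making own_id wait on target_id would close a cycle,
  # i.e. following the waits-for chain from target_id reaches own_id.
  def would_deadlock?(waiting, own_id, target_id) do
    walk(waiting, target_id, own_id, MapSet.new())
  end

  defp walk(_waiting, current, own_id, _seen) when current == own_id, do: true

  defp walk(waiting, current, own_id, seen) do
    cond do
      # Already visited: a cycle that doesn't involve own_id; stop.
      MapSet.member?(seen, current) ->
        false

      true ->
        case Map.fetch(waiting, current) do
          {:ok, next} -> walk(waiting, next, own_id, MapSet.put(seen, current))
          :error -> false
        end
    end
  end
end
```

When the check fires, `ask_agent` can return an error string immediately instead of blocking two agents on each other forever.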
## Prior entries

First release.
- `Planck.Agent.Sidecar` — behaviour for distributed sidecar extensions; single `tools/0` callback; module-level RPC entry points: `discover/0` (auto-detects the entry module via a `:persistent_term`-cached scan, only caches on success), `list_tools/0`, `list_tools/1`, `execute_tool/3`, `execute_tool/4`
- `Planck.Agent.Compactor` — redesigned: `compact/2` and `compact_timeout/0` callbacks; unified `build/2` accepting `sidecar_node:` and `compactor:` opts for remote sidecar compactors with local fallback; the `compactor:` string is converted to a `:"Elixir.<name>"` atom before RPC; `load/1` removed
- `AgentSpec.compactor` — per-agent compactor module name string; resolved via `Compactor.build/2` at session start
- OTP-based agent runtime with a GenServer per agent
- Team lifecycle: orchestrator owns team, team dies with orchestrator
- Inter-agent tools: `ask_agent`, `delegate_task`, `send_response`, `list_team`
- Orchestrator-only tools: `spawn_agent`, `destroy_agent`, `interrupt_agent`, `list_models`
- `spawn_agent` accepts a `"tools"` JSON array; the orchestrator may grant any subset of its own `grantable_tools` to the spawned worker (no privilege escalation)
- `Planck.Agent.ExternalTool` — declarative external tool spec loaded from `<name>/TOOL.json`; `{{key}}` interpolation in commands; erlexec-backed execution; `load_all/1`, `from_file/1`
- `Planck.Agent.Compactor` — defines `@callback compact/1`; custom compactors implement this behaviour in a module inside a `.exs` file, allowing helper functions alongside the main callback; `load/1` compiles the file and wraps the module's `compact/1` as an `on_compact` function
- Registry-based agent discovery by type, name, or id
- Parallel tool execution via `Task.async_stream`
- Phoenix.PubSub broadcasting on `"agent:#{id}"` and `"session:#{session_id}"` topics
- Token usage tracking: `:usage_delta` events in real-time and `usage` in `:turn_end`
- `stop/1` — graceful shutdown; cancels the in-flight stream via `terminate/2`
- `get_info/1` — lightweight metadata snapshot
- `Planck.Agent.BuiltinTools` — `read/0`, `write/0`, `edit/0`, `bash/0` tool factories
- `read` streams line-by-line with optional `offset` and `limit`
- `bash` is backed by erlexec; accepts `cwd` and `timeout` as runtime JSON args; stdout and stderr are both captured
- `Planck.Agent.Skill` — filesystem-based skill loader; `load_all/1`, `from_file/1`, `system_prompt_section/1`; skills are `<name>/SKILL.md` directories with YAML-style frontmatter
- `Planck.Agent.Session` — SQLite-backed session store with checkpoint-based pagination; caller-supplied `:dir` (no default)
- `Planck.Agent.Compactor` — default LLM-based context compaction anchored on `model.context_window`
- `Planck.Agent.Team` — directory-based team loader (`TEAM.json` + `members/<name>.md`); `%Team{source: :filesystem | :dynamic}`; `Team.load/1` and `Team.dynamic/1`
- `Planck.Agent.AgentSpec` — explicit constructor `new/1`; JSON parsers `from_map/2` and `from_list/2` for member entries; `description`, `tools: [String.t()]`, and `skills: [String.t()]` fields; `to_start_opts/2` accepts `tool_pool:` and `skill_pool:` overrides — tool names resolve from `tool_pool:` (falling back to the `tools:` override when `spec.tools` is empty); skill names resolve from `skill_pool:` and their descriptions are appended to `system_prompt` via `Skill.system_prompt_section/1` when `spec.skills` is non-empty
- Member `name` defaults to `type` when not provided; `Team.load/1` rejects duplicate names, so multiple same-type members must be explicitly named
- `spawn_agent` tool accepts a `"skills"` parameter and a `grantable_skills` closure arg, symmetric with `grantable_tools`
- `Planck.AI.Model.providers/0` — valid provider atoms
- Pluggable `on_compact` hook — `Compactor.build/2` returns a ready-to-use function
- `@type agent` and `@type t` now have full `@typedoc` documentation with all fields typed
### Session API additions
- `Session.append/3` changed from a fire-and-forget cast to a synchronous call — returns `pos_integer() | nil` (the SQLite autoincrement row id, or `nil` when the session is not found); enables the agent to set `Message.id = db_id` immediately after each persist
- `Session.truncate_after/2` — deletes all messages with `id >= db_id` across all agents in a session; used by the edit-message feature
- `Session.messages/1` rows now include `db_id: pos_integer()` — the SQLite row id
- `Message.id` is now the SQLite row id after persistence (previously a random UUID); this unifies the two identifiers so callers never need to track both
- `Message.id` is not stored in the serialised blob — the field is stripped before writing and set from the DB `id` column on every read; the row id is therefore authoritative for all rows, including legacy ones that stored a UUID
- `Agent.rewind_to_message/2` — truncates both the session and in-memory history to strictly before the given db_id, then reloads from the DB to restore canonical order and rebuild `turn_checkpoints`; replaces `Agent.rewind/2`, which is removed
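The strip-on-write / set-on-read round trip can be illustrated like this; the message shape and helper names are assumptions for the sketch, not the library's serialisation code:

```elixir
defmodule Persist do
  # Drop :id before serialising — the DB row id is authoritative, so
  # whatever id the in-memory message carried (UUID or row id) is
  # never written into the blob.
  def serialise(message) do
    message
    |> Map.delete(:id)
    |> :erlang.term_to_binary()
  end

  # On read, the row id from the DB becomes Message.id, even for
  # legacy rows whose blobs once stored a UUID.
  def deserialise(blob, db_id) do
    blob
    |> :erlang.binary_to_term()
    |> Map.put(:id, db_id)
  end
end
```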
### Message persistence ordering
- Queued messages (received while the agent is streaming) are no longer persisted immediately; they retain a UUID id in memory and are flushed to the session at the start of the next LLM turn via `flush_unpersisted_messages`. This guarantees that the queued message's db_id is always greater than the current turn's assistant response, preserving correct insertion order in the DB.
- `flush_unpersisted_messages` and `reload_messages_from_session` are internal helpers that keep in-memory message order consistent with DB order after queuing or rewind; `turn_checkpoints` is rebuilt from the reloaded list.
### Agent API
- `Agent.prompt/3` is now a synchronous `call` (was a `cast`) — returns `:ok` once the agent has set its status to `:streaming`; if the agent is already busy, the message is queued (appended to history) and re-triggered automatically after the current turn ends via `maybe_turn_start/1`
- `send_response` tool now carries sender attribution: the orchestrator receives `{:agent_response, response, %{id, name}}` and stores `sender_id`/`sender_name` in the message metadata
- `to_ai_messages/1` converts `{:custom, :agent_response}` messages to the `:user` role, prefixed with `"Response from <name>: "` when `sender_name` metadata is present
- `ask_agent` no longer accepts a `timeout_ms` parameter — blocks indefinitely; monitors the target process and returns `{:error, "Agent terminated: ..."}` if it crashes; subscribes before prompting to close the race condition
- `delegate_task` tool result now includes guidance to end the turn
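The monitor-before-prompt pattern behind the new `ask_agent` behaviour can be sketched with raw processes. The message shapes here are illustrative, not the library's protocol:

```elixir
defmodule Ask do
  # Monitor before sending, so a crash of the target surfaces as a
  # :DOWN message instead of leaving the caller blocked forever.
  def ask(target_pid, question) do
    ref = Process.monitor(target_pid)
    send(target_pid, {:ask, self(), question})

    receive do
      {:answer, reply} ->
        Process.demonitor(ref, [:flush])
        {:ok, reply}

      {:DOWN, ^ref, :process, _pid, reason} ->
        {:error, "Agent terminated: #{inspect(reason)}"}
    end
  end
end
```

Monitoring an already-dead pid delivers an immediate `:DOWN` with reason `:noproc`, which is exactly what closes the race between the target crashing and the caller starting to wait.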
### Notes
- `planck_agent` is a pure library with no runtime config module; filesystem-path configuration (sessions, skills, tools, compactor) lives in `Planck.Headless.Config`. Callers using `planck_agent` directly pass paths as explicit arguments.
- `Planck.Agent.TeamTemplate` was iterated out during development — superseded by `Planck.Agent.Team` and `AgentSpec.from_map/2`/`from_list/2`.