# Synaptic Technical Overview
This document describes how the Synaptic workflow engine is structured inside the OTP application and where to look when extending it.
## Entry point and supervision tree
`Synaptic` (`lib/synaptic.ex`) is the public API. It exposes `start/3`, `resume/2`, `inspect/1`, `history/1`, and `workflow_definition/1`, which all delegate to `Synaptic.Engine`. `Synaptic.Application` is the OTP application callback. Its child list starts the workflow runtime before any Phoenix components:

- `Synaptic.Registry` – a `Registry` process keyed by run id, so `Synaptic.Runner` processes can be addressed via `{:via, Registry, {Synaptic.Registry, run_id}}`.
- `Synaptic.RuntimeSupervisor` – a `DynamicSupervisor` that owns every `Synaptic.Runner` process. Each workflow run is supervised independently, so a crash only restarts that single run.
- Phoenix telemetry/pubsub/Finch/endpoint services follow afterwards.
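The boot order above can be sketched as an ordinary application callback. This is a minimal sketch, not the verbatim `Synaptic.Application` source; the specific options (`keys: :unique`, `strategy: :one_for_one`, the top-level supervisor name) are assumptions:

```elixir
defmodule Synaptic.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Registry keyed by run id, enabling {:via, Registry, {Synaptic.Registry, run_id}}
      {Registry, keys: :unique, name: Synaptic.Registry},
      # DynamicSupervisor owning every Synaptic.Runner; a crash restarts only that run
      {DynamicSupervisor, name: Synaptic.RuntimeSupervisor, strategy: :one_for_one}
      # Phoenix telemetry/pubsub/Finch/endpoint children follow here
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Synaptic.Supervisor)
  end
end
```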
## Workflow compilation DSL
`Synaptic.Workflow` is a macro module imported by workflow definitions. It:

- Registers the accumulating `@synaptic_steps` attribute, builds `Step` structs via `step/3`, and injects per-step handlers named `__synaptic_handle__/2`.
- Provides `commit/0` for marking the workflow complete and `suspend_for_human/2` for pausing.
- Emits `__synaptic_definition__/0`, which returns `%{module: workflow_module, steps: [%Synaptic.Step{}, ...]}`, consumed by the engine.

`Synaptic.Step` defines the struct + helper for calling back into the generated handlers.
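A workflow definition using this DSL might look like the sketch below. Only `use Synaptic.Workflow`, `step/3`, and `suspend_for_human/2` come from this document; the exact `step/3` call shape (name, options, do-block), the `:retry` option spelling, and the `suspend_for_human/2` arguments are assumptions for illustration:

```elixir
defmodule MyApp.PlanWorkflow do
  use Synaptic.Workflow

  # Assumed step/3 shape: step name, options, and a do-block whose result
  # becomes the step's return value ({:ok, map} merges into the context).
  step :draft, retry: 2 do
    {:ok, %{draft: "first pass"}}
  end

  # Pausing a run: waiting state stores the step and a resume_schema
  # (argument shape assumed from the runner description below).
  step :approval, [] do
    suspend_for_human(%{plan: "review the draft"}, resume_schema: %{approved: :boolean})
  end
end

# At compile time this records two %Synaptic.Step{} structs and generates
# __synaptic_handle__/2 and __synaptic_definition__/0; nothing runs yet.
```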
## Runtime execution
`Synaptic.Engine` is responsible for orchestrating `Synaptic.Runner`s:

- When `Synaptic.start/3` is called, it fetches the workflow definition, generates a run id, and asks `Synaptic.RuntimeSupervisor` to start a new runner with that definition + initial context.
- The `:start_at_step` option allows starting execution at a specific step by name. The engine validates that the step exists, finds its index in the steps list, and passes `start_at_step_index` to the runner. Invalid step names return `{:error, :invalid_step}`.
- `resume/2`, `inspect/1`, and `history/1` are convenience wrappers around the runner GenServer calls. `stop/2` sends a shutdown request so the runner can mark itself as `:stopped`, broadcast an event, and terminate cleanly.
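The `:start_at_step` validation can be sketched as a small lookup over the step list. The function name and the assumption that each `%Synaptic.Step{}` carries a `name` field are illustrative, not the actual `Synaptic.Engine` source:

```elixir
# No :start_at_step given: begin at the first step.
defp resolve_start_index(_steps, nil), do: {:ok, 0}

# Otherwise find the step's index by name, or reject unknown names.
defp resolve_start_index(steps, step_name) do
  case Enum.find_index(steps, &(&1.name == step_name)) do
    nil -> {:error, :invalid_step}
    index -> {:ok, index}
  end
end
```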
`Synaptic.Runner` is a GenServer that owns the mutable workflow state:

- Holds the definition, context, current step index, status, waiting payload, retry budgets, and history timeline.
- On init it accepts an optional `:start_at_step_index` option. If provided, the runner initializes `current_step_index` to that value instead of 0, allowing execution to begin at a specific step. The provided context should contain all data that would have been accumulated up to that step.
- On init it immediately returns `{:continue, :process_next_step}` so runs execute as soon as the child boots.
- Each step execution happens inside `Task.async`/`await` so crashes are caught and retried via the configured `:retry` budget.
- Suspension is represented by setting `status: :waiting_for_human` and storing `%{step: ..., resume_schema: ...}` in `waiting`. `resume/2` injects a `%{human_input: payload}` into context and continues the step loop.
- A step can intentionally stop a run early by returning `{:stop, reason}` instead of `{:ok, map}` / `{:suspend, info}` / `{:error, reason}`. In that case the runner:
  - Sets `status: :stopped`
  - Appends `%{event: :stopped, reason: reason}` to history
  - Publishes a `:stopped` PubSub event with the same reason
  - Does not consume the step's retry budget (no retries are attempted)

  This applies to sequential, async, and parallel steps (for parallel steps, the first task that returns `{:stop, reason}` wins and stops the run).
- Every state transition publishes an event on `Synaptic.PubSub` (topic `"synaptic:run:" <> run_id`) so UIs can observe `:waiting_for_human`, `:resumed`, `:step_completed`, `:retrying`, `:failed`, etc. Each event contains the `:run_id` and `:current_step`. Consumers call `Synaptic.subscribe/1` / `Synaptic.unsubscribe/1` to manage those listeners.
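Observing a run from another process could look like the sketch below. `Synaptic.start/3` and `Synaptic.subscribe/1` are documented above; the exact message tuple shape, beyond the documented `:run_id` and `:current_step` fields, is an assumption:

```elixir
# Start a run and subscribe to its "synaptic:run:" <> run_id topic.
{:ok, run_id} = Synaptic.start(MyWorkflow, %{})
:ok = Synaptic.subscribe(run_id)

# Assumed message shape: {event_name, metadata_map}.
receive do
  {:step_completed, %{run_id: ^run_id, current_step: step}} ->
    IO.puts("finished #{inspect(step)}")

  {:waiting_for_human, %{run_id: ^run_id}} ->
    IO.puts("run is suspended; call Synaptic.resume/2")
after
  5_000 -> :timeout
end
```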
## Putting it all together (beginner-friendly flow)
- You write a workflow module using `use Synaptic.Workflow`. At compile time that macro records each `step/3`, creates a `Synaptic.Step` struct for it, and generates hidden functions (`__synaptic_handle__/2` and `__synaptic_definition__/0`). Nothing is executed yet—you just defined the blueprint.
- The app boots. When you run `iex -S mix`, `Synaptic.Application` spins up the supervision tree (Registry + RuntimeSupervisor). They sit idle waiting for workflow runs.
- You start a run (e.g., `Synaptic.start(MyWorkflow, %{foo: :bar})`). The public API calls into `Synaptic.Engine`, which pulls the blueprint from `MyWorkflow.__synaptic_definition__/0`, generates a run id, and asks `Synaptic.RuntimeSupervisor` to start a `Synaptic.Runner` child with that definition + context. Optionally, you can pass `start_at_step: :step_name` to begin execution at a specific step; the engine validates that the step exists and finds its index before starting the runner.
- The runner executes steps. Once the child process starts, it immediately begins calling your step handlers in order. Returned maps merge into the context, `{:suspend, ...}` pauses the run, and errors trigger retries per the step metadata.
- You interact with the run using `Synaptic.inspect/1` and `Synaptic.history/1` (read-only) or `Synaptic.resume/2` (writes `human_input` and restarts the loop).
No extra wiring is needed for new workflows—the moment your module is compiled and available, the runtime can execute it via `Synaptic.start/3`.
## Message routing + persistence boundaries
- There is no durable persistence yet. Context/history lives inside each `Synaptic.Runner` process. Restarting the app clears all runs; this is by design for Phase 1.
- Client code can read state via `Synaptic.inspect/1` and `Synaptic.history/1` to build APIs or UIs.
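The read-only boundary can be used like this; the shapes of the returned values are assumptions beyond the fields this document names (status, current step, context, history timeline):

```elixir
# Read in-memory run state without mutating it (run_id from Synaptic.start/3).
state = Synaptic.inspect(run_id)   # e.g. status, current step index, context
events = Synaptic.history(run_id)  # the run's history timeline

# Because nothing is persisted in Phase 1, both calls stop working once the
# runner process (or the whole application) goes away.
```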
## Tooling and LLM adapters
`Synaptic.Tools` is a thin facade with configurable adapters + agents:

- Global defaults are configured in `config/config.exs` under `Synaptic.Tools`.
- Named agents can override model/temperature/adapter per workflow via the `agent: :name` option.
- `Synaptic.Tools.chat/2` merges options, picks the adapter, and delegates to `adapter.chat/2`. Pass `tools: [...]` with `%Synaptic.Tools.Tool{}` structs to enable OpenAI-style tool calling; the helper will execute the tool handlers whenever the model emits `tool_calls` and continue the conversation until a final assistant response is produced.
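A hypothetical `config/config.exs` fragment for those defaults might look like the following. Only the location (`Synaptic.Tools` in `config/config.exs`) and the existence of named agents and per-agent model/temperature/adapter overrides are documented; the specific option keys and values here are assumptions:

```elixir
import Config

config :synaptic, Synaptic.Tools,
  # Assumed keys: the default adapter and model for Synaptic.Tools.chat/2.
  adapter: Synaptic.Tools.OpenAI,
  model: "gpt-4o-mini",
  # Named agents overriding defaults, selected via the agent: :name option.
  agents: [
    planner: [model: "gpt-4o", temperature: 0.2]
  ]
```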
`Synaptic.Tools.OpenAI` is the default adapter. It builds a Finch request with a JSON body, sends it via `Synaptic.Finch`, and returns either `{:ok, content}` or `{:ok, content, %{usage: %{...}}}` (with usage metrics). Lack of an API key raises so misconfiguration fails fast. When `stream: true` is passed, it uses `Finch.stream/4` to handle Server-Sent Events (SSE) from OpenAI, parsing chunks and accumulating content. Streaming automatically falls back to non-streaming when tools are provided.

- Usage metrics: Adapters can optionally return usage information (token counts, cost, etc.) in a third tuple element: `{:ok, content, %{usage: %{...}}}`. The OpenAI adapter automatically extracts `prompt_tokens`, `completion_tokens`, and `total_tokens` from API responses. This information is included in Telemetry events and can be used by eval integrations.
- Telemetry: All LLM calls are instrumented with Telemetry spans under `[:synaptic, :llm]`, emitting `:start`, `:stop`, and `:exception` events with metadata including `run_id`, `step_name`, `adapter`, `model`, `stream`, and optional `usage` metrics.
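Listening to those spans uses the standard `:telemetry` API. The event names and metadata fields come from this document; the handler id and the measurement shapes are assumptions:

```elixir
# Attach one handler to the documented stop/exception events.
:telemetry.attach_many(
  "my-llm-logger",
  [[:synaptic, :llm, :stop], [:synaptic, :llm, :exception]],
  fn event, _measurements, metadata, _config ->
    # Documented metadata: run_id, step_name, adapter, model, stream, usage.
    IO.inspect({event, metadata[:model], metadata[:usage]}, label: "llm event")
  end,
  nil
)
```

(In production you would attach a named module function instead of an anonymous function, which `:telemetry` recommends for performance.)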
## Streaming implementation
- SSE parsing: OpenAI streaming responses use the Server-Sent Events format. Each event is a line starting with `data: ` followed by JSON (or `[DONE]` to signal completion). The adapter splits on `\n\n`, extracts the JSON, and parses `choices[0].delta.content` from each chunk.
- Content accumulation: Chunks are accumulated incrementally. The `on_chunk` callback receives both the new chunk and the accumulated content so far.
- PubSub integration: `Synaptic.Tools` publishes `:stream_chunk` events for each chunk and `:stream_done` when streaming completes. The Runner injects `run_id` and `step_name` into the process dictionary so Tools can access them for event publishing.
- Limitations: Streaming doesn't support tool calling (it falls back to non-streaming automatically) or `response_format` options. The step function still receives the complete accumulated content when streaming finishes.
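The SSE parsing step can be sketched as a pure function: split the buffer on `\n\n`, strip the `data: ` prefix, drop `[DONE]`, and pull `choices[0].delta.content` from each JSON chunk. This is a simplified sketch, not the adapter's actual code, and it assumes `Jason` is available for JSON decoding (it ships with Phoenix applications):

```elixir
defmodule SSESketch do
  # Extract delta contents from a buffer of complete SSE events.
  def parse_events(buffer) do
    buffer
    |> String.split("\n\n", trim: true)
    |> Enum.map(&String.trim_leading(&1, "data: "))
    |> Enum.reject(&(&1 == "[DONE]"))
    |> Enum.map(fn json ->
      json
      |> Jason.decode!()
      |> get_in(["choices", Access.at(0), "delta", "content"])
    end)
    # Some chunks (e.g. role-only deltas) carry no content.
    |> Enum.reject(&is_nil/1)
  end
end
```

A real implementation also has to buffer partial events across `Finch.stream/4` callbacks, since a chunk boundary can fall in the middle of an SSE event.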
## Dev-only demo workflow
`Synaptic.Dev.DemoWorkflow` (`lib/synaptic/dev/demo_workflow.ex`) is wrapped in `if Mix.env() == :dev` so it only compiles in development. It demonstrates the full lifecycle:

- Collects or defaults a `:request` payload.
- Calls `Synaptic.Tools.chat/2` to draft a plan, falling back to a canned string if the adapter errors (e.g., missing `OPENAI_API_KEY`).
- Suspends for a human approval with the generated plan in metadata.
Use it from `iex -S mix` with:

```elixir
{:ok, run_id} = Synaptic.start(Synaptic.Dev.DemoWorkflow, %{request: "Plan a kickoff"})
Synaptic.inspect(run_id)
Synaptic.resume(run_id, %{approved: true})
```

That sample mirrors how real workflows behave and is a good starting point for experimentation.
## Eval integrations
`Synaptic.Eval.Integration` is a behaviour for integrating with third-party eval services (Braintrust, LangSmith, etc.). Implementations observe LLM calls and scorer results via Telemetry events and can combine them into complete eval records.

- The integration behaviour provides optional callbacks:
  - `on_llm_call/4` – called when an LLM call completes (via `[:synaptic, :llm, :stop]`)
  - `on_scorer_result/4` – called when a scorer completes (via `[:synaptic, :scorer, :stop]`)
  - `on_step_complete/4` – called when a step completes (via `[:synaptic, :step, :stop]`)
- Use `Synaptic.Eval.Integration.attach/2` to set up Telemetry handlers that call your integration's callbacks. This allows users to implement their own eval integrations without modifying Synaptic core.
- LLM call metadata includes usage metrics (token counts) when available from adapters, allowing eval services to track costs and usage alongside quality scores.