This document defines the initial integration contract for:

- Phoenix applications
- OTP applications with an existing `Repo`
- existing installations that already run background jobs
## Tested Toolchain

Current CI and onboarding smoke tests run with:

- Erlang/OTP 28.4.1
- Elixir 1.19.5-otp-28
- Oban 2.21 and 2.22
- Jido 2.0+
## Installation

Add `:squid_mesh` to the host application's dependencies and fetch dependencies
as usual with Mix.

Preferred Hex dependency:

```elixir
defp deps do
  [
    {:squid_mesh, "~> 0.1.0-alpha.3"}
  ]
end
```

If the host app defines custom steps with `use Jido.Action`, add `:jido`
explicitly to the host app as well rather than relying on a transitive
dependency:
```elixir
defp deps do
  [
    {:jido, "~> 2.0"},
    {:squid_mesh, "~> 0.1.0-alpha.3"}
  ]
end
```

Then install Squid Mesh's library-owned migrations into the host app:

```bash
mix squid_mesh.install
mix ecto.migrate
```

`mix squid_mesh.install` copies Squid Mesh migrations into the host
application's `priv/repo/migrations` directory. It does not install or run
Oban migrations.

For a fresh host app, add the host app's own Oban migration before running
`mix ecto.migrate`:
```elixir
defmodule MyApp.Repo.Migrations.AddObanJobs do
  use Ecto.Migration

  def up, do: Oban.Migrations.up()
  def down, do: Oban.Migrations.down()
end
```

## Configuration
The host application configures Squid Mesh under the `:squid_mesh` application:

```elixir
config :squid_mesh,
  repo: MyApp.Repo,
  execution: [
    name: Oban,
    queue: :squid_mesh
  ]
```

The host application's Oban config must also include the queue Squid Mesh is
configured to use. For the default queue name:

```elixir
config :my_app, Oban,
  repo: MyApp.Repo,
  queues: [squid_mesh: 10]
```

Required keys:

- `:repo` - the Ecto repo Squid Mesh uses for persisted runtime state

Optional keys:

- `:execution` - execution system settings:
  - `:execution[:name]` - the background job system name to target
  - `:execution[:queue]` - queue used for Squid Mesh jobs, defaults to `:squid_mesh`
  - `:execution[:stale_step_timeout]` - `:disabled` by default; set a non-negative millisecond timeout to let redelivered jobs reclaim stale `running` steps after worker interruption
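As one sketch of setting the optional keys together (the five-minute timeout is an arbitrary illustrative value, not a recommended default):

```elixir
config :squid_mesh,
  repo: MyApp.Repo,
  execution: [
    name: Oban,
    queue: :squid_mesh,
    # Let redelivered jobs reclaim steps stuck in `running`
    # after a worker is interrupted.
    stale_step_timeout: :timer.minutes(5)
  ]
```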
## First Run Checklist

For a new integration, the shortest path to a successful first run is:

- Add `:squid_mesh` to the host app's dependencies.
- Add or confirm a working Postgres-backed `Repo`.
- Add or confirm a working `Oban` instance.
- Add the host app's `Oban` migration if the app does not already have `oban_jobs`.
- Run `mix squid_mesh.install`.
- Run `mix ecto.migrate`.
- Configure `:squid_mesh` with the host app's `Repo` and `Oban` queue.
- Configure the host app's `Oban` queues to include `:squid_mesh`.
- Start the host app's `Repo` and `Oban` under supervision.
- Start one workflow through the public API and inspect it with history enabled.
## Existing Application Setup

For an existing Phoenix or OTP application:

- Add the `:squid_mesh` dependency.
- Configure `:repo` to point at the app's existing repo.
- Configure `:execution` to point at the app's existing background job setup.
- Call `SquidMesh.config!/0` during boot or integration setup to verify the required contract is present.
- Integrate Squid Mesh from the host application's contexts, services, controllers, or internal APIs.

The host application is responsible for:

- database setup and migrations
- background job infrastructure lifecycle
- any HTTP or internal API endpoints exposed to end users

That means the embedded install path assumes:

- the host app already owns its `Repo`
- the host app already owns its `Oban` configuration
- the host app already manages its `oban_jobs` table
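A minimal sketch of the boot-time verification step, assuming a conventional `Application` module (the module and supervisor names are illustrative, not part of the contract):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Raise at boot if the required :squid_mesh contract (e.g. :repo)
    # is missing, rather than failing later on the first workflow run.
    _config = SquidMesh.config!()

    children = [
      MyApp.Repo,
      {Oban, Application.fetch_env!(:my_app, Oban)}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```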
## Minimal OTP Host Skeleton

For a plain OTP application, the minimum moving pieces are:

- a `Repo` module
- an `Oban` configuration
- `Repo` and `Oban` in the application supervision tree
- `:squid_mesh` configuration pointing at that `Repo` and queue
- one host-facing module that calls `SquidMesh`

Dependency shape:

```elixir
defp deps do
  [
    {:ecto_sql, "~> 3.13"},
    {:postgrex, "~> 0.20"},
    {:oban, "~> 2.21"},
    {:jido, "~> 2.0"},
    {:squid_mesh, "~> 0.1.0-alpha.3"}
  ]
end
```

Application supervision shape:
```elixir
children = [
  MyApp.Repo,
  {Oban, Application.fetch_env!(:my_app, Oban)}
]
```

Host-facing boundary:

```elixir
defmodule MyApp.WorkflowRuns do
  def start_payment_recovery(payload) do
    SquidMesh.start_run(MyApp.Workflows.PaymentRecovery, :payment_recovery, payload)
  end

  def inspect_run(run_id) do
    SquidMesh.inspect_run(run_id, include_history: true)
  end

  def unblock_run(run_id, attrs \\ %{}) do
    SquidMesh.unblock_run(run_id, attrs)
  end

  def approve_run(run_id, attrs) do
    SquidMesh.approve_run(run_id, attrs)
  end

  def reject_run(run_id, attrs) do
    SquidMesh.reject_run(run_id, attrs)
  end
end
```

If the host app exposes pause-resume or approval workflows, keep the latest
Squid Mesh migrations applied before deploying the feature. Paused step runs
now persist internal resume metadata so `unblock_run/2`, `approve_run/3`, and
`reject_run/3` can continue with stable output and transition semantics after
restarts or code changes.
Operational review shape:

```elixir
{:ok, paused_run} = MyApp.WorkflowRuns.inspect_run(run_id)

Enum.map(paused_run.audit_events, &{&1.type, &1.step})
#=> [{:paused, :wait_for_review}]

{:ok, _run} =
  MyApp.WorkflowRuns.approve_run(run_id, %{
    actor: "ops_123",
    comment: "customer verified",
    metadata: %{ticket: "SUP-42"}
  })

{:ok, completed_run} = MyApp.WorkflowRuns.inspect_run(run_id)

Enum.map(completed_run.audit_events, &{&1.type, &1.actor, &1.comment})
#=> [{:paused, nil, nil}, {:approved, "ops_123", "customer verified"}]
```

`include_history: true` is the public audit boundary. With history enabled, the
run includes chronological `step_runs`, declared `steps` state, and durable
`audit_events` for pause, resume, approval, and rejection actions.
## Minimal Phoenix Host Skeleton

A Phoenix application uses the same runtime contract. The main difference is that Squid Mesh usually sits behind a context or controller boundary.

Typical shape:

- add `:squid_mesh`, `:oban`, and `:jido` to the Phoenix app
- keep using the Phoenix app's existing `Repo`
- start `Oban` in the application supervision tree
- configure `:squid_mesh` to use that `Repo` and queue
- expose workflow operations through a context or controller
Context boundary:

```elixir
defmodule MyApp.WorkflowRuns do
  def start_payment_recovery(attrs) do
    SquidMesh.start_run(MyApp.Workflows.PaymentRecovery, :payment_recovery, attrs)
  end

  def inspect_run(run_id) do
    SquidMesh.inspect_run(run_id, include_history: true)
  end

  def unblock_run(run_id, attrs \\ %{}) do
    SquidMesh.unblock_run(run_id, attrs)
  end

  def approve_run(run_id, attrs) do
    SquidMesh.approve_run(run_id, attrs)
  end

  def reject_run(run_id, attrs) do
    SquidMesh.reject_run(run_id, attrs)
  end

  def list_runs(opts \\ []) do
    SquidMesh.list_runs(opts)
  end
end
```

Controller shape:

```elixir
def create(conn, params) do
  with {:ok, run} <- MyApp.WorkflowRuns.start_payment_recovery(params) do
    json(conn, %{id: run.id, status: run.status})
  end
end
```

## Development Setup
For local development and examples, a minimal host app can provide:
- a local Postgres-backed repo
- a local background job setup
- direct application code calls into Squid Mesh
This uses the same configuration contract as an existing application setup.
In that mode, the example app may also own its own Oban migration because it
is acting as a standalone development harness rather than an embedded install.
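As one possible sketch of that standalone mode, a development harness might wire all three pieces in `config/dev.exs` (the app name, repo module, and connection settings are illustrative assumptions):

```elixir
import Config

# Local Postgres-backed repo for the development harness.
config :my_app, MyApp.Repo,
  database: "my_app_dev",
  hostname: "localhost",
  pool_size: 10

# Local background job setup, including the Squid Mesh queue.
config :my_app, Oban,
  repo: MyApp.Repo,
  queues: [squid_mesh: 10]

# Same configuration contract as an existing application setup.
config :squid_mesh,
  repo: MyApp.Repo,
  execution: [name: Oban, queue: :squid_mesh]
```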
## Validation

Host applications can validate the contract directly:

```elixir
{:ok, config} = SquidMesh.config()
```

Or raise on missing required keys:

```elixir
config = SquidMesh.config!()
```

## Example Development Harness
The example host app smoke-test harness builds on this same contract and is the reference setup for end-to-end development and verification.
Path:
examples/minimal_host_app
Suggested workflow:

- Start Postgres for the example app.
- Run `mix setup` inside `examples/minimal_host_app`.
- Run `mix example.smoke` to exercise the host app boundary.

Fast verification path:

- run `MIX_ENV=test mix example.smoke` inside `examples/minimal_host_app`

The example app wires:

- its own `MinimalHostApp.Repo`
- its own `Oban` instance
- Squid Mesh through `MinimalHostApp.WorkflowRuns`
## Inspecting History

For real host apps, `inspect_run/2` is most useful with history enabled:

```elixir
SquidMesh.inspect_run(run_id, include_history: true)
```

That returns the top-level run plus:

- `steps`: logical per-step state in workflow order, including dependency edges
- `step_runs`: persisted execution history
- `attempts`: persisted retry history for each step run

This split gives host apps both declared per-step state and the raw execution timeline from one inspection call.
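As a sketch of consuming that result (field names as described above; `run_id` assumed to be bound), a host app could flatten the execution timeline into a per-step summary:

```elixir
{:ok, run} = SquidMesh.inspect_run(run_id, include_history: true)

# Pair each persisted execution with its status and retry count.
summary =
  for step_run <- run.step_runs do
    {step_run.step, step_run.status, length(step_run.attempts)}
  end
```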