LATER (Local Agent & Tool Execution Runtime) Protocol v1.0
Version: 1.0.0 | Status: Final | Date: August 5, 2025
1. Introduction
1.1. Vision & Guiding Principles
The LATER (Local Agent & Tool Execution Runtime) Protocol provides a language-agnostic standard for local, in-process AI tool execution. It is designed to be the frictionless on-ramp to production for the ALTAR ecosystem. Its primary purpose is to enable developers to build and test tools locally that are guaranteed to be compatible with the secure, scalable GRID execution environment, ensuring a seamless transition from development to deployment.
LATER is governed by three core principles:
- The Frictionless On-Ramp to Production: Every feature in LATER is designed with the "promotion path" in mind. The developer experience is optimized to ensure that code written for local execution works identically when pointed at the distributed GRID backend, eliminating the need for costly and error-prone rewrites.
- Implements the ADM: LATER is a consumer of the ALTAR Data Model (ADM). All data structures it produces and consumes (FunctionDeclaration, FunctionCall, ToolResult, etc.) must conform to the ADM v1.0 specification. This shared contract is what makes the promotion path possible.
- Developer Experience & Introspection: The protocol prioritizes a world-class developer experience. A compliant implementation must favor automated schema generation from native function signatures and documentation, minimizing boilerplate and manual configuration. It must also provide adapters for popular existing AI frameworks (see Section 2.4).
1.2. Relationship to ADM & GRID
LATER is the second layer in the three-layer ALTAR architecture, positioned between the foundational data model and the distributed execution protocol.
graph TB
subgraph AE["ALTAR Ecosystem"]
L3("
<strong>Layer 3: GRID Protocol</strong><br/><br/>
Distributed Tool Orchestration<br/>
Inter-Process & Network Communication<br/>
Manages Security & Transport
")
L2("
<strong>Layer 2: LATER Protocol (This Specification)</strong><br/><br/>
Local Tool Execution Runtime<br/>
In-Process Function Calls<br/>
Automated Introspection
")
L1("
<strong>Layer 1: ADM (ALTAR Data Model)</strong><br/><br/>
Universal Data Structures<br/>
Canonical Schemas & Contracts<br/>
(e.g., FunctionDeclaration, FunctionCall)
")
end
%% --- Define Connections ---
L3 -- consumes --> L1
L2 -- implements --> L1
%% --- Styling ---
style AE fill: #FFF,color:#000
style L3 fill:#42a5f5,stroke:#1e88e5,color:#000000
style L2 fill:#1e88e5,stroke:#1565c0,color:#ffffff
style L1 fill:#0d47a1,stroke:#002171,color:#ffffff
- LATER implements the ADM: It provides a standard for creating and executing tools that are described by ADM data structures.
- LATER is the local companion to GRID: Where GRID defines how tools operate across a network, LATER defines how they operate within a single process. This clear separation of concerns allows for a "promotion path" where a tool can graduate from a local LATER runtime to a distributed GRID runtime with no changes to its fundamental contract.
2. Abstract Protocol Definition
A LATER-compliant implementation must provide the following conceptual components and behaviors. These definitions are language-agnostic; the subsequent section provides a canonical implementation pattern in Elixir.
2.1. Tool Declaration Mechanism
A LATER implementation must provide an idiomatic, introspective mechanism for developers to declare native functions as AI tools.
Requirements:
- Idiomatic Interface: The mechanism must feel natural to the host programming language (e.g., decorators in Python, annotations in Java/C#, macros in Elixir).
- Automated Schema Generation: The mechanism must introspect the native function's signature and documentation to automatically generate an ADM-compliant FunctionDeclaration schema (an illustrative type mapping is sketched after this list). This includes:
  - Name: The function's name.
  - Description: The function's primary documentation string.
  - Parameters:
    - An ADM Schema object of type OBJECT.
    - properties derived from the function's parameter names and types. Native types must be mapped to their ADM SchemaType equivalents (e.g., string -> STRING, int -> INTEGER).
    - description for each parameter derived from its documentation.
    - required fields are inferred; parameters with default values are considered optional.
- Registration: Upon declaration, the generated FunctionDeclaration and a reference to the executable function (e.g., a function pointer or lambda) must be registered with the Global Tool Definition Registry.
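As an illustration of the type-mapping requirement, an Elixir implementation might centralize the mapping in a helper like the following. This is a non-normative sketch; the module and function names are invented here, and the ARRAY value assumes ADM defines such a type alongside the OBJECT type used above.
defmodule Later.TypeMapper do
  # Maps native Elixir type names to ADM SchemaType values (illustrative).
  def to_adm_type(:string),  do: "STRING"
  def to_adm_type(:integer), do: "INTEGER"
  def to_adm_type(:float),   do: "NUMBER"
  def to_adm_type(:boolean), do: "BOOLEAN"
  def to_adm_type(:map),     do: "OBJECT"
  def to_adm_type(:list),    do: "ARRAY" # assumes ADM defines an ARRAY type
end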
2.2. Two-Tier Registry Architecture
LATER requires a two-tier registry system to manage the difference between a tool's definition and its availability within a specific operational context.
2.2.1. Global Tool Definition Registry
- Scope: Application-wide, singleton.
- Lifecycle: Populated at application startup or compile-time as tools are declared. Persists for the life of the application.
- Contents:
  - The complete, ADM-compliant FunctionDeclaration for every tool.
  - An internal reference or handle to the actual executable function.
- Responsibility: Acts as the single source of truth for all tool schemas and their corresponding business logic.
2.2.2. Session-Scoped Registry
- Scope: Ephemeral, tied to a specific "session" or "conversation."
- Lifecycle: Created when a session begins and destroyed when it ends.
- Contents: A list of tool names that are active for that session. It does not store schemas directly, but rather references the tools available in the Global Registry.
- Responsibility: Manages which tools are available for a given AI interaction. This allows a host application to selectively expose a subset of all globally-defined tools to the agent based on context, user permissions, or conversation state.
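To make the division of responsibilities concrete, a minimal usage sketch follows. The module and function names are illustrative, not mandated by the protocol:
# Tier 1: the definition is registered once, application-wide.
Later.GlobalRegistry.register(declaration, &MyApp.CalculatorTools.add/2)

# Tier 2: a session activates a subset of globally-defined tools by name.
{:ok, _registry} = Later.SessionRegistry.create("session-123", tools: ["add"])

# The session tier stores only names; schemas and function references
# remain in the global tier.
Later.SessionRegistry.active?("session-123", "add")
#=> true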
2.3. Local Tool Executor
The Executor is responsible for invoking a tool's business logic in response to an agent's request.
Requirements:
- Lookup: Given a FunctionCall from an agent, the Executor must first look up the corresponding tool in the relevant Session-Scoped Registry to confirm its availability.
- Validation: It must then retrieve the tool's FunctionDeclaration from the Global Registry and validate the incoming args from the FunctionCall against the parameter schema. This includes checking for required parameters, validating data types, and respecting enum constraints.
- Invocation: If validation succeeds, the Executor invokes the referenced native function, passing the args as arguments.
- Response Handling (the two result shapes are illustrated below):
  - On successful execution, it must wrap the function's return value in an ADM-compliant ToolResult with a status of SUCCESS.
  - On any failure (validation error, runtime exception, etc.), it must construct a ToolResult with a status of ERROR and a structured ErrorObject containing a clear message.
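For illustration, the two ToolResult shapes might look as follows, shown here as plain Elixir maps; the concrete representation (struct, record, etc.) is implementation-defined:
# Success: the function's return value wrapped with a SUCCESS status.
%{name: "add", status: "SUCCESS", content: 12}

# Failure: no content; a structured ErrorObject carries the message.
%{name: "add", status: "ERROR", error: %{message: "Missing required parameter: b"}}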
2.4. Tool Adapters & Ecosystem Compatibility
To lower the barrier to adoption, a LATER implementation must provide bi-directional tool adapters for popular existing AI frameworks. These adapters allow developers to use existing tools written for other frameworks within ALTAR, and vice versa, without modification.
Conceptual Adapter Functions:
# Conceptual Python adapter for LangChain
# ("altar" and "LATER" below denote the conceptual LATER runtime API.)
import altar
from typing import Type
from pydantic import BaseModel, create_model
from langchain_core.tools import BaseTool

# --- Ingestion: LangChain -> LATER ---

def import_from_langchain(lc_tool: BaseTool):
    """
    Converts a LangChain tool into an ADM FunctionDeclaration and registers it
    with the LATER Global Registry.
    """
    # 1. Map LangChain's Pydantic schema to an ADM Schema
    adm_schema = _convert_pydantic_to_adm_schema(lc_tool.args_schema)

    # 2. Construct the FunctionDeclaration
    declaration = altar.ADM.FunctionDeclaration(
        name=lc_tool.name,
        description=lc_tool.description,
        parameters=adm_schema
    )

    # 3. Define the execution wrapper and register it
    def _wrapper_func(**kwargs):
        # LangChain tools accept a dictionary input via invoke()
        result = lc_tool.invoke(kwargs)
        return altar.ADM.ToolResult(status="SUCCESS", content=result)

    LATER.GlobalRegistry.register(declaration, _wrapper_func)

def _convert_pydantic_to_adm_schema(pydantic_model: Type[BaseModel]) -> altar.ADM.Schema:
    """
    Inspects a Pydantic model to generate an ADM Schema.
    """
    properties = {}
    required = []
    for field_name, model_field in pydantic_model.model_fields.items():
        field_type = model_field.annotation
        # Map Python types to ADM types
        adm_type = "STRING"  # Default
        if field_type is int:
            adm_type = "INTEGER"
        elif field_type is float:
            adm_type = "NUMBER"
        elif field_type is bool:
            adm_type = "BOOLEAN"
        properties[field_name] = altar.ADM.Schema(
            type=adm_type,
            description=model_field.description or ""
        )
        if model_field.is_required():
            required.append(field_name)
    return altar.ADM.Schema(type="OBJECT", properties=properties, required=required)

# --- Egress: LATER -> LangChain ---

def export_to_langchain(tool_name: str) -> BaseTool:
    """
    Exposes a LATER-native tool as a LangChain-compatible BaseTool.
    """
    # 1. Retrieve the tool's contract from the LATER Global Registry
    declaration = LATER.GlobalRegistry.lookup_declaration(tool_name)
    if not declaration:
        raise ValueError(f"Tool '{tool_name}' not found in LATER registry.")

    # 2. Dynamically create a Pydantic model for the arguments
    args_model = _convert_adm_to_pydantic_schema(declaration.parameters)

    # 3. Create a dynamic class that inherits from BaseTool
    class LangChainToolWrapper(BaseTool):
        name: str = declaration.name
        description: str = declaration.description
        args_schema: Type[BaseModel] = args_model

        def _run(self, **kwargs):
            # This is the bridge to the LATER execution runtime
            function_call = altar.ADM.FunctionCall(name=self.name, args=kwargs)
            # Assumes a session_id is available in the context
            session_id = _get_current_session_id()
            tool_result = LATER.Executor.execute(session_id, function_call)
            if tool_result.status == "SUCCESS":
                return tool_result.content
            # LangChain expects exceptions on failure
            raise Exception(f"Tool execution failed: {tool_result.error.message}")

    return LangChainToolWrapper()

def _convert_adm_to_pydantic_schema(adm_schema: altar.ADM.Schema) -> Type[BaseModel]:
    """
    Dynamically builds a Pydantic model from an ADM OBJECT schema.
    (Real implementations need richer handling of nested OBJECT/ARRAY types.)
    """
    fields = {}
    for name, prop_schema in adm_schema.properties.items():
        # Map ADM types back to Python types
        py_type = str  # Default
        if prop_schema.type == "INTEGER":
            py_type = int
        elif prop_schema.type == "NUMBER":
            py_type = float
        elif prop_schema.type == "BOOLEAN":
            py_type = bool
        # Pydantic fields are (type, default) tuples; Ellipsis marks required.
        default = ... if name in (adm_schema.required or []) else None
        fields[name] = (py_type, default)
    # An ADM Schema carries no name of its own, so use a generic model name.
    return create_model("LaterToolArgs", **fields)
// Conceptual C# adapter for Semantic Kernel
// (Shown as members of a hypothetical adapter class; "LATER" and "Altar.ADM"
// denote the conceptual LATER runtime API.)
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// --- Ingestion: Semantic Kernel -> LATER ---
public void ImportFromSk(Kernel kernel, KernelPlugin skPlugin)
{
    foreach (var function in skPlugin)
    {
        // 1. Convert the SK function to an ADM FunctionDeclaration
        var declaration = ConvertSkFunctionToAdm(function);

        // 2. Create a wrapper for execution and register it
        Func<Dictionary<string, object>, Altar.ADM.ToolResult> wrapper = (args) =>
        {
            var skArgs = new KernelArguments(args);
            var result = function.InvokeAsync(kernel, skArgs).Result;
            return new Altar.ADM.ToolResult { Status = "SUCCESS", Content = result };
        };
        LATER.GlobalRegistry.Register(declaration, wrapper);
    }
}

private Altar.ADM.FunctionDeclaration ConvertSkFunctionToAdm(KernelFunction skFunction)
{
    var parameters = new Altar.ADM.Schema { Type = "OBJECT", Properties = new(), Required = new() };
    foreach (var param in skFunction.Metadata.Parameters)
    {
        // Map .NET types to ADM types
        string admType = "STRING"; // Default
        if (param.ParameterType == typeof(int) || param.ParameterType == typeof(long))
            admType = "INTEGER";
        else if (param.ParameterType == typeof(float) || param.ParameterType == typeof(double) || param.ParameterType == typeof(decimal))
            admType = "NUMBER";
        else if (param.ParameterType == typeof(bool))
            admType = "BOOLEAN";

        parameters.Properties[param.Name] = new Altar.ADM.Schema
        {
            Type = admType,
            Description = param.Description ?? ""
        };
        if (param.IsRequired)
        {
            parameters.Required.Add(param.Name);
        }
    }
    return new Altar.ADM.FunctionDeclaration
    {
        Name = $"{skFunction.Metadata.PluginName}_{skFunction.Name}",
        Description = skFunction.Description ?? "",
        Parameters = parameters
    };
}

// --- Egress: LATER -> Semantic Kernel ---
public KernelPlugin ExportToSk(string[] toolNames)
{
    var functions = new List<KernelFunction>();
    foreach (var toolName in toolNames)
    {
        // 1. Get the ADM declaration from the LATER registry
        var declaration = LATER.GlobalRegistry.LookupDeclaration(toolName);
        if (declaration == null) continue;

        // 2. Create a KernelFunction from the ADM declaration
        var kernelFunction = KernelFunctionFactory.CreateFromMethod(
            () =>
            {
                // This is the execution bridge back to LATER. A real
                // implementation would build an ADM FunctionCall here and
                // dispatch it through the LATER executor.
                Console.WriteLine($"Executing LATER tool '{toolName}' via SK wrapper.");
                return Task.CompletedTask;
            },
            functionName: declaration.Name,
            description: declaration.Description
            // Further work would be needed to map ADM parameters to SK parameters
        );
        functions.Add(kernelFunction);
    }
    return KernelPluginFactory.CreateFromFunctions("LATER_Exported", "Tools exported from the LATER runtime.", functions);
}
This commitment to interoperability is central to LATER's mission. It ensures developers can try the ALTAR promotion path without needing to first rewrite their existing, battle-tested tools.
3. Canonical Implementation Pattern: Elixir
This section provides a brief, non-normative example of how the abstract protocol can be idiomatically implemented in Elixir. This serves as a reference for implementers in other languages.
3.1. Tool Declaration with deftool
A deftool macro leverages Elixir's metaprogramming to satisfy the Tool Declaration Mechanism requirement.
# lib/my_app/calculator_tools.ex
defmodule MyApp.CalculatorTools do
  use Later.Tools # Imports the deftool macro

  @doc """
  Adds two numbers together.
  """
  deftool add(a, b) do
    {:ok, a + b}
  end

  @doc """
  Calculates the total price including tax.

  @param unit_price The price of a single item.
  @param quantity The number of items.
  @param tax_rate The tax rate as a decimal (e.g., 0.08 for 8%).
  """
  deftool calculate_total(unit_price, quantity, tax_rate \\ 0.0) do
    total = unit_price * quantity * (1 + tax_rate)
    {:ok, total}
  end
end
- Introspection: The deftool macro reads the function's @doc attribute at compile time (e.g., via Module.get_attribute/2) to obtain the function and parameter documentation. It introspects the abstract syntax tree (AST) to find parameter names and default values.
- Registration: It generates an ADM FunctionDeclaration and registers it along with the function reference (&add/2) into an ETS-based Global Tool Definition Registry. A sketch of the resulting declaration follows.
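For illustration, the declaration generated for calculate_total/3 above might look like the following non-normative sketch. The NUMBER/INTEGER types shown assume the implementation can infer them from documentation or typespecs, since the Elixir parameters themselves are untyped:
%{
  name: "calculate_total",
  description: "Calculates the total price including tax.",
  parameters: %{
    type: "OBJECT",
    properties: %{
      "unit_price" => %{type: "NUMBER", description: "The price of a single item."},
      "quantity" => %{type: "INTEGER", description: "The number of items."},
      "tax_rate" => %{type: "NUMBER", description: "The tax rate as a decimal (e.g., 0.08 for 8%)."}
    },
    # tax_rate has a default value, so it is inferred as optional.
    required: ["unit_price", "quantity"]
  }
}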
3.2. Registries and Executor
- Global Registry: A simple GenServer or ETS table that stores {function_name, arity} as the key and the FunctionDeclaration plus an MFA ({module, function, arity}) as the value.
- Session Registry: A GenServer per session, holding a MapSet of active tool names for that session.
- Executor: A module with an execute/2 function that performs the lookup, validation, and invocation logic, as shown below.
# Simplified Executor Logic
defmodule Later.Executor do
  def execute(session_id, %FunctionCall{name: name, args: args}) do
    with {:ok, mfa} <- Registry.lookup(session_id, name),
         :ok <- Validator.validate(mfa, args) do
      # Apply the function and wrap the result in a ToolResult.
      # NOTE: a real implementation must order the arguments according to
      # the declared parameter order; Elixir maps are unordered.
      apply(mfa.module, mfa.function, Map.values(args))
      |> wrap_in_tool_result(name)
    else
      {:error, reason} ->
        # Return an error ToolResult
        Later.Types.ToolResult.error(name, reason)
    end
  end
end
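The Global Registry backing this executor can be as small as a single ETS table. A minimal sketch, assuming illustrative module, table, and tuple-layout choices:
defmodule Later.GlobalRegistry do
  @table :later_global_tools

  # Create the named table once at application startup.
  def init do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  # Store the ADM declaration alongside the MFA under the tool's name.
  def register(%{name: name} = declaration, {module, function, arity}) do
    :ets.insert(@table, {name, declaration, {module, function, arity}})
    :ok
  end

  def lookup(name) do
    case :ets.lookup(@table, name) do
      [{^name, declaration, mfa}] -> {:ok, declaration, mfa}
      [] -> {:error, :tool_not_found}
    end
  end
end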
4. The Core Workflow: From Local IDE to Production Deployment
This section illustrates the complete end-to-end workflow, demonstrating the core value proposition of LATER: developing a tool locally and seamlessly promoting it to a secure, distributed GRID environment by changing a single line of configuration.
4.1. The End-to-End LATER Flow
The following diagram shows the sequence of events when a tool is executed locally using the LATER protocol.
sequenceDiagram
participant Dev as Developer
participant App as Host Application (e.g., gemini_ex)
participant LATER as LATER Runtime
participant LLM as Large Language Model
Dev->>+App: Defines `CalculatorTools.add/2` with `deftool`
Note over App,LATER: At compile time, `deftool` introspects<br/>`add/2` and populates the<br/>Global Tool Registry.
App->>+LATER: Start Session("session-123", tools: [:add])
LATER->>LATER: Create Session-Scoped Registry for "session-123"
App->>+LLM: generate_content("What is 5 + 7?", tools: [FunctionDeclaration for :add])
LLM-->>-App: FunctionCall(name: "add", args: %{a: 5, b: 7})
App->>LATER: Executor.execute("session-123", FunctionCall)
LATER->>LATER: 1. Validate "add" is in session registry
LATER->>LATER: 2. Validate args `{a: 5, b: 7}` against schema
LATER->>LATER: 3. Invoke `CalculatorTools.add(5, 7)`
LATER-->>-App: ToolResult(name: "add", status: :SUCCESS, content: 12)
App->>+LLM: generate_content(..., tool_results: [ToolResult])
LLM-->>-App: Final Response("The sum of 5 and 7 is 12.")
4.2. The Seamless Promotion Path to GRID
A core architectural benefit of LATER is the seamless "promotion path" for a tool to a distributed GRID (Global Runtime & Interface Definition) environment. This migration requires no changes to the tool's ADM contract (FunctionDeclaration) or the host application's core logic.
The promotion is achieved entirely through configuration.
Step 1: Develop and Test Locally with LATER
The developer writes and tests their tool using LATER. The host application is configured to use the local LATER tool source.
# config/dev.exs
config :my_app, MyApp.Endpoint,
  tool_source: {:later, MyApp.LocalToolSource}

# --- Host Application Logic (remains unchanged) ---

# 1. Start a session and get tool declarations
{:ok, session} = MyApp.Endpoint.start_session(tools: ["add/2"])
declarations = MyApp.Endpoint.get_tool_declarations(session)

# 2. Interact with the LLM
response = Gemini.generate("What is 15 + 30?", tools: declarations)
# ... LLM returns a FunctionCall

# 3. Dispatch the call via the configured endpoint
result = MyApp.Endpoint.execute(session, response.function_call)
# result = %ToolResult{name: "add", status: :SUCCESS, content: 45}
Step 2: Deploy Tool to a GRID Runtime
The same tool code (e.g., MyApp.CalculatorTools) is deployed as part of a standalone GRID-compliant Runtime service. This service exposes the tool over the network.
Step 3: Promote by Changing Configuration
To switch to the production-ready, secure backend, the developer changes a single line in their configuration file. The application code does not change.
# config/prod.exs
config :my_app, MyApp.Endpoint,
  tool_source: {:grid, MyApp.GridToolSource, [
    host: "grid.example.com",
    port: 8080,
    transport: :grpc
  ]}
When the application is restarted with this configuration, calls to MyApp.Endpoint.execute/2 are routed through the GridToolSource, which handles the secure, networked call to the remote GRID Runtime.
Because both LATER and GRID share the same ADM contract, the LLM and the host application are completely unaware of the change in execution backend. This fulfills the "write once, run anywhere" promise of the ALTAR architecture.
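The single-line promotion works because both backends sit behind one dispatch contract. A minimal sketch of such a contract as an Elixir behaviour; the callback names are illustrative and not mandated by this specification:
defmodule MyApp.ToolSource do
  @moduledoc "Common contract satisfied by both the LATER and GRID backends."

  @callback start_session(opts :: keyword()) :: {:ok, term()} | {:error, term()}
  @callback get_tool_declarations(session :: term()) :: [map()]
  @callback execute(session :: term(), function_call :: map()) :: map()
end

# MyApp.LocalToolSource would implement these callbacks in-process via LATER;
# MyApp.GridToolSource would implement them over the network via GRID. The
# endpoint simply dispatches to whichever module the configuration names.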