Elixir AI/ML Ecosystem Integration Architecture
Date: 2025-10-07 | Author: System Analysis | Projects Analyzed: 42 total, 26 AI/ML-focused
Executive Summary
Your ecosystem follows a unique "promotion path" philosophy:
Local/Dev → Testing → Staging → Production → Distributed
This is embodied in ALTAR's LATER → GRID progression and should be the organizing principle for the entire ecosystem.
Core Insight: You're not building competitors to LangChain/LlamaIndex - you're building the production deployment path they don't have.
The 6-Layer Architecture
┌─────────────────────────────────────────────────────────┐
│ Layer 5: Applications (User-Facing) │
│ AurumAI, SmartCoder, Assessor, Citadel │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Layer 4: Agent Frameworks & Orchestration │
│ DSPex, foundation+jido, axon, automata, pipeline_ex │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Layer 3: Schema/Validation & Tool Protocol │
│ ALTAR (core!), sinter, exdantic, instructor_lite │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Layer 2: LLM Integration & Clients │
│ gemini_ex, claude_code_sdk, llm_ex, req_llm │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Layer 1: Infrastructure & Process Management │
│ snakepit, foundation, AITrace, handoff │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ Layer 0: Foundation Primitives │
│ json_remedy, supertester, arsenal, perimeter │
└─────────────────────────────────────────────────────────┘
Layer-by-Layer Breakdown
Layer 0: Foundation Primitives (The Bedrock)
These are dependency-free utilities used by everything above.
| Project | Purpose | Status | Used By |
|---|---|---|---|
| json_remedy | JSON repair for malformed LLM outputs | ✅ Production (v20⭐) | All LLM clients |
| supertester | Battle-tested testing toolkit | ✅ Production | snakepit, foundation, arsenal |
| arsenal | Auto REST API generation from OTP | ✅ Production | Exposing agents as APIs |
| perimeter | Elixir typing mechanism | ⚠️ Partial | Type safety layer |
| ex_dbg | State-of-the-art debugging | ✅ Production | Development/debugging |
Integration: These are pure utilities. No changes needed.
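As an example of a Layer-0 utility in action, json_remedy repairs the malformed JSON that LLMs often emit. This is a hedged sketch: it assumes a repair/1 entry point returning the decoded term, which should be verified against the package docs.

```elixir
# Hedged example: repairing malformed LLM JSON output with json_remedy.
# Assumes JsonRemedy.repair/1 returns {:ok, decoded_term}; verify the exact
# signature against the package docs before relying on it.
malformed = ~s({'name': 'Ada', "age": 36,})  # single quotes + trailing comma

case JsonRemedy.repair(malformed) do
  {:ok, data} -> data                  # e.g. a map with "name" and "age" keys
  {:error, reason} -> {:error, reason} # input too damaged to repair
end
```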
Layer 1: Infrastructure & Process Management (The Platform)
The BEAM-native infrastructure that makes everything else production-ready.
Core: snakepit (v0.4.2, 8⭐)
Purpose: High-performance Python/external language bridge
Key Features:
- 1000x faster worker initialization
- gRPC streaming
- Bidirectional tool bridge
- Session affinity
- Persistent process tracking
Dependencies: jason, grpc, protobuf, supertester
Dependents: DSPex (Python bridge), foundation (external workers)
Integration Point: Snakepit.execute(session, tool, args)
Core: foundation (v0.1.5, 10⭐)
Purpose: Multi-agent platform with circuit breakers, rate limiting
Key Features:
- Jido agent framework integration
- Protocol-based agent design
- Circuit breakers (fuse)
- Rate limiting (hammer)
- Observability
Dependencies: jido, jason, telemetry, poolboy, hammer, fuse, finch
Dependents: Multi-agent applications
Integration Point: Foundation.Agent.execute(agent, action)
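The circuit-breaker and rate-limiting features above can be sketched with the fuse and hammer libraries foundation already depends on. The module name, fuse name, and limits here are illustrative assumptions, not foundation's actual API; hammer's classic check_rate/3 interface is assumed.

```elixir
# Hedged sketch of the guard pattern foundation layers on fuse (circuit
# breaker) and hammer (rate limiter). Names and limits are illustrative.
defmodule GuardedCall do
  @fuse :llm_api

  def setup do
    # Blow the fuse after 5 failures within 10s; auto-reset after 30s.
    :fuse.install(@fuse, {{:standard, 5, 10_000}, {:reset, 30_000}})
  end

  # Allow at most 10 calls per user per minute, and short-circuit
  # while the downstream API is failing.
  def call(user_id, fun) do
    with {:allow, _count} <- Hammer.check_rate("llm:#{user_id}", 60_000, 10),
         :ok <- :fuse.ask(@fuse, :sync) do
      case fun.() do
        {:ok, _} = ok ->
          ok

        {:error, _} = err ->
          :fuse.melt(@fuse)  # register the failure with the breaker
          err
      end
    else
      {:deny, _limit} -> {:error, :rate_limited}
      :blown -> {:error, :circuit_open}
    end
  end
end
```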
Supporting: AITrace (0⭐) - NEEDS WORK
Purpose: Unified observability for AI Control Plane
Status: Stub, needs integration with telemetry events
Should integrate: snakepit, foundation, gemini_ex telemetry
Supporting: handoff (0⭐) - POTENTIAL
Purpose: Distributed graph execution (DAG workflows)
Status: Fork of existing project, unclear integration
Opportunity: Could power pipeline_ex backend
Current State:
- ✅ snakepit: Production-ready
- ✅ foundation: Core ready, needs ALTAR integration
- ❌ AITrace: Stub
- ⚠️ handoff: Unclear status
Recommendation:
- Add ALTAR tool execution to foundation
- Implement AITrace telemetry aggregation
- Evaluate handoff vs pipeline_ex consolidation
Layer 2: LLM Integration & Clients (The Gateway)
All roads lead to LLM APIs; a unified interface is critical.
Core: gemini_ex (v0.2.2, 15⭐) - PRODUCTION READY
Purpose: Production Gemini client with auto tool execution
Key Features:
- Dual auth (API key + Vertex AI)
- Streaming support
- Automatic tool execution loop
- ALTAR integration (first reference!)
- Thinking budget control
- Multimodal support
Dependencies: req, jason, ALTAR, joken, telemetry
Status: ✅ Complete, actively maintained
ALTAR Integration Example:
# gemini_ex automatically discovers and executes ALTAR tools
defmodule WeatherTool do
use Altar.Tool
@doc "Get current weather"
def get_weather(location), do: {:ok, "Sunny in #{location}"}
end
Gemini.generate("What's the weather in SF?", tools: [WeatherTool])
# → Automatically calls WeatherTool.get_weather("SF")
# → Returns "Sunny in SF"
Emerging: req_llm (0⭐) - STRATEGIC
Purpose: Req plugin for unified LLM provider interface
Status: Unknown implementation status
Opportunity: Could unify gemini_ex, claude_code_sdk, llm_ex
Strategy: Provider pattern using Req middleware
Integration Vision:
# Unified interface across all providers
Req.new(base_url: "https://api.anthropic.com")
|> ReqLLM.attach(provider: :anthropic, model: "claude-3-5-sonnet")
|> ReqLLM.chat("Hello")
# Or
Req.new(base_url: "https://generativelanguage.googleapis.com")
|> ReqLLM.attach(provider: :gemini, model: "gemini-2.0-flash-thinking-exp")
|> ReqLLM.chat("Hello")
Legacy: llm_ex (0⭐)
Purpose: All-in-one LLM library (multi-provider)
Status: Unclear if maintained
Dependencies: req, finch, jason, joken, goth, websockex, telemetry
Overlap: Duplicates gemini_ex functionality
Specialized: claude_code_sdk_elixir (v0.0.1, 7⭐)
Purpose: Claude Code CLI integration
Key Features:
- Streaming message processing
- Mocking system for testing
- stdin support for interactive mode
Dependencies: erlexec, jason
Status: ✅ Working, niche use case
Current State:
- ✅ gemini_ex: Production-ready, ALTAR integrated
- ⚠️ req_llm: Potential unifier, needs investigation
- ❌ llm_ex: Overlaps with gemini_ex
- ✅ claude_code_sdk: Niche but working
Recommendation:
- Short-term: Use gemini_ex as the reference implementation
- Mid-term: Develop req_llm as unified interface
- Long-term: Migrate gemini_ex to be a req_llm provider
- Decision: Archive or integrate llm_ex functionality
Layer 3: Schema/Validation & Tool Protocol (The Contract)
This is where your ecosystem shines. ALTAR is the differentiator.
CORE: ALTAR (v0.1.7, 4⭐) - ARCHITECTURAL FOUNDATION
Purpose: Agent & Tool Arbitration Protocol
Philosophy: Promotion path from dev to production
Architecture:
ADM (ALTAR Data Model)
↓
LATER (Local Agent Tool Execution Runtime)
↓
GRID (Global Resilient Instruction Dispatching) - FUTURE
Key Innovation: Type-safe tool definitions with zero runtime deps
defmodule MyTool do
use Altar.Tool
@doc "Description for LLM"
@spec execute(String.t()) :: {:ok, term()} | {:error, term()}
def execute(input) do
# Implementation
end
end
Dependencies: NONE (zero runtime deps is strategic!)
Dependents: gemini_ex (integrated), DSPex (planned)
Status:
- ✅ ADM: Complete
- ✅ LATER: Complete (local execution)
- ❌ GRID: Not implemented (distributed execution)
Schema Libraries: sinter vs exdantic
sinter (v0.0.1, 8⭐):
Purpose: Runtime-first schema validation
Philosophy: Dynamic schemas for agent frameworks
Key Feature: Schema inference from examples
Use Case: DSPy-style dynamic programs
Dependencies: jason (minimal)
exdantic (v0.0.2, 8⭐):
Purpose: Pydantic-inspired compile-time schemas
Philosophy: Static validation with LLM optimization
Key Features:
- Model validators
- Computed fields
- LLM provider optimization (OpenAI/Anthropic)
Use Case: Structured LLM output parsing
Dependencies: jason, stream_data
The Tension: Runtime (sinter) vs Compile-time (exdantic)
Resolution: Both serve different purposes!
- sinter: Agent frameworks needing runtime flexibility
- exdantic: API clients needing type safety
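The runtime-vs-compile-time split above can be illustrated with sinter's headline feature, schema inference from examples. This is a self-contained sketch of the idea, not sinter's actual API:

```elixir
# Illustrative sketch (not sinter's real API): derive a field -> type map
# at runtime from an example value, the way a runtime-first schema library
# can build schemas from data instead of compile-time declarations.
defmodule SchemaInference do
  def infer(example) when is_map(example) do
    Map.new(example, fn {key, value} -> {key, type_of(value)} end)
  end

  # Booleans are atoms in Elixir, so check them before any atom clause.
  defp type_of(v) when is_boolean(v), do: :boolean
  defp type_of(v) when is_binary(v), do: :string
  defp type_of(v) when is_integer(v), do: :integer
  defp type_of(v) when is_float(v), do: :float
  defp type_of(v) when is_list(v), do: :list
  defp type_of(v) when is_map(v), do: :map
end

SchemaInference.infer(%{name: "Ada", age: 36})
# => %{name: :string, age: :integer}
```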
Supporting: instructor_lite (0⭐)
Purpose: Lightweight structured output parsing
Status: Used by DSPex and pipeline_ex
Dependencies: Unknown
Integration: Should use sinter or exdantic under the hood
Supporting: jsv (0⭐)
Purpose: Full JSON Schema validator
Use Case: When you need industry-standard JSON Schema
Opportunity: Bridge to/from sinter/exdantic
Current State:
- ✅ ALTAR: Core complete, needs GRID
- ✅ sinter: Runtime schemas work
- ✅ exdantic: Compile-time schemas work
- ⚠️ instructor_lite: Needs consolidation
- ⚠️ jsv: Needs integration story
Recommendation:
- ALTAR: Implement GRID for distributed tools
- Schema unification:
# Unified interface
defmodule MySchema do
  use Altar.Schema  # Auto-detects runtime vs compile-time
  schema do
    field :name, :string
    field :age, :integer
  end
end
# Backends: sinter (runtime), exdantic (compile), jsv (standard)
- instructor_lite: Merge into sinter as Sinter.LLM.parse/2
Layer 4: Agent Frameworks & Orchestration (The Intelligence)
Where agents come to life.
Core: DSPex (v0.2.0, 14⭐)
Purpose: Declarative Self-improving Programs (DSPy port)
Philosophy: Compile-time optimization of prompts/chains
Key Features:
- 70+ DSPy schema classes discovered
- Bidirectional Python bridge (via snakepit)
- Native Elixir signatures
- Schema validation (sinter integration)
Dependencies: snakepit, sinter, jason, telemetry, instructor_lite, gemini_ex
Status: ⚠️ Core working, optimization layer incomplete
Integration Example:
defmodule RAGPipeline do
use DSPex.Module
signature "question -> answer" do
input :question, :string
output :answer, :string
end
def forward(question) do
context = retrieve(question) # Vector search
generate(question, context) # LLM call
end
end
# Compile/optimize
optimized = DSPex.compile(RAGPipeline, examples: training_data)
Core: foundation + jido (v0.1.5 + 0⭐)
Purpose: Multi-agent platform with autonomous behavior
Philosophy: OTP-style supervision for agents
jido (0⭐):
- Core agent primitives
- Dynamic workflows
- Distributed coordination
Dependencies: NONE
foundation (10⭐):
- Agent hosting/supervision
- Circuit breakers, rate limiting
- Observability
Dependencies: jido, jason, telemetry, poolboy, hammer, fuse, finch
Integration Example:
defmodule ResearchAgent do
use Foundation.Agent
def handle_task(:research, topic) do
# Multi-step research using ALTAR tools
results = Altar.execute(SearchTool, query: topic)
summary = Altar.execute(SummarizeTool, text: results)
{:ok, summary}
end
end
Foundation.Supervisor.start_agent(ResearchAgent)
Missing Integration: foundation doesn't know about ALTAR tools yet!
Supporting: axon (v0.1.0, 19⭐)
Purpose: Polyglot agent orchestration (Python pydantic-ai)
Philosophy: Elixir orchestrates Python agents
Key Features:
- pydantic-ai integration
- HTTP/gRPC communication
- Session management
Dependencies: jason, grpc, protobuf, tesla, finch
Overlap: Similar to foundation but Python-focused
Question: Merge with foundation or keep separate?
Experimental: automata (0⭐)
Purpose: Decentralized autonomous systems
Philosophy: Blockchain-style consensus for agents
Status: Highly experimental, unclear implementation
Supporting: pipeline_ex (v0.0.1, 6⭐)
Purpose: AI pipeline orchestration
Features:
- YAML-based pipeline definition
- Claude/Gemini chaining
- Recursive/meta pipelines
Dependencies: jason, yaml_elixir, req, instructor_lite, claude_code_sdk
Status: ⚠️ Works but overlaps with DSPex
Current State:
- ✅ DSPex: Core working, needs optimization layer
- ✅ foundation: Solid but needs ALTAR integration
- ✅ axon: Working but overlaps with foundation
- ❌ automata: Too experimental
- ⚠️ pipeline_ex: Overlaps with DSPex
Recommendation:
Integrate ALTAR into foundation:
defmodule Foundation.Agent do
  def call_tool(tool_module, args) do
    Altar.execute(tool_module, args)
  end
end
Clarify DSPex vs pipeline_ex:
- DSPex: Compile-time optimization, training/eval
- pipeline_ex: Runtime orchestration, YAML config
- Consider: Merge as DSPex.Pipeline
Decide on axon:
- Option A: Merge Python-specific features into foundation
- Option B: Keep as "foundation for polyglot agents"
Archive automata: Too early-stage
Layer 5: Applications (The Products)
User-facing applications built on the stack.
Enterprise Suite (All 0⭐ - WIP)
Citadel: Command & control for AI enterprise
- Deployment, secrets, config
- Status: Stub
AITrace: Observability layer
- Telemetry aggregation
- Status: Stub
Assessor: CI/CD for AI quality
- LLM eval harnesses
- Regression testing
- Status: Stub
evals: Model evaluation
- Testing frameworks
- Status: Stub
The Vision: Complete enterprise AI platform
Reality: All stubs; they need foundation + ALTAR integration first
Development Tools
AurumAI (0⭐): Phoenix Framework AI Manager
SmartCoder (0⭐): Multi-agent code generation
ElixirScope (3⭐): AST-based code intelligence
Status: Various stages of completion
Current State:
- ❌ Enterprise suite: All stubs
- ⚠️ Dev tools: Partial implementations
Recommendation:
- Pause enterprise suite until core is production-ready
- Focus dev tools on eating own dogfood (use DSPex/foundation to build them)
Dependency Graph (Critical Paths)
Layer 0 (Foundation)
json_remedy ────────────┐
supertester ─────────┐ │
arsenal ──────────┐ │ │
↓ ↓ ↓
Layer 1 (Infrastructure)
snakepit ←──────────────┼─ (used by DSPex, foundation)
foundation ←─ jido │
AITrace (stub) │
↓
Layer 2 (LLM Clients)
gemini_ex ←─ ALTAR ←────┤
req_llm (potential) │
llm_ex (legacy?) │
claude_code_sdk │
↓
Layer 3 (Schema/Tools)
ALTAR ←─────────────────┤ (CORE!)
sinter │
exdantic │
instructor_lite │
↓
Layer 4 (Agents)
DSPex ←─ snakepit, sinter, gemini_ex, instructor_lite
foundation ←─ jido
axon
pipeline_ex ←─ claude_code_sdk, instructor_lite
↓
Layer 5 (Apps)
Citadel, AITrace, Assessor (all stubs)
AurumAI, SmartCoder (partial)
Critical Integration Points
- ALTAR → gemini_ex: ✅ DONE (reference implementation)
- ALTAR → foundation: ❌ MISSING (critical!)
- ALTAR → DSPex: ⚠️ PARTIAL (needs tighter integration)
- snakepit → DSPex: ✅ WORKING (Python bridge)
- sinter → DSPex: ✅ WORKING (schemas)
- instructor_lite → sinter: ❌ MISSING (should consolidate)
The "Golden Path" (Minimal Working System)
Goal: Build a working AI agent in minimal LOC
# mix.exs
defp deps do
[
{:altar, "~> 0.1"}, # Tool protocol
{:gemini_ex, "~> 0.2"}, # LLM client
{:foundation, "~> 0.1"}, # Agent framework
{:snakepit, "~> 0.4"} # Python bridge (if needed)
]
end
# lib/my_agent.ex
defmodule MyAgent do
use Foundation.Agent
# Define tools using ALTAR
defmodule WeatherTool do
use Altar.Tool
def get_weather(city), do: {:ok, "Sunny in #{city}"}
end
# Agent behavior
def handle_task(:answer_question, question) do
# Gemini automatically discovers and executes ALTAR tools
{:ok, response} = Gemini.chat(question, tools: [WeatherTool])
{:ok, response}
end
end
# Usage
{:ok, agent} = Foundation.start_agent(MyAgent)
Foundation.Agent.execute(agent, :answer_question, "What's the weather in Tokyo?")
# => "It's sunny in Tokyo"
Total: ~20 lines of code for a working AI agent with tools!
Consolidation Recommendations
High Priority Merges
1. Unify Schema Libraries
Current: sinter, exdantic, instructor_lite, jsv (4 projects)
Proposal:
altar_schema (umbrella project)
├── Altar.Schema.Runtime (sinter backend)
├── Altar.Schema.Compiled (exdantic backend)
├── Altar.Schema.JSON (jsv backend)
└── Altar.Schema.LLM (instructor_lite logic)
Benefits:
- Unified API: `use Altar.Schema, mode: :runtime`
- Backend switching without code changes
- Shared test suite
- Single dependency for users
2. Unify LLM Clients
Current: gemini_ex, llm_ex, req_llm, claude_code_sdk (4 projects)
Proposal:
req_llm (core)
├── ReqLLM.Providers.Gemini (gemini_ex logic)
├── ReqLLM.Providers.Anthropic
├── ReqLLM.Providers.ClaudeCode (claude_code_sdk)
└── ReqLLM.Providers.OpenAI
Benefits:
- Provider pattern (swap LLM without code change)
- Shared ALTAR integration
- Unified streaming/telemetry
- Single testing framework
3. Consolidate Orchestration
Current: DSPex, pipeline_ex, handoff (3 projects)
Proposal:
DSPex (umbrella)
├── DSPex.Compile (current DSPex core)
├── DSPex.Pipeline (pipeline_ex YAML logic)
└── DSPex.DAG (handoff graph execution)
Benefits:
- One orchestration story
- Compile-time + runtime flexibility
- Shared optimization layer
Medium Priority Merges
4. Integrate foundation + jido
Current: jido (primitives), foundation (platform) - separate repos
Proposal: Merge jido into foundation as `Foundation.Core`
Rationale:
- jido has 0 stars (not public-facing)
- foundation depends on jido
- Simpler mental model (one agent framework)
Result: foundation becomes self-contained
5. Merge Enterprise Suite
Current: Citadel, AITrace, Assessor, evals (4 stubs)
Proposal:
elixir_ai_platform (umbrella)
├── ElixirAI.Control (Citadel)
├── ElixirAI.Observe (AITrace)
├── ElixirAI.Quality (Assessor)
└── ElixirAI.Evals (evals)
Rationale:
- All stubs (easy to merge now)
- Sold as integrated suite
- Shared telemetry/config
- One installation
Low Priority (Keep Separate)
- snakepit: Unique value (Python bridge), standalone
- ex_dbg: Generic debugging, not AI-specific
- supertester: Generic testing, not AI-specific
- json_remedy: Generic utility, not AI-specific
- arsenal: Generic REST generation, not AI-specific
Integration Examples
Example 1: RAG Agent with Tools
# Uses: ALTAR + gemini_ex + snakepit + foundation
defmodule RAGAgent do
use Foundation.Agent
# Python vector search via snakepit
defmodule VectorSearch do
use Altar.Tool
def search(query) do
Snakepit.execute("vector_db", "search", %{
query: query,
top_k: 5
})
end
end
# Elixir summarization tool
defmodule Summarize do
use Altar.Tool
def summarize(text) do
Gemini.chat("Summarize: #{text}", model: "gemini-2.0-flash")
end
end
def handle_task(:answer, question) do
# Gemini auto-executes ALTAR tools
{:ok, answer} = Gemini.chat(
question,
tools: [VectorSearch, Summarize],
model: "gemini-2.0-flash-thinking-exp"
)
{:ok, answer}
end
end
# Start agent
{:ok, agent} = Foundation.start_agent(RAGAgent)
# Query
Foundation.Agent.execute(agent, :answer, "What did the CEO say about AI?")
# → Searches vector DB (Python)
# → Summarizes results (Elixir)
# → Returns answer
Integration Points:
- ALTAR: Tool protocol
- gemini_ex: LLM client with auto tool execution
- snakepit: Python vector DB bridge
- foundation: Agent supervision
Example 2: DSPy-Style Optimization
# Uses: DSPex + sinter + gemini_ex + ALTAR
defmodule QAPipeline do
use DSPex.Module
signature "question, context -> answer" do
input :question, :string
input :context, :string
output :answer, :string
end
def forward(question, context) do
# Use ALTAR tools
refined = Altar.execute(RefineQueryTool, question)
answer = Gemini.chat(
"Answer: #{refined} using context: #{context}",
model: "gemini-2.0-flash"
)
{:ok, answer}
end
end
# Compile with examples
examples = [
%{question: "What is AI?", context: "...", answer: "..."},
# ... more examples
]
optimized = DSPex.compile(QAPipeline,
examples: examples,
metric: :accuracy
)
# Use optimized version
{:ok, answer} = optimized.("What is machine learning?", context)
Integration Points:
- DSPex: Compile-time optimization
- sinter: Runtime schemas
- gemini_ex: LLM calls
- ALTAR: Tool execution
Example 3: Multi-Agent Collaboration
# Uses: foundation + ALTAR + gemini_ex
defmodule ResearchTeam do
use Foundation.Coordinator
# Agent 1: Researcher
defmodule Researcher do
use Foundation.Agent
def handle_task(:research, topic) do
{:ok, data} = Gemini.chat(
"Research #{topic}",
tools: [SearchTool, ScrapeTool]
)
{:ok, data}
end
end
# Agent 2: Writer
defmodule Writer do
use Foundation.Agent
def handle_task(:write, data) do
{:ok, article} = Gemini.chat(
"Write article from: #{data}",
model: "gemini-2.0-flash-thinking-exp"
)
{:ok, article}
end
end
def coordinate(topic) do
with {:ok, research_agent} <- Foundation.start_agent(Researcher),
{:ok, writer_agent} <- Foundation.start_agent(Writer),
{:ok, data} <- Foundation.Agent.execute(research_agent, :research, topic),
{:ok, article} <- Foundation.Agent.execute(writer_agent, :write, data) do
{:ok, article}
end
end
end
ResearchTeam.coordinate("Elixir AI frameworks")
Integration Points:
- foundation: Multi-agent coordination
- ALTAR: Shared tool protocol
- gemini_ex: LLM backend
Missing Pieces (Gaps in Ecosystem)
1. Structured Output Integration (High Priority)
Problem: No unified way to parse LLM outputs into Elixir structs
Current State:
- instructor_lite exists but underutilized
- gemini_ex doesn't have structured output mode
- sinter/exdantic aren't connected to LLM clients
Solution:
# Add to gemini_ex
defmodule UserSchema do
use Altar.Schema
schema do
field :name, :string
field :age, :integer
end
end
{:ok, user} = Gemini.chat(
"Extract user info from: John is 30 years old",
response_schema: UserSchema
)
# => %UserSchema{name: "John", age: 30}
Implementation: 2-3 weeks with Claude 5.0
2. Vector Database Integration (High Priority)
Problem: RAG requires vector search, no native Elixir solution
Current State:
- Could use snakepit to bridge to Python (Chroma, Weaviate)
- No pure Elixir vector store
Solution Options:
- Option A: Snakepit + Python vector DB (pragmatic)
- Option B: Pure Elixir with pgvector (PostgreSQL extension)
- Option C: Wrapper lib altar_rag that abstracts the backend
Recommendation: Option A (snakepit bridge) for now, Option B long-term
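Option B can be sketched with the pgvector hex package on top of Ecto. This is a hedged sketch: it assumes a Docs.Repo and a documents table with an `embedding vector(768)` column, and the exact setup should be checked against the pgvector-elixir docs.

```elixir
# Hedged sketch of Option B: pure-Elixir vector search via pgvector + Ecto.
# Repo, table, and module names are illustrative assumptions.
defmodule Docs.Document do
  use Ecto.Schema

  schema "documents" do
    field :text, :string
    field :embedding, Pgvector.Ecto.Vector
  end
end

defmodule Docs.Search do
  import Ecto.Query
  import Pgvector.Ecto.Query

  # Nearest-neighbor search by L2 distance over stored embeddings.
  def nearest(query_embedding, top_k \\ 5) do
    Docs.Repo.all(
      from d in Docs.Document,
        order_by: l2_distance(d.embedding, ^Pgvector.new(query_embedding)),
        limit: ^top_k
    )
  end
end
```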
3. Prompt Management (Medium Priority)
Problem: No versioning/management for prompts
Current State:
- prompt_vault repo exists (0⭐) but status unknown
- Prompts are hardcoded in applications
Solution:
defmodule PromptManager do
@prompts %{
summarize: %{
v1: "Summarize the following: {{text}}",
v2: "Provide a concise summary of: {{text}}"
}
}
def get(:summarize, :v2), do: @prompts.summarize.v2
def render(template, vars) do
  # Mustache-style rendering via a simple regex substitution
  Regex.replace(~r/\{\{(\w+)\}\}/, template, fn _, key ->
    vars |> Map.fetch!(String.to_existing_atom(key)) |> to_string()
  end)
end
end
# Usage
prompt = PromptManager.get(:summarize, :v2)
text = PromptManager.render(prompt, %{text: content})
Gemini.chat(text)
Implementation: 1 week
4. Agent Observability UI (Medium Priority)
Problem: No visual way to monitor agent execution
Current State:
- AITrace is a stub
- Telemetry events exist but no visualization
- apex_ui exists (OTP supervision UI) but not AI-specific
Solution: Dashboard showing:
- Agent task queue
- Tool execution traces
- LLM call logs (token usage, latency)
- Error rates
Recommendation: Build on top of Phoenix LiveView + apex_ui
5. Model Evaluation Framework (Low Priority)
Problem: evals exists but is a stub
Solution: LangSmith/LangFuse equivalent
- Test dataset management
- Eval harness (accuracy, precision, recall)
- Regression detection
- Prompt A/B testing
Implementation: 4-6 weeks (complex)
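The eval-harness bullet above can be made concrete with a tiny self-contained sketch: run a predictor over a labeled dataset and report accuracy. All names here are illustrative, not an existing evals API.

```elixir
# Self-contained sketch of an eval harness: score a predictor function
# against a labeled dataset. Names are illustrative.
defmodule Evals.Harness do
  def accuracy(dataset, predict_fun) do
    correct =
      Enum.count(dataset, fn %{input: input, expected: expected} ->
        predict_fun.(input) == expected
      end)

    # `/` always yields a float in Elixir, so this is a ratio in [0.0, 1.0].
    correct / length(dataset)
  end
end

dataset = [
  %{input: "2+2", expected: "4"},
  %{input: "3+3", expected: "6"}
]

# A stub predictor that gets one of the two examples right:
Evals.Harness.accuracy(dataset, fn
  "2+2" -> "4"
  _ -> "?"
end)
# => 0.5
```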
Migration Guide (Consolidation Steps)
Phase 1: Schema Unification (Week 1-2)
- Create altar_schema umbrella project
- Move sinter → Altar.Schema.Runtime
- Move exdantic → Altar.Schema.Compiled
- Move jsv → Altar.Schema.JSON
- Extract instructor_lite logic → Altar.Schema.LLM
- Write unified test suite
- Update all dependents (DSPex, pipeline_ex)
Result: One schema library, four backends
Phase 2: LLM Client Unification (Week 3-4)
- Implement req_llm core (provider pattern)
- Extract gemini_ex → ReqLLM.Providers.Gemini
- Add Anthropic provider
- Migrate claude_code_sdk → ReqLLM.Providers.ClaudeCode
- Archive llm_ex (redundant)
- Update all dependents
Result: One LLM client, multiple providers
Phase 3: Orchestration Consolidation (Week 5-6)
- Create DSPex umbrella
- Move pipeline_ex YAML logic → DSPex.Pipeline
- Move handoff DAG logic → DSPex.DAG
- Unify compilation/optimization layer
- Update documentation
Result: One orchestration framework
Phase 4: Foundation Integration (Week 7-8)
- Merge jido into foundation as Foundation.Core
- Add ALTAR tool execution to foundation
- Integrate AITrace telemetry
- Update examples
- Migration guide for jido users (if any)
Result: Self-contained agent framework
Phase 5: Enterprise Suite (Week 9-10)
- Create elixir_ai_platform umbrella
- Stub out Citadel, AITrace, Assessor, evals
- Share telemetry/config layer
- Basic UI (Phoenix LiveView)
Result: Integrated enterprise offering
Recommended Next Steps (Priority Order)
Immediate (Do This Week)
- ✅ Document ecosystem (this analysis)
- Add ALTAR to foundation:
# foundation/lib/agent.ex
def call_tool(tool_module, args) when is_atom(tool_module) do
  if function_exported?(tool_module, :__altar_tool__, 0) do
    Altar.execute(tool_module, args)
  else
    {:error, :not_an_altar_tool}
  end
end
- Add structured output to gemini_ex: Gemini.chat(text, response_schema: MySchema)
Short-term (Next 2-4 Weeks)
- Implement req_llm provider pattern
- Migrate gemini_ex to req_llm backend
- Consolidate schema libraries (altar_schema umbrella)
- Add vector DB integration (snakepit + Python)
- Write "Getting Started" guide (golden path example)
Mid-term (Next 2-3 Months)
- Merge orchestration (DSPex umbrella)
- Merge foundation + jido
- Implement AITrace observability
- Build 3 showcase applications:
- RAG chatbot
- Code generation agent
- Multi-agent research team
Long-term (3-6 Months)
- Enterprise suite (Citadel, Assessor, evals)
- Pure Elixir vector DB (pgvector integration)
- Model evaluation framework
- Documentation site (HexDocs + guides)
- ElixirConf talk (May 2025)
Success Metrics
Technical Metrics
- [ ] All core libs at v1.0 (stable APIs)
- [ ] Golden path example in <20 LOC
- [ ] 3+ showcase applications
- [ ] 90%+ test coverage
- [ ] Full documentation
Adoption Metrics
- [ ] 5+ production users
- [ ] 100+ GitHub stars (combined)
- [ ] 10+ community contributors
- [ ] ElixirConf talk accepted
- [ ] Blog post on ElixirWeekly
Business Metrics (If Applicable)
- [ ] $50k ARR (first paying customer)
- [ ] 3+ enterprise contracts
- [ ] VC interest (if you want funding)
Conclusion
Your ecosystem is remarkably coherent. The "promotion path" philosophy (LATER → GRID) is unique and valuable.
Key Strengths:
- ALTAR: Best-in-class tool protocol
- snakepit: Unmatched Python bridge
- gemini_ex: Production-ready LLM client
- Modularization: Well-separated concerns
Key Weaknesses:
- Fragmentation: Too many overlapping projects
- Integration gaps: ALTAR not in foundation/DSPex
- Missing pieces: Structured outputs, vector DB, observability
- Documentation: Scattered, needs unification
Recommendation:
- Consolidate (Phases 1-5 above)
- Integrate (ALTAR everywhere)
- Ship (Golden path example by Jan)
- Promote (ElixirConf May 2025)
With Claude 5.0 in January, you can pull this off by mid-May. But only if you focus.
The ecosystem is there. It just needs assembly. 🚀