DSPex Production Strategy: Valley Learnings & AWS Integration
Date: 2025-10-07
Context: Research on DSPy production patterns, AWS tooling, and strategic direction for DSPex
Executive Summary
Analysis of production DSPy deployments and AWS integration patterns shows that DSPex is well positioned to become the production-grade DSPy runtime for the BEAM ecosystem. The key insight: while DSPy excels at compile-time optimization, it lacks enterprise deployment infrastructure. DSPex + ALTAR + Foundation can fill this gap.
Core Value Proposition:
DSPy (Python): Compile-time optimization & training
↓ (compilation artifacts)
DSPex (Elixir): Production runtime with BEAM reliability
↓ (execution)
ALTAR: Tool protocol & distributed execution (GRID)
↓ (deployment)
Foundation: Multi-agent supervision & orchestration
What's Working in Production: Valley Patterns
1. MLflow Integration is Critical (Production Standard)
Pattern: DSPy programs are deployed via MLflow with full observability
# Production pattern from DSPy docs
import mlflow
import dspy
# 1. Develop & optimize
optimized_program = dspy.MIPROv2(
prompt_model=gpt4,
task_model=gpt3_turbo,
metric=accuracy
).compile(program, trainset=examples)
# 2. Log to MLflow
with mlflow.start_run():
mlflow.log_param("optimizer", "MIPROv2")
mlflow.log_metric("accuracy", score)
mlflow.dspy.log_model(optimized_program, "model")
# 3. Deploy
model_uri = "runs:/<run_id>/model"
loaded_program = mlflow.dspy.load_model(model_uri)
DSPex Opportunity:
- Elixir doesn't have MLflow client (gap!)
- Solution: Use Snakepit to bridge to Python MLflow
- Store compiled DSPy programs as artifacts
- Load and execute in Elixir runtime
Implementation:
# Load optimized DSPy program from MLflow
{:ok, program} = DSPex.MLflow.load_model(
model_uri: "models:/QA_Pipeline/production",
session: snakepit_session
)
# Execute in Elixir with BEAM supervision
{:ok, result} = DSPex.execute(program,
question: "What is BEAM?",
context: retrieval_results
)
2. Async + Caching = Production Scale (100k+ req/day)
Pattern: High-throughput deployments use async execution with aggressive caching
# Production async pattern
import asyncio
import hashlib
from functools import lru_cache
class CachedDSPyProgram:
def __init__(self, program):
self.program = program
self.cache = {}
@lru_cache(maxsize=10000)
def _cache_key(self, question):
return hashlib.md5(question.encode()).hexdigest()
async def forward_async(self, question):
key = self._cache_key(question)
if key in self.cache:
return self.cache[key]
# Async execution
result = await asyncio.to_thread(
self.program.forward,
question
)
self.cache[key] = result
return resultWhat they're caching:
- Expensive LLM calls (obvious)
- Retrieval results (vector search)
- Intermediate reasoning steps (ChainOfThought traces)
DSPex Advantage:
- BEAM has built-in ETS for distributed caching
- GenServer state for per-session caching
- Phoenix PubSub for cache invalidation
Implementation:
defmodule DSPex.Cache do
use GenServer
# ETS-backed cache with TTL
def get_or_compute(cache_key, ttl \\ :timer.hours(1), compute_fn) do
case :ets.lookup(__MODULE__, cache_key) do
[{^cache_key, value, expires_at}] when expires_at > :os.system_time(:second) ->
{:ok, value}
_ ->
# Cache miss - compute
{:ok, value} = compute_fn.()
# Store with TTL
:ets.insert(__MODULE__, {
cache_key,
value,
:os.system_time(:second) + ttl
})
{:ok, value}
end
end
end
# Usage in DSPex program
def forward(question, context) do
cache_key = :crypto.hash(:md5, question) |> Base.encode16()
DSPex.Cache.get_or_compute(cache_key, fn ->
# Expensive LLM call
Gemini.chat(question, context: context)
end)
end
3. FastAPI Serving is Common (Lightweight Production)
Pattern: Compiled DSPy programs served as REST APIs
# Production serving pattern
from fastapi import FastAPI
from pydantic import BaseModel
import mlflow
app = FastAPI()
program = mlflow.dspy.load_model("models:/QA/prod")
class Query(BaseModel):
question: str
context: str
@app.post("/predict")
async def predict(query: Query):
result = program(question=query.question, context=query.context)
return {"answer": result.answer}DSPex Advantage: Arsenal!
- Your arsenal library auto-generates REST APIs from OTP apps
- DSPex modules as GenServers → automatic REST endpoints
- Zero boilerplate for production serving
Implementation:
defmodule QAPipeline do
use DSPex.Module
use Arsenal.Resource # Auto-generates REST API!
signature "question, context -> answer"
def forward(question, context) do
# DSPy logic
end
end
# Arsenal automatically creates:
# POST /qa_pipeline/forward
# Body: {"question": "...", "context": "..."}
# Response: {"answer": "..."}
# Start server
{:ok, _} = Arsenal.start_server(QAPipeline, port: 4000)
4. Pydantic for Structured Outputs (Type Safety at Scale)
Pattern: Production systems use Pydantic for strict output schemas
from pydantic import BaseModel, Field
from typing import List
class Answer(BaseModel):
answer: str = Field(description="The main answer")
confidence: float = Field(ge=0.0, le=1.0)
sources: List[str] = Field(description="Citations")
class QA(dspy.Signature):
"""Answer questions with citations."""
    question: str = dspy.InputField()
    context: str = dspy.InputField()
    answer: Answer = dspy.OutputField(desc="Structured answer")
# Usage
predictor = dspy.TypedPredictor(QA)
result = predictor(question="...", context="...")
assert isinstance(result.answer, Answer)  # Type-safe!
DSPex has this: exdantic + sinter
- exdantic: Compile-time schemas (Pydantic equivalent)
- sinter: Runtime schemas (dynamic validation)
Implementation:
defmodule Answer do
use Exdantic.Schema
schema do
field :answer, :string, required: true
field :confidence, :float,
validators: [min: 0.0, max: 1.0]
field :sources, {:array, :string},
description: "Citations"
end
end
defmodule QA do
use DSPex.Signature
input :question, :string
input :context, :string
output :answer, Answer # Exdantic schema!
end
# Type-safe execution
{:ok, %Answer{} = result} = DSPex.execute(QA, question, context)
AWS Bedrock Integration: What We Can Learn
Key AWS Pattern: Boto3 + Bedrock Embeddings
import json
import boto3
from dspy import Retrieve
class BedrockRetriever(Retrieve):
def __init__(self, model="amazon.titan-embed-text-v2:0"):
self.client = boto3.client('bedrock-runtime')
self.model = model
def forward(self, query):
# Get embedding
response = self.client.invoke_model(
modelId=self.model,
body=json.dumps({"inputText": query})
)
embedding = json.loads(response['body'].read())
# Vector search (Pinecone, Weaviate, etc.)
results = vector_db.search(embedding, top_k=5)
return results
DSPex Integration via Snakepit:
# Python worker for AWS Bedrock
# priv/python/aws_bedrock_worker.py
import json
import boto3
class BedrockWorker:
def embed_text(self, text, model="amazon.titan-embed-text-v2:0"):
client = boto3.client('bedrock-runtime')
response = client.invoke_model(
modelId=model,
body=json.dumps({"inputText": text})
)
return json.loads(response['body'].read())
# Elixir wrapper
defmodule DSPex.AWS.Bedrock do
use Altar.Tool
@doc "Generate embeddings using AWS Bedrock"
def embed_text(text, model \\ "amazon.titan-embed-text-v2:0") do
Snakepit.execute(
"bedrock_worker",
"embed_text",
%{text: text, model: model}
)
end
end
# Use in DSPex pipeline
defmodule RAGPipeline do
use DSPex.Module
def forward(question) do
# 1. Embed query via AWS Bedrock (Python)
{:ok, embedding} = DSPex.AWS.Bedrock.embed_text(question)
# 2. Vector search (Elixir - pgvector or Python - Pinecone)
{:ok, context} = VectorDB.search(embedding, top_k: 5)
# 3. Generate answer (Elixir - Gemini)
Gemini.chat(question, context: context)
end
end
AWS Lambda Deployment (Serverless DSPy)
Challenge: Lambda execution role vs IAM credentials
- GitHub issue #8470: DSPy can't use Lambda's IAM role directly
- Current workaround: Pass credentials explicitly
DSPex Solution: Leverage BEAM clustering
- Deploy DSPex cluster on ECS/Fargate (not Lambda)
- Use IAM roles naturally (no credential passing)
- libcluster for auto-discovery
- GRID (future) for distributed tool execution
Architecture:
ECS Service (DSPex nodes)
├── libcluster (auto-discovery)
├── DSPex.Supervisor (per-node)
├── Snakepit.Pool (Python workers with boto3)
└── ALTAR.GRID (distributed tools)
├── Node 1: AWS Bedrock tools
├── Node 2: Vector DB tools
└── Node 3: LLM generation tools
Load Balancer
↓
API Gateway → ECS Service
Benefits:
- No cold start (pre-warmed GenServers)
- Session affinity (sticky routing to same node)
- Distributed caching (ETS across nodes)
- Natural IAM integration
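A minimal sketch of the cluster-formation piece for this deployment, assuming libcluster's DNS polling strategy pointed at an ECS Service Discovery name (the DNS query, node basename, and supervisor name below are placeholders, not existing DSPex configuration):
# config/runtime.exs - hypothetical cluster formation via libcluster DNS polling
import Config

config :libcluster,
  topologies: [
    dspex_ecs: [
      strategy: Cluster.Strategy.DNSPoll,
      config: [
        polling_interval: 5_000,
        query: "dspex.internal",   # placeholder: ECS Service Discovery DNS name
        node_basename: "dspex"     # nodes are named dspex@<task-ip>
      ]
    ]
  ]

# application.ex - start clustering next to the DSPex supervision tree
children = [
  {Cluster.Supervisor,
   [Application.get_env(:libcluster, :topologies, []), [name: DSPex.ClusterSupervisor]]}
  # ... DSPex.Supervisor, Snakepit.Pool, etc.
]
With nodes connected this way, the ETS-backed caching and GRID tool routing described below operate across the whole ECS service.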
Production Best Practices: What the Valley Learned
1. 95%+ Reliability is Non-Negotiable
Problem: LLMs hallucinate, fail, and time out.
Solution: Guardrails + fallbacks + retries.
defmodule DSPex.Reliability do
use DSPex.Module
# Circuit breaker for LLM failures
defmodule SafePredict do
use DSPex.Predict
    # Install a circuit breaker with the Erlang :fuse library
    # (2 failures within 10s trips it; it resets after 30s)
    def install_fuse do
      :fuse.install(__MODULE__, {{:standard, 2, 10_000}, {:reset, 30_000}})
    end

    def forward(question) do
      case :fuse.ask(__MODULE__, :sync) do
:ok ->
# Circuit closed - proceed
case super(question) do
{:ok, result} = success -> success
{:error, _} = error ->
            :fuse.melt(__MODULE__) # Open circuit
error
end
:blown ->
# Circuit open - use fallback
{:ok, "Service temporarily unavailable. Please try again."}
end
end
end
end
ALTAR + Foundation Integration:
# Foundation already has circuit breakers!
defmodule DSPexAgent do
use Foundation.Agent
# Automatic circuit breaker via Foundation
@impl true
def handle_task({:predict, question}) do
# Foundation wraps this in fuse + hammer (rate limiting)
DSPex.execute(QAPipeline, question: question)
end
end
2. Feedback Loops Drive Improvement
Pattern: Production systems collect failure cases for retraining
defmodule DSPex.Feedback do
@moduledoc """
Collect feedback for continuous improvement.
"""
def log_prediction(program, inputs, output, metadata \\ %{}) do
# Store in time-series DB (InfluxDB, TimescaleDB)
    :telemetry.execute(
      [:dspex, :prediction],
      %{latency_ms: Map.get(metadata, :latency)},
%{
program: program.__module__,
inputs: inputs,
output: output,
user_feedback: nil, # Will be filled later
timestamp: DateTime.utc_now()
}
)
end
def mark_as_failure(prediction_id, reason) do
# Add to retraining dataset
DSPex.Training.add_negative_example(prediction_id, reason)
# Trigger recompilation if threshold exceeded
if failure_rate() > 0.10 do
DSPex.Recompile.schedule(prediction_id.program)
end
end
end
AITrace Integration (from ECOSYSTEM_ARCHITECTURE.md):
# AITrace aggregates telemetry across DSPex, Foundation, Gemini
defmodule AITrace.DSPex do
use AITrace.Collector
# Subscribe to DSPex telemetry
attach_many([
[:dspex, :prediction],
[:dspex, :compilation],
[:dspex, :cache, :hit],
[:dspex, :cache, :miss]
])
# Dashboard shows:
# - Prediction accuracy over time
# - Cache hit rates
# - Compilation frequency
# - Token usage trends
end
3. Modular Optimization is Key
Pattern: Don't recompile the entire program - optimize modules independently
# Production optimization pattern
class RAGPipeline(dspy.Module):
def __init__(self):
self.retrieve = dspy.Retrieve(k=5)
self.generate = dspy.ChainOfThought("context, question -> answer")
def forward(self, question):
context = self.retrieve(question)
answer = self.generate(context=context, question=question)
return answer
# Optimize ONLY the generator (retrieval stays fixed)
program = RAGPipeline()
optimizer = dspy.MIPROv2(metric=accuracy)
optimized_generate = optimizer.compile(
    program.generate,  # Just this module!
    trainset=examples
)
program.generate = optimized_generate  # Hot-swap
DSPex Implementation:
defmodule RAGPipeline do
use DSPex.Module
# Modules can be optimized independently
module :retrieve, DSPex.Retrieve, k: 5
module :generate, DSPex.ChainOfThought,
signature: "context, question -> answer",
optimizable: true # Mark for optimization
def forward(question) do
context = retrieve(question)
generate(context: context, question: question)
end
end
# Optimize specific module
{:ok, optimized} = DSPex.Optimize.compile(
RAGPipeline,
optimize_modules: [:generate], # Only this!
trainset: examples,
metric: &accuracy/1
)
# Hot-swap in production
DSPex.Module.update(RAGPipeline, :generate, optimized)
4. Model Switching Without Code Changes
Pattern: Swap models via config (not code)
# Don't do this (hardcoded):
lm = dspy.OpenAI(model="gpt-4")
# Do this (configurable):
import os
model_config = os.getenv("DSPY_MODEL", "gpt-3.5-turbo")
lm = dspy.OpenAI(model=model_config)
dspy.settings.configure(lm=lm)
DSPex + ALTAR Solution:
# config/runtime.exs
config :dspex, :default_lm,
provider: :gemini,
model: System.get_env("DSPY_MODEL", "gemini-2.0-flash")
# Or use ALTAR variables (session-specific)
defmodule QAPipeline do
use DSPex.Module
use DSPex.VariableAware # Sync with session variables
# Automatically bound to session variable "model"
@variable_bindings %{
model: "current_model",
temperature: "generation_temperature"
}
def forward(question) do
# Uses session's current model (hot-swappable!)
Gemini.chat(question,
model: get_variable("current_model"),
temperature: get_variable("generation_temperature")
)
end
end
Strategic Recommendations for DSPex
Phase 1: Production Foundations (Next 4 Weeks)
1. MLflow Bridge via Snakepit
# Load compiled DSPy programs from MLflow
defmodule DSPex.MLflow do
def load_model(model_uri, session) do
# Use Python MLflow via Snakepit
Snakepit.execute(session, "mlflow", "load_model", %{
model_uri: model_uri,
return_format: "pickle" # Serialized DSPy program
})
end
def log_model(program, artifact_path, session) do
Snakepit.execute(session, "mlflow", "log_model", %{
model: program,
artifact_path: artifact_path,
registered_model_name: program.__module__
})
end
end
Why this matters:
- Interop with Python DSPy ecosystem
- Production deployment pattern
- Version control for compiled programs
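As a usage sketch only (it leans on the hypothetical DSPex.MLflow.load_model/2 and DSPex.Module.update/3 APIs from this document, plus an illustrative refresher process), a node could periodically pull the latest registered version and hot-swap it into the running pipeline:
# Hypothetical: refresh the production program from the MLflow registry on an interval
defmodule DSPex.MLflow.Refresher do
  use GenServer

  @interval :timer.minutes(15)

  def start_link(opts), do: GenServer.start_link(__MODULE__, Map.new(opts), name: __MODULE__)

  @impl true
  def init(state) do
    Process.send_after(self(), :refresh, @interval)
    {:ok, state}
  end

  @impl true
  def handle_info(:refresh, %{model_uri: uri, session: session} = state) do
    # Load whatever is currently registered for production and swap it in (assumed APIs)
    with {:ok, program} <- DSPex.MLflow.load_model(uri, session) do
      DSPex.Module.update(QAPipeline, :generate, program)
    end

    Process.send_after(self(), :refresh, @interval)
    {:noreply, state}
  end
end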
2. Arsenal Integration for REST APIs
# Auto-generate REST endpoints from DSPex modules
defmodule DSPex.Server do
use Arsenal.Application
# Automatically creates:
# POST /modules/:module_name/execute
# GET /modules/:module_name/schema
# POST /modules/:module_name/optimize
def start(_type, _args) do
children = [
# Discover all DSPex modules
{Arsenal.Discovery, [namespace: DSPex.Module]},
# Start REST server
{Arsenal.Server, [port: 4000]}
]
Supervisor.start_link(children, strategy: :one_for_one)
end
end
Why this matters:
- Zero-boilerplate production serving
- Matches FastAPI pattern from Python world
- Uses existing tooling (arsenal already built!)
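For illustration, calling the generated endpoint from Elixir with the Req HTTP client (assuming the route shape sketched above; Req is not a stated DSPex dependency) might look like:
# Hypothetical client call against the auto-generated route
response =
  Req.post!("http://localhost:4000/modules/qa_pipeline/execute",
    json: %{question: "What is BEAM?", context: "BEAM is the Erlang VM."}
  )

# Req decodes the JSON response body into a map
%{"answer" => answer} = response.body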
3. Production Caching Layer
defmodule DSPex.Cache do
@moduledoc """
Multi-tier caching for DSPex programs.
Tiers:
1. Process state (GenServer)
2. ETS (node-local)
3. Redis (distributed) - optional
"""
def get_or_compute(cache_key, opts \\ [], compute_fn) do
tier = opts[:tier] || :ets
ttl = opts[:ttl] || :timer.hours(1)
case tier do
:process ->
# GenServer state (fastest, session-scoped)
GenServer.call(__MODULE__, {:get_or_compute, cache_key, compute_fn})
:ets ->
# ETS (fast, node-scoped)
ets_get_or_compute(cache_key, ttl, compute_fn)
:redis ->
# Redis (slower, cluster-scoped)
redis_get_or_compute(cache_key, ttl, compute_fn)
end
end
# Invalidation hooks
def invalidate(cache_key, tier \\ :all) do
Phoenix.PubSub.broadcast(
DSPex.PubSub,
"cache:invalidate",
{:invalidate, cache_key, tier}
)
end
end
Why this matters:
- Production pattern from valley research
- BEAM advantages (ETS, distributed state)
- Cost savings (avoid redundant LLM calls)
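One detail left implicit above: something has to handle the invalidation broadcast on each node. A minimal listener sketch, assuming a DSPex.PubSub instance in the supervision tree and the node-local ETS table owned by DSPex.Cache:
# Hypothetical per-node listener that drops local ETS entries on broadcast
defmodule DSPex.Cache.Invalidator do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(_) do
    :ok = Phoenix.PubSub.subscribe(DSPex.PubSub, "cache:invalidate")
    {:ok, nil}
  end

  @impl true
  def handle_info({:invalidate, cache_key, _tier}, state) do
    # Only node-local tiers are cleared here; the Redis tier would be invalidated once, centrally
    :ets.delete(DSPex.Cache, cache_key)
    {:noreply, state}
  end
end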
4. Circuit Breakers + Foundation Integration
# Make DSPex modules Foundation-aware
defmodule DSPex.Module do
defmacro __using__(_opts) do
quote do
use GenServer # DSPex modules as processes
# Optional Foundation integration
def as_foundation_agent do
defmodule __MODULE__.Agent do
use Foundation.Agent
@impl true
def handle_task({:execute, inputs}) do
# Automatic circuit breaker + rate limiting
unquote(__MODULE__).execute(inputs)
end
end
end
end
end
end
# Usage
QAPipeline.as_foundation_agent()
{:ok, agent} = Foundation.start_agent(QAPipeline.Agent)
# Calls are now supervised + circuit-broken + rate-limited!
Foundation.Agent.execute(agent, {:execute, %{question: "..."}})
Why this matters:
- 95%+ reliability requirement
- Reuses Foundation infrastructure (fuse, hammer)
- Seamless integration (one function call)
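The circuit breaker handles failure isolation; the "Retries" part of the reliability story can be a small helper layered underneath it. A minimal sketch with plain recursion and exponential backoff (the module name and defaults are illustrative):
# Hypothetical retry helper with exponential backoff
defmodule DSPex.Retry do
  def with_backoff(fun, attempts \\ 3, delay_ms \\ 250) do
    case fun.() do
      {:ok, _} = ok ->
        ok

      {:error, _} = error when attempts <= 1 ->
        error

      {:error, _} ->
        # Back off, then retry with a doubled delay
        Process.sleep(delay_ms)
        with_backoff(fun, attempts - 1, delay_ms * 2)
    end
  end
end

# Usage inside a predict step:
# DSPex.Retry.with_backoff(fn -> Gemini.chat(question, context: context) end)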
Phase 2: Advanced Features (Weeks 5-8)
5. AWS Bedrock Integration
# AWS toolkit for DSPex
defmodule DSPex.AWS do
# Bedrock embeddings
defmodule Bedrock do
use Altar.Tool
def embed(text, opts \\ []) do
model = opts[:model] || "amazon.titan-embed-text-v2:0"
Snakepit.execute("bedrock", "embed_text", %{
text: text,
model: model
})
end
end
# SageMaker inference
defmodule SageMaker do
use Altar.Tool
def invoke_endpoint(endpoint, data) do
Snakepit.execute("sagemaker", "invoke", %{
endpoint_name: endpoint,
body: data
})
end
end
end
# Use in pipeline
defmodule AWSRAGPipeline do
use DSPex.Module
def forward(question) do
# Embed via Bedrock
{:ok, embedding} = DSPex.AWS.Bedrock.embed(question)
# Search vectors
{:ok, context} = VectorDB.search(embedding)
# Generate on SageMaker (fine-tuned Llama)
DSPex.AWS.SageMaker.invoke_endpoint(
"llama-qa-endpoint",
%{question: question, context: context}
)
end
end
6. Observability Dashboard (AITrace)
# Phoenix LiveView dashboard for DSPex
defmodule DSPex.Live.Dashboard do
use Phoenix.LiveView
def render(assigns) do
~H"""
<div class="dspex-dashboard">
<!-- Real-time metrics -->
<div class="metrics">
<.metric name="Predictions/sec" value={@predictions_per_sec} />
<.metric name="Cache Hit Rate" value={@cache_hit_rate} />
<.metric name="Avg Latency" value={@avg_latency_ms} />
<.metric name="Error Rate" value={@error_rate} />
</div>
<!-- Active programs -->
<div class="programs">
<%= for program <- @active_programs do %>
<.program_card program={program} />
<% end %>
</div>
<!-- Trace viewer -->
<div class="traces">
        <.live_component
          module={TraceViewer}
          id="trace-viewer"
          traces={@recent_traces}
        />
</div>
</div>
"""
end
def mount(_params, _session, socket) do
# Subscribe to telemetry
:telemetry.attach_many(
"dspex-dashboard",
[
[:dspex, :prediction],
[:dspex, :cache, :hit],
[:dspex, :error]
],
&handle_telemetry/4,
nil
)
{:ok, assign(socket, load_metrics())}
end
end
Phase 3: GRID Integration (Weeks 9-12)
7. Distributed Tool Execution
# ALTAR GRID for distributed DSPex
defmodule DSPex.GRID do
@moduledoc """
Execute DSPex tools across cluster.
Pattern:
- Tools registered on different nodes (specialization)
- Work routed to nodes with required tools
- Automatic failover if node goes down
"""
def execute_tool(tool_name, inputs, opts \\ []) do
case find_nodes_with_tool(tool_name) do
[] ->
{:error, :tool_not_found}
nodes ->
# Pick node (round-robin, least-loaded, etc.)
node = select_node(nodes, opts[:strategy] || :round_robin)
# Execute remotely
:rpc.call(node, Altar, :execute, [tool_name, inputs])
end
end
  defp find_nodes_with_tool(tool_name) do
    # Query the tool registry across the cluster; :pg returns pids, so map them to nodes
    :pg.get_members(DSPex.ToolRegistry, tool_name)
    |> Enum.map(&node/1)
    |> Enum.uniq()
  end
end
# Tool specialization
# Node 1: GPU-heavy tools
Node.spawn(:"dspex@gpu1", fn ->
Altar.register_tool(ImageEmbedding)
Altar.register_tool(VideoAnalysis)
end)
# Node 2: AWS tools
Node.spawn(:"dspex@aws", fn ->
Altar.register_tool(DSPex.AWS.Bedrock)
Altar.register_tool(DSPex.AWS.SageMaker)
end)
# Node 3: Vector DB tools
Node.spawn(:"dspex@vector", fn ->
Altar.register_tool(VectorDB.Search)
Altar.register_tool(VectorDB.Insert)
end)
# Usage (location-transparent)
{:ok, embedding} = DSPex.GRID.execute_tool(
ImageEmbedding, # Automatically routes to GPU node
%{image: image_data}
)
Why this is badass:
- Distributed by default (BEAM clustering)
- Tool specialization (GPU vs CPU vs AWS)
- Automatic failover (OTP supervision)
- No other DSPy runtime has this!
Unique Value Propositions
What DSPex Offers That Python DSPy Can't
| Feature | Python DSPy | DSPex (Elixir) |
|---|---|---|
| Runtime | Single-threaded, GIL-limited | Massively concurrent (BEAM) |
| State | Requires Redis/external | Built-in ETS, GenServer state |
| Deployment | FastAPI + Docker + orchestrator | Single BEAM release, libcluster |
| Observability | Custom logging + tracing | Built-in Telemetry + LiveView |
| Tool Execution | Synchronous only | Async + distributed (GRID) |
| Hot Code Reload | Requires restart | BEAM hot code swapping |
| Circuit Breakers | Manual (need library) | Built-in (fuse + Foundation) |
| Session Affinity | Complex load balancer config | GenServer natural affinity |
| Zero-Downtime Deploys | Blue/green with downtime | BEAM appup/relup |
The "Golden Path" for DSPex Production
# 1. Define program (Elixir)
defmodule QAPipeline do
use DSPex.Module
signature "question -> answer"
def forward(question) do
context = retrieve(question)
generate(question, context)
end
end
# 2. Optimize (Python DSPy via Snakepit)
{:ok, optimized} = DSPex.Optimize.compile(
QAPipeline,
trainset: examples,
optimizer: :mipro_v2 # Uses Python DSPy
)
# 3. Deploy (Elixir runtime)
{:ok, _} = DSPex.Supervisor.start_program(
optimized,
name: :qa_production,
cache: true,
circuit_breaker: true
)
# 4. Serve (Arsenal auto-API)
# POST /qa_production/execute
# → {"question": "What is BEAM?"}
# ← {"answer": "...", "confidence": 0.95}
# 5. Observe (AITrace dashboard)
# http://localhost:4000/aitrace/dspex/qa_production
# → Real-time metrics, traces, errors
# 6. Scale (libcluster)
# Add nodes → automatic distribution via GRID
Lines of code: ~30 (vs 200+ for equivalent Python + FastAPI + Redis + monitoring)
Next Actions (Prioritized)
This Week
- ✅ Document strategy (this doc)
- Add Arsenal integration to DSPex (auto REST APIs)
- Prototype MLflow bridge via Snakepit
Next Sprint (2 weeks)
- Production caching (ETS + optional Redis)
- Foundation integration (circuit breaker + rate limiting)
- Simple AWS Bedrock example (embeddings via Snakepit)
Next Month
- AITrace dashboard for DSPex (LiveView)
- Write showcase example: Full RAG pipeline with:
- AWS Bedrock embeddings
- pgvector search
- Gemini generation
- Arsenal API
- Foundation supervision
- AITrace observability
Q1 2025
- GRID implementation (distributed tools)
- ElixirConf submission (May 2025)
- Blog post: "DSPex: Production DSPy on BEAM"
Conclusion
DSPex isn't competing with DSPy - it's completing it.
Python DSPy: Research → Development → Optimization
↓
DSPex: Production → Scale → Distribution
The valley has validated the patterns:
- MLflow for deployment
- Async + caching for scale
- FastAPI for serving
- Pydantic for type safety
- AWS for infrastructure
DSPex can do all of this better because the BEAM is:
- Async by default (not bolted on)
- Distributed by default (not bolted on)
- Observable by default (Telemetry)
- Reliable by default (OTP)
The ecosystem is ready:
- ALTAR: Tool protocol (LATER → GRID path)
- Snakepit: Python bridge (DSPy interop)
- Arsenal: Auto REST APIs (FastAPI equivalent)
- Foundation: Agent supervision (production reliability)
- Gemini_ex: LLM client (provider integration)
The missing piece: Execution focus. Build the integrations, ship the showcase, prove the value.
With Claude 5.0 in January, we can have a production-ready DSPex by ElixirConf (May 2025). 🚀