DSPex (DSPex v0.3.0)
DSPex - DSPy for Elixir via SnakeBridge.
Minimal wrapper that provides transparent access to DSPy through SnakeBridge's Universal FFI. No code generation needed - just call Python directly.
Quick Start
DSPex.run(fn ->
# Configure LM
lm = DSPex.lm!("openai/gpt-4o-mini")
DSPex.configure!(lm: lm)
# Create predictor and run
predict = DSPex.call!("dspy", "Predict", ["question -> answer"])
result = DSPex.method!(predict, "forward", [], question: "What is 2+2?")
# Get the answer
answer = DSPex.attr!(result, "answer")
IO.puts("Answer: #{answer}")
end)

Timeout Configuration
DSPex leverages SnakeBridge 0.7.7+'s timeout architecture for LLM workloads.
By default, all DSPy calls use the :ml_inference profile (10 minute timeout).
Timeout Profiles
| Profile | Timeout | Use Case |
|---|---|---|
| :default | 2 min | Standard Python calls |
| :streaming | 30 min | Streaming responses |
| :ml_inference | 10 min | LLM inference (DSPex default) |
| :batch_job | 1 hour | Long-running batch operations |
Per-Call Timeout Override
Override the timeout for individual calls using the __runtime__ option:
# Use a different profile
DSPex.method!(predict, "forward", [],
question: "Complex question",
__runtime__: [timeout_profile: :batch_job]
)
# Set exact timeout in milliseconds
DSPex.method!(predict, "forward", [],
question: "Quick question",
__runtime__: [timeout: 30_000] # 30 seconds
)

Global Configuration
Configure timeouts in config/config.exs:
config :snakebridge,
runtime: [
library_profiles: %{"dspy" => :ml_inference},
# Or set global default:
# timeout_profile: :ml_inference
]

Architecture
DSPex uses SnakeBridge's Universal FFI to call DSPy directly:
Elixir (DSPex.call/4)
↓
SnakeBridge.call/4
↓
Snakepit gRPC
↓
Python DSPy
↓
LLM Providers

The Python process lifecycle is managed automatically by Snakepit.
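Because DSPex is a thin layer over SnakeBridge, every DSPex helper reduces to a plain SnakeBridge call. The sketch below illustrates the delegation shown in the diagram above; the exact SnakeBridge.call/4 signature is assumed from that diagram, not confirmed:

```elixir
# Calling through the DSPex wrapper...
{:ok, predict} = DSPex.call("dspy", "Predict", ["question -> answer"])

# ...is, per the architecture diagram, roughly equivalent to calling
# SnakeBridge's Universal FFI directly (illustrative assumption):
{:ok, predict} = SnakeBridge.call("dspy", "Predict", ["question -> answer"])
```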
Summary
Functions
Get an attribute from a Python object reference.
Bang version of attr/2.
Encode binary data as Python bytes.
Call any DSPy function or class.
Bang version of call/4 - raises on error, returns the value directly.
Create a ChainOfThought module.
Bang version of chain_of_thought/2.
Configure DSPy global settings.
Bang version of configure/1 - raises on error.
Get a module attribute.
Bang version of get/2.
Create a DSPy language model.
Bang version of lm/2 - raises on error.
Call a method on a Python object reference.
Bang version of method/4.
Create a Predict module.
Bang version of predict/2.
Check if a value is a Python object reference.
Run DSPex code with automatic Python lifecycle management.
Set an attribute on a Python object reference.
Create a timeout option for exact milliseconds.
Timeout profile atoms for use with the __runtime__ option.
Add timeout configuration to options.
Functions
Get an attribute from a Python object reference.
Bang version of attr/2.
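A concrete illustration, mirroring the Quick Start (where attr!/2 reads the answer field from a prediction result):

```elixir
# result is a Python object reference returned by an earlier DSPex call.
# attr/2 returns {:ok, value} or {:error, reason}:
{:ok, answer} = DSPex.attr(result, "answer")

# attr!/2 returns the value directly and raises on error:
answer = DSPex.attr!(result, "answer")
```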
Encode binary data as Python bytes.
Call any DSPy function or class.
Examples
{:ok, result} = DSPex.call("dspy", "Predict", ["question -> answer"])
{:ok, result} = DSPex.call("dspy.teleprompt", "BootstrapFewShot", [], metric: metric)
Bang version of call/4 - raises on error, returns the value directly.
Create a ChainOfThought module.
Examples
{:ok, cot} = DSPex.chain_of_thought("question -> answer")
Bang version of chain_of_thought/2.
Configure DSPy global settings.
Examples
:ok = DSPex.configure(lm: lm)
:ok = DSPex.configure(lm: lm, rm: retriever)
Bang version of configure/1 - raises on error.
Get a module attribute.
Bang version of get/2.
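A sketch of typical usage. dspy does expose a module-level settings attribute, but the exact call shape here is an assumption based on get/2's description, not a confirmed example:

```elixir
# Read a module-level attribute from the dspy module (hypothetical example):
{:ok, settings} = DSPex.get("dspy", "settings")

# Bang version raises on error:
settings = DSPex.get!("dspy", "settings")
```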
Create a DSPy language model.
Examples
{:ok, lm} = DSPex.lm("openai/gpt-4o-mini")
{:ok, lm} = DSPex.lm("anthropic/claude-3-sonnet-20240229", temperature: 0.7)
Bang version of lm/2 - raises on error.
Call a method on a Python object reference.
Bang version of method/4.
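The non-bang counterpart of the Quick Start call: positional arguments go in the list, keyword inputs follow it:

```elixir
# predict is a Python object reference from a previous call.
# method/4 returns {:ok, ref} or {:error, reason}:
{:ok, result} = DSPex.method(predict, "forward", [], question: "What is 2+2?")

# Bang version returns the reference directly and raises on error:
result = DSPex.method!(predict, "forward", [], question: "What is 2+2?")
```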
Create a Predict module.
Examples
{:ok, predict} = DSPex.predict("question -> answer")
{:ok, predict} = DSPex.predict("context, question -> answer")
Bang version of predict/2.
Check if a value is a Python object reference.
Run DSPex code with automatic Python lifecycle management.
Wraps your code in Snakepit.run_as_script/2 which:
- Starts the Python process pool
- Runs your code
- Cleans up on exit
Pass halt: true in opts if you need to force the BEAM to exit
(for example, when running inside wrapper scripts).
Example
DSPex.run(fn ->
lm = DSPex.lm!("openai/gpt-4o-mini")
DSPex.configure!(lm: lm)
# ... your DSPy code
end)
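When running inside a wrapper script, the halt: true option described above forces the BEAM to exit once the function returns. The sketch below assumes run accepts an options list as its second argument, which the "Pass halt: true in opts" note implies but does not spell out:

```elixir
# Force the BEAM to exit when the function returns
# (useful when DSPex.run is invoked from a wrapper script):
DSPex.run(fn ->
  lm = DSPex.lm!("openai/gpt-4o-mini")
  DSPex.configure!(lm: lm)
  # ... your DSPy code
end, halt: true)
```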
Set an attribute on a Python object reference.
Create a timeout option for exact milliseconds.
Returns a keyword list ready to merge into call options.
Examples
DSPex.method!(predict, "forward", [],
Keyword.merge([question: "test"], DSPex.timeout_ms(120_000))
)
Timeout profile atoms for use with the __runtime__ option.
Returns a keyword list ready to merge into call options.
Examples
DSPex.method!(predict, "forward", [],
Keyword.merge([question: "test"], DSPex.timeout_profile(:batch_job))
)
Add timeout configuration to options.
This is a convenience helper for adding __runtime__ timeout options.
Options
:timeout - Exact timeout in milliseconds
:timeout_profile - Use a predefined profile (:default, :streaming, :ml_inference, :batch_job)
Examples
# Set exact timeout
opts = DSPex.with_timeout([], timeout: 60_000) # 1 minute
DSPex.method!(predict, "forward", [], Keyword.merge(opts, question: "..."))
# Use batch profile for long operations
opts = DSPex.with_timeout([question: "complex"], timeout_profile: :batch_job)
DSPex.method!(predict, "forward", [], opts)
# Inline usage
DSPex.method!(predict, "forward", [],
DSPex.with_timeout([question: "test"], timeout: 30_000)
)