Centralized defaults for all configurable values in SnakeBridge.
All values can be overridden in application config; each value is read with Application.get_env(:snakebridge, key), falling back to the documented default.
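For example, a value can be overridden in config.exs and read back through the Application environment. The key and default used below are the ones documented under Configuration Options:

# config/config.exs
config :snakebridge, introspector_timeout: 60_000

# at runtime, the value is read back with the documented default as fallback
Application.get_env(:snakebridge, :introspector_timeout, 30_000)
#=> 60_000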
Configuration Options
Introspection
:introspector_timeout - Timeout in ms for introspecting Python modules (default: 30_000)
:introspector_max_concurrency - Max concurrent introspection tasks (default: System.schedulers_online())
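A minimal override for this group might look like the following; the key names are as documented above, while the values are only illustrative:

config :snakebridge,
  # give slow Python packages more time to finish introspection
  introspector_timeout: 120_000,
  # run fewer introspection tasks in parallel on constrained machines
  introspector_max_concurrency: 2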
Wheel Selector (PyTorch/CUDA)
:pytorch_index_base_url - Base URL for the PyTorch wheel index (default: "https://download.pytorch.org/whl/")
:cuda_thresholds - CUDA version to variant mapping (default: [{"cu124", 124}, {"cu121", 120}, {"cu118", 117}])
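The default mapping is ordered from newest to oldest variant. The sketch below shows one plausible way such a {variant, threshold} list could be interpreted, picking the first variant whose threshold the detected CUDA version meets. It is only an illustration with a hypothetical helper, not SnakeBridge's actual selection code:

# Hypothetical illustration of consuming a {variant, threshold} list.
# The "cpu" fallback is used here purely for illustration.
defmodule WheelVariantSketch do
  @thresholds [{"cu124", 124}, {"cu121", 120}, {"cu118", 117}]

  # detected_cuda is the installed CUDA version as an integer, e.g. 121 for 12.1
  def select_variant(detected_cuda, thresholds \\ @thresholds) do
    Enum.find_value(thresholds, "cpu", fn {variant, min_version} ->
      if detected_cuda >= min_version, do: variant
    end)
  end
end

WheelVariantSketch.select_variant(121)
#=> "cu121"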
Session Lifecycle
:session_max_refs - Maximum refs per session (default: 10_000)
:session_ttl_seconds - Session time-to-live in seconds (default: 3600)
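Both values can be raised for sessions that hold many Python references or need to stay alive longer; the numbers below are only illustrative:

config :snakebridge,
  # allow more live Python refs per session
  session_max_refs: 25_000,
  # expire idle sessions after 30 minutes instead of the default hour
  session_ttl_seconds: 1_800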
Code Generation
:variadic_max_arity - Max arity for variadic wrappers (default: 8)
:generated_dir - Directory for generated code (default: "lib/snakebridge_generated")
:metadata_dir - Directory for metadata files (default: ".snakebridge")
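These can be overridden to place generated code and metadata elsewhere, or to raise the wrapper arity cap; the paths and values below are illustrative:

config :snakebridge,
  # cap generated variadic wrappers at arity 10
  variadic_max_arity: 10,
  # keep generated modules and metadata in custom locations
  generated_dir: "lib/generated/snakebridge",
  metadata_dir: "priv/snakebridge_metadata"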
Protocol
:protocol_version - Wire protocol version (default: 1)
:min_supported_version - Minimum supported protocol version (default: 1)
Runtime Timeouts
Runtime timeout configuration is nested under the :runtime key:
:timeout_profile - Default profile for calls (default: :default for calls, :streaming for streams)
:default_timeout - Default unary call timeout in ms (default: 120_000)
:default_stream_timeout - Default stream timeout in ms (default: 1_800_000)
:library_profiles - Map of library names to profiles (default: %{})
:profiles - Map of profile names to timeout settings
Built-in profiles:
:default - 120s timeout for regular calls
:streaming - 120s timeout, 30min stream_timeout
:ml_inference - 10min timeout for ML/LLM workloads
:batch_job - infinity timeout for long-running jobs
Example Configuration
config :snakebridge,
  introspector_timeout: 60_000,
  pytorch_index_base_url: "https://my-mirror.example.com/pytorch/",
  cuda_thresholds: [
    {"cu126", 126},
    {"cu124", 124},
    {"cu121", 120},
    {"cu118", 117}
  ],
  session_max_refs: 50_000,
  session_ttl_seconds: 7200,
  runtime: [
    timeout_profile: :default,
    library_profiles: %{
      "transformers" => :ml_inference,
      "torch" => :batch_job
    },
    profiles: %{
      default: [timeout: 120_000],
      ml_inference: [timeout: 600_000, stream_timeout: 1_800_000],
      batch_job: [timeout: :infinity, stream_timeout: :infinity]
    }
  ]
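With this configuration, calls into transformers would use the :ml_inference profile and calls into torch the :batch_job profile, while other libraries fall back to :timeout_profile. The snippet below only illustrates that precedence as implied by the option descriptions above; it is not SnakeBridge's actual lookup code:

# Illustrative only: one plausible way a library's unary timeout could resolve.
library_profiles = %{"transformers" => :ml_inference, "torch" => :batch_job}

profiles = %{
  default: [timeout: 120_000],
  ml_inference: [timeout: 600_000, stream_timeout: 1_800_000],
  batch_job: [timeout: :infinity, stream_timeout: :infinity]
}

profile = Map.get(library_profiles, "transformers", :default)
Keyword.get(profiles[profile], :timeout)
#=> 600_000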
Functions
@spec all() :: map()
Returns all current configuration values as a map.
@spec runtime_config() :: keyword()
Returns the runtime configuration keyword list.
@spec runtime_default_stream_timeout() :: timeout()
Returns the default stream timeout in milliseconds.
@spec runtime_default_timeout() :: timeout()
Returns the default unary call timeout in milliseconds.
@spec runtime_library_profiles() :: map()
Returns configured library-to-profile mappings.
Example:
config :snakebridge, runtime: [
  library_profiles: %{
    "transformers" => :ml_inference,
    "torch" => :batch_job
  }
]
@spec runtime_profiles() :: map()
Returns all timeout profiles.
Default profiles:
:default - 120s timeout for regular calls
:streaming - 120s timeout, 30min stream_timeout
:ml_inference - 10min timeout for ML/LLM workloads
:batch_job - infinity timeout for long-running jobs
Returns the timeout profile for a given call kind.
Call kinds:
:call - Regular function calls (default: :default)
:stream - Streaming calls (default: :streaming)
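Assuming the functions above live on a module named SnakeBridge.Config (the module name is not shown in the specs and is an assumption), the effective runtime defaults can be inspected directly:

# Module name assumed; return values reflect the documented defaults.
SnakeBridge.Config.runtime_default_timeout()
#=> 120_000

SnakeBridge.Config.runtime_default_stream_timeout()
#=> 1_800_000

SnakeBridge.Config.runtime_library_profiles()
#=> %{}  (empty unless overridden in config)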