Submodule bindings for vllm.tokenizers.
Version
- Requested: 0.14.0
- Observed at generation: 0.14.0
Runtime Options
All functions accept a __runtime__ option for controlling execution behavior:
Vllm.Tokenizers.some_function(args, __runtime__: [timeout: 120_000])
Supported runtime options
- `:timeout` - Call timeout in milliseconds (default: 120,000ms / 2 minutes)
- `:timeout_profile` - Use a named profile (`:default`, `:ml_inference`, `:batch_job`, `:streaming`)
- `:stream_timeout` - Timeout for streaming operations (default: 1,800,000ms / 30 minutes)
- `:session_id` - Override the session ID for this call
- `:pool_name` - Target a specific Snakepit pool (multi-pool setups)
- `:affinity` - Override session affinity (`:hint`, `:strict_queue`, `:strict_fail_fast`)
Timeout Profiles
- `:default` - 2 minute timeout for regular calls
- `:ml_inference` - 10 minute timeout for ML/LLM workloads
- `:batch_job` - Unlimited timeout for long-running jobs
- `:streaming` - 2 minute timeout, 30 minute `stream_timeout`
Example with timeout override
# For a long-running ML inference call
Vllm.Tokenizers.predict(data, __runtime__: [timeout_profile: :ml_inference])
# Or explicit timeout
Vllm.Tokenizers.predict(data, __runtime__: [timeout: 600_000])
# Route to a pool and enforce strict affinity
Vllm.Tokenizers.predict(data, __runtime__: [pool_name: :strict_pool, affinity: :strict_queue])
See SnakeBridge.Defaults for global timeout configuration.
Summary
Functions
Python module attribute vllm.tokenizers.__all__.
Gets a tokenizer for the given model name via HuggingFace or ModelScope.
Python binding for vllm.tokenizers.cached_tokenizer_from_config.
Gets a tokenizer for the given model name via HuggingFace or ModelScope.
Python module attribute vllm.tokenizers.TokenizerRegistry.
Functions
@spec __all__() :: {:ok, [term()]} | {:error, Snakepit.Error.t()}
Python module attribute vllm.tokenizers.__all__.
Returns
list(term())
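As a quick sketch (assuming a running Snakepit pool behind the bindings), the attribute binding can be called like any other function:

```elixir
# List the public names exported by the Python vllm.tokenizers module
{:ok, names} = Vllm.Tokenizers.__all__()
Enum.each(names, &IO.puts/1)
```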
@spec cached_get_tokenizer(term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Gets a tokenizer for the given model name via HuggingFace or ModelScope.
Parameters
- tokenizer_name (term())
- args (term())
- tokenizer_cls (term(), keyword-only, default: `<class 'vllm.tokenizers.protocol.TokenizerLike'>`)
- trust_remote_code (boolean(), keyword-only, default: False)
- revision (term(), keyword-only, default: None)
- download_dir (term(), keyword-only, default: None)
- kwargs (term())
Returns
term()
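A hedged usage sketch: the model name below is a placeholder, and the keyword options mirror the keyword-only parameters listed above:

```elixir
# Fetch (and cache) a tokenizer; "org/model" is a placeholder model name
case Vllm.Tokenizers.cached_get_tokenizer("org/model",
       trust_remote_code: false,
       __runtime__: [timeout_profile: :ml_inference]
     ) do
  {:ok, tokenizer} -> tokenizer
  {:error, %Snakepit.Error{}} = error -> error
end
```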
@spec cached_tokenizer_from_config(term(), keyword()) :: {:ok, nil} | {:error, Snakepit.Error.t()}
Python binding for vllm.tokenizers.cached_tokenizer_from_config.
Parameters
- model_config (term())
Returns
nil
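A minimal sketch, assuming `model_config` is an opaque reference to a Python-side ModelConfig object obtained from another binding call:

```elixir
# `model_config` is assumed to be a reference to a Python ModelConfig
# object returned by another binding; on success this binding yields
# {:ok, nil} per the spec above
{:ok, nil} = Vllm.Tokenizers.cached_tokenizer_from_config(model_config)
```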
@spec get_tokenizer(term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Gets a tokenizer for the given model name via HuggingFace or ModelScope.
Parameters
- tokenizer_name (term())
- args (term())
- tokenizer_cls (term(), keyword-only, default: `<class 'vllm.tokenizers.protocol.TokenizerLike'>`)
- trust_remote_code (boolean(), keyword-only, default: False)
- revision (term(), keyword-only, default: None)
- download_dir (term(), keyword-only, default: None)
- kwargs (term())
Returns
term()
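A sketch combining a keyword-only parameter with an explicit runtime timeout; the model name and directory are placeholders:

```elixir
# Placeholder model name; explicit download_dir and a 10-minute timeout
{:ok, tokenizer} =
  Vllm.Tokenizers.get_tokenizer("org/model",
    download_dir: "/tmp/tokenizers",
    __runtime__: [timeout: 600_000]
  )
```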
@spec tokenizer_registry() :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python module attribute vllm.tokenizers.TokenizerRegistry.
Returns
term()
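Reading the module attribute follows the same pattern as the function bindings above:

```elixir
# The registry comes back as an opaque reference to the Python-side
# vllm.tokenizers.TokenizerRegistry object
{:ok, registry} = Vllm.Tokenizers.tokenizer_registry()
```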