Vllm.LLM (VLLM v0.3.0)


An LLM for generating texts from given prompts and sampling parameters.

This class includes a tokenizer, a language model (possibly distributed across multiple GPUs), and GPU memory space allocated for intermediate states (aka KV cache). Given a batch of prompts and sampling parameters, this class generates texts from the model, using an intelligent batching mechanism and efficient memory management.

Parameters

  • model - The name or path of a HuggingFace Transformers model.
  • tokenizer - The name or path of a HuggingFace Transformers tokenizer.
  • tokenizer_mode - The tokenizer mode. "auto" will use the fast tokenizer if available, and "slow" will always use the slow tokenizer.
  • skip_tokenizer_init - If true, skip initialization of the tokenizer and detokenizer. The input is then expected to provide valid prompt_token_ids and None for the prompt.
  • trust_remote_code - Trust remote code (e.g., from HuggingFace) when downloading the model and tokenizer.
  • allowed_local_media_path - Allows API requests to read local images or videos from directories specified by the server file system. This is a security risk and should only be enabled in trusted environments.
  • allowed_media_domains - If set, only media URLs that belong to these domains can be used for multi-modal inputs.
  • tensor_parallel_size - The number of GPUs to use for distributed execution with tensor parallelism.
  • dtype - The data type for the model weights and activations. Currently, we support float32, float16, and bfloat16. If auto, we use the dtype attribute of the Transformers model's config. However, if the dtype in the config is float32, we will use float16 instead.
  • quantization - The method used to quantize the model weights. Currently, we support "awq", "gptq", and "fp8" (experimental). If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
  • revision - The specific model version to use. It can be a branch name, a tag name, or a commit id.
  • tokenizer_revision - The specific tokenizer version to use. It can be a branch name, a tag name, or a commit id.
  • seed - The seed to initialize the random number generator for sampling.
  • gpu_memory_utilization - The ratio (between 0 and 1) of GPU memory to reserve for the model weights, activations, and KV cache. Higher values will increase the KV cache size and thus improve the model's throughput. However, if the value is too high, it may cause out-of-memory (OOM) errors.
  • kv_cache_memory_bytes - Size of the KV cache per GPU in bytes. By default, this is set to None and vLLM automatically infers the KV cache size based on gpu_memory_utilization. However, users may want to specify the KV cache memory size manually; kv_cache_memory_bytes allows finer-grained control over how much memory is used than gpu_memory_utilization. Note that gpu_memory_utilization is ignored when kv_cache_memory_bytes is not None.
  • swap_space - The size (GiB) of CPU memory per GPU to use as swap space. This can be used for temporarily storing the states of requests whose best_of sampling parameter is larger than 1. If all requests will have best_of=1, you can safely set this to 0; otherwise, too small a value may cause out-of-memory (OOM) errors. Note that best_of is only supported in V0.
  • cpu_offload_gb - The size (GiB) of CPU memory to use for offloading the model weights. This virtually increases the GPU memory space you can use to hold the model weights, at the cost of CPU-GPU data transfer for every forward pass.
  • enforce_eager - Whether to enforce eager execution. If True, we disable CUDA graphs and always execute the model in eager mode. If False, we use CUDA graphs and eager execution in a hybrid manner.
  • enable_return_routed_experts - Whether to return routed experts.
  • disable_custom_all_reduce - See [ParallelConfig][vllm.config.ParallelConfig].
  • hf_token - The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • hf_overrides - If a dictionary, contains arguments to be forwarded to the HuggingFace config. If a callable, it is called to update the HuggingFace config.
  • mm_processor_kwargs - Arguments to be forwarded to the model's processor for multi-modal data, e.g., image processor. Overrides for the multi-modal processor obtained from AutoProcessor.from_pretrained. The available overrides depend on the model that is being run. For example, for Phi-3-Vision: {"num_crops": 4}.
  • pooler_config - Initialize non-default pooling config for the pooling model. e.g. PoolerConfig(seq_pooling_type="MEAN", normalize=False).
  • compilation_config - Either an integer or a dictionary. If it is an integer, it is used as the mode of compilation optimization. If it is a dictionary, it can specify the full compilation configuration.
  • attention_config - Configuration for attention mechanisms. Can be a dictionary or an AttentionConfig instance. If a dictionary, it will be converted to an AttentionConfig. Allows specifying the attention backend and other attention-related settings.
  • **kwargs - Arguments for [EngineArgs][vllm.EngineArgs].

Notes

This class is intended to be used for offline inference. For online serving, use the [AsyncLLMEngine][vllm.AsyncLLMEngine] class instead.
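
For example, a minimal offline-inference sketch through this binding might look as follows (the model name is only illustrative, and passing an empty positional-argument list is assumed to leave sampling_params at its default):

    # Construct the engine for a HuggingFace model (illustrative model name).
    {:ok, llm} = Vllm.LLM.new("facebook/opt-125m", tensor_parallel_size: 1)

    # Generate completions with the default sampling parameters.
    {:ok, outputs} =
      Vllm.LLM.generate(llm, ["Hello, my name is", "The capital of France is"], [])

    Enum.each(outputs, &IO.inspect/1)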

Summary

Functions

  • _add_request - Add a single request to the engine.
  • _cross_encoding_score - Compute cross-encoder scores for the given data pairs.
  • _embedding_score - Compute similarity scores from embeddings of the given inputs.
  • _get_beam_search_lora_requests - Get the optional lora request corresponding to each prompt.
  • _get_modality_specific_lora_reqs - Resolve modality-specific LoRA requests for the given prompts.
  • _process_inputs - Use the Processor to process inputs for LLMEngine.
  • _resolve_single_prompt_mm_lora - Resolve the multi-modal LoRA request for a single prompt.
  • _run_engine - Run the engine until all pending requests have finished.
  • _validate_and_add_requests - Validate the given prompts and parameters and add them to the engine.
  • _validate_mm_data_and_uuids - Validate that if any multi-modal data is skipped (i.e. None), its corresponding UUID is set.
  • apply_model - Run a function directly on the model inside each worker, returning the result for each of them.
  • beam_search - Generate sequences using beam search.
  • chat - Generate responses for a chat conversation.
  • classify - Generate class logits for each prompt.
  • collective_rpc - Execute an RPC call on all workers.
  • embed - Generate an embedding vector for each prompt.
  • encode - Apply pooling to the hidden states corresponding to the input prompts.
  • generate - Generates the completions for the input prompts.
  • get_default_sampling_params - Return the default sampling parameters for the model.
  • get_metrics - Return a snapshot of aggregated metrics from Prometheus.
  • get_tokenizer - Return the tokenizer used by the LLM.
  • new - LLM constructor.
  • preprocess_chat - Generate the prompt for a chat conversation, which can then be used as input for the other LLM methods.
  • reset_mm_cache - Reset the multi-modal processor cache.
  • reset_prefix_cache - Reset the prefix cache.
  • reward - Generate rewards for each prompt.
  • score - Generate similarity scores for all pairs <text, text_pair> or <multi-modal data, multi-modal data pair>.
  • sleep - Put the engine to sleep; the engine should not process any requests while asleep.
  • start_profile - Start profiling the engine.
  • stop_profile - Stop profiling the engine.
  • wake_up - Wake up the engine from sleep mode.

Types

t()

@opaque t()

Functions

_add_request(ref, prompt, params, args, opts \\ [])

@spec _add_request(SnakeBridge.Ref.t(), term(), term(), [term()], keyword()) ::
  {:ok, String.t()} | {:error, Snakepit.Error.t()}

Add a single request to the engine.

Parameters

  • prompt (term())
  • params (term())
  • lora_request (term() default: None)
  • priority (integer() default: 0)
  • tokenization_kwargs (term() default: None)

Returns

  • String.t()

_cross_encoding_score(ref, tokenizer, data_1, data_2, args, opts \\ [])

@spec _cross_encoding_score(
  SnakeBridge.Ref.t(),
  term(),
  term(),
  term(),
  [term()],
  keyword()
) ::
  {:ok, [Vllm.Outputs.ScoringRequestOutput.t()]} | {:error, Snakepit.Error.t()}

Compute cross-encoder scores for the given data pairs.

Parameters

  • tokenizer (term())
  • data_1 (term())
  • data_2 (term())
  • truncate_prompt_tokens (term() default: None)
  • use_tqdm (term() default: True)
  • pooling_params (term() default: None)
  • lora_request (term() default: None)
  • tokenization_kwargs (term() default: None)
  • score_template (term() default: None)

Returns

  • list(Vllm.Outputs.ScoringRequestOutput.t())

_embedding_score(ref, tokenizer, text_1, text_2, args, opts \\ [])

@spec _embedding_score(
  SnakeBridge.Ref.t(),
  term(),
  [term()],
  [term()],
  [term()],
  keyword()
) ::
  {:ok, [Vllm.Outputs.ScoringRequestOutput.t()]} | {:error, Snakepit.Error.t()}

Compute similarity scores from embeddings of the given inputs.

Parameters

  • tokenizer (term())
  • text_1 (list(term()))
  • text_2 (list(term()))
  • truncate_prompt_tokens (term() default: None)
  • use_tqdm (term() default: True)
  • pooling_params (term() default: None)
  • lora_request (term() default: None)
  • tokenization_kwargs (term() default: None)

Returns

  • list(Vllm.Outputs.ScoringRequestOutput.t())

_get_beam_search_lora_requests(ref, lora_request, prompts, opts \\ [])

@spec _get_beam_search_lora_requests(SnakeBridge.Ref.t(), term(), [term()], keyword()) ::
  {:ok, [term()]} | {:error, Snakepit.Error.t()}

Get the optional lora request corresponding to each prompt.

Parameters

  • lora_request (term())
  • prompts (list(term()))

Returns

  • list(term())

_get_modality_specific_lora_reqs(ref, prompts, lora_request, opts \\ [])

@spec _get_modality_specific_lora_reqs(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Resolve modality-specific LoRA requests for the given prompts.

Parameters

  • prompts (term())
  • lora_request (term())

Returns

  • term()

_process_inputs(ref, request_id, engine_prompt, params, opts \\ [])

@spec _process_inputs(SnakeBridge.Ref.t(), String.t(), term(), term(), keyword()) ::
  {:ok, {term(), %{optional(String.t()) => term()}}}
  | {:error, Snakepit.Error.t()}

Use the Processor to process inputs for LLMEngine.

Parameters

  • request_id (String.t())
  • engine_prompt (term())
  • params (term())
  • lora_request (term() keyword-only, required)
  • priority (integer() keyword-only, required)
  • tokenization_kwargs (term() keyword-only default: None)

Returns

  • {term(), %{optional(String.t()) => term()}}

_resolve_single_prompt_mm_lora(ref, prompt, lora_request, default_mm_loras, opts \\ [])

@spec _resolve_single_prompt_mm_lora(
  SnakeBridge.Ref.t(),
  term(),
  term(),
  term(),
  keyword()
) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Resolve the multi-modal LoRA request for a single prompt.

Parameters

  • prompt (term())
  • lora_request (term())
  • default_mm_loras (term())

Returns

  • term()

_run_engine(ref, opts \\ [])

@spec _run_engine(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, [term()]} | {:error, Snakepit.Error.t()}

Run the engine until all pending requests have finished and return their outputs.

Parameters

  • use_tqdm (term() keyword-only default: True)

Returns

  • list(term())

_validate_and_add_requests(ref, prompts, params, opts \\ [])

@spec _validate_and_add_requests(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, nil} | {:error, Snakepit.Error.t()}

Validate the given prompts and parameters and add them to the engine.

Parameters

  • prompts (term())
  • params (term())
  • use_tqdm (term() keyword-only default: True)
  • lora_request (term() keyword-only, required)
  • priority (term() keyword-only default: None)
  • tokenization_kwargs (term() keyword-only default: None)

Returns

  • nil

_validate_mm_data_and_uuids(ref, multi_modal_data, multi_modal_uuids, opts \\ [])

@spec _validate_mm_data_and_uuids(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Validate that if any multi-modal data is skipped (i.e. None), then its corresponding UUID must be set.

Parameters

  • multi_modal_data (term())
  • multi_modal_uuids (term())

Returns

  • term()

apply_model(ref, func, opts \\ [])

@spec apply_model(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, [term()]} | {:error, Snakepit.Error.t()}

Run a function directly on the model inside each worker, returning the result for each of them.

!!! warning

To reduce the overhead of data transfer, avoid returning large arrays or tensors from this method. If you must return them, make sure you move them to the CPU first to avoid taking up additional VRAM!

Parameters

  • func (term())

Returns

  • list(term())

beam_search(ref, prompts, params, args, opts \\ [])

Generate sequences using beam search.

Parameters

  • prompts - A list of prompts. Each prompt can be a string or a list of token IDs.
  • params - The beam search parameters.
  • lora_request - LoRA request to use for generation, if any.
  • use_tqdm - Whether to use tqdm to display the progress bar.
  • concurrency_limit - The maximum number of concurrent requests. If None, the number of concurrent requests is unlimited.

Returns

  • list(Vllm.BeamSearch.BeamSearchOutput.t())

chat(ref, messages, args, opts \\ [])

@spec chat(SnakeBridge.Ref.t(), term(), [term()], keyword()) ::
  {:ok, [Vllm.Outputs.RequestOutput.t()]} | {:error, Snakepit.Error.t()}

Generate responses for a chat conversation.

The chat conversation is converted into a text prompt using the tokenizer, and the [generate][vllm.LLM.generate] method is called to generate the responses.

Multi-modal inputs can be passed in the same way you would pass them to the OpenAI API.

Parameters

  • messages - A list of conversations or a single conversation.
  • sampling_params - The sampling parameters for text generation. If None, we use the default sampling parameters. When it is a single value, it is applied to every prompt. When it is a list, the list must have the same length as the prompts and it is paired one by one with the prompt.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.
  • chat_template - The template to use for structuring the chat. If not provided, the model's default chat template will be used.
  • chat_template_content_format - The format to render message content. "string" renders the content as a string (e.g. "Who are you?"). "openai" renders the content as a list of dictionaries, similar to the OpenAI schema (e.g. [{"type": "text", "text": "Who are you?"}]).
  • add_generation_prompt - If True, adds a generation template to each message.
  • continue_final_message - If True, continues the final message in the conversation instead of starting a new one. Cannot be True if add_generation_prompt is also True.
  • chat_template_kwargs - Additional kwargs to pass to the chat template.
  • mm_processor_kwargs - Multimodal processor kwarg overrides for this chat request. Only used for offline requests.

Returns

  • list(Vllm.Outputs.RequestOutput.t())
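
A minimal chat sketch, assuming llm is a reference returned by new/2 and that OpenAI-style message maps are accepted as shown; an empty positional-argument list leaves sampling_params at its default:

    messages = [
      %{"role" => "system", "content" => "You are a helpful assistant."},
      %{"role" => "user", "content" => "Who are you?"}
    ]

    # The conversation is rendered with the model's chat template, then generated.
    {:ok, outputs} = Vllm.LLM.chat(llm, messages, [])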

classify(ref, prompts, opts \\ [])

@spec classify(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, [Vllm.Outputs.ClassificationRequestOutput.t()]}
  | {:error, Snakepit.Error.t()}

Generate class logits for each prompt.

This class automatically batches the given prompts, considering the memory constraint. For the best performance, put all of your prompts into a single list and pass it to this method.

Parameters

  • prompts - The prompts to the LLM. You may pass a sequence of prompts for batch inference. See [PromptType][vllm.inputs.PromptType] for more details about the format of each prompt.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.
  • pooling_params - The pooling parameters for pooling. If None, we use the default pooling parameters.

Returns

  • list(Vllm.Outputs.ClassificationRequestOutput.t())
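
A minimal sketch, assuming llm wraps a classification (pooling) model and the default pooling parameters are used:

    {:ok, results} =
      Vllm.LLM.classify(llm, ["This movie was great!", "Terrible service, would not recommend."])

    Enum.each(results, &IO.inspect/1)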

collective_rpc(ref, method, args, opts \\ [])

@spec collective_rpc(SnakeBridge.Ref.t(), term(), [term()], keyword()) ::
  {:ok, [term()]} | {:error, Snakepit.Error.t()}

Execute an RPC call on all workers.

Parameters

  • method - Name of the worker method to execute, or a callable that is serialized and sent to all workers to execute.
  • timeout - Maximum time in seconds to wait for execution. Raises a [TimeoutError][] on timeout. None means wait indefinitely.
  • args - Positional arguments to pass to the worker method.
  • kwargs - Keyword arguments to pass to the worker method.

Notes

It is recommended to use this API only to pass control messages, and to set up data-plane communication to pass data.

Returns

  • list(term())

embed(ref, prompts, opts \\ [])

@spec embed(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, [Vllm.Outputs.EmbeddingRequestOutput.t()]}
  | {:error, Snakepit.Error.t()}

Generate an embedding vector for each prompt.

This class automatically batches the given prompts, considering the memory constraint. For the best performance, put all of your prompts into a single list and pass it to this method.

Parameters

  • prompts - The prompts to the LLM. You may pass a sequence of prompts for batch inference. See [PromptType][vllm.inputs.PromptType] for more details about the format of each prompt.
  • pooling_params - The pooling parameters for pooling. If None, we use the default pooling parameters.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.

Returns

  • list(Vllm.Outputs.EmbeddingRequestOutput.t())
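
A minimal sketch, assuming llm wraps an embedding model; both prompts are batched in a single call, as recommended above:

    {:ok, embeddings} =
      Vllm.LLM.embed(llm, ["The quick brown fox", "jumps over the lazy dog"])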

encode(ref, prompts, args, opts \\ [])

@spec encode(SnakeBridge.Ref.t(), term(), [term()], keyword()) ::
  {:ok, [Vllm.Outputs.PoolingRequestOutput.t()]} | {:error, Snakepit.Error.t()}

Apply pooling to the hidden states corresponding to the input prompts.

This class automatically batches the given prompts, considering the memory constraint. For the best performance, put all of your prompts into a single list and pass it to this method.

Parameters

  • prompts - The prompts to the LLM. You may pass a sequence of prompts for batch inference. See [PromptType][vllm.inputs.PromptType] for more details about the format of each prompt.
  • pooling_params - The pooling parameters for pooling. If None, we use the default pooling parameters.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.
  • pooling_task - Override the pooling task to use.
  • tokenization_kwargs - Overrides the tokenization_kwargs set in pooling_params.

Notes

Using prompts and prompt_token_ids as keyword parameters is considered legacy and may be deprecated in the future. You should instead pass them via the `inputs` parameter.

Returns

  • list(Vllm.Outputs.PoolingRequestOutput.t())

generate(ref, prompts, args, opts \\ [])

@spec generate(SnakeBridge.Ref.t(), term(), [term()], keyword()) ::
  {:ok, [Vllm.Outputs.RequestOutput.t()]} | {:error, Snakepit.Error.t()}

Generates the completions for the input prompts.

This class automatically batches the given prompts, considering the memory constraint. For the best performance, put all of your prompts into a single list and pass it to this method.

Parameters

  • prompts - The prompts to the LLM. You may pass a sequence of prompts for batch inference. See [PromptType][vllm.inputs.PromptType] for more details about the format of each prompt.
  • sampling_params - The sampling parameters for text generation. If None, we use the default sampling parameters. When it is a single value, it is applied to every prompt. When it is a list, the list must have the same length as the prompts and it is paired one by one with the prompt.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.
  • priority - The priority of the requests, if any. Only applicable when priority scheduling policy is enabled. If provided, must be a list of integers matching the length of prompts, where each priority value corresponds to the prompt at the same index.

Notes

Using prompts and prompt_token_ids as keyword parameters is considered legacy and may be deprecated in the future. You should instead pass them via the `inputs` parameter.

Returns

  • list(Vllm.Outputs.RequestOutput.t())

get_default_sampling_params(ref, opts \\ [])

@spec get_default_sampling_params(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, Vllm.SamplingParamsClass.t()} | {:error, Snakepit.Error.t()}

Return the default sampling parameters for the model.

Returns

  • Vllm.SamplingParamsClass.t()

get_metrics(ref, opts \\ [])

@spec get_metrics(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, [term()]} | {:error, Snakepit.Error.t()}

Return a snapshot of aggregated metrics from Prometheus.

Notes

This method is only available with the V1 LLM engine.

Returns

  • list(term())
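
A minimal sketch, assuming the engine was constructed with the V1 LLM engine so that metrics are available:

    {:ok, metrics} = Vllm.LLM.get_metrics(llm)
    Enum.each(metrics, &IO.inspect/1)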

get_tokenizer(ref, opts \\ [])

@spec get_tokenizer(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}

Return the tokenizer used by the LLM.

Returns

  • term()

new(model, opts \\ [])

@spec new(
  String.t(),
  keyword()
) :: {:ok, SnakeBridge.Ref.t()} | {:error, Snakepit.Error.t()}

LLM constructor.

Parameters

  • model (String.t())
  • runner (term() keyword-only default: 'auto')
  • convert (term() keyword-only default: 'auto')
  • tokenizer (term() keyword-only default: None)
  • tokenizer_mode (term() | String.t() keyword-only default: 'auto')
  • skip_tokenizer_init (boolean() keyword-only default: False)
  • trust_remote_code (boolean() keyword-only default: False)
  • allowed_local_media_path (String.t() keyword-only default: '')
  • allowed_media_domains (term() keyword-only default: None)
  • tensor_parallel_size (integer() keyword-only default: 1)
  • dtype (term() keyword-only default: 'auto')
  • quantization (term() | nil keyword-only default: None)
  • revision (term() keyword-only default: None)
  • tokenizer_revision (term() keyword-only default: None)
  • seed (integer() keyword-only default: 0)
  • gpu_memory_utilization (float() keyword-only default: 0.9)
  • swap_space (float() keyword-only default: 4)
  • cpu_offload_gb (float() keyword-only default: 0)
  • enforce_eager (boolean() keyword-only default: False)
  • enable_return_routed_experts (boolean() keyword-only default: False)
  • disable_custom_all_reduce (boolean() keyword-only default: False)
  • hf_token (term() keyword-only default: None)
  • hf_overrides (term() keyword-only default: None)
  • mm_processor_kwargs (term() keyword-only default: None)
  • pooler_config (term() keyword-only default: None)
  • structured_outputs_config (term() keyword-only default: None)
  • profiler_config (term() keyword-only default: None)
  • attention_config (term() keyword-only default: None)
  • kv_cache_memory_bytes (term() keyword-only default: None)
  • compilation_config (term() keyword-only default: None)
  • logits_processors (term() keyword-only default: None)
  • kwargs (term())
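
A constructor sketch using a few of the keyword options above (the model name is illustrative, and it is assumed that the bridge maps Elixir strings, booleans, and numbers to the corresponding Python values):

    {:ok, llm} =
      Vllm.LLM.new("meta-llama/Llama-3.1-8B-Instruct",
        dtype: "bfloat16",
        gpu_memory_utilization: 0.85,
        enforce_eager: true,
        seed: 0
      )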

preprocess_chat(ref, messages, args, opts \\ [])

@spec preprocess_chat(SnakeBridge.Ref.t(), term(), [term()], keyword()) ::
  {:ok, [term()]} | {:error, Snakepit.Error.t()}

Generate the prompt for a chat conversation. The pre-processed prompt can then be used as input for the other LLM methods.

Refer to chat for a complete description of the arguments.

Parameters

  • messages (term())
  • chat_template (term() default: None)
  • chat_template_content_format (term() default: 'auto')
  • add_generation_prompt (boolean() default: True)
  • continue_final_message (boolean() default: False)
  • tools (term() default: None)
  • chat_template_kwargs (term() default: None)
  • mm_processor_kwargs (term() default: None)

Returns

  • list(term())
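
A minimal sketch that pre-processes a conversation once and reuses the resulting prompt with generate (assuming llm is a reference returned by new/2):

    messages = [%{"role" => "user", "content" => "Summarize the plot of Hamlet."}]

    # Render the conversation into a prompt, then feed it to generate/4.
    {:ok, prompts} = Vllm.LLM.preprocess_chat(llm, messages, [])
    {:ok, outputs} = Vllm.LLM.generate(llm, prompts, [])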

reset_mm_cache(ref, opts \\ [])

@spec reset_mm_cache(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, nil} | {:error, Snakepit.Error.t()}

Reset the multi-modal processor cache.

Returns

  • nil

reset_prefix_cache(ref, args, opts \\ [])

@spec reset_prefix_cache(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, boolean()} | {:error, Snakepit.Error.t()}

Reset the prefix cache.

Parameters

  • reset_running_requests (boolean() default: False)
  • reset_connector (boolean() default: False)

Returns

  • boolean()

reward(ref, prompts, opts \\ [])

@spec reward(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, [Vllm.Outputs.PoolingRequestOutput.t()]} | {:error, Snakepit.Error.t()}

Generate rewards for each prompt.

Parameters

  • prompts - The prompts to the LLM. You may pass a sequence of prompts for batch inference. See [PromptType][vllm.inputs.PromptType] for more details about the format of each prompt.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.
  • pooling_params - The pooling parameters for pooling. If None, we use the default pooling parameters.

Returns

  • list(Vllm.Outputs.PoolingRequestOutput.t())

score(ref, data_1, data_2, opts \\ [])

@spec score(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, [Vllm.Outputs.ScoringRequestOutput.t()]} | {:error, Snakepit.Error.t()}

Generate similarity scores for all pairs <text, text_pair> or <multi-modal data, multi-modal data pair>.

The inputs can be 1 -> 1, 1 -> N or N -> N. In the 1 -> N case the data_1 input will be replicated N times to pair with the data_2 inputs. The input pairs are used to build a list of prompts for the cross-encoder model. This class automatically batches the prompts, considering the memory constraint. For the best performance, put all of your inputs into a single list and pass it to this method.

Supports both text and multi-modal data (images, etc.) when used with appropriate multi-modal models. For multi-modal inputs, ensure the prompt structure matches the model's expected input format.

Parameters

  • data_1 - Can be a single prompt, a list of prompts or ScoreMultiModalParam, which can contain either text or multi-modal data. When a list, it must have the same length as the data_2 list.
  • data_2 - The data to pair with the query to form the input to the LLM. Can be text or multi-modal data. See [PromptType][vllm.inputs.PromptType] for more details about the format of each prompt.
  • use_tqdm - If True, shows a tqdm progress bar. If a callable (e.g., functools.partial(tqdm, leave=False)), it is used to create the progress bar. If False, no progress bar is created.
  • lora_request - LoRA request to use for generation, if any.
  • pooling_params - The pooling parameters for pooling. If None, we use the default pooling parameters.
  • chat_template - The chat template to use for the scoring. If None, we use the model's default chat template.

Returns

  • list(Vllm.Outputs.ScoringRequestOutput.t())
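
A minimal 1 -> N scoring sketch, assuming llm wraps a cross-encoder or embedding model suited for scoring:

    query = "What is the capital of France?"
    passages = ["Paris is the capital of France.", "The Eiffel Tower is in Paris."]

    # The single query is replicated to pair with each passage.
    {:ok, scores} = Vllm.LLM.score(llm, query, passages)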

sleep(ref, args, opts \\ [])

@spec sleep(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Put the engine to sleep. The engine should not process any requests.

The caller should guarantee that no requests are being processed during the sleep period, before wake_up is called.

Parameters

  • level - The sleep level. Level 1 sleep offloads the model weights to CPU memory and discards the KV cache, whose content is forgotten. It is suitable for sleeping and waking up the engine to run the same model again; make sure there is enough CPU memory to store the model weights. Level 2 sleep discards both the model weights and the KV cache, so the content of both is forgotten. It is suitable for sleeping and waking up the engine to run a different model or to update the model, where the previous model weights are not needed, and it reduces CPU memory pressure.

Returns

  • term()
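
A minimal sketch of a sleep/wake cycle, assuming the level is passed as the single positional argument and no requests are in flight:

    # Level 1: offload the weights to CPU memory and discard the KV cache.
    {:ok, _} = Vllm.LLM.sleep(llm, [1])

    # ... the GPU memory is free for other work here ...

    # Reallocate all memory before issuing new requests.
    {:ok, _} = Vllm.LLM.wake_up(llm, [])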

start_profile(ref, opts \\ [])

@spec start_profile(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, nil} | {:error, Snakepit.Error.t()}

Start profiling the engine.

Returns

  • nil

stop_profile(ref, opts \\ [])

@spec stop_profile(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, nil} | {:error, Snakepit.Error.t()}

Stop profiling the engine.

Returns

  • nil
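
A minimal profiling sketch wrapping a generate call between start_profile and stop_profile. In upstream vLLM the torch profiler is typically enabled by setting the VLLM_TORCH_PROFILER_DIR environment variable before the engine starts; whether and how that applies through this binding is an assumption:

    {:ok, _} = Vllm.LLM.start_profile(llm)
    {:ok, _outputs} = Vllm.LLM.generate(llm, ["Profile this prompt."], [])
    {:ok, _} = Vllm.LLM.stop_profile(llm)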

wake_up(ref, args, opts \\ [])

@spec wake_up(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Wake up the engine from sleep mode. See the [sleep][vllm.LLM.sleep] method for more details.

Parameters

  • tags - An optional list of tags to reallocate the engine memory for specific memory allocations. Values must be in ("weights", "kv_cache"). If None, all memory is reallocated. wake_up should be called with all tags (or None) before the engine is used again.

Returns

  • term()