A language model supporting chat or text completion requests for use with DSPy modules.
Summary
Functions
Python method LM._check_truncation.
Extract citations from LiteLLM response if available.
Python method LM._get_cached_completion_fn.
Process the response of OpenAI chat completion API and extract outputs.
Python method LM._process_lm_response.
Process the response of OpenAI Response API and extract outputs.
Python method LM._run_finetune_job.
Python method LM._warn_zero_temp_rollout.
Python method LM.acall.
Async forward pass for the language model.
Returns a copy of the language model with possibly updated parameters.
Python method LM.dump_state.
Python method LM.finetune.
Forward pass for the language model.
Python method LM.infer_provider.
Python method LM.inspect_history.
Python method LM.kill.
Python method LM.launch.
Create a new language model instance for use with DSPy modules and programs.
Python method LM.reinforce.
Python method LM.update_history.
Types
Functions
@spec _check_truncation(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM._check_truncation.
Parameters
results(term())
Returns
term()
@spec _extract_citations_from_response(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Extract citations from LiteLLM response if available.
Reference: https://docs.litellm.ai/docs/providers/anthropic#beta-citations-api
Parameters
choice- The choice object from response.choices
Returns
term()
@spec _get_cached_completion_fn(SnakeBridge.Ref.t(), term(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM._get_cached_completion_fn.
Parameters
completion_fn(term())
cache(term())
Returns
term()
@spec _process_completion(SnakeBridge.Ref.t(), term(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Process the response of OpenAI chat completion API and extract outputs.
Parameters
response- The OpenAI chat completion response: https://platform.openai.com/docs/api-reference/chat/object
merged_kwargs- Merged kwargs from self.kwargs and method kwargs
Returns
term()
@spec _process_lm_response(SnakeBridge.Ref.t(), term(), term(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM._process_lm_response.
Parameters
response(term())
prompt(term())
messages(term())
kwargs(term())
Returns
term()
@spec _process_response(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Process the response of OpenAI Response API and extract outputs.
Parameters
response- The OpenAI Response API response: https://platform.openai.com/docs/api-reference/responses/object
Returns
term()
@spec _run_finetune_job(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM._run_finetune_job.
Parameters
job(term())
Returns
term()
@spec _warn_zero_temp_rollout(SnakeBridge.Ref.t(), term(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM._warn_zero_temp_rollout.
Parameters
temperature(term())
rollout_id(term())
Returns
term()
@spec acall(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, [term()]} | {:error, Snakepit.Error.t()}
Python method LM.acall.
Parameters
prompt(term() default: None)
messages(term() default: None)
kwargs(term())
Returns
list(term())
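A minimal usage sketch of acall/3. The module alias LM and the convention of passing the prompt in the positional argument list are assumptions; the actual generated module name and argument mapping depend on the bridge. ref is the SnakeBridge.Ref.t() returned by new/3.

```elixir
# Hypothetical sketch: asynchronous LM call through the bridge.
# Assumes `LM` is the generated wrapper module and `ref` was obtained from LM.new/3.
case LM.acall(ref, ["What is the capital of France?"], []) do
  {:ok, outputs} ->
    # On success, `outputs` is a list of completions (list(term())).
    Enum.each(outputs, &IO.inspect/1)

  {:error, %Snakepit.Error{} = err} ->
    # Transport or Python-side failure surfaces as a Snakepit.Error struct.
    IO.inspect(err, label: "LM call failed")
end
```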
@spec aforward(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Async forward pass for the language model.
Subclasses must implement this method, and the response must match either the OpenAI chat completion response format or the OpenAI Response API format (the two formats this class knows how to process).
Parameters
prompt(term() default: None)
messages(term() default: None)
kwargs(term())
Returns
term()
@spec copy(SnakeBridge.Ref.t(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Returns a copy of the language model with possibly updated parameters.
Any provided keyword arguments update the corresponding attributes or LM kwargs of
the copy. For example, lm.copy(rollout_id=1, temperature=1.0) returns an LM whose
requests use a different rollout ID at non-zero temperature to bypass cache collisions.
Parameters
kwargs(term())
Returns
term()
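A sketch of the copy/2 call described above, assuming the generated wrapper module is aliased as LM. It forks an LM so otherwise-identical requests miss the cache, per the rollout_id semantics documented here.

```elixir
# Hypothetical sketch: fork the LM with a new rollout ID at non-zero temperature
# so repeated identical requests bypass previously cached responses.
{:ok, exploratory_lm} = LM.copy(ref, rollout_id: 1, temperature: 1.0)
```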
@spec dump_state(SnakeBridge.Ref.t(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.dump_state.
Returns
term()
@spec finetune(SnakeBridge.Ref.t(), [%{optional(String.t()) => term()}], term(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.finetune.
Parameters
train_data(list(%{optional(String.t()) => term()}))
train_data_format(term())
train_kwargs(term() default: None)
Returns
term()
@spec forward(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Forward pass for the language model.
Subclasses must implement this method, and the response must match either the OpenAI chat completion response format or the OpenAI Response API format (the two formats this class knows how to process).
Parameters
prompt(term() default: None)
messages(term() default: None)
kwargs(term())
Returns
term()
@spec infer_provider(SnakeBridge.Ref.t(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.infer_provider.
Returns
term()
@spec inspect_history(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.inspect_history.
Parameters
n(integer() default: 1)
Returns
term()
@spec kill(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.kill.
Parameters
launch_kwargs(term() default: None)
Returns
term()
@spec launch(SnakeBridge.Ref.t(), [term()], keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.launch.
Parameters
launch_kwargs(term() default: None)
Returns
term()
@spec new(String.t(), [term()], keyword()) :: {:ok, SnakeBridge.Ref.t()} | {:error, Snakepit.Error.t()}
Create a new language model instance for use with DSPy modules and programs.
Parameters
model- The model to use. This should be a string of the form "llm_provider/llm_name" supported by LiteLLM. For example, "openai/gpt-4o".
model_type- The type of the model, either "chat" or "text".
temperature- The sampling temperature to use when generating responses.
max_tokens- The maximum number of tokens to generate per response.
cache- Whether to cache model responses for reuse, improving performance and reducing costs.
callbacks- A list of callback functions to run before and after each request.
num_retries- The number of times to retry a request if it fails transiently due to network error, rate limiting, etc. Requests are retried with exponential backoff.
provider- The provider to use. If not specified, the provider is inferred from the model.
finetuning_model- The model to finetune. For some providers, the models available for finetuning differ from the models available for inference.
rollout_id- Optional integer used to differentiate cache entries for otherwise identical requests. Different values bypass DSPy's caches while still caching future calls with the same inputs and rollout ID. Note that rollout_id only affects generation when temperature is non-zero. This argument is stripped before sending requests to the provider.
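The constructor parameters above can be sketched as follows. The empty list is the positional-argument slot from the new/3 spec; the LM alias and the exact option-passing convention are assumptions about the bridge's generated API.

```elixir
# Hypothetical sketch: create a chat LM backed by a LiteLLM model string.
{:ok, ref} =
  LM.new("openai/gpt-4o", [],
    model_type: "chat",
    temperature: 0.7,
    max_tokens: 1000,
    cache: true,
    num_retries: 3
  )
```

The returned ref is the SnakeBridge.Ref.t() handle that the other functions on this page take as their first argument.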
@spec reinforce(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.reinforce.
Parameters
train_kwargs(term())
Returns
term()
@spec update_history(SnakeBridge.Ref.t(), term(), keyword()) :: {:ok, term()} | {:error, Snakepit.Error.t()}
Python method LM.update_history.
Parameters
entry(term())
Returns
term()