Dspy.LM (DSPex v0.11.0)


A language model supporting chat or text completion requests for use with DSPy modules.

Summary

Functions

Python method LM._check_truncation.

Extract citations from LiteLLM response if available.

Python method LM._get_cached_completion_fn.

Process the response of OpenAI chat completion API and extract outputs.

Process the response of OpenAI Response API and extract outputs.

Python method LM._run_finetune_job.

Python method LM._warn_zero_temp_rollout.

Python method LM.acall.

Async forward pass for the language model.

Returns a copy of the language model with possibly updated parameters.

Python method LM.dump_state.

Forward pass for the language model.

Python method LM.infer_provider.

Python method LM.inspect_history.

Python method LM.kill.

Python method LM.launch.

Create a new language model instance for use with DSPy modules and programs.

Python method LM.reinforce.

Python method LM.update_history.

Types

t()

@opaque t()

Functions

_check_truncation(ref, results, opts \\ [])

@spec _check_truncation(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM._check_truncation.

Parameters

  • results (term())

Returns

  • term()

_extract_citations_from_response(ref, choice, opts \\ [])

@spec _extract_citations_from_response(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Extract citations from LiteLLM response if available.

Reference: https://docs.litellm.ai/docs/providers/anthropic#beta-citations-api

Parameters

  • choice - The choice object from response.choices

Returns

  • term()

_get_cached_completion_fn(ref, completion_fn, cache, opts \\ [])

@spec _get_cached_completion_fn(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM._get_cached_completion_fn.

Parameters

  • completion_fn (term())
  • cache (term())

Returns

  • term()

_process_completion(ref, response, merged_kwargs, opts \\ [])

@spec _process_completion(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Process the response of OpenAI chat completion API and extract outputs.

Parameters

  • response - The OpenAI chat completion response (see https://platform.openai.com/docs/api-reference/chat/object)
  • merged_kwargs - Merged kwargs from self.kwargs and method kwargs

Returns

  • term()

_process_lm_response(ref, response, prompt, messages, opts \\ [])

@spec _process_lm_response(SnakeBridge.Ref.t(), term(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM._process_lm_response.

Parameters

  • response (term())
  • prompt (term())
  • messages (term())
  • kwargs (term())

Returns

  • term()

_process_response(ref, response, opts \\ [])

@spec _process_response(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Process the response of OpenAI Response API and extract outputs.

Parameters

  • response - The OpenAI Response API response (see https://platform.openai.com/docs/api-reference/responses/object)

Returns

  • term()

_run_finetune_job(ref, job, opts \\ [])

@spec _run_finetune_job(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM._run_finetune_job.

Parameters

  • job (term())

Returns

  • term()

_warn_zero_temp_rollout(ref, temperature, rollout_id, opts \\ [])

@spec _warn_zero_temp_rollout(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM._warn_zero_temp_rollout.

Parameters

  • temperature (term())
  • rollout_id (term())

Returns

  • term()

acall(ref, args, opts \\ [])

@spec acall(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, [term()]} | {:error, Snakepit.Error.t()}

Python method LM.acall.

Parameters

  • prompt (term() default: None)
  • messages (term() default: None)
  • kwargs (term())

Returns

  • list(term())
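
A hedged sketch of calling the async variant through the bridge. It assumes `lm` is a ref obtained from new/3, that the bridge resolves the underlying Python coroutine before returning, and that keyword options in `opts` are forwarded as Python kwargs (the `messages` shape mirrors the parameters listed above):

```elixir
# Sketch only: requires a running Snakepit/Python session.
{:ok, outputs} =
  Dspy.LM.acall(lm, [], messages: [%{"role" => "user", "content" => "Say hello."}])

# `outputs` is a list of completions per the spec above.
Enum.each(outputs, &IO.inspect/1)
```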

aforward(ref, args, opts \\ [])

@spec aforward(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Async forward pass for the language model.

Subclasses must implement this method, and the response should be identical to either of the following formats:

Parameters

  • prompt (term() default: None)
  • messages (term() default: None)
  • kwargs (term())

Returns

  • term()

copy(ref, opts \\ [])

@spec copy(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}

Returns a copy of the language model with possibly updated parameters.

Any provided keyword arguments update the corresponding attributes or LM kwargs of the copy. For example, lm.copy(rollout_id=1, temperature=1.0) returns an LM whose requests use a different rollout ID at non-zero temperature to bypass cache collisions.

Parameters

  • kwargs (term())

Returns

  • term()
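
A minimal sketch of deriving a modified copy through the bridge, mirroring the Python `lm.copy(rollout_id=1, temperature=1.0)` example above. It assumes `lm` is a ref from new/3 and that keyword options in `opts` map onto the Python kwargs:

```elixir
# Sketch only: returns an LM whose requests use a different rollout ID
# at non-zero temperature, bypassing cache collisions with `lm`.
{:ok, lm2} = Dspy.LM.copy(lm, rollout_id: 1, temperature: 1.0)
```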

dump_state(ref, opts \\ [])

@spec dump_state(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.dump_state.

Returns

  • term()

finetune(ref, train_data, train_data_format, args, opts \\ [])

@spec finetune(
  SnakeBridge.Ref.t(),
  [%{optional(String.t()) => term()}],
  term(),
  [term()],
  keyword()
) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.finetune.

Parameters

  • train_data (list(%{optional(String.t()) => term()}))
  • train_data_format (term())
  • train_kwargs (term() default: None)

Returns

  • term()

forward(ref, args, opts \\ [])

@spec forward(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Forward pass for the language model.

Subclasses must implement this method, and the response should be identical to either of the following formats:

Parameters

  • prompt (term() default: None)
  • messages (term() default: None)
  • kwargs (term())

Returns

  • term()
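
A hedged sketch of the bridged call shape, assuming `lm` is a ref from new/3 and that Python kwargs (`prompt`, `messages`) travel in `opts` while positional args are passed as the middle list:

```elixir
# Sketch only: requires a running Snakepit/Python session.
case Dspy.LM.forward(lm, [], messages: [%{"role" => "user", "content" => "Hello"}]) do
  {:ok, response} -> IO.inspect(response)
  {:error, %Snakepit.Error{} = err} -> IO.inspect(err)
end
```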

infer_provider(ref, opts \\ [])

@spec infer_provider(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.infer_provider.

Returns

  • term()

inspect_history(ref, args, opts \\ [])

@spec inspect_history(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.inspect_history.

Parameters

  • n (integer() default: 1)

Returns

  • term()
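
A hedged sketch of inspecting recent interactions, assuming `lm` is a ref from new/3 and that `n` (the Python keyword shown above) is passed via `opts`:

```elixir
# Sketch only: prints the last 3 prompts and responses recorded by the LM.
{:ok, _} = Dspy.LM.inspect_history(lm, [], n: 3)
```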

kill(ref, args, opts \\ [])

@spec kill(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.kill.

Parameters

  • launch_kwargs (term() default: None)

Returns

  • term()

launch(ref, args, opts \\ [])

@spec launch(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.launch.

Parameters

  • launch_kwargs (term() default: None)

Returns

  • term()

new(model, args, opts \\ [])

@spec new(String.t(), [term()], keyword()) ::
  {:ok, SnakeBridge.Ref.t()} | {:error, Snakepit.Error.t()}

Create a new language model instance for use with DSPy modules and programs.

Parameters

  • model - The model to use. This should be a string of the form "llm_provider/llm_name" supported by LiteLLM. For example, "openai/gpt-4o".
  • model_type - The type of the model, either "chat" or "text".
  • temperature - The sampling temperature to use when generating responses.
  • max_tokens - The maximum number of tokens to generate per response.
  • cache - Whether to cache the model responses for reuse to improve performance and reduce costs.
  • callbacks - A list of callback functions to run before and after each request.
  • num_retries - The number of times to retry a request if it fails transiently due to network error, rate limiting, etc. Requests are retried with exponential backoff.
  • provider - The provider to use. If not specified, the provider will be inferred from the model.
  • finetuning_model - The model to finetune. For some providers, the models available for finetuning differ from the models available for inference.
  • rollout_id - Optional integer used to differentiate cache entries for otherwise identical requests. Different values bypass DSPy's caches while still caching future calls with the same inputs and rollout ID. Note that rollout_id only affects generation when temperature is non-zero. This argument is stripped before sending requests to the provider.
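
A minimal sketch of constructing an LM ref with the parameters above. It assumes a running Snakepit/Python session and that keyword options in `opts` are forwarded as constructor kwargs to the Python `LM`:

```elixir
# Sketch only: the model string follows LiteLLM's "provider/name" form.
{:ok, lm} =
  Dspy.LM.new("openai/gpt-4o", [],
    model_type: "chat",
    temperature: 0.0,
    max_tokens: 4000,
    cache: true
  )
```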

reinforce(ref, train_kwargs, opts \\ [])

@spec reinforce(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.reinforce.

Parameters

  • train_kwargs (term())

Returns

  • term()

update_history(ref, entry, opts \\ [])

@spec update_history(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}

Python method LM.update_history.

Parameters

  • entry (term())

Returns

  • term()