# `Dspy.LM`
[🔗](https://github.com/nshkrdotcom/dspex/blob/v0.11.0/lib/snakebridge_generated/dspy/lm.ex#L7)

A language model supporting chat or text completion requests for use with DSPy modules.

# `t`

```elixir
@opaque t()
```

# `_check_truncation`

```elixir
@spec _check_truncation(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM._check_truncation`.

## Parameters

- `results` (term())

## Returns

- `term()`

# `_extract_citations_from_response`

```elixir
@spec _extract_citations_from_response(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Extract citations from LiteLLM response if available.

Reference: https://docs.litellm.ai/docs/providers/anthropic#beta-citations-api

## Parameters

- `choice` - The choice object from response.choices

## Returns

- `term()`

# `_get_cached_completion_fn`

```elixir
@spec _get_cached_completion_fn(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM._get_cached_completion_fn`.

## Parameters

- `completion_fn` (term())
- `cache` (term())

## Returns

- `term()`

# `_process_completion`

```elixir
@spec _process_completion(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Process the response of the OpenAI chat completion API and extract outputs.

Reference: https://platform.openai.com/docs/api-reference/chat/object

## Parameters

- `response` - The OpenAI chat completion response
- `merged_kwargs` - Merged kwargs from `self.kwargs` and method kwargs

## Returns

- `term()`

# `_process_lm_response`

```elixir
@spec _process_lm_response(SnakeBridge.Ref.t(), term(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM._process_lm_response`.

## Parameters

- `response` (term())
- `prompt` (term())
- `messages` (term())
- `kwargs` (term())

## Returns

- `term()`

# `_process_response`

```elixir
@spec _process_response(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Process the response of the OpenAI Response API and extract outputs.

Reference: https://platform.openai.com/docs/api-reference/responses/object

## Parameters

- `response` - The OpenAI Response API response

## Returns

- `term()`

# `_run_finetune_job`

```elixir
@spec _run_finetune_job(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM._run_finetune_job`.

## Parameters

- `job` (term())

## Returns

- `term()`

# `_warn_zero_temp_rollout`

```elixir
@spec _warn_zero_temp_rollout(SnakeBridge.Ref.t(), term(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Warns when a request combines a `rollout_id` with zero temperature, since `rollout_id` only affects generation when `temperature` is non-zero.

## Parameters

- `temperature` (term())
- `rollout_id` (term())

## Returns

- `term()`

# `acall`

```elixir
@spec acall(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, [term()]} | {:error, Snakepit.Error.t()}
```

Python method `LM.acall`.

## Parameters

- `prompt` (term() default: None)
- `messages` (term() default: None)
- `kwargs` (term())

## Returns

- `list(term())`
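
As a usage sketch (assuming a live Snakepit session and an already-created LM reference `lm`; the exact mapping of `prompt`/`messages` onto the positional list and keyword options is an assumption about the generated wrapper):

```elixir
# Hypothetical example: `lm` is a SnakeBridge.Ref.t() obtained from Dspy.LM.new/3.
case Dspy.LM.acall(lm, [], messages: [%{"role" => "user", "content" => "Say hello."}]) do
  {:ok, completions} ->
    # A list of completion terms, one per requested generation.
    IO.inspect(completions)

  {:error, %Snakepit.Error{} = err} ->
    IO.inspect(err, label: "acall failed")
end
```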

# `aforward`

```elixir
@spec aforward(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Async forward pass for the language model.

Subclasses must implement this method, and the response should match one of the following formats:
- [OpenAI response format](https://platform.openai.com/docs/api-reference/responses/object)
- [OpenAI chat completion format](https://platform.openai.com/docs/api-reference/chat/object)
- [OpenAI text completion format](https://platform.openai.com/docs/api-reference/completions/object)

## Parameters

- `prompt` (term() default: None)
- `messages` (term() default: None)
- `kwargs` (term())

## Returns

- `term()`

# `copy`

```elixir
@spec copy(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}
```

Returns a copy of the language model with possibly updated parameters.

Any provided keyword arguments update the corresponding attributes or LM kwargs of
the copy. For example, ``lm.copy(rollout_id=1, temperature=1.0)`` returns an LM whose
requests use a different rollout ID at non-zero temperature to bypass cache collisions.

## Parameters

- `kwargs` (term())

## Returns

- `term()`
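
Mirroring the Python `lm.copy(rollout_id=1, temperature=1.0)` example above, the Elixir call might look like this (a sketch; passing the updates as keyword options is an assumption about the generated wrapper):

```elixir
# Returns a new LM reference whose requests use rollout_id 1 at temperature 1.0,
# bypassing cache collisions with the original `lm`.
{:ok, hot_lm} = Dspy.LM.copy(lm, rollout_id: 1, temperature: 1.0)
```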

# `dump_state`

```elixir
@spec dump_state(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.dump_state`.

## Returns

- `term()`

# `finetune`

```elixir
@spec finetune(
  SnakeBridge.Ref.t(),
  [%{optional(String.t()) => term()}],
  term(),
  [term()],
  keyword()
) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.finetune`.

## Parameters

- `train_data` (list(%{optional(String.t()) => term()}))
- `train_data_format` (term())
- `train_kwargs` (term() default: None)

## Returns

- `term()`

# `forward`

```elixir
@spec forward(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Forward pass for the language model.

Subclasses must implement this method, and the response should match one of the following formats:
- [OpenAI response format](https://platform.openai.com/docs/api-reference/responses/object)
- [OpenAI chat completion format](https://platform.openai.com/docs/api-reference/chat/object)
- [OpenAI text completion format](https://platform.openai.com/docs/api-reference/completions/object)

## Parameters

- `prompt` (term() default: None)
- `messages` (term() default: None)
- `kwargs` (term())

## Returns

- `term()`
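
A synchronous call through the wrapper might look like the following sketch (assumes a live bridge session and an existing `lm` reference; the keyword mapping for `prompt` is an assumption):

```elixir
case Dspy.LM.forward(lm, [], prompt: "Translate 'hello' to French.") do
  {:ok, response} ->
    # `response` mirrors one of the OpenAI response/completion formats above.
    IO.inspect(response)

  {:error, %Snakepit.Error{} = err} ->
    IO.inspect(err, label: "forward failed")
end
```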

# `infer_provider`

```elixir
@spec infer_provider(
  SnakeBridge.Ref.t(),
  keyword()
) :: {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.infer_provider`.

## Returns

- `term()`

# `inspect_history`

```elixir
@spec inspect_history(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.inspect_history`.

## Parameters

- `n` (integer() default: 1)

## Returns

- `term()`

# `kill`

```elixir
@spec kill(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.kill`.

## Parameters

- `launch_kwargs` (term() default: None)

## Returns

- `term()`

# `launch`

```elixir
@spec launch(SnakeBridge.Ref.t(), [term()], keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.launch`.

## Parameters

- `launch_kwargs` (term() default: None)

## Returns

- `term()`

# `new`

```elixir
@spec new(String.t(), [term()], keyword()) ::
  {:ok, SnakeBridge.Ref.t()} | {:error, Snakepit.Error.t()}
```

Create a new language model instance for use with DSPy modules and programs.

## Parameters

- `model` - The model to use. This should be a string of the form ``"llm_provider/llm_name"`` supported by LiteLLM. For example, ``"openai/gpt-4o"``.
- `model_type` - The type of the model, either ``"chat"`` or ``"text"``.
- `temperature` - The sampling temperature to use when generating responses.
- `max_tokens` - The maximum number of tokens to generate per response.
- `cache` - Whether to cache the model responses for reuse to improve performance and reduce costs.
- `callbacks` - A list of callback functions to run before and after each request.
- `num_retries` - The number of times to retry a request if it fails transiently due to network error, rate limiting, etc. Requests are retried with exponential backoff.
- `provider` - The provider to use. If not specified, the provider will be inferred from the model.
- `finetuning_model` - The model to finetune. For some providers, the models available for finetuning differ from those available for inference.
- `rollout_id` - Optional integer used to differentiate cache entries for otherwise identical requests. Different values bypass DSPy's caches while still caching future calls with the same inputs and rollout ID. Note that `rollout_id` only affects generation when `temperature` is non-zero. This argument is stripped before sending requests to the provider.

## Returns

- `SnakeBridge.Ref.t()` - A reference to the new language model instance
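
Putting the parameters above together, constructing an LM might look like this sketch (option names mirror the Python constructor arguments; how the generated wrapper maps keyword options is an assumption):

```elixir
# Hypothetical example: create a chat-mode LM with deterministic sampling
# and response caching enabled.
{:ok, lm} =
  Dspy.LM.new("openai/gpt-4o", [],
    model_type: "chat",
    temperature: 0.0,
    max_tokens: 4000,
    cache: true,
    num_retries: 3
  )
```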

# `reinforce`

```elixir
@spec reinforce(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.reinforce`.

## Parameters

- `train_kwargs` (term())

## Returns

- `term()`

# `update_history`

```elixir
@spec update_history(SnakeBridge.Ref.t(), term(), keyword()) ::
  {:ok, term()} | {:error, Snakepit.Error.t()}
```

Python method `LM.update_history`.

## Parameters

- `entry` (term())

## Returns

- `term()`

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
