# `PhoenixKit.Modules.AI`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1)

Main context for the PhoenixKit AI system.

Provides endpoint management and usage tracking for AI API requests.

## Architecture

Each **Endpoint** is a unified configuration that combines:
- Provider credentials (api_key, base_url, provider_settings)
- Model selection (single model per endpoint)
- Generation parameters (temperature, max_tokens, etc.)

Users create as many endpoints as needed, each representing one complete
AI configuration ready for making API requests.

## Core Functions

### System Management
- `enabled?/0` - Check if AI module is enabled
- `enable_system/0` - Enable the AI module
- `disable_system/0` - Disable the AI module
- `get_config/0` - Get module configuration with statistics

### Endpoint CRUD
- `list_endpoints/1` - List all endpoints with filters
- `get_endpoint!/1` - Get endpoint by UUID (raises)
- `get_endpoint/1` - Get endpoint by UUID
- `create_endpoint/1` - Create new endpoint
- `update_endpoint/2` - Update existing endpoint
- `delete_endpoint/1` - Delete endpoint

### Completion API
- `ask/3` - Simple single-turn completion
- `complete/3` - Multi-turn chat completion
- `embed/3` - Generate embeddings

### Usage Tracking
- `list_requests/1` - List requests with pagination/filters
- `create_request/1` - Log a new request
- `get_usage_stats/1` - Get aggregated statistics
- `get_dashboard_stats/0` - Get stats for dashboard display

## Usage Examples

    # Enable the module
    PhoenixKit.Modules.AI.enable_system()

    # Create an endpoint
    {:ok, endpoint} = PhoenixKit.Modules.AI.create_endpoint(%{
      name: "Claude Fast",
      provider: "openrouter",
      api_key: "sk-or-v1-...",
      model: "anthropic/claude-3-haiku",
      temperature: 0.7
    })

    # Use the endpoint
    {:ok, response} = PhoenixKit.Modules.AI.ask(endpoint.uuid, "Hello!")

    # Extract the response text
    {:ok, text} = PhoenixKit.Modules.AI.extract_content(response)

# `ask`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1473)

Simple helper for single-turn chat completion.

## Parameters

- `endpoint_uuid` - Endpoint UUID string or Endpoint struct
- `prompt` - User prompt string
- `opts` - Optional parameter overrides and system message

## Options

All options from `complete/3` plus:
- `:system` - System message string
- `:source` - Override auto-detected source for request tracking

## Examples

    # Simple question
    {:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "What is the capital of France?")

    # With system message
    {:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Translate: Hello",
      system: "You are a translator. Translate to French."
    )

    # With custom source for tracking
    {:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!",
      source: "Languages"
    )

    # Extract just the text content
    {:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!")
    {:ok, text} = PhoenixKit.Modules.AI.extract_content(response)

## Returns

Same as `complete/3`: `{:ok, response}` with the full API response on success, or `{:error, reason}` on failure.

# `ask_with_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L780)

Makes an AI completion using a prompt template.

The prompt content is rendered with the provided variables and sent as
the user message.
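
A sketch of a call, assuming the function takes an endpoint, a prompt reference, and a variables map in that order; the exact signature is not documented here, so check the source before relying on it. The variable names match the `{{Language}}`/`{{Text}}` placeholders from the `create_prompt` example:

    # Hypothetical call shape; verify the argument order against the source
    {:ok, response} =
      PhoenixKit.Modules.AI.ask_with_prompt(endpoint_uuid, "translator", %{
        "Language" => "French",
        "Text" => "Hello"
      })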

# `change_endpoint`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L527)

Returns an endpoint changeset for use in forms.

# `change_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L696)

Returns a prompt changeset for use in forms.

# `complete`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1389)

Makes a chat completion request using a configured endpoint.

## Parameters

- `endpoint_uuid` - Endpoint UUID string or Endpoint struct
- `messages` - List of message maps with `:role` and `:content`
- `opts` - Optional parameter overrides

## Options

All standard completion parameters plus:
- `:source` - Override auto-detected source for request tracking

## Examples

    {:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, [
      %{role: "user", content: "Hello!"}
    ])

    # With system message
    {:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "What is 2+2?"}
    ])

    # With parameter overrides
    {:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages,
      temperature: 0.5,
      max_tokens: 500
    )

    # With custom source for tracking
    {:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages,
      source: "MyModule"
    )

## Returns

- `{:ok, response}` - Full API response including usage stats
- `{:error, reason}` - Error with reason string

# `complete_with_system_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L808)

Makes an AI completion with a prompt template as the system message.

The prompt is rendered and used as the system message, and the given
`user_message` is sent as the user message.
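
A sketch of a call, assuming the function takes an endpoint, a prompt reference, a variables map, and the user message in that order; the signature is an assumption, so check the source:

    # Hypothetical call shape; verify the argument order against the source
    {:ok, response} =
      PhoenixKit.Modules.AI.complete_with_system_prompt(
        endpoint_uuid,
        "translator",
        %{"Language" => "French"},
        "Hello, how are you?"
      )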

# `count_enabled_endpoints`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L550)

Counts the number of enabled endpoints.

# `count_enabled_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L719)

Counts the number of enabled prompts.

# `count_endpoints`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L543)

Counts the total number of endpoints.

# `count_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L712)

Counts the total number of prompts.

# `count_requests`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1147)

Counts the total number of requests.

# `create_endpoint`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L499)

Creates a new AI endpoint.

## Examples

    {:ok, endpoint} = PhoenixKit.Modules.AI.create_endpoint(%{
      name: "Claude Fast",
      provider: "openrouter",
      api_key: "sk-or-v1-...",
      model: "anthropic/claude-3-haiku",
      temperature: 0.7
    })

# `create_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L668)

Creates a new AI prompt.

## Examples

    {:ok, prompt} = PhoenixKit.Modules.AI.create_prompt(%{
      name: "Translator",
      content: "Translate the following text to {{Language}}:\n\n{{Text}}"
    })

# `create_request`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1137)

Creates a new AI request record.

Used to log every AI API call for tracking and statistics.

# `delete_endpoint`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L519)

Deletes an AI endpoint.

# `delete_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L688)

Deletes an AI prompt.

# `disable_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L882)

Disables a prompt.

# `disable_system`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L189)

Disables the AI module.

# `duplicate_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L857)

Duplicates a prompt with a new name.

# `embed`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1520)

Makes an embeddings request using a configured endpoint.

## Parameters

- `endpoint_uuid` - Endpoint UUID string or Endpoint struct
- `input` - Text or list of texts to embed
- `opts` - Optional parameter overrides

## Options

- `:dimensions` - Override embedding dimensions
- `:source` - Override auto-detected source for request tracking

## Examples

    # Single text
    {:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello, world!")

    # Multiple texts
    {:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, ["Hello", "World"])

    # With dimension override
    {:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello", dimensions: 512)

    # With custom source for tracking
    {:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello",
      source: "SemanticSearch"
    )

## Returns

- `{:ok, response}` - Response with embeddings
- `{:error, reason}` - Error with reason

# `enable_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L873)

Enables a prompt.

# `enable_system`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L181)

Enables the AI module.

# `enabled?`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L173)

Checks if the AI module is enabled.

# `endpoints_topic`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L91)

Returns the PubSub topic for AI endpoints.
Subscribe to this topic to receive real-time updates.

# `extract_content`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1551)

Extracts the text content from a completion response.

## Examples

    {:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!")
    {:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
    # => "Hello! How can I help you today?"

# `extract_usage`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1562)

Extracts usage information from a response.

## Examples

    {:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages)
    usage = PhoenixKit.Modules.AI.extract_usage(response)
    # => %{prompt_tokens: 10, completion_tokens: 15, total_tokens: 25}

# `get_config`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L197)

Gets the AI module configuration with statistics.

# `get_dashboard_stats`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1280)

Gets dashboard statistics for display.

Returns stats for the last 30 days plus all-time totals.

# `get_endpoint`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L457)

Gets a single endpoint by UUID.

Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").

Returns `nil` if the endpoint does not exist.

# `get_endpoint!`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L443)

Gets a single endpoint by UUID.

Raises `Ecto.NoResultsError` if the endpoint does not exist.

# `get_endpoint_usage_stats`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L417)

Returns usage statistics for each endpoint.

Returns a map of `endpoint_uuid => %{request_count, total_tokens, total_cost, last_used_at}`.
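
Given that return shape, per-endpoint usage can be iterated directly; a minimal sketch:

    stats = PhoenixKit.Modules.AI.get_endpoint_usage_stats()

    Enum.each(stats, fn {uuid, s} ->
      IO.puts("#{uuid}: #{s.request_count} requests, #{s.total_tokens} tokens")
    end)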

# `get_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L639)

Gets a single prompt by UUID.

Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").

Returns `nil` if the prompt does not exist.

# `get_prompt!`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L625)

Gets a single prompt by UUID.

Raises `Ecto.NoResultsError` if the prompt does not exist.

# `get_prompt_by_slug`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L654)

Gets a prompt by slug.

Returns `nil` if the prompt does not exist.

# `get_prompt_usage_stats`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L975)

Gets usage statistics for all prompts.

# `get_prompt_variables`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L891)

Gets the variables defined in a prompt.

# `get_prompts_with_variable`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L942)

Finds all prompts that use a specific variable.

# `get_request`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1122)

Gets a single request by UUID.

Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").

Returns `nil` if the request does not exist.

# `get_request!`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1108)

Gets a single request by UUID.

# `get_request_filter_options`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1190)

Returns filter options for requests (distinct endpoints, models, and sources).

# `get_requests_by_day`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1324)

Gets request counts grouped by day.

# `get_tokens_by_model`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1303)

Gets token usage grouped by model.

# `get_usage_stats`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1239)

Gets aggregated usage statistics.

## Options
- `:since` - Start date for statistics
- `:until` - End date for statistics
- `:endpoint_uuid` - Filter by endpoint

## Returns
Map with statistics including total_requests, total_tokens, success_rate, etc.
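
A sketch of a filtered query; whether `:since` expects a `DateTime` or a `Date` is not specified here, so a `DateTime` is assumed:

    # Stats for one endpoint over the last 7 days
    since = DateTime.add(DateTime.utc_now(), -7 * 24 * 60 * 60, :second)

    stats =
      PhoenixKit.Modules.AI.get_usage_stats(
        since: since,
        endpoint_uuid: endpoint_uuid
      )

    stats.total_requests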

# `increment_prompt_usage`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L768)

Increments the usage count for a prompt and updates `last_used_at`.

# `list_enabled_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L616)

Lists only enabled prompts.

Convenience wrapper for `list_prompts(enabled: true)`.

## Examples

    PhoenixKit.Modules.AI.list_enabled_prompts()

# `list_endpoints`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L292)

Lists all AI endpoints.

## Options
- `:provider` - Filter by provider type
- `:enabled` - Filter by enabled status
- `:preload` - Associations to preload

## Examples

    PhoenixKit.Modules.AI.list_endpoints()
    PhoenixKit.Modules.AI.list_endpoints(provider: "openrouter", enabled: true)

# `list_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L573)

Lists all AI prompts.

## Options
- `:sort_by` - Field to sort by (default: :sort_order)
- `:sort_dir` - Sort direction, :asc or :desc (default: :asc)
- `:enabled` - Filter by enabled status

## Examples

    PhoenixKit.Modules.AI.list_prompts()
    PhoenixKit.Modules.AI.list_prompts(sort_by: :name, sort_dir: :asc)
    PhoenixKit.Modules.AI.list_prompts(enabled: true)

# `list_requests`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1059)

Lists AI requests with pagination and filters.

## Options
- `:page` - Page number (default: 1)
- `:page_size` - Results per page (default: 20)
- `:endpoint_uuid` - Filter by endpoint
- `:user_uuid` - Filter by user
- `:status` - Filter by status
- `:model` - Filter by model
- `:source` - Filter by source (from metadata)
- `:since` - Filter by date (requests after this date)
- `:preload` - Associations to preload

## Returns
`{requests, total_count}`
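
A sketch combining pagination with filters; the concrete `:status` values (e.g. `"success"`) are an assumption:

    {requests, total_count} =
      PhoenixKit.Modules.AI.list_requests(
        page: 1,
        page_size: 50,
        status: "success",
        source: "MyModule"
      )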

# `mark_endpoint_validated`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L534)

Marks an endpoint as validated by updating its `last_validated_at` timestamp.

# `preview_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L900)

Previews a rendered prompt without making an AI call.

# `prompts_topic`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L96)

Returns the PubSub topic for AI prompts.

# `record_prompt_usage`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L703)

Increments the usage count for a prompt and updates `last_used_at`.

# `render_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L759)

Renders a prompt by replacing variables with provided values.

Returns `{:ok, rendered_text}` or `{:error, reason}`.
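
A sketch, assuming variables are passed as a map keyed by variable name (matching the `{{Language}}`/`{{Text}}` placeholders from the `create_prompt` example):

    {:ok, text} =
      PhoenixKit.Modules.AI.render_prompt(prompt, %{
        "Language" => "French",
        "Text" => "Good morning"
      })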

# `reorder_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1017)

Updates the sort order for multiple prompts.

Accepts a list of prompt UUIDs.
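
A sketch, assuming the position of each UUID in the list becomes its new sort order; the return value is not documented here, so it is not matched on:

    PhoenixKit.Modules.AI.reorder_prompts([
      first_prompt_uuid,
      second_prompt_uuid
    ])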

# `requests_topic`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L101)

Returns the PubSub topic for AI requests/usage.

# `reset_prompt_usage`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1004)

Resets the usage statistics for a prompt.

# `resolve_endpoint`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L475)

Resolves an endpoint from an ID (UUID string) or Endpoint struct.

## Examples

    {:ok, endpoint} = PhoenixKit.Modules.AI.resolve_endpoint("019abc12-3456-7def-8901-234567890abc")
    {:ok, endpoint} = PhoenixKit.Modules.AI.resolve_endpoint(endpoint)

# `resolve_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L734)

Resolves a prompt from various input types.

Accepts:
- UUID string (e.g., "019abc12-3456-7def-8901-234567890abc")
- String slug (e.g., "my-prompt")
- Prompt struct (returned as-is)

Returns `{:ok, prompt}` or `{:error, reason}`.
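
The three accepted input types above, shown as calls:

    {:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt("019abc12-3456-7def-8901-234567890abc")
    {:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt("my-prompt")
    {:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt(prompt)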

# `search_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L916)

Searches prompts by name, description, or content.
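
A minimal sketch; a list of matching prompts is assumed as the return value:

    prompts = PhoenixKit.Modules.AI.search_prompts("translate")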

# `subscribe_endpoints`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L106)

Subscribes the current process to AI endpoint changes.
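
In a LiveView, a typical pattern is to subscribe on mount and refresh state when a broadcast arrives. The broadcast payload shape is not documented here, so the sketch below matches any message and simply refetches:

    def mount(_params, _session, socket) do
      if connected?(socket), do: PhoenixKit.Modules.AI.subscribe_endpoints()
      {:ok, assign(socket, endpoints: PhoenixKit.Modules.AI.list_endpoints())}
    end

    # Payload shape is an assumption; refetching is a safe response
    def handle_info(_msg, socket) do
      {:noreply, assign(socket, endpoints: PhoenixKit.Modules.AI.list_endpoints())}
    end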

# `subscribe_prompts`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L113)

Subscribes the current process to AI prompt changes.

# `subscribe_requests`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L120)

Subscribes the current process to AI request/usage changes.

# `sum_tokens`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L1154)

Sums the total tokens used across all requests.

# `update_endpoint`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L509)

Updates an existing AI endpoint.

# `update_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L678)

Updates an existing AI prompt.

# `validate_prompt`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L841)

Validates that a prompt is ready for use.

Returns `{:ok, prompt}` if valid, or `{:error, reason}` if not.

# `validate_prompt_content`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L955)

Validates that the content has valid variable syntax.

# `validate_prompt_variables`
[🔗](https://github.com/BeamLabEU/phoenix_kit/blob/v1.7.63/lib/modules/ai/ai.ex#L907)

Validates that all required variables are provided for a prompt.
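
A sketch of a call, assuming the function takes the prompt and a variables map; the return shape is not documented here, so check the source:

    PhoenixKit.Modules.AI.validate_prompt_variables(prompt, %{
      "Language" => "French",
      "Text" => "Hello"
    })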

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
