PhoenixKit.Modules.AI (phoenix_kit v1.7.71)


Main context for the PhoenixKit AI system.

Provides AI endpoint management and usage tracking for AI API requests.

Architecture

Each Endpoint is a unified configuration that combines:

  • Provider credentials (api_key, base_url, provider_settings)
  • Model selection (single model per endpoint)
  • Generation parameters (temperature, max_tokens, etc.)

Users create as many endpoints as needed, each representing one complete AI configuration ready for making API requests.

Core Functions

  • System Management
  • Endpoint CRUD
  • Completion API
  • Usage Tracking

Usage Examples

# Enable the module
PhoenixKit.Modules.AI.enable_system()

# Create an endpoint
{:ok, endpoint} = PhoenixKit.Modules.AI.create_endpoint(%{
  name: "Claude Fast",
  provider: "openrouter",
  api_key: "sk-or-v1-...",
  model: "anthropic/claude-3-haiku",
  temperature: 0.7
})

# Use the endpoint
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint.uuid, "Hello!")

# Extract the response text
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)

Summary

Functions

  • ask/3 - Simple helper for single-turn chat completion.
  • ask_with_prompt/4 - Makes an AI completion using a prompt template.
  • change_endpoint/2 - Returns an endpoint changeset for use in forms.
  • change_prompt/2 - Returns a prompt changeset for use in forms.
  • complete/3 - Makes a chat completion request using a configured endpoint.
  • complete_with_system_prompt/5 - Makes an AI completion with a prompt template as the system message.
  • count_enabled_endpoints/0 - Counts the number of enabled endpoints.
  • count_enabled_prompts/0 - Counts the number of enabled prompts.
  • count_endpoints/0 - Counts the total number of endpoints.
  • count_prompts/0 - Counts the total number of prompts.
  • count_requests/0 - Counts the total number of requests.
  • create_endpoint/1 - Creates a new AI endpoint.
  • create_prompt/1 - Creates a new AI prompt.
  • create_request/1 - Creates a new AI request record.
  • delete_endpoint/1 - Deletes an AI endpoint.
  • delete_prompt/1 - Deletes an AI prompt.
  • disable_prompt/1 - Disables a prompt.
  • disable_system/0 - Disables the AI module.
  • duplicate_prompt/2 - Duplicates a prompt with a new name.
  • embed/3 - Makes an embeddings request using a configured endpoint.
  • enable_prompt/1 - Enables a prompt.
  • enable_system/0 - Enables the AI module.
  • enabled?/0 - Checks if the AI module is enabled.
  • endpoints_topic/0 - Returns the PubSub topic for AI endpoints.
  • extract_content/1 - Extracts the text content from a completion response.
  • extract_usage/1 - Extracts usage information from a response.
  • get_config/0 - Gets the AI module configuration with statistics.
  • get_dashboard_stats/0 - Gets dashboard statistics for display.
  • get_endpoint/1 - Gets a single endpoint by UUID.
  • get_endpoint!/1 - Gets a single endpoint by UUID; raises if not found.
  • get_endpoint_usage_stats/0 - Returns usage statistics for each endpoint.
  • get_prompt/1 - Gets a single prompt by UUID.
  • get_prompt!/1 - Gets a single prompt by UUID; raises if not found.
  • get_prompt_by_slug/1 - Gets a prompt by slug.
  • get_prompt_usage_stats/1 - Gets usage statistics for all prompts.
  • get_prompt_variables/1 - Gets the variables defined in a prompt.
  • get_prompts_with_variable/1 - Finds all prompts that use a specific variable.
  • get_request/1 - Gets a single request by UUID.
  • get_request!/1 - Gets a single request by UUID; raises if not found.
  • get_request_filter_options/0 - Returns filter options for requests (distinct endpoints, models, and sources).
  • get_requests_by_day/1 - Gets request counts grouped by day.
  • get_tokens_by_model/1 - Gets token usage grouped by model.
  • get_usage_stats/1 - Gets aggregated usage statistics.
  • increment_prompt_usage/1 - Increments the usage count for a prompt and updates last_used_at.
  • list_enabled_prompts/0 - Lists only enabled prompts.
  • list_endpoints/1 - Lists all AI endpoints.
  • list_prompts/1 - Lists all AI prompts.
  • list_requests/1 - Lists AI requests with pagination and filters.
  • mark_endpoint_validated/1 - Marks an endpoint as validated by updating its last_validated_at timestamp.
  • preview_prompt/2 - Previews a rendered prompt without making an AI call.
  • prompts_topic/0 - Returns the PubSub topic for AI prompts.
  • record_prompt_usage/1 - Increments the usage count for a prompt and updates last_used_at.
  • render_prompt/2 - Renders a prompt by replacing variables with provided values.
  • reorder_prompts/1 - Updates the sort order for multiple prompts.
  • requests_topic/0 - Returns the PubSub topic for AI requests/usage.
  • reset_prompt_usage/1 - Resets the usage statistics for a prompt.
  • resolve_endpoint/1 - Resolves an endpoint from an ID (UUID string) or Endpoint struct.
  • resolve_prompt/1 - Resolves a prompt from various input types.
  • search_prompts/2 - Searches prompts by name, description, or content.
  • subscribe_endpoints/0 - Subscribes the current process to AI endpoint changes.
  • subscribe_prompts/0 - Subscribes the current process to AI prompt changes.
  • subscribe_requests/0 - Subscribes the current process to AI request/usage changes.
  • sum_tokens/0 - Sums the total tokens used across all requests.
  • update_endpoint/2 - Updates an existing AI endpoint.
  • update_prompt/2 - Updates an existing AI prompt.
  • validate_prompt/1 - Validates that a prompt is ready for use.
  • validate_prompt_content/1 - Validates that the content has valid variable syntax.
  • validate_prompt_variables/2 - Validates that all required variables are provided for a prompt.

Functions

ask(endpoint_uuid, prompt, opts \\ [])

Simple helper for single-turn chat completion.

Parameters

  • endpoint_uuid - Endpoint UUID string or Endpoint struct
  • prompt - User prompt string
  • opts - Optional parameter overrides and system message

Options

All options from complete/3 plus:

  • :system - System message string
  • :source - Override auto-detected source for request tracking

Examples

# Simple question
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "What is the capital of France?")

# With system message
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Translate: Hello",
  system: "You are a translator. Translate to French."
)

# With custom source for tracking
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!",
  source: "Languages"
)

# Extract just the text content
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!")
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)

Returns

Same as complete/3.

ask_with_prompt(endpoint_uuid, prompt_uuid, variables \\ %{}, opts \\ [])

Makes an AI completion using a prompt template.

The prompt content is rendered with the provided variables and sent as the user message.
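For instance, reusing the "Translator" prompt shown under create_prompt/1 might look like the sketch below. The string variable keys and the specific UUID variables are assumptions; adapt them to your data.

```elixir
# Render the Translator prompt's {{Language}} and {{Text}} variables,
# then send the rendered text as the user message.
{:ok, response} =
  PhoenixKit.Modules.AI.ask_with_prompt(endpoint_uuid, prompt_uuid, %{
    "Language" => "French",
    "Text" => "Good morning"
  })

# Pull out just the completion text, as with ask/3.
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
```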

change_endpoint(endpoint, attrs \\ %{})

Returns an endpoint changeset for use in forms.

change_prompt(prompt, attrs \\ %{})

Returns a prompt changeset for use in forms.

complete(endpoint_uuid, messages, opts \\ [])

Makes a chat completion request using a configured endpoint.

Parameters

  • endpoint_uuid - Endpoint UUID string or Endpoint struct
  • messages - List of message maps with :role and :content
  • opts - Optional parameter overrides

Options

All standard completion parameters plus:

  • :source - Override auto-detected source for request tracking

Examples

{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, [
  %{role: "user", content: "Hello!"}
])

# With system message
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, [
  %{role: "system", content: "You are a helpful assistant."},
  %{role: "user", content: "What is 2+2?"}
])

# With parameter overrides
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages,
  temperature: 0.5,
  max_tokens: 500
)

# With custom source for tracking
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages,
  source: "MyModule"
)

Returns

  • {:ok, response} - Full API response including usage stats
  • {:error, reason} - Error with reason string

complete_with_system_prompt(endpoint_uuid, prompt_uuid, variables, user_message, opts \\ [])

Makes an AI completion with a prompt template as the system message.

The prompt is rendered and used as the system message; user_message is sent as the user message.
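A minimal sketch, assuming a prompt whose rendered content plays the role of a system instruction (string variable keys and the temperature override are illustrative):

```elixir
# The rendered prompt becomes the system message; the fourth argument
# is sent as the user message. opts accepts the usual overrides.
{:ok, response} =
  PhoenixKit.Modules.AI.complete_with_system_prompt(
    endpoint_uuid,
    prompt_uuid,
    %{"Language" => "French"},
    "Translate: Hello",
    temperature: 0.3
  )
```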

count_enabled_endpoints()

Counts the number of enabled endpoints.

count_enabled_prompts()

Counts the number of enabled prompts.

count_endpoints()

Counts the total number of endpoints.

count_prompts()

Counts the total number of prompts.

count_requests()

Counts the total number of requests.

create_endpoint(attrs)

Creates a new AI endpoint.

Examples

{:ok, endpoint} = PhoenixKit.Modules.AI.create_endpoint(%{
  name: "Claude Fast",
  provider: "openrouter",
  api_key: "sk-or-v1-...",
  model: "anthropic/claude-3-haiku",
  temperature: 0.7
})

create_prompt(attrs)

Creates a new AI prompt.

Examples

{:ok, prompt} = PhoenixKit.Modules.AI.create_prompt(%{
  name: "Translator",
  content: "Translate the following text to {{Language}}:\n\n{{Text}}"
})

create_request(attrs)

Creates a new AI request record.

Used to log every AI API call for tracking and statistics.

delete_endpoint(endpoint)

Deletes an AI endpoint.

delete_prompt(prompt)

Deletes an AI prompt.

disable_prompt(prompt_uuid)

Disables a prompt.

disable_system()

Disables the AI module.

duplicate_prompt(prompt_uuid, new_name)

Duplicates a prompt with a new name.

embed(endpoint_uuid, input, opts \\ [])

Makes an embeddings request using a configured endpoint.

Parameters

  • endpoint_uuid - Endpoint UUID string or Endpoint struct
  • input - Text or list of texts to embed
  • opts - Optional parameter overrides

Options

  • :dimensions - Override embedding dimensions
  • :source - Override auto-detected source for request tracking

Examples

# Single text
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello, world!")

# Multiple texts
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, ["Hello", "World"])

# With dimension override
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello", dimensions: 512)

# With custom source for tracking
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello",
  source: "SemanticSearch"
)

Returns

  • {:ok, response} - Response with embeddings
  • {:error, reason} - Error with reason

enable_prompt(prompt_uuid)

Enables a prompt.

enable_system()

Enables the AI module.

enabled?()

Checks if the AI module is enabled.

endpoints_topic()

Returns the PubSub topic for AI endpoints. Subscribe to this topic to receive real-time updates.

extract_content(response)

Extracts the text content from a completion response.

Examples

{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!")
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
# => "Hello! How can I help you today?"

extract_usage(response)

Extracts usage information from a response.

Examples

{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages)
usage = PhoenixKit.Modules.AI.extract_usage(response)
# => %{prompt_tokens: 10, completion_tokens: 15, total_tokens: 25}

get_config()

Gets the AI module configuration with statistics.

get_dashboard_stats()

Gets dashboard statistics for display.

Returns stats for the last 30 days plus all-time totals.

get_endpoint(id)

Gets a single endpoint by UUID.

Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").

Returns nil if the endpoint does not exist.

get_endpoint!(id)

Gets a single endpoint by UUID.

Raises Ecto.NoResultsError if the endpoint does not exist.

get_endpoint_usage_stats()

Returns usage statistics for each endpoint.

Returns a map of endpoint_uuid => %{request_count, total_tokens, total_cost, last_used_at}

get_prompt(id)

Gets a single prompt by UUID.

Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").

Returns nil if the prompt does not exist.

get_prompt!(id)

Gets a single prompt by UUID.

Raises Ecto.NoResultsError if the prompt does not exist.

get_prompt_by_slug(slug)

Gets a prompt by slug.

Returns nil if the prompt does not exist.

get_prompt_usage_stats(opts \\ [])

Gets usage statistics for all prompts.

get_prompt_variables(prompt_uuid)

Gets the variables defined in a prompt.

get_prompts_with_variable(variable_name)

Finds all prompts that use a specific variable.

get_request(id)

Gets a single request by UUID.

Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").

Returns nil if the request does not exist.

get_request!(id)

Gets a single request by UUID.

get_request_filter_options()

Returns filter options for requests (distinct endpoints, models, and sources).

get_requests_by_day(opts \\ [])

Gets request counts grouped by day.

get_tokens_by_model(opts \\ [])

Gets token usage grouped by model.

get_usage_stats(opts \\ [])

Gets aggregated usage statistics.

Options

  • :since - Start date for statistics
  • :until - End date for statistics
  • :endpoint_uuid - Filter by endpoint

Returns

Map with statistics including total_requests, total_tokens, success_rate, etc.
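A sketch of scoping the statistics to one endpoint over the last week. The total_requests and total_tokens fields are named above; any other fields you read from the map should be verified against your install.

```elixir
# Last 7 days of usage for a single endpoint.
since = DateTime.add(DateTime.utc_now(), -7 * 24 * 60 * 60, :second)

stats =
  PhoenixKit.Modules.AI.get_usage_stats(
    since: since,
    endpoint_uuid: endpoint.uuid
  )

stats.total_requests
stats.total_tokens
```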

increment_prompt_usage(prompt_uuid)

Increments the usage count for a prompt and updates last_used_at.

list_enabled_prompts()

Lists only enabled prompts.

Convenience wrapper for list_prompts(enabled: true).

Examples

PhoenixKit.Modules.AI.list_enabled_prompts()

list_endpoints(opts \\ [])

Lists all AI endpoints.

Options

  • :provider - Filter by provider type
  • :enabled - Filter by enabled status
  • :preload - Associations to preload

Examples

PhoenixKit.Modules.AI.list_endpoints()
PhoenixKit.Modules.AI.list_endpoints(provider: "openrouter", enabled: true)

list_prompts(opts \\ [])

Lists all AI prompts.

Options

  • :sort_by - Field to sort by (default: :sort_order)
  • :sort_dir - Sort direction, :asc or :desc (default: :asc)
  • :enabled - Filter by enabled status

Examples

PhoenixKit.Modules.AI.list_prompts()
PhoenixKit.Modules.AI.list_prompts(sort_by: :name, sort_dir: :asc)
PhoenixKit.Modules.AI.list_prompts(enabled: true)

list_requests(opts \\ [])

Lists AI requests with pagination and filters.

Options

  • :page - Page number (default: 1)
  • :page_size - Results per page (default: 20)
  • :endpoint_uuid - Filter by endpoint
  • :user_uuid - Filter by user
  • :status - Filter by status
  • :model - Filter by model
  • :source - Filter by source (from metadata)
  • :since - Filter by date (requests after this date)
  • :preload - Associations to preload

Returns

{requests, total_count}
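Combining pagination with filters might look like this; the "error" status value is an assumption, so use whatever statuses your requests actually record.

```elixir
# Second page of failed requests for one endpoint.
{requests, total_count} =
  PhoenixKit.Modules.AI.list_requests(
    page: 2,
    page_size: 50,
    endpoint_uuid: endpoint.uuid,
    status: "error"
  )

# total_count covers all matching rows, so it can drive pagination UI.
total_pages = div(total_count + 49, 50)
```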

mark_endpoint_validated(endpoint)

Marks an endpoint as validated by updating its last_validated_at timestamp.

preview_prompt(prompt_uuid, variables \\ %{})

Previews a rendered prompt without making an AI call.

prompts_topic()

Returns the PubSub topic for AI prompts.

record_prompt_usage(prompt)

Increments the usage count for a prompt and updates last_used_at.

render_prompt(prompt_uuid, variables \\ %{})

Renders a prompt by replacing variables with provided values.

Returns {:ok, rendered_text} or {:error, reason}.
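Using the "Translator" prompt from create_prompt/1 as an example (string variable keys are an assumption), simple substitution would produce the rendered text shown in the comment:

```elixir
# Prompt content: "Translate the following text to {{Language}}:\n\n{{Text}}"
{:ok, rendered} =
  PhoenixKit.Modules.AI.render_prompt(prompt_uuid, %{
    "Language" => "German",
    "Text" => "Hello"
  })

# rendered: "Translate the following text to German:\n\nHello"
```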

reorder_prompts(order_list)

Updates the sort order for multiple prompts.

Accepts prompt UUIDs.
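A sketch assuming order_list is a list of prompt UUIDs in the desired display order (the exact shape is not documented here, so verify before relying on it):

```elixir
# Reverse the current ordering of all prompts.
prompt_uuids = Enum.map(PhoenixKit.Modules.AI.list_prompts(), & &1.uuid)
PhoenixKit.Modules.AI.reorder_prompts(Enum.reverse(prompt_uuids))
```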

requests_topic()

Returns the PubSub topic for AI requests/usage.

reset_prompt_usage(prompt_uuid)

Resets the usage statistics for a prompt.

resolve_endpoint(id)

Resolves an endpoint from an ID (UUID string) or Endpoint struct.

Examples

{:ok, endpoint} = PhoenixKit.Modules.AI.resolve_endpoint("019abc12-3456-7def-8901-234567890abc")
{:ok, endpoint} = PhoenixKit.Modules.AI.resolve_endpoint(endpoint)

resolve_prompt(prompt)

Resolves a prompt from various input types.

Accepts:

  • UUID string (e.g., "019abc12-3456-7def-8901-234567890abc")
  • String slug (e.g., "my-prompt")
  • Prompt struct (returned as-is)

Returns {:ok, prompt} or {:error, reason}.
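All three accepted forms resolve to the same result (the UUID and slug values below are the placeholder examples from above):

```elixir
{:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt("019abc12-3456-7def-8901-234567890abc")
{:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt("my-prompt")
{:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt(prompt)
```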

search_prompts(query, opts \\ [])

Searches prompts by name, description, or content.

subscribe_endpoints()

Subscribes the current process to AI endpoint changes.
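A typical LiveView pattern is to subscribe on connect and refresh on any broadcast. The shape of the messages delivered on the topic is not documented here, so this sketch matches any message and simply reloads:

```elixir
def mount(_params, _session, socket) do
  # Subscribe only on the connected (second) mount.
  if connected?(socket), do: PhoenixKit.Modules.AI.subscribe_endpoints()
  {:ok, assign(socket, endpoints: PhoenixKit.Modules.AI.list_endpoints())}
end

def handle_info(_endpoint_event, socket) do
  # Simplest safe reaction: re-fetch the list on any endpoint change.
  {:noreply, assign(socket, endpoints: PhoenixKit.Modules.AI.list_endpoints())}
end
```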

subscribe_prompts()

Subscribes the current process to AI prompt changes.

subscribe_requests()

Subscribes the current process to AI request/usage changes.

sum_tokens()

Sums the total tokens used across all requests.

update_endpoint(endpoint, attrs)

Updates an existing AI endpoint.

update_prompt(prompt, attrs)

Updates an existing AI prompt.

validate_prompt(prompt)

Validates that a prompt is ready for use.

Returns {:ok, prompt} if valid, or {:error, reason} if not.

validate_prompt_content(content)

Validates that the content has valid variable syntax.

validate_prompt_variables(prompt_uuid, variables)

Validates that all required variables are provided for a prompt.
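A sketch of validating before rendering. The :ok / {:error, reason} return shapes are assumptions based on the surrounding functions, not documented for this call:

```elixir
variables = %{"Language" => "French", "Text" => "Hello"}

case PhoenixKit.Modules.AI.validate_prompt_variables(prompt_uuid, variables) do
  :ok -> PhoenixKit.Modules.AI.render_prompt(prompt_uuid, variables)
  {:error, reason} -> {:error, reason}
end
```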