Main context for the PhoenixKit AI system.
Provides AI endpoint management and usage tracking for API requests.
Architecture
Each Endpoint is a unified configuration that combines:
- Provider credentials (api_key, base_url, provider_settings)
- Model selection (single model per endpoint)
- Generation parameters (temperature, max_tokens, etc.)
Users create as many endpoints as needed, each representing one complete AI configuration ready for making API requests.
Core Functions
System Management
- enabled?/0 - Check if AI module is enabled
- enable_system/0 - Enable the AI module
- disable_system/0 - Disable the AI module
- get_config/0 - Get module configuration with statistics
Endpoint CRUD
- list_endpoints/1 - List all endpoints with filters
- get_endpoint!/1 - Get endpoint by UUID (raises)
- get_endpoint/1 - Get endpoint by UUID
- create_endpoint/1 - Create new endpoint
- update_endpoint/2 - Update existing endpoint
- delete_endpoint/1 - Delete endpoint
Completion API
- ask/3 - Simple single-turn completion
- complete/3 - Multi-turn chat completion
- embed/3 - Generate embeddings
Usage Tracking
- list_requests/1 - List requests with pagination/filters
- create_request/1 - Log a new request
- get_usage_stats/1 - Get aggregated statistics
- get_dashboard_stats/0 - Get stats for dashboard display
Usage Examples
# Enable the module
PhoenixKit.Modules.AI.enable_system()
# Create an endpoint
{:ok, endpoint} = PhoenixKit.Modules.AI.create_endpoint(%{
name: "Claude Fast",
provider: "openrouter",
api_key: "sk-or-v1-...",
model: "anthropic/claude-3-haiku",
temperature: 0.7
})
# Use the endpoint
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint.uuid, "Hello!")
# Extract the response text
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
Summary
Functions
Simple helper for single-turn chat completion.
Makes an AI completion using a prompt template.
Returns an endpoint changeset for use in forms.
Returns a prompt changeset for use in forms.
Makes a chat completion request using a configured endpoint.
Makes an AI completion with a prompt template as the system message.
Counts the number of enabled endpoints.
Counts the number of enabled prompts.
Counts the total number of endpoints.
Counts the total number of prompts.
Counts the total number of requests.
Creates a new AI endpoint.
Creates a new AI prompt.
Creates a new AI request record.
Deletes an AI endpoint.
Deletes an AI prompt.
Disables a prompt.
Disables the AI module.
Duplicates a prompt with a new name.
Makes an embeddings request using a configured endpoint.
Enables a prompt.
Enables the AI module.
Checks if the AI module is enabled.
Returns the PubSub topic for AI endpoints. Subscribe to this topic to receive real-time updates.
Extracts the text content from a completion response.
Extracts usage information from a response.
Gets the AI module configuration with statistics.
Gets dashboard statistics for display.
Gets a single endpoint by UUID.
Gets a single endpoint by UUID.
Returns usage statistics for each endpoint.
Gets a single prompt by UUID.
Gets a single prompt by UUID.
Gets a prompt by slug.
Gets usage statistics for all prompts.
Gets the variables defined in a prompt.
Finds all prompts that use a specific variable.
Gets a single request by UUID.
Gets a single request by UUID.
Returns filter options for requests (distinct endpoints, models, and sources).
Gets request counts grouped by day.
Gets token usage grouped by model.
Gets aggregated usage statistics.
Increments the usage count for a prompt and updates last_used_at.
Lists only enabled prompts.
Lists all AI endpoints.
Lists all AI prompts.
Lists AI requests with pagination and filters.
Marks an endpoint as validated by updating its last_validated_at timestamp.
Previews a rendered prompt without making an AI call.
Returns the PubSub topic for AI prompts.
Increments the usage count for a prompt and updates last_used_at.
Renders a prompt by replacing variables with provided values.
Updates the sort order for multiple prompts.
Returns the PubSub topic for AI requests/usage.
Resets the usage statistics for a prompt.
Resolves an endpoint from an ID (UUID string) or Endpoint struct.
Resolves a prompt from various input types.
Searches prompts by name, description, or content.
Subscribes the current process to AI endpoint changes.
Subscribes the current process to AI prompt changes.
Subscribes the current process to AI request/usage changes.
Sums the total tokens used across all requests.
Updates an existing AI endpoint.
Updates an existing AI prompt.
Validates that a prompt is ready for use.
Validates that the content has valid variable syntax.
Validates that all required variables are provided for a prompt.
Functions
Simple helper for single-turn chat completion.
Parameters
- endpoint_uuid - Endpoint UUID string or Endpoint struct
- prompt - User prompt string
- opts - Optional parameter overrides and system message
Options
All options from complete/3 plus:
- :system - System message string
- :source - Override auto-detected source for request tracking
Examples
# Simple question
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "What is the capital of France?")
# With system message
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Translate: Hello",
system: "You are a translator. Translate to French."
)
# With custom source for tracking
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!",
source: "Languages"
)
# Extract just the text content
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!")
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
Returns
Same as complete/3
Makes an AI completion using a prompt template.
The prompt content is rendered with the provided variables and sent as the user message.
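A sketch of a call shape for this function. The function name (ask_with_prompt) and the :variables option are illustrative assumptions, not confirmed by this documentation; check the actual signature.

```elixir
# Hypothetical call shape -- the function name and :variables option are assumptions.
# Given a prompt whose content is
# "Translate the following text to {{Language}}:\n\n{{Text}}",
# the rendered result is sent as the user message.
{:ok, response} =
  PhoenixKit.Modules.AI.ask_with_prompt(endpoint_uuid, "translator",
    variables: %{"Language" => "French", "Text" => "Good morning"}
  )

{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
```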
Returns an endpoint changeset for use in forms.
Returns a prompt changeset for use in forms.
Makes a chat completion request using a configured endpoint.
Parameters
- endpoint_uuid - Endpoint UUID string or Endpoint struct
- messages - List of message maps with :role and :content
- opts - Optional parameter overrides
Options
All standard completion parameters plus:
:source- Override auto-detected source for request tracking
Examples
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, [
%{role: "user", content: "Hello!"}
])
# With system message
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, [
%{role: "system", content: "You are a helpful assistant."},
%{role: "user", content: "What is 2+2?"}
])
# With parameter overrides
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages,
temperature: 0.5,
max_tokens: 500
)
# With custom source for tracking
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages,
source: "MyModule"
)
Returns
- {:ok, response} - Full API response including usage stats
- {:error, reason} - Error with reason string
Makes an AI completion with a prompt template as the system message.
The prompt is rendered and used as the system message, with the user_message as the user message.
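A sketch of a call shape for this function. The function name (complete_with_prompt) and the :variables option are assumptions; only user_message is named in the description above.

```elixir
# Hypothetical call shape -- the function name and :variables option are assumptions.
# The rendered prompt template becomes the system message;
# user_message is sent as the user message.
{:ok, response} =
  PhoenixKit.Modules.AI.complete_with_prompt(endpoint_uuid, "translator",
    "Good morning",
    variables: %{"Language" => "French"}
  )
```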
Counts the number of enabled endpoints.
Counts the number of enabled prompts.
Counts the total number of endpoints.
Counts the total number of prompts.
Counts the total number of requests.
Creates a new AI endpoint.
Examples
{:ok, endpoint} = PhoenixKit.Modules.AI.create_endpoint(%{
name: "Claude Fast",
provider: "openrouter",
api_key: "sk-or-v1-...",
model: "anthropic/claude-3-haiku",
temperature: 0.7
})
Creates a new AI prompt.
Examples
{:ok, prompt} = PhoenixKit.Modules.AI.create_prompt(%{
name: "Translator",
content: "Translate the following text to {{Language}}:\n\n{{Text}}"
})
Creates a new AI request record.
Used to log every AI API call for tracking and statistics.
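A minimal sketch of logging a request manually. The attribute names below are plausible guesses based on the statistics this module reports (status, model, source, token counts); consult the request schema for the real fields.

```elixir
# Field names are assumptions inferred from the usage statistics
# this module exposes -- verify against the Request schema.
{:ok, request} =
  PhoenixKit.Modules.AI.create_request(%{
    endpoint_uuid: endpoint.uuid,
    model: "anthropic/claude-3-haiku",
    status: "success",
    source: "MyModule",
    prompt_tokens: 10,
    completion_tokens: 15,
    total_tokens: 25
  })
```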
Deletes an AI endpoint.
Deletes an AI prompt.
Disables a prompt.
Disables the AI module.
Duplicates a prompt with a new name.
Makes an embeddings request using a configured endpoint.
Parameters
- endpoint_uuid - Endpoint UUID string or Endpoint struct
- input - Text or list of texts to embed
- opts - Optional parameter overrides
Options
- :dimensions - Override embedding dimensions
- :source - Override auto-detected source for request tracking
Examples
# Single text
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello, world!")
# Multiple texts
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, ["Hello", "World"])
# With dimension override
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello", dimensions: 512)
# With custom source for tracking
{:ok, response} = PhoenixKit.Modules.AI.embed(endpoint_uuid, "Hello",
source: "SemanticSearch"
)
Returns
- {:ok, response} - Response with embeddings
- {:error, reason} - Error with reason
Enables a prompt.
Enables the AI module.
Checks if the AI module is enabled.
Returns the PubSub topic for AI endpoints. Subscribe to this topic to receive real-time updates.
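For example, subscribing directly with Phoenix.PubSub. The topic function name (endpoints_topic) and the PubSub server (MyApp.PubSub) are assumptions; the module also provides a subscribe helper that wraps this.

```elixir
# Hypothetical names: endpoints_topic/0 for this function and
# MyApp.PubSub for your application's PubSub server.
Phoenix.PubSub.subscribe(MyApp.PubSub, PhoenixKit.Modules.AI.endpoints_topic())
```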
Extracts the text content from a completion response.
Examples
{:ok, response} = PhoenixKit.Modules.AI.ask(endpoint_uuid, "Hello!")
{:ok, text} = PhoenixKit.Modules.AI.extract_content(response)
# => "Hello! How can I help you today?"
Extracts usage information from a response.
Examples
{:ok, response} = PhoenixKit.Modules.AI.complete(endpoint_uuid, messages)
usage = PhoenixKit.Modules.AI.extract_usage(response)
# => %{prompt_tokens: 10, completion_tokens: 15, total_tokens: 25}
Gets the AI module configuration with statistics.
Gets dashboard statistics for display.
Returns stats for the last 30 days plus all-time totals.
Gets a single endpoint by UUID.
Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").
Returns nil if the endpoint does not exist.
Gets a single endpoint by UUID.
Raises Ecto.NoResultsError if the endpoint does not exist.
Returns usage statistics for each endpoint.
Returns a map of endpoint_uuid => %{request_count, total_tokens, total_cost, last_used_at}
Gets a single prompt by UUID.
Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").
Returns nil if the prompt does not exist.
Gets a single prompt by UUID.
Raises Ecto.NoResultsError if the prompt does not exist.
Gets a prompt by slug.
Returns nil if the prompt does not exist.
Gets usage statistics for all prompts.
Gets the variables defined in a prompt.
Finds all prompts that use a specific variable.
Gets a single request by UUID.
Accepts a UUID string (e.g., "550e8400-e29b-41d4-a716-446655440000").
Returns nil if the request does not exist.
Gets a single request by UUID.
Raises Ecto.NoResultsError if the request does not exist.
Returns filter options for requests (distinct endpoints, models, and sources).
Gets request counts grouped by day.
Gets token usage grouped by model.
Gets aggregated usage statistics.
Options
- :since - Start date for statistics
- :until - End date for statistics
- :endpoint_uuid - Filter by endpoint
Returns
Map with statistics including total_requests, total_tokens, success_rate, etc.
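For example (the exact date type accepted by :since is not specified here, so Date is an assumption):

```elixir
# Stats for the last 7 days, scoped to one endpoint.
# The Date type for :since is an assumption.
stats =
  PhoenixKit.Modules.AI.get_usage_stats(
    since: Date.add(Date.utc_today(), -7),
    endpoint_uuid: endpoint.uuid
  )

stats.total_requests
stats.total_tokens
stats.success_rate
```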
Increments the usage count for a prompt and updates last_used_at.
Lists only enabled prompts.
Convenience wrapper for list_prompts(enabled: true).
Examples
PhoenixKit.Modules.AI.list_enabled_prompts()
Lists all AI endpoints.
Options
- :provider - Filter by provider type
- :enabled - Filter by enabled status
- :preload - Associations to preload
Examples
PhoenixKit.Modules.AI.list_endpoints()
PhoenixKit.Modules.AI.list_endpoints(provider: "openrouter", enabled: true)
Lists all AI prompts.
Options
- :sort_by - Field to sort by (default: :sort_order)
- :sort_dir - Sort direction, :asc or :desc (default: :asc)
- :enabled - Filter by enabled status
Examples
PhoenixKit.Modules.AI.list_prompts()
PhoenixKit.Modules.AI.list_prompts(sort_by: :name, sort_dir: :asc)
PhoenixKit.Modules.AI.list_prompts(enabled: true)
Lists AI requests with pagination and filters.
Options
- :page - Page number (default: 1)
- :page_size - Results per page (default: 20)
- :endpoint_uuid - Filter by endpoint
- :user_uuid - Filter by user
- :status - Filter by status
- :model - Filter by model
- :source - Filter by source (from metadata)
- :since - Filter by date (requests after this date)
- :preload - Associations to preload
Returns
{requests, total_count}
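For example (the "success" status value is an assumption; check which statuses are actually stored):

```elixir
# Paginated, filtered request log. The status value is an assumption.
{requests, total_count} =
  PhoenixKit.Modules.AI.list_requests(
    page: 1,
    page_size: 50,
    endpoint_uuid: endpoint.uuid,
    status: "success",
    since: ~U[2025-01-01 00:00:00Z]
  )
```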
Marks an endpoint as validated by updating its last_validated_at timestamp.
Previews a rendered prompt without making an AI call.
Returns the PubSub topic for AI prompts.
Increments the usage count for a prompt and updates last_used_at.
Renders a prompt by replacing variables with provided values.
Returns {:ok, rendered_text} or {:error, reason}.
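A sketch of a call shape, assuming the function is named render_prompt and variables are passed as a map keyed by the {{Variable}} names used in the content:

```elixir
# Hypothetical call shape -- the function name and variables-map
# argument are assumptions.
{:ok, rendered} =
  PhoenixKit.Modules.AI.render_prompt(prompt, %{
    "Language" => "French",
    "Text" => "Hello"
  })
```

With the "Translator" prompt from the create_prompt example above, rendered would read "Translate the following text to French:" followed by "Hello".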
Updates the sort order for multiple prompts.
Accepts prompt UUIDs.
Returns the PubSub topic for AI requests/usage.
Resets the usage statistics for a prompt.
Resolves an endpoint from an ID (UUID string) or Endpoint struct.
Examples
{:ok, endpoint} = PhoenixKit.Modules.AI.resolve_endpoint("019abc12-3456-7def-8901-234567890abc")
{:ok, endpoint} = PhoenixKit.Modules.AI.resolve_endpoint(endpoint)
Resolves a prompt from various input types.
Accepts:
- UUID string (e.g., "019abc12-3456-7def-8901-234567890abc")
- String slug (e.g., "my-prompt")
- Prompt struct (returned as-is)
Returns {:ok, prompt} or {:error, reason}.
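For example (the function name resolve_prompt is inferred by analogy with resolve_endpoint and is not confirmed by this documentation):

```elixir
# Hypothetical name, inferred from resolve_endpoint/1.
{:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt("019abc12-3456-7def-8901-234567890abc")
{:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt("my-prompt")
{:ok, prompt} = PhoenixKit.Modules.AI.resolve_prompt(prompt)
```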
Searches prompts by name, description, or content.
Subscribes the current process to AI endpoint changes.
Subscribes the current process to AI prompt changes.
Subscribes the current process to AI request/usage changes.
Sums the total tokens used across all requests.
Updates an existing AI endpoint.
Updates an existing AI prompt.
Validates that a prompt is ready for use.
Returns {:ok, prompt} if valid, or {:error, reason} if not.
Validates that the content has valid variable syntax.
Validates that all required variables are provided for a prompt.