Rag.Ai.Codex (rag v0.3.4)


Codex/OpenAI provider implementation using codex_sdk.

This provider supports text generation with advanced reasoning capabilities and structured output. It does not support embeddings; use the Gemini provider for that.

Note: This module is only available when codex_sdk is installed.

Examples

# Basic text generation
provider = Codex.new(%{})
{:ok, response} = Codex.generate_text(provider, "Hello!", [])

# Structured output
{:ok, response} = Codex.generate_text(provider, "...", output_schema: schema)
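# Since Codex does not support embeddings, callers can branch on the
# capability flags when wiring up providers. A minimal sketch; the
# Rag.Ai.Gemini module name used as the fallback is an assumption:
defmodule MyApp.Providers do
  alias Rag.Ai.Codex

  # Use Codex for text generation, but route embedding work elsewhere.
  def embedding_provider do
    if Codex.supports_embeddings?() do
      Codex.new(%{})
    else
      # Assumed embedding-capable provider module.
      Rag.Ai.Gemini.new(%{})
    end
  end
end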

Summary

Functions

cost_per_1k_tokens()
Returns the cost per 1K tokens as {input_cost, output_cost} in USD.

max_context_tokens()
Returns the maximum context window in tokens.

supports_embeddings?()
Returns whether this provider supports embeddings.

supports_tools?()
Returns whether this provider supports tool calling.

Types

t()

@type t() :: %Rag.Ai.Codex{
  model: String.t(),
  reasoning_effort: :low | :medium | :high,
  thread: pid() | reference()
}
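For illustration, a struct matching the type above could be built directly; the model string is a placeholder, not a documented value:

# Assumed construction; field names follow the t() type above.
provider = %Rag.Ai.Codex{
  model: "some-model",        # placeholder model identifier
  reasoning_effort: :medium,
  thread: self()
}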

Functions

cost_per_1k_tokens()

@spec cost_per_1k_tokens() :: {float(), float()}

Returns the cost per 1K tokens as {input_cost, output_cost} in USD.
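A sketch of how the returned tuple could be used to estimate a request's cost; the token counts here are illustrative, not taken from a real response:

{input_cost, output_cost} = Rag.Ai.Codex.cost_per_1k_tokens()

# Estimated USD cost for 2_000 prompt tokens and 500 completion tokens.
estimate = input_cost * 2_000 / 1_000 + output_cost * 500 / 1_000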

max_context_tokens()

@spec max_context_tokens() :: pos_integer()

Returns the maximum context window in tokens.
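A sketch of guarding against oversized prompts with this value; estimate_tokens/1 is a hypothetical helper (rough ~4 characters per token heuristic), not part of this library:

defmodule MyApp.ContextCheck do
  # Hypothetical token estimator; real usage should count tokens
  # with the provider's own tokenizer.
  defp estimate_tokens(text), do: div(String.length(text), 4)

  # Reject prompts that would exceed the context window.
  def fits?(prompt) do
    estimate_tokens(prompt) <= Rag.Ai.Codex.max_context_tokens()
  end
end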

supports_embeddings?()

@spec supports_embeddings?() :: boolean()

Returns whether this provider supports embeddings.

supports_tools?()

@spec supports_tools?() :: boolean()

Returns whether this provider supports tool calling.