config() = #{endpoint => string(),
             api_key => binary(),
             model => binary(),
             temperature => float(),
             max_tokens => integer(),
             system_prompt => binary(),
             additional_options => map()}
message() = #{role => binary(), content => binary()}
messages() = [message()]
openai_result() = {ok, binary()} | {error, term()}
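For orientation, a value matching the message()/messages() shapes above can be built directly as Erlang maps; the role names shown follow the OpenAI chat convention and are an assumption here:

```erlang
%% A minimal messages() value; role names assume the OpenAI chat convention.
Messages = [#{role => <<"system">>,
              content => <<"You are a helpful assistant.">>},
            #{role => <<"user">>,
              content => <<"Explain OTP supervision trees briefly.">>}].
```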
| Function | Description |
| --- | --- |
| chat/1 | Chat completion using messages format with default/environment configuration. |
| chat/2 | Chat completion using messages format with custom configuration. |
| default_config/0 | Get default hardcoded configuration. |
| format_prompt/2 | Format a prompt template with given arguments. |
| generate/1 | Generate text using a simple prompt with default/environment configuration. |
| generate/2 | Generate text using a simple prompt with custom configuration. |
| generate_with_context/2 | Generate text with additional context using default/environment configuration. |
| generate_with_context/3 | Generate text with additional context using custom configuration. |
| get_env_config/0 | Get configuration from environment variables with fallback to defaults. |
| merge_config/2 | Merge two configurations, with the second one taking precedence. |
| print_result/1 | Print the result of an OpenAI operation to stdout. |
chat(Messages::messages()) -> openai_result()
Chat completion using messages format with default/environment configuration.
chat(Messages::messages(), Config::config()) -> openai_result()
Chat completion using messages format with custom configuration.
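A typical call might look like the following sketch; the module name `openai` is an assumption (substitute the actual module), and a valid API key is needed for the request to succeed:

```erlang
%% Sketch: module name `openai` is assumed, not confirmed by this doc.
Config = openai:merge_config(openai:default_config(),
                             #{model => <<"gpt-4o-mini">>,
                               temperature => 0.2}),
Messages = [#{role => <<"user">>, content => <<"Hello!">>}],
case openai:chat(Messages, Config) of
    {ok, Reply}     -> io:format("~ts~n", [Reply]);
    {error, Reason} -> io:format("chat failed: ~p~n", [Reason])
end.
```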
default_config() -> config()
Get default hardcoded configuration.
format_prompt(Template::string(), Args::list()) -> binary()
Format a prompt template with the given arguments. Similar to io_lib:format/2 but returns a binary.
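Given the description, the behaviour can be pictured as a thin wrapper over io_lib:format/2; this is a sketch of the documented semantics, not necessarily the real implementation:

```erlang
%% Sketch of the documented behaviour: format, then convert to binary.
format_prompt(Template, Args) ->
    iolist_to_binary(io_lib:format(Template, Args)).
```

Under this sketch, `format_prompt("Hello, ~s!", ["world"])` yields `<<"Hello, world!">>`.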
generate(Prompt::string() | binary()) -> openai_result()
Generate text using a simple prompt with default/environment configuration.
generate(Prompt::string() | binary(), Config::config()) -> openai_result()
Generate text using a simple prompt with custom configuration.
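A sketch of generate/2 with an explicit config override (the module name `openai` is an assumption):

```erlang
%% Sketch: overlay max_tokens on top of the environment-derived config.
Config = maps:merge(openai:get_env_config(), #{max_tokens => 200}),
case openai:generate(<<"Write a haiku about Erlang.">>, Config) of
    {ok, Text}      -> io:format("~ts~n", [Text]);
    {error, Reason} -> io:format("generate failed: ~p~n", [Reason])
end.
```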
generate_with_context(Context::string() | binary(), Prompt::string() | binary()) -> openai_result()
Generate text with additional context using default/environment configuration.
generate_with_context(Context::string() | binary(), Prompt::string() | binary(), Config::config()) -> openai_result()
Generate text with additional context using custom configuration.
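A sketch of a call with a retrieved document as context (module name `openai` assumed; how the context is injected into the request is not specified by this doc):

```erlang
%% Sketch: context plus question, returning an openai_result().
Context = <<"The release notes say version 2.1 removed the legacy API.">>,
Prompt  = <<"Which API was removed, and in which version?">>,
openai:generate_with_context(Context, Prompt).
%% -> {ok, Answer} on success, {error, Reason} otherwise.
```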
get_env_config() -> config()
Get configuration from environment variables with fallback to defaults.

Environment variables:
- OPENAI_API_KEY: OpenAI API key (required)
- OPENAI_ENDPOINT: OpenAI API endpoint (default: https://api.openai.com/v1/chat/completions)
- OPENAI_MODEL: Model name to use (default: gpt-4o)
- OPENAI_TEMPERATURE: Temperature for generation (default: 0.7)
- OPENAI_MAX_TOKENS: Maximum tokens to generate (default: 1000)
- OPENAI_SYSTEM_PROMPT: System prompt to use
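The variables can also be set from within the VM for testing; a sketch (module name `openai` assumed, key omitted here):

```erlang
%% os:putenv/2 affects the current VM only and returns true on success.
true = os:putenv("OPENAI_MODEL", "gpt-4o-mini"),
true = os:putenv("OPENAI_MAX_TOKENS", "500"),
%% Unset variables fall back to the defaults listed above.
Config = openai:get_env_config().
```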
merge_config(Config1::config(), Config2::config()) -> config()
Merge two configurations, with the second one taking precedence.
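Given the map-based config() type, the documented semantics match maps:merge/2, where values from the second map win on conflicting keys; a sketch, not necessarily the real implementation:

```erlang
%% Sketch of the documented semantics: keys in Override take precedence.
merge_config(Base, Override) ->
    maps:merge(Base, Override).
```

For example, merging `#{temperature => 0.7}` with `#{temperature => 0.2}` yields a config with temperature 0.2.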
print_result(X1::openai_result()) -> ok | error
Print the result of an OpenAI operation to stdout. Returns 'ok' if successful, 'error' otherwise.
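A sketch tying generate/1 and print_result/1 together (module name `openai` assumed):

```erlang
%% print_result/1 prints the reply or the error and reports which it was.
case openai:print_result(openai:generate(<<"Say hi.">>)) of
    ok    -> done;
    error -> give_up
end.
```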
Generated by EDoc