Module openai_handler

OpenAI Handler Library. A generic library for interacting with the OpenAI API in a simple and flexible way.

Description

OpenAI Handler Library. A generic library for interacting with the OpenAI API in a simple and flexible way. It supports both a default configuration and a custom configuration per request, and environment variables can be used to override the default settings. It uses the OTP 27 json module for JSON encoding/decoding and httpc for HTTP requests.
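
A minimal usage sketch, assuming the module is compiled and OPENAI_API_KEY is set in the environment (the prompt text is illustrative):

    %% Generate a completion with the default, environment-derived settings.
    {ok, Reply} = openai_handler:generate(<<"Say hello in one sentence.">>),
    io:format("~s~n", [Reply]).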

Data Types

config()

config() = #{endpoint => string(), api_key => binary(), model => binary(), temperature => float(), max_tokens => integer(), system_prompt => binary(), additional_options => map()}
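
An illustrative config() value; the keys follow the type above, and all concrete values here are placeholders:

    Config = #{endpoint => "https://api.openai.com/v1/chat/completions",
               api_key => <<"my-api-key">>,  %% placeholder, not a real key
               model => <<"gpt-4o">>,
               temperature => 0.7,
               max_tokens => 1000,
               system_prompt => <<"You are a helpful assistant.">>,
               additional_options => #{}}.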

message()

message() = #{role => binary(), content => binary()}

messages()

messages() = [message()]
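
An illustrative messages() list; the role values follow the usual OpenAI chat convention (system, user, assistant), which the types themselves do not mandate:

    Messages = [#{role => <<"system">>, content => <<"Answer tersely.">>},
                #{role => <<"user">>, content => <<"What is OTP?">>}].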

openai_result()

openai_result() = {ok, binary()} | {error, term()}
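
Callers typically pattern match on the tagged result, along these lines:

    case openai_handler:generate(<<"ping">>) of
        {ok, Text} -> io:format("~s~n", [Text]);
        {error, Reason} -> io:format("request failed: ~p~n", [Reason])
    end.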

Function Index

chat/1
chat/2
default_config/0
format_prompt/2
generate/1
generate/2
generate_with_context/2
generate_with_context/3
get_env_config/0 Get configuration from environment variables with fallback to defaults.
merge_config/2
print_result/1

Function Details

chat/1

chat(Messages::messages()) -> openai_result()

chat/2

chat(Messages::messages(), Config::config()) -> openai_result()
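
A sketch covering both arities; the override map passed to chat/2 is hypothetical and assumes per-request keys are merged over the defaults:

    Messages = [#{role => <<"user">>, content => <<"Summarise OTP in one line.">>}],
    {ok, _Reply1} = openai_handler:chat(Messages),
    %% Same conversation, this time with per-request overrides.
    {ok, _Reply2} = openai_handler:chat(Messages, #{model => <<"gpt-4o-mini">>,
                                                    temperature => 0.2}).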

default_config/0

default_config() -> config()

format_prompt/2

format_prompt(Template::string(), Args::list()) -> binary()
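
The spec (a string template plus an argument list, returning a binary) suggests io_lib:format/2-style templating; assuming ~s-style control sequences, usage might look like:

    Prompt = openai_handler:format_prompt("Translate ~s into ~s.", ["hello", "French"]),
    {ok, _Translation} = openai_handler:generate(Prompt).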

generate/1

generate(Prompt::string() | binary()) -> openai_result()

generate/2

generate(Prompt::string() | binary(), Config::config()) -> openai_result()
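
A sketch of both arities; the override map given to generate/2 is illustrative:

    {ok, _Haiku} = openai_handler:generate("Write a haiku about Erlang."),
    {ok, _Short} = openai_handler:generate(<<"Now one word only.">>, #{max_tokens => 50}).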

generate_with_context/2

generate_with_context(Context::string() | binary(), Prompt::string() | binary()) -> openai_result()

generate_with_context/3

generate_with_context(Context::string() | binary(), Prompt::string() | binary(), Config::config()) -> openai_result()
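
A sketch of generate_with_context/2, on the assumption that Context is background text the library folds into the request (the spec fixes only the argument order):

    Context = <<"The user is reading the OTP 27 release notes.">>,
    {ok, _Answer} = openai_handler:generate_with_context(Context, <<"What does the json module do?">>).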

get_env_config/0

get_env_config() -> config()

Get configuration from environment variables with fallback to defaults.

Environment variables:

- OPENAI_API_KEY: OpenAI API key (required)
- OPENAI_ENDPOINT: OpenAI API endpoint (default: https://api.openai.com/v1/chat/completions)
- OPENAI_MODEL: Model name to use (default: gpt-4o)
- OPENAI_TEMPERATURE: Temperature for generation (default: 0.7)
- OPENAI_MAX_TOKENS: Maximum tokens to generate (default: 1000)
- OPENAI_SYSTEM_PROMPT: System prompt to use
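
A sketch of setting variables from Erlang before reading the configuration; os:putenv/2 is standard OTP, and the values are placeholders:

    true = os:putenv("OPENAI_API_KEY", "my-api-key"),
    true = os:putenv("OPENAI_MODEL", "gpt-4o-mini"),
    Config = openai_handler:get_env_config().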

merge_config/2

merge_config(BaseConfig::config(), OverrideConfig::config()) -> config()
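
A sketch of layering a request-specific override onto the defaults; that keys in OverrideConfig win is an assumption drawn from the argument names:

    Base = openai_handler:default_config(),
    Config = openai_handler:merge_config(Base, #{temperature => 0.0}),
    {ok, _Reply} = openai_handler:generate(<<"Be deterministic.">>, Config).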

print_result/1

print_result(X1::openai_result()) -> ok | error
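
A sketch piping a result straight into print_result/1, which, judging by its spec, prints the reply or the error and returns ok or error accordingly:

    openai_handler:print_result(openai_handler:generate(<<"hello">>)).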


Generated by EDoc