config() = #{endpoint => string(), api_key => binary(), model => binary(), temperature => float(), max_tokens => integer(), system_prompt => binary(), additional_options => map()}
message() = #{role => binary(), content => binary()}
messages() = [message()]
openai_result() = {ok, binary()} | {error, term()}
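For orientation, here is a minimal sketch of values inhabiting these types. The map keys come from the definitions above; the concrete values are illustrative placeholders (the temperature and max_tokens values mirror the documented environment defaults):

```erlang
%% A config() map; keys follow the config() type above,
%% values are illustrative placeholders.
Config = #{endpoint => "https://api.openai.com/v1/chat/completions",
           api_key => <<"sk-...">>,
           model => <<"gpt-4o">>,
           temperature => 0.7,
           max_tokens => 1000,
           system_prompt => <<"You are a helpful assistant.">>,
           additional_options => #{}},

%% A messages() value: a list of message() maps, each with a role and content.
Messages = [#{role => <<"system">>, content => <<"Be terse.">>},
            #{role => <<"user">>,   content => <<"Hello!">>}].
```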
| Function | Description |
| --- | --- |
| chat/1 | Send a list of chat messages to the API. |
| chat/2 | Send a list of chat messages with an explicit config(). |
| default_config/0 | Return the default config(). |
| format_prompt/2 | Build a prompt binary from a template string and an argument list. |
| generate/1 | Generate a completion for a single prompt. |
| generate/2 | Generate a completion for a single prompt with an explicit config(). |
| generate_with_context/2 | Generate a completion for a prompt plus supporting context. |
| generate_with_context/3 | Same as generate_with_context/2, with an explicit config(). |
| get_env_config/0 | Get configuration from environment variables with fallback to defaults. |
| merge_config/2 | Merge two config() maps. |
| print_result/1 | Print an openai_result() and return ok or error. |
chat(Messages::messages()) -> openai_result()
chat(Messages::messages(), Config::config()) -> openai_result()
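A usage sketch for chat/1; the module name `openai_client` is an assumption (this EDoc export does not include it), and the result shape follows openai_result() above:

```erlang
%% Module name openai_client is assumed (hypothetical).
Messages = [#{role => <<"user">>, content => <<"Say hello in Erlang.">>}],
case openai_client:chat(Messages) of
    {ok, Reply}     -> io:format("assistant: ~s~n", [Reply]);
    {error, Reason} -> io:format("request failed: ~p~n", [Reason])
end.
```

Passing a config() as the second argument to chat/2 overrides whatever configuration chat/1 would otherwise use.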
default_config() -> config()
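default_config/0 pairs naturally with an ordinary map update (or with merge_config/2 from the index above) to override individual fields; a sketch, again assuming the module name:

```erlang
%% Start from library defaults, then override one field with a map update.
Base   = openai_client:default_config(),   %% module name assumed
Config = Base#{temperature => 0.2},
{ok, Text} = openai_client:generate(<<"One sentence on OTP supervisors.">>, Config).
```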
format_prompt(Template::string(), Args::list()) -> binary()
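The spec says format_prompt/2 takes a template string plus an argument list and returns a binary; the sketch below assumes io_lib:format-style control sequences, which is an assumption since only the spec is given:

```erlang
%% Assumes io_lib:format-style substitution; only the spec is documented.
Prompt = openai_client:format_prompt("Summarize ~s in ~p words.", ["Erlang", 30]),
true = is_binary(Prompt).
```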
generate(Prompt::string() | binary()) -> openai_result()
generate(Prompt::string() | binary(), Config::config()) -> openai_result()
generate_with_context(Context::string() | binary(), Prompt::string() | binary()) -> openai_result()
generate_with_context(Context::string() | binary(), Prompt::string() | binary(), Config::config()) -> openai_result()
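A sketch of the generate family: generate/1,2 wrap a single prompt, while generate_with_context/2,3 additionally pass background text alongside the prompt (how the context is injected into the request is not documented here). Prompts may be strings or binaries per the specs above:

```erlang
%% Module name openai_client is assumed (hypothetical).
{ok, Answer} = openai_client:generate("What is a gen_server?"),

Context = <<"We are discussing OTP behaviours.">>,
{ok, More} = openai_client:generate_with_context(Context, <<"Give a short example.">>).
```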
get_env_config() -> config()
Get configuration from environment variables with fallback to defaults.

Environment variables:

- `OPENAI_API_KEY`: OpenAI API key (required)
- `OPENAI_ENDPOINT`: OpenAI API endpoint (default: `https://api.openai.com/v1/chat/completions`)
- `OPENAI_MODEL`: Model name to use (default: `gpt-4o`)
- `OPENAI_TEMPERATURE`: Temperature for generation (default: 0.7)
- `OPENAI_MAX_TOKENS`: Maximum tokens to generate (default: 1000)
- `OPENAI_SYSTEM_PROMPT`: System prompt to use
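A quick-test sketch: the variables would normally be exported in the shell before starting the VM, but os:putenv/2 works from the Erlang shell (module name still assumed):

```erlang
%% os:putenv/2 sets variables for the current VM only; in production,
%% export OPENAI_API_KEY etc. before booting the node.
os:putenv("OPENAI_API_KEY", "sk-..."),
os:putenv("OPENAI_MODEL", "gpt-4o-mini"),
Config = openai_client:get_env_config(),   %% unset variables fall back to defaults
{ok, Text} = openai_client:generate(<<"ping">>, Config).
```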
print_result(X1::openai_result()) -> ok | error
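Per the spec, print_result/1 consumes an openai_result() and returns ok or error, which suggests piping a call result straight through it; a sketch:

```erlang
%% Prints the reply (or the error) and mirrors the outcome as ok | error
%% (inferred from the spec; the exact output format is not documented).
Result = openai_client:generate("Explain pattern matching briefly."),
Status = openai_client:print_result(Result).
```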
Generated by EDoc