ExOpenAI.Audio (ex_openai.ex v1.7.0)

Module for interacting with the audio group of OpenAI APIs.

API Reference: https://platform.openai.com/docs/api-reference/audio

Summary

Functions

Generates audio from the input text.

Transcribes audio into the input language.

Translates audio into English.

Functions

create_speech(input, model, voice, opts \\ [])

@spec create_speech(
  String.t(),
  (:"tts-1-hd" | :"tts-1") | String.t(),
  :shimmer | :nova | :onyx | :fable | :echo | :alloy,
  base_url: String.t(),
  openai_organization_key: String.t(),
  openai_api_key: String.t(),
  speed: float(),
  response_format: :pcm | :wav | :flac | :aac | :opus | :mp3,
  stream_to: (... -> any()) | pid()
) :: {:ok, String.t()} | {:error, any()}

Generates audio from the input text.

Endpoint: https://api.openai.com/v1/audio/speech

Method: POST

Docs: https://platform.openai.com/docs/api-reference/audio


Required Arguments:

  • input: The text to generate audio for. The maximum length is 4096 characters.

  • model: One of the available TTS models: tts-1 or tts-1-hd

  • voice: The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the Text to speech guide.

Optional Arguments:

  • stream_to: PID or function of where to stream content to

  • response_format: The format to return the audio in. Supported formats are mp3, opus, aac, flac, wav, and pcm.

  • speed: The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default.

  • openai_api_key: OpenAI API key to pass directly. If this is specified, it will override the api_key config value.

  • openai_organization_key: OpenAI organization key to pass directly. If this is specified, it will override the organization_key config value.

  • base_url: Which API endpoint to use as base, defaults to https://api.openai.com/v1
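
A minimal usage sketch for this function (assuming an api_key is configured for :ex_openai, or passed via the openai_api_key option); writing the returned audio binary to disk is purely for illustration:

    # Generate speech with the tts-1 model and the alloy voice.
    # On success the result is the raw audio data as a binary.
    {:ok, audio_binary} =
      ExOpenAI.Audio.create_speech(
        "Hello from ExOpenAI!",
        :"tts-1",
        :alloy,
        response_format: :mp3,
        speed: 1.0
      )

    # Write the mp3 bytes to a file (illustration only).
    File.write!("speech.mp3", audio_binary)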

create_transcription(file, model, opts \\ [])

@spec create_transcription(
  bitstring() | {String.t(), bitstring()},
  :"whisper-1" | String.t(),
  base_url: String.t(),
  openai_organization_key: String.t(),
  openai_api_key: String.t(),
  "timestamp_granularities[]": [:segment | :word],
  temperature: float(),
  response_format: :vtt | :verbose_json | :srt | :text | :json,
  prompt: String.t(),
  language: String.t(),
  stream_to: (... -> any()) | pid()
) ::
  ({:ok, ExOpenAI.Components.CreateTranscriptionResponseVerboseJson.t()}
   | {:ok, ExOpenAI.Components.CreateTranscriptionResponseJson.t()})
  | {:error, any()}

Transcribes audio into the input language.

Endpoint: https://api.openai.com/v1/audio/transcriptions

Method: POST

Docs: https://platform.openai.com/docs/api-reference/audio


Required Arguments:

  • file: The audio file contents (not the file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. (Pass in the file contents as a binary, e.g. from File.read!, or a {filename, binary} tuple to preserve the filename information, e.g. {"filename.ext", File.read!("/tmp/file.ext")})

  • model: ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available.

Optional Arguments:

  • stream_to: PID or function of where to stream content to

  • language: The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.

  • prompt: An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

  • response_format: The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

  • temperature: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

  • timestamp_granularities[]: The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: word or segment. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.

  • openai_api_key: OpenAI API key to pass directly. If this is specified, it will override the api_key config value.

  • openai_organization_key: OpenAI organization key to pass directly. If this is specified, it will override the organization_key config value.

  • base_url: Which API endpoint to use as base, defaults to https://api.openai.com/v1
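
A minimal usage sketch (assuming an API key is configured, and assuming the decoded json response struct exposes the transcript under a :text field as in the OpenAI response schema); the fixture path is hypothetical:

    # Read the raw audio bytes and pass them as a {filename, binary} tuple so the
    # API can infer the container format from the file extension.
    audio = File.read!("test/fixtures/hello.mp3")

    {:ok, transcription} =
      ExOpenAI.Audio.create_transcription(
        {"hello.mp3", audio},
        :"whisper-1",
        language: "en",
        response_format: :json,
        temperature: 0.0
      )

    # With response_format: :json the result decodes to an
    # ExOpenAI.Components.CreateTranscriptionResponseJson struct; print its text.
    IO.puts(transcription.text)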

create_translation(file, model, opts \\ [])

@spec create_translation(
  bitstring() | {String.t(), bitstring()},
  :"whisper-1" | String.t(),
  base_url: String.t(),
  openai_organization_key: String.t(),
  openai_api_key: String.t(),
  temperature: float(),
  response_format: String.t(),
  prompt: String.t(),
  stream_to: (... -> any()) | pid()
) ::
  ({:ok, ExOpenAI.Components.CreateTranslationResponseVerboseJson.t()}
   | {:ok, ExOpenAI.Components.CreateTranslationResponseJson.t()})
  | {:error, any()}

Translates audio into English.

Endpoint: https://api.openai.com/v1/audio/translations

Method: POST

Docs: https://platform.openai.com/docs/api-reference/audio


Required Arguments:

  • file: The audio file contents (not the file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. (Pass in the file contents as a binary, e.g. from File.read!, or a {filename, binary} tuple to preserve the filename information, e.g. {"filename.ext", File.read!("/tmp/file.ext")})

  • model: ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available.

Optional Arguments:

  • stream_to: PID or function of where to stream content to

  • prompt: An optional text to guide the model's style or continue a previous audio segment. The prompt should be in English.

  • response_format: The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

  • temperature: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

  • openai_api_key: OpenAI API key to pass directly. If this is specified, it will override the api_key config value.

  • openai_organization_key: OpenAI organization key to pass directly. If this is specified, it will override the organization_key config value.

  • base_url: Which API endpoint to use as base, defaults to https://api.openai.com/v1
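
A minimal usage sketch (assuming an API key is configured, and assuming the decoded response struct exposes the English text under a :text field as in the OpenAI response schema); the file name is hypothetical:

    # Translate non-English speech into English text.
    audio = File.read!("interview_de.mp3")

    {:ok, translation} =
      ExOpenAI.Audio.create_translation(
        {"interview_de.mp3", audio},
        :"whisper-1",
        prompt: "Transcript of a podcast interview.",
        temperature: 0.0
      )

    IO.puts(translation.text)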