ExOpenAI.Components.AudioTranscription (ex_openai.ex v2.0.0-beta2)
Module for representing the OpenAI schema AudioTranscription.
Fields
:language - optional - String.t()

The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

:model - optional - String.t() | :"whisper-1" | :"gpt-4o-mini-transcribe" | :"gpt-4o-mini-transcribe-2025-12-15" | :"gpt-4o-transcribe" | :"gpt-4o-transcribe-diarize"

The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels.

:prompt - optional - String.t()

An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free-text string, for example "expect words related to technology".
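As a minimal sketch of how the fields above fit together, the snippet below builds a plain map with these keys. It assumes only the field names and types listed in this schema; the request function that would consume such a map is not part of this module and is not shown.

```elixir
# Hypothetical parameter map matching the AudioTranscription schema above.
# All three fields are optional; atoms like :"whisper-1" mirror the typespec.
params = %{
  language: "en",                       # ISO-639-1 code improves accuracy and latency
  model: :"whisper-1",                  # whisper-1 expects a keyword-list style prompt
  prompt: "ExOpenAI, Elixir, OpenAI"    # keywords guiding the model's style
}
```

For a gpt-4o-transcribe model (other than gpt-4o-transcribe-diarize), the prompt would instead be free text, e.g. `prompt: "expect words related to technology"`.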