VoiceLiveModelFactory.AudioInputTranscriptionOptions Method

Definition

Creates a new AudioInputTranscriptionOptions instance, the configuration for input audio transcription.

C#
public static Azure.AI.VoiceLive.AudioInputTranscriptionOptions AudioInputTranscriptionOptions(Azure.AI.VoiceLive.AudioInputTranscriptionOptionsModel model = default, string language = default, System.Collections.Generic.IDictionary<string,string> customSpeech = default, System.Collections.Generic.IEnumerable<string> phraseList = default);

F#
static member AudioInputTranscriptionOptions : Azure.AI.VoiceLive.AudioInputTranscriptionOptionsModel * string * System.Collections.Generic.IDictionary<string, string> * seq<string> -> Azure.AI.VoiceLive.AudioInputTranscriptionOptions

Visual Basic
Public Shared Function AudioInputTranscriptionOptions (Optional model As AudioInputTranscriptionOptionsModel = Nothing, Optional language As String = Nothing, Optional customSpeech As IDictionary(Of String, String) = Nothing, Optional phraseList As IEnumerable(Of String) = Nothing) As AudioInputTranscriptionOptions

Parameters

model
AudioInputTranscriptionOptionsModel

The transcription model to use. Supported values: 'whisper-1', 'gpt-4o-transcribe', 'gpt-4o-mini-transcribe', 'azure-speech'.

language
String

Optional language code in BCP-47 format (e.g., 'en-US') or ISO 639-1 format (e.g., 'en'), or multiple comma-separated languages for automatic language detection (e.g., 'en,zh').

customSpeech
IDictionary<String,String>

Optional configuration for custom speech models.

phraseList
IEnumerable<String>

Optional list of phrase hints to bias recognition.

Returns

A new AudioInputTranscriptionOptions instance for mocking.
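
Examples

Model factory methods like this one let read-only output models be constructed directly in tests, without calling the service. The following is a minimal sketch of such a call; the string-based AudioInputTranscriptionOptionsModel constructor and the shape of the custom speech dictionary (locale to custom model id) are assumptions for illustration, not confirmed by this page.

using System.Collections.Generic;
using Azure.AI.VoiceLive;

// Build an AudioInputTranscriptionOptions instance for a unit test,
// bypassing the service entirely.
AudioInputTranscriptionOptions transcriptionOptions =
    VoiceLiveModelFactory.AudioInputTranscriptionOptions(
        // Assumption: the extensible enum accepts a raw string value.
        model: new AudioInputTranscriptionOptionsModel("whisper-1"),
        // BCP-47 code; "en,zh" would request multi-language auto detection.
        language: "en-US",
        // Hypothetical mapping of a locale to a custom speech model id.
        customSpeech: new Dictionary<string, string> { ["en-US"] = "<your-custom-model-id>" },
        // Phrase hints that bias recognition toward domain terms.
        phraseList: new[] { "VoiceLive", "Contoso" });

An instance built this way is typically handed to code under test in place of one the client library would otherwise produce.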

Applies to