Bumblebee.Audio.Whisper (Bumblebee v0.6.0)
Whisper model family.
Architectures
:base
- plain Whisper without any head on top
:for_conditional_generation
- Whisper with a language modeling head. The head returns logits for each token in the original sequence
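The architecture can be selected when loading the model. A minimal sketch, assuming the standard Bumblebee loading API (the repository name is only an example):

```elixir
# Load the conditional generation variant of Whisper. The :architecture
# option is assumed to work here as it does for other Bumblebee models.
{:ok, whisper} =
  Bumblebee.load_model({:hf, "openai/whisper-tiny"},
    architecture: :for_conditional_generation
  )

whisper.spec.architecture
#=> :for_conditional_generation
```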
Inputs
"input_features"
-{batch_size, input_length, feature_size}
Indices of input sequence tokens in the vocabulary.
"attention_head_mask"
- {encoder_num_blocks, encoder_num_attention_heads}
Mask to nullify selected heads of the self-attention blocks in the encoder.
"input_embeddings"
-{batch_size, sequence_length, hidden_size}
Embedded representation of
"input_features"
, which can be specified for more control over how"input_features"
are embedded than the model's internal embedding lookup. If"input_embeddings"
are present, then"input_features"
will be ignored."decoder_input_ids"
- {batch_size, target_sequence_length}
Indices of decoder input sequence tokens in the vocabulary.
"decoder_attention_mask"
- {batch_size, target_sequence_length}
Mask indicating which decoder tokens to attend to. This is used to ignore padding tokens, which are added when processing a batch of sequences of different lengths.
"decoder_attention_head_mask"
- {decoder_num_blocks, decoder_num_attention_heads}
Mask to nullify selected heads of the self-attention blocks in the decoder.
"decoder_input_embeddings"
- {batch_size, sequence_length, hidden_size}
Embedded representation of "decoder_input_ids", which can be specified for more control over how "decoder_input_ids" are embedded than the model's internal embedding lookup. If "decoder_input_embeddings" are present, then "decoder_input_ids" will be ignored.
"encoder_hidden_state"
- {batch_size, sequence_length, hidden_size}
Last hidden state output from the encoder. This hidden state is used in cross-attention blocks in the decoder. If specified, the model will skip the encoding process and use this value directly for cross-attentions in the decoder.
"cross_attention_head_mask"
- {decoder_num_blocks, decoder_num_attention_heads}
Mask to nullify selected heads of the cross-attention blocks in the decoder.
"cache"
A container with cached layer results used to speed up sequential decoding (autoregression). With cache, certain hidden states are taken from the cache, rather than recomputed on every decoding pass. The cache should be treated as opaque and initialized with Bumblebee.Text.Generation.init_cache/4.
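As an illustration, here is a hedged sketch of assembling these inputs by hand and running a single decoding step. The repository name, the audio tensor and the start token id are assumptions rather than values taken from this page, and the featurizer is assumed to accept a one-dimensional tensor of audio samples:

```elixir
{:ok, whisper} = Bumblebee.load_model({:hf, "openai/whisper-tiny"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "openai/whisper-tiny"})

# 30 seconds of silence at 16kHz, just to obtain a correctly shaped input
audio = Nx.broadcast(0.0, {16_000 * 30})

# Returns a map with "input_features" of shape {batch_size, input_length, feature_size}
inputs = Bumblebee.apply_featurizer(featurizer, audio)

# Seed the decoder with a single token id (50258 is assumed to be the
# <|startoftranscript|> token for the multilingual checkpoints)
inputs = Map.put(inputs, "decoder_input_ids", Nx.tensor([[50258]]))

outputs = Axon.predict(whisper.model, whisper.params, inputs)
# outputs.logits has shape {batch_size, target_sequence_length, vocab_size}
```

In practice the higher-level Bumblebee.Audio.speech_to_text_whisper serving takes care of featurization, autoregressive decoding and the cache.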
Global layer options
:output_hidden_states
- when true, the model output includes all hidden states
:output_attentions
- when true, the model output includes all attention weights
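A short sketch of enabling these options, assuming they can be set on the loaded specification with Bumblebee.configure/2 like the configuration attributes below:

```elixir
{:ok, spec} = Bumblebee.load_spec({:hf, "openai/whisper-tiny"})
spec = Bumblebee.configure(spec, output_hidden_states: true, output_attentions: true)
{:ok, whisper} = Bumblebee.load_model({:hf, "openai/whisper-tiny"}, spec: spec)
# The output map now also includes per-block hidden states and attention weights
```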
Configuration
:vocab_size
- the vocabulary size of the model. This corresponds to the number of distinct tokens that can be represented by the decoder. Defaults to 51865
:feature_size
- the dimensionality of the input features. This corresponds to the number of Mel bins in the preprocessed input. Defaults to 80
:encoder_max_positions
- the vocabulary size of the encoder position embedding. This corresponds to the maximum sequence length of log-mel filter-bank features that the model can process. Defaults to 1500
:decoder_max_positions
- the vocabulary size of the decoder position embedding. This corresponds to the maximum sequence length that this model can generate. Typically this is set to a large value just in case, such as 512, 1024 or 2048. Defaults to 448
:hidden_size
- the dimensionality of hidden layers. Defaults to 1024
:encoder_num_blocks
- the number of Transformer blocks in the encoder. Defaults to 12
:decoder_num_blocks
- the number of Transformer blocks in the decoder. Defaults to 12
:encoder_num_attention_heads
- the number of attention heads for each attention layer in the encoder. Defaults to 16
:decoder_num_attention_heads
- the number of attention heads for each attention layer in the decoder. Defaults to 16
:activation
- the activation function. Defaults to :gelu
:dropout_rate
- the dropout rate for encoder and decoder. Defaults to 0.1
:attention_dropout_rate
- the dropout rate for attention weights. Defaults to 0.0
:activation_dropout_rate
- the dropout rate for activations inside fully connected layers. Defaults to 0.0
:initializer_scale
- the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02
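These attributes live on the model specification, so they can be inspected and overridden before the model is built. A hedged sketch (the repository name and values are illustrative, and whisper-tiny is assumed to have 4 encoder blocks):

```elixir
{:ok, spec} = Bumblebee.load_spec({:hf, "openai/whisper-tiny"})

# Attributes are plain struct fields, so they can be inspected...
spec.encoder_num_blocks
#=> 4

# ...and overridden before loading the model
spec = Bumblebee.configure(spec, dropout_rate: 0.2)
{:ok, whisper} = Bumblebee.load_model({:hf, "openai/whisper-tiny"}, spec: spec)
```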