Bumblebee.Text.Albert (Bumblebee v0.5.3)
ALBERT model family.
Architectures
- :base - plain ALBERT without any head on top

- :for_masked_language_modeling - ALBERT with a language modeling head. The head returns logits for each token in the original sequence

- :for_sequence_classification - ALBERT with a sequence classification head. The head returns logits corresponding to possible classes

- :for_token_classification - ALBERT with a token classification head. The head returns logits for each token in the original sequence

- :for_question_answering - ALBERT with a span classification head. The head returns logits for the span start and end positions

- :for_multiple_choice - ALBERT with a multiple choice prediction head. Each input in the batch consists of several sequences to choose from and the model returns logits corresponding to those choices

- :for_pre_training - ALBERT with both MLM and NSP heads as done during the pre-training
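As a brief sketch, one of these architectures can be selected via the :architecture option of Bumblebee.load_model/2. The "albert-base-v2" repository name below is an assumption; any ALBERT checkpoint works the same way.

```elixir
# Sketch: load ALBERT with an explicit architecture (checkpoint name assumed).
{:ok, model_info} =
  Bumblebee.load_model({:hf, "albert-base-v2"},
    architecture: :for_masked_language_modeling
  )

model_info.spec.architecture
#=> :for_masked_language_modeling
```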
Inputs
"input_ids"-{batch_size, sequence_length}Indices of input sequence tokens in the vocabulary.
"attention_mask"-{batch_size, sequence_length}Mask indicating which tokens to attend to. This is used to ignore padding tokens, which are added when processing a batch of sequences with different length.
"token_type_ids"-{batch_size, sequence_length}Mask distinguishing groups in the input sequence. This is used in when the input sequence is a semantically a pair of sequences.
"position_ids"-{batch_size, sequence_length}Indices of positions of each input sequence tokens in the position embeddings.
Exceptions
The :for_multiple_choice model accepts groups of sequences, so the
expected sequence shape is {batch_size, num_choices, sequence_length}.
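As a sketch of that shape (again assuming the "albert-base-v2" checkpoint, whose multiple-choice head parameters may be randomly initialized), each choice can be tokenized as a pair and a leading batch axis added afterwards:

```elixir
{:ok, model_info} =
  Bumblebee.load_model({:hf, "albert-base-v2"}, architecture: :for_multiple_choice)

{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "albert-base-v2"})

question = "What is the capital of France?"
choices = ["Paris", "Berlin"]

# Tokenizing a list of pairs yields tensors of shape {num_choices, sequence_length}.
inputs = Bumblebee.apply_tokenizer(tokenizer, Enum.map(choices, &{question, &1}))

# Add the batch axis, giving {batch_size, num_choices, sequence_length}.
inputs = Map.new(inputs, fn {name, tensor} -> {name, Nx.new_axis(tensor, 0)} end)

outputs = Axon.predict(model_info.model, model_info.params, inputs)
# Shape: {batch_size, num_choices}
outputs.logits
```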
Configuration
- :vocab_size - the vocabulary size of the token embedding. This corresponds to the number of distinct tokens that can be represented in model input and output. Defaults to 30000

- :max_positions - the vocabulary size of the position embedding. This corresponds to the maximum sequence length that this model can process. Typically this is set to a large value just in case, such as 512, 1024 or 2048. Defaults to 512

- :max_token_types - the vocabulary size of the token type embedding (also referred to as segment embedding). This corresponds to how many different token groups can be distinguished in the input. Defaults to 2

- :embedding_size - the dimensionality of all input embeddings. Defaults to 128

- :hidden_size - the dimensionality of hidden layers. Defaults to 768

- :num_blocks - the number of blocks in the encoder. Note that each block contains :block_depth Transformer blocks. Defaults to 12

- :num_groups - the number of groups of encoder blocks. Parameters in the same group are shared. Defaults to 1

- :block_depth - the number of Transformer blocks in each encoder block. Defaults to 1

- :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12

- :intermediate_size - the dimensionality of the intermediate layer in the transformer feed-forward network (FFN) in the encoder. Defaults to 16384

- :activation - the activation function. Defaults to :gelu

- :dropout_rate - the dropout rate for embedding and encoder. Defaults to 0.0

- :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0

- :classifier_dropout_rate - the dropout rate for the classification head. If not specified, the value of :dropout_rate is used instead

- :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-12

- :initializer_scale - the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02

- :output_hidden_states - whether the model should return all hidden states. Defaults to false

- :output_attentions - whether the model should return all attentions. Defaults to false

- :num_labels - the number of labels to use in the last layer for the classification task. Defaults to 2

- :id_to_label - a map from class index to label. Defaults to %{}
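As a hedged example of adjusting these options (the "albert-base-v2" repository and the label set below are assumptions), the spec can be loaded first, reconfigured with Bumblebee.configure/2, and then passed back to Bumblebee.load_model/2:

```elixir
{:ok, spec} =
  Bumblebee.load_spec({:hf, "albert-base-v2"},
    architecture: :for_sequence_classification
  )

# Override configuration options before building the model.
spec =
  Bumblebee.configure(spec,
    num_labels: 3,
    id_to_label: %{0 => "negative", 1 => "neutral", 2 => "positive"}
  )

{:ok, model_info} = Bumblebee.load_model({:hf, "albert-base-v2"}, spec: spec)
```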