Bumblebee.Text.Mistral (Bumblebee v0.6.0)

Mistral model family.

Architectures

  • :base - plain Mistral without any head on top

  • :for_causal_language_modeling - Mistral with a language modeling head. The head returns logits for each token in the original sequence

  • :for_sequence_classification - Mistral with a sequence classification head. The head returns logits corresponding to possible classes
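
For example, a specific architecture can be selected by passing the :architecture option to Bumblebee.load_model/2. A minimal sketch, with the "mistralai/Mistral-7B-v0.1" repository name assumed for illustration:

    # Load the causal language modeling variant of Mistral.
    # The repository name is a placeholder for illustration.
    {:ok, model_info} =
      Bumblebee.load_model({:hf, "mistralai/Mistral-7B-v0.1"},
        architecture: :for_causal_language_modeling
      )

    model_info.spec.architecture
    #=> :for_causal_language_modeling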

Inputs

  • "input_ids" - {batch_size, sequence_length}

    Indices of input sequence tokens in the vocabulary.

  • "attention_mask" - {batch_size, sequence_length}

    Mask indicating which tokens to attend to. This is used to ignore padding tokens, which are added when processing a batch of sequences with different lengths.

  • "position_ids" - {batch_size, sequence_length}

    Indices of positions of each input sequence token in the position embeddings.

  • "attention_head_mask" - {encoder_num_blocks, encoder_num_attention_heads}

    Mask to nullify selected heads of the self-attention blocks in the encoder.

  • "input_embeddings" - {batch_size, sequence_length, hidden_size}

    Embedded representation of "input_ids", which can be specified for more control over how "input_ids" are embedded than the model's internal embedding lookup. If "input_embeddings" are present, then "input_ids" will be ignored.

  • "cache"

    A container with cached layer results used to speed up sequential decoding (autoregression). With cache, certain hidden states are taken from the cache, rather than recomputed on every decoding pass. The cache should be treated as opaque and initialized with Bumblebee.Text.Generation.init_cache/4.
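
As a sketch of how these inputs are typically produced, the tokenizer builds the "input_ids" and "attention_mask" tensors and Axon.predict/4 runs the model. The "mistralai/Mistral-7B-v0.1" repository name is assumed for illustration:

    {:ok, model_info} = Bumblebee.load_model({:hf, "mistralai/Mistral-7B-v0.1"})
    {:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "mistralai/Mistral-7B-v0.1"})

    # Builds "input_ids" and "attention_mask" for a batch of sequences
    inputs = Bumblebee.apply_tokenizer(tokenizer, ["Hello, Mistral!"])

    # Single forward pass. For autoregressive decoding, a "cache" initialized
    # with Bumblebee.Text.Generation.init_cache/4 would also be passed.
    outputs = Axon.predict(model_info.model, model_info.params, inputs)

    # outputs.logits has shape {batch_size, sequence_length, vocab_size}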

Global layer options

  • :output_hidden_states - when true, the model output includes all hidden states

  • :output_attentions - when true, the model output includes all attention weights

Configuration

  • :vocab_size - the vocabulary size of the token embedding. This corresponds to the number of distinct tokens that can be represented in model input and output. Defaults to 32000

  • :max_positions - the number of positions in the position embedding, that is, the maximum sequence length the model can process. Typically this is set to a large value just in case, such as 512, 1024 or 2048. Defaults to 131072

  • :hidden_size - the dimensionality of hidden layers. Defaults to 4096

  • :intermediate_size - the dimensionality of intermediate layers. Defaults to 14336

  • :num_blocks - the number of Transformer blocks in the model. Defaults to 32

  • :num_attention_heads - the number of attention heads for each attention layer in the model. Defaults to 32

  • :num_key_value_heads - the number of key-value heads used to implement grouped-query attention (GQA). If this value equals the number of attention heads, regular multi-head attention (MHA) is used. If it is set to 1, multi-query attention (MQA) is used. Otherwise, grouped-query attention is used. Defaults to 8

  • :attention_window_size - window size for both sides of the sliding attention window. Defaults to 4096

  • :activation - the activation function. Defaults to :silu

  • :layer_norm_epsilon - the epsilon used by RMS normalization layers. Defaults to 1.0e-12

  • :initializer_scale - the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02

  • :rotary_embedding_base - base for computing rotary embedding frequency. Defaults to 10000

  • :num_labels - the number of labels to use in the last layer for the classification task. Defaults to 2

  • :id_to_label - a map from class index to label. Defaults to %{}
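
Configuration options can be overridden by loading the spec with Bumblebee.load_spec/2, adjusting it with Bumblebee.configure/2 and passing it back to Bumblebee.load_model/2. A minimal sketch for the sequence classification head, with the repository name and labels as placeholder assumptions:

    # Load and adjust the spec before loading the parameters.
    {:ok, spec} =
      Bumblebee.load_spec({:hf, "mistralai/Mistral-7B-v0.1"},
        architecture: :for_sequence_classification
      )

    spec =
      Bumblebee.configure(spec,
        num_labels: 3,
        id_to_label: %{0 => "negative", 1 => "neutral", 2 => "positive"}
      )

    {:ok, model_info} = Bumblebee.load_model({:hf, "mistralai/Mistral-7B-v0.1"}, spec: spec)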