Bumblebee.Text.GptBigCode (Bumblebee v0.5.3)

GPT-BigCode model family.

Architectures

  • :base - plain GPT-BigCode without any head on top

  • :for_causal_language_modeling - GPT-BigCode with a language modeling head. The head returns logits for each token in the original sequence

  • :for_sequence_classification - GPT-BigCode with a sequence classification head. The head returns logits corresponding to possible classes

  • :for_token_classification - GPT-BigCode with a token classification head. The head returns logits for each token in the original sequence
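
As a brief illustration, the sketch below selects one of these architectures at load time. It is a minimal example; the "bigcode/gpt_bigcode-santacoder" repository name is an assumed example checkpoint, not something mandated by this module.

    # Minimal sketch, assuming the "bigcode/gpt_bigcode-santacoder" checkpoint
    # is available on the Hugging Face Hub. Loads the language modeling
    # variant; pass a different :architecture atom to select another head.
    {:ok, %{model: model, params: params, spec: spec}} =
      Bumblebee.load_model({:hf, "bigcode/gpt_bigcode-santacoder"},
        architecture: :for_causal_language_modeling
      )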

Inputs

  • "input_ids" - {batch_size, sequence_length}

    Indices of input sequence tokens in the vocabulary.

  • "attention_mask" - {batch_size, sequence_length}

    Mask indicating which tokens to attend to. This is used to ignore padding tokens, which are added when processing a batch of sequences of different lengths.

  • "position_ids" - {batch_size, sequence_length}

    Indices of positions of each input sequence token in the position embeddings.

  • "attention_head_mask" - {num_blocks, num_attention_heads}

    Mask to nullify selected heads of the self-attention blocks in the encoder.

  • "input_embeddings" - {batch_size, sequence_length, hidden_size}

    Embedded representation of "input_ids", which can be specified for more control over how "input_ids" are embedded than the model's internal embedding lookup. If "input_embeddings" are present, then "input_ids" will be ignored.

  • "encoder_hidden_state" - {batch_size, encoder_sequence_length, hidden_size}

    Last hidden state output from the encoder. This hidden state is used in cross-attention blocks in the decoder. If specified, the model will skip the encoding process and use this value directly for cross-attentions in the decoder.

  • "encoder_attention_mask" - {batch_size, encoder_sequence_length}

    Mask indicating which tokens to attend to. This is used to ignore padding tokens, which are added when processing a batch of sequences of different lengths.

  • "cross_attention_head_mask" - {num_blocks, num_attention_heads}

    Mask to nullify selected heads of the cross-attention blocks in the decoder.

  • "cache"

    A container with cached layer results used to speed up sequential decoding (autoregression). With cache, certain hidden states are taken from the cache, rather than recomputed on every decoding pass. The cache should be treated as opaque and initialized with Bumblebee.Text.Generation.init_cache/4.
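
Putting these inputs together, the sketch below runs a single forward pass. It assumes the same example checkpoint as above, with model and params coming from Bumblebee.load_model/2; only "input_ids" and "attention_mask" are passed, since the remaining inputs are optional.

    # Minimal sketch: tokenize a batch and run one forward pass.
    {:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "bigcode/gpt_bigcode-santacoder"})

    # Returns a map with "input_ids" and "attention_mask" tensors
    inputs = Bumblebee.apply_tokenizer(tokenizer, ["def hello():", "1 + 1"])

    outputs = Axon.predict(model, params, inputs)
    # For :for_causal_language_modeling, outputs.logits has shape
    # {batch_size, sequence_length, vocab_size}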

Configuration

  • :vocab_size - the vocabulary size of the token embedding. This corresponds to the number of distinct tokens that can be represented in model input and output. Defaults to 50257

  • :max_positions - the vocabulary size of the position embedding. This corresponds to the maximum sequence length that this model can process. Typically this is set to a large value just in case, such as 512, 1024 or 2048. Defaults to 1024

  • :hidden_size - the dimensionality of hidden layers. Defaults to 768

  • :num_blocks - the number of Transformer blocks in the decoder. Defaults to 24

  • :num_attention_heads - the number of attention heads for each attention layer in the decoder. Defaults to 16

  • :num_key_value_heads - the number of key-value heads for each attention layer in the model

  • :intermediate_size - the dimensionality of the intermediate layer in the transformer feed-forward network (FFN) in the decoder. If not specified, defaults to 4 times :hidden_size

  • :activation - the activation function. Defaults to :gelu_approx_tanh

  • :scale_attention_weights - whether to scale attention weights to have variance of 1. Defaults to true

  • :dropout_rate - the dropout rate for embedding and encoder. Defaults to 0.1

  • :embeddings_dropout_rate - the dropout rate for embeddings. Defaults to 0.1

  • :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.1

  • :classifier_dropout_rate - the dropout rate for the classification head. Defaults to 0.1

  • :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5

  • :initializer_scale - the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02

  • :output_hidden_states - whether the model should return all hidden states. Defaults to false

  • :output_attentions - whether the model should return all attentions. Defaults to false

  • :num_labels - the number of labels to use in the last layer for the classification task. Defaults to 2

  • :id_to_label - a map from class index to label. Defaults to %{}

  • :use_cross_attention - whether cross-attention layers should be added to the model. This is only relevant for decoder models. Defaults to false
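
These options are typically read from the checkpoint configuration, but they can be overridden by loading the spec first, reconfiguring it, and passing it back when loading the model. The sketch below is a minimal example; the checkpoint name and option values are arbitrary assumptions.

    # Minimal sketch: override configuration options before loading params.
    {:ok, spec} =
      Bumblebee.load_spec({:hf, "bigcode/gpt_bigcode-santacoder"},
        architecture: :for_sequence_classification
      )

    # Arbitrary example values for the classification head
    spec =
      Bumblebee.configure(spec,
        num_labels: 3,
        id_to_label: %{0 => "negative", 1 => "neutral", 2 => "positive"}
      )

    {:ok, %{model: model, params: params}} =
      Bumblebee.load_model({:hf, "bigcode/gpt_bigcode-santacoder"}, spec: spec)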