Bumblebee.Text.ClipText (Bumblebee v0.5.3)

The CLIP model for text encoding.

Architectures

  • :base - the base text model

  • :for_embedding - the base model with a single projection layer on top. The head returns a vector embedded in the joint text-image CLIP space.
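Either architecture can be selected when loading a checkpoint. A minimal sketch, assuming a CLIP checkpoint is available on the Hugging Face Hub (the repository name below is illustrative):

```elixir
# Load the text encoder from a CLIP checkpoint. Selecting the
# :for_embedding architecture adds the projection head, so the model
# output includes embeddings in the joint text-image CLIP space.
{:ok, %{model: model, params: params, spec: spec}} =
  Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Text.ClipText,
    architecture: :for_embedding
  )
```

Passing `architecture: :base` instead yields the bare text model without the projection layer.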

Inputs

  • "input_ids" - {batch_size, sequence_length}

    Indices of input sequence tokens in the vocabulary.

  • "attention_mask" - {batch_size, sequence_length}

    Mask indicating which tokens to attend to. This is used to ignore padding tokens, which are added when processing a batch of sequences with different lengths.

  • "position_ids" - {batch_size, sequence_length}

    Indices of positions of each input sequence token in the position embeddings.
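In practice these inputs are produced by a tokenizer rather than built by hand. A minimal sketch, assuming `model` and `params` were obtained via `Bumblebee.load_model/2` and using an illustrative checkpoint name:

```elixir
{:ok, tokenizer} =
  Bumblebee.load_tokenizer({:hf, "openai/clip-vit-base-patch32"})

# Tokenizing a batch of two sequences of different lengths pads them
# to a common sequence_length and builds a matching "attention_mask"
# that zeroes out the padding tokens.
inputs = Bumblebee.apply_tokenizer(tokenizer, ["a photo of a cat", "a dog"])

# inputs is a map with "input_ids" and "attention_mask" tensors of
# shape {2, sequence_length}; run the model with Axon:
outputs = Axon.predict(model, params, inputs)
```

The `"position_ids"` input is optional; when omitted, sequential positions are assumed.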

Configuration

  • :vocab_size - the vocabulary size of the token embedding. This corresponds to the number of distinct tokens that can be represented in model input and output. Defaults to 49408

  • :max_positions - the vocabulary size of the position embedding. This corresponds to the maximum sequence length that this model can process. Typically this is set to a large value just in case, such as 512, 1024 or 2048. Defaults to 77

  • :hidden_size - the dimensionality of hidden layers. Defaults to 512

  • :num_blocks - the number of Transformer blocks in the encoder. Defaults to 12

  • :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 8

  • :intermediate_size - the dimensionality of the intermediate layer in the transformer feed-forward network (FFN) in the encoder. Defaults to 2048

  • :projection_size - the dimensionality of the projection layer. Defaults to 512

  • :activation - the activation function. Defaults to :gelu_approx_sigmoid

  • :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0

  • :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5

  • :output_hidden_states - whether the model should return all hidden states. Defaults to false

  • :output_attentions - whether the model should return all attentions. Defaults to false

  • :num_labels - the number of labels to use in the last layer for the classification task. Defaults to 2

  • :id_to_label - a map from class index to label. Defaults to %{}
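Any of these options can be overridden by loading the model specification first and reconfiguring it before building the model. A sketch, again with an illustrative checkpoint name:

```elixir
{:ok, spec} =
  Bumblebee.load_spec({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Text.ClipText,
    architecture: :base
  )

# Override configuration options, e.g. to also return all hidden states
spec = Bumblebee.configure(spec, output_hidden_states: true)

# Build the model from the customized spec
{:ok, %{model: model, params: params}} =
  Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"}, spec: spec)
```

Note that options affecting parameter shapes (such as `:hidden_size`) must match the checkpoint for the pretrained parameters to load.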