Bumblebee.Vision.Vit (Bumblebee v0.2.0)

ViT model family.

Architectures

  • :base - plain ViT without any head on top

  • :for_image_classification - ViT with a classification head. The head consists of a single dense layer on top of the pooled features

  • :for_masked_image_modeling - ViT with a language modeling head on top for predicting visual tokens
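For example, a checkpoint can be loaded with one of these architectures. A minimal sketch; the `google/vit-base-patch16-224` repository name is an assumption (any ViT checkpoint on the Hugging Face Hub should work), and it assumes `Bumblebee.load_model/2` accepts an `:architecture` option:

```elixir
# Sketch: load a ViT checkpoint with the classification head on top.
# The repository name below is an assumed example checkpoint.
{:ok, model_info} =
  Bumblebee.load_model({:hf, "google/vit-base-patch16-224"},
    architecture: :for_image_classification
  )

# model_info.model is the Axon model and model_info.params its parameters.
```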

Inputs

  • "pixel_values" - {batch_size, image_size, image_size, num_channels}

    Featurized image pixel values.

  • "patch_mask" - {batch_size, num_patches}

    Mask to nullify selected embedded patches.
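Concretely, with the default configuration `"pixel_values"` has shape `{batch_size, 224, 224, 3}` and the optional `"patch_mask"` covers `(224 / 16)^2 = 196` patches. A sketch of building the inputs map with Nx, using an all-zeros tensor as a stand-in for featurized pixel values:

```elixir
# Placeholder batch of one 224x224 RGB image in the expected layout
# {batch_size, image_size, image_size, num_channels}.
pixel_values = Nx.broadcast(0.0, {1, 224, 224, 3})

# Optional mask over the 196 embedded patches (0 keeps a patch).
patch_mask = Nx.broadcast(0, {1, 196})

inputs = %{"pixel_values" => pixel_values, "patch_mask" => patch_mask}
```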

Configuration

  • :image_size - the size of the input spatial dimensions. Defaults to 224

  • :num_channels - the number of channels in the input. Defaults to 3

  • :patch_size - the size of the patch spatial dimensions. Defaults to 16

  • :hidden_size - the dimensionality of hidden layers. Defaults to 768

  • :num_blocks - the number of Transformer blocks in the encoder. Defaults to 12

  • :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12

  • :use_qkv_bias - whether to use bias in query, key, and value projections. Defaults to true

  • :activation - the activation function. Defaults to :gelu

  • :dropout_rate - the dropout rate for encoder and decoder. Defaults to 0.0

  • :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0

  • :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-12

  • :initializer_scale - the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02

  • :output_hidden_states - whether the model should return all hidden states. Defaults to false

  • :output_attentions - whether the model should return all attentions. Defaults to false

  • :num_labels - the number of labels to use in the last layer for the classification task. Defaults to 2

  • :id_to_label - a map from class index to label. Defaults to %{}
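These options can be overridden when loading a model. A sketch assuming the usual `Bumblebee.load_spec/2` and `Bumblebee.configure/2` flow; the repository name and the three-class labels are made up for illustration:

```elixir
{:ok, spec} = Bumblebee.load_spec({:hf, "google/vit-base-patch16-224"})

# Override the classification head: three labels instead of the default two.
spec =
  Bumblebee.configure(spec,
    num_labels: 3,
    id_to_label: %{0 => "cat", 1 => "dog", 2 => "bird"}
  )

{:ok, model_info} =
  Bumblebee.load_model({:hf, "google/vit-base-patch16-224"}, spec: spec)
```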