Bumblebee.Vision.Vit (Bumblebee v0.6.0)
ViT model family.
Architectures
:base
- plain ViT without any head on top
:for_image_classification
- ViT with a classification head. The head consists of a single dense layer on top of the pooled features
:for_masked_image_modeling
- ViT with a language modeling head on top for predicting visual tokens
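
For example, a specific architecture can be selected when loading a checkpoint. A minimal sketch, assuming the google/vit-base-patch16-224 Hugging Face repository as an illustrative checkpoint:

    # Load ViT with the classification head. The :architecture option selects
    # one of the architectures listed above; omitting it uses the architecture
    # stored in the checkpoint configuration.
    {:ok, model_info} =
      Bumblebee.load_model({:hf, "google/vit-base-patch16-224"},
        architecture: :for_image_classification
      )

    %{model: model, params: params, spec: spec} = model_info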
Inputs
"pixel_values"
- {batch_size, image_size, image_size, num_channels}
Featurized image pixel values.
"patch_mask"
- {batch_size, num_patches}
Mask to nullify selected embedded patches.
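
In practice the "pixel_values" input is usually produced by the matching featurizer. A minimal sketch, reusing the model and params loaded above and assuming an illustrative checkpoint and a local image file:

    # The featurizer resizes and normalizes the image and returns a map of
    # model inputs, including "pixel_values" with shape
    # {batch_size, image_size, image_size, num_channels}.
    {:ok, featurizer} = Bumblebee.load_featurizer({:hf, "google/vit-base-patch16-224"})

    image = StbImage.read_file!("cat.jpg")
    inputs = Bumblebee.apply_featurizer(featurizer, image)

    outputs = Axon.predict(model, params, inputs)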
Global layer options
:output_hidden_states
- when true, the model output includes all hidden states
:output_attentions
- when true, the model output includes all attention weights
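
These are runtime options rather than configuration options, so they are passed when building or running the model. A minimal sketch, assuming Axon's :global_layer_options (Axon >= 0.6.1) is the mechanism used to forward them to the layers:

    # Request hidden states from every Transformer block. The exact option
    # name (:global_layer_options) is an assumption; adjust to your Axon and
    # Bumblebee versions.
    {_init_fn, predict_fn} =
      Axon.build(model, global_layer_options: [output_hidden_states: true])

    outputs = predict_fn.(params, inputs)
    # outputs.hidden_states then contains the per-block hidden states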
Configuration
:image_size
- the size of the input spatial dimensions. Defaults to 224
:num_channels
- the number of channels in the input. Defaults to 3
:patch_size
- the size of the patch spatial dimensions. Defaults to 16
:hidden_size
- the dimensionality of hidden layers. Defaults to 768
:num_blocks
- the number of Transformer blocks in the encoder. Defaults to 12
:num_attention_heads
- the number of attention heads for each attention layer in the encoder. Defaults to 12
:use_attention_bias
- whether to use bias in query, key, and value projections. Defaults to true
:activation
- the activation function. Defaults to :gelu
:dropout_rate
- the dropout rate for encoder and decoder. Defaults to 0.0
:attention_dropout_rate
- the dropout rate for attention weights. Defaults to 0.0
:layer_norm_epsilon
- the epsilon used by the layer normalization layers. Defaults to 1.0e-12
:initializer_scale
- the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02
:num_labels
- the number of labels to use in the last layer for the classification task. Defaults to 2
:id_to_label
- a map from class index to label. Defaults to %{}
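
Configuration options are set with Bumblebee.configure/2, typically on a spec loaded from a checkpoint before loading the model. A minimal sketch, with an illustrative checkpoint and label map:

    # Load the spec, override classification-related options, and load the
    # model using the updated spec. Any of the options above can be set this way.
    {:ok, spec} =
      Bumblebee.load_spec({:hf, "google/vit-base-patch16-224"},
        architecture: :for_image_classification
      )

    spec =
      Bumblebee.configure(spec,
        num_labels: 3,
        id_to_label: %{0 => "cat", 1 => "dog", 2 => "bird"}
      )

    {:ok, model_info} =
      Bumblebee.load_model({:hf, "google/vit-base-patch16-224"}, spec: spec)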