Bumblebee.Vision.Vit (Bumblebee v0.1.2)
ViT model family.
Architectures
:base
- plain ViT without any head on top

:for_image_classification
- ViT with a classification head. The head consists of a single dense layer on top of the pooled features

:for_masked_image_modeling
- ViT with a language modeling head on top for predicting visual tokens
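An architecture is selected when loading a checkpoint. A minimal sketch, assuming the usual `:architecture` option to `Bumblebee.load_model/2` and using an illustrative checkpoint name:

```elixir
# Load a pre-trained ViT checkpoint, explicitly picking one of the
# architectures listed above. The repository name is illustrative.
{:ok, model_info} =
  Bumblebee.load_model({:hf, "google/vit-base-patch16-224"},
    architecture: :for_image_classification
  )

# model_info.model is an Axon model; model_info.params holds the weights.
```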
Inputs
"pixel_values"
- {batch_size, image_size, image_size, num_channels}
Featurized image pixel values.

"patch_mask"
- {batch_size, num_patches}
Mask to nullify selected embedded patches.
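The inputs are passed as a map keyed by the names above. A sketch of a forward pass on a dummy batch, assuming the default 224×224 image size and standard `Axon.predict/3`; in practice the tensor would come from the matching featurizer rather than `Nx.broadcast/2`:

```elixir
{:ok, model_info} = Bumblebee.load_model({:hf, "google/vit-base-patch16-224"})

# Shape follows the "pixel_values" spec:
# {batch_size, image_size, image_size, num_channels}
inputs = %{"pixel_values" => Nx.broadcast(0.5, {1, 224, 224, 3})}

outputs = Axon.predict(model_info.model, model_info.params, inputs)
```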
Configuration
:image_size
- the size of the input spatial dimensions. Defaults to 224

:num_channels
- the number of channels in the input. Defaults to 3

:patch_size
- the size of the patch spatial dimensions. Defaults to 16

:hidden_size
- the dimensionality of hidden layers. Defaults to 768

:num_blocks
- the number of Transformer blocks in the encoder. Defaults to 12

:num_attention_heads
- the number of attention heads for each attention layer in the encoder. Defaults to 12

:use_qkv_bias
- whether to use bias in query, key, and value projections. Defaults to true

:activation
- the activation function. Defaults to :gelu

:dropout_rate
- the dropout rate for encoder and decoder. Defaults to 0.0

:attention_dropout_rate
- the dropout rate for attention weights. Defaults to 0.0

:layer_norm_epsilon
- the epsilon used by the layer normalization layers. Defaults to 1.0e-12

:initializer_scale
- the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02

:output_hidden_states
- whether the model should return all hidden states. Defaults to false

:output_attentions
- whether the model should return all attentions. Defaults to false

:num_labels
- the number of labels to use in the last layer for the classification task. Defaults to 2

:id_to_label
- a map from class index to label. Defaults to %{}