Bumblebee.Vision.BlipVision (Bumblebee v0.6.0)
The BLIP model for image encoding.
Architectures
:base - the base image model
Inputs
"pixel_values"
-{batch_size, image_size, image_size, num_channels}
Featurized image pixel values.
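A minimal sketch of feeding "pixel_values" to the :base architecture. The checkpoint name is an assumption for illustration; any BLIP checkpoint containing vision weights should work, and here a constant tensor stands in for a real featurized image.

```elixir
# Load the vision tower only (checkpoint name is an assumption).
{:ok, %{model: model, params: params, spec: spec}} =
  Bumblebee.load_model({:hf, "Salesforce/blip-image-captioning-base"},
    module: Bumblebee.Vision.BlipVision,
    architecture: :base
  )

# Batch of one image in the channels-last shape described above:
# {batch_size, image_size, image_size, num_channels}.
pixel_values =
  Nx.broadcast(0.5, {1, spec.image_size, spec.image_size, spec.num_channels})

outputs = Axon.predict(model, params, %{"pixel_values" => pixel_values})
outputs.hidden_state
```

In practice the pixel values would come from the matching featurizer (via Bumblebee.load_featurizer and Bumblebee.apply_featurizer) rather than a synthetic tensor.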
Global layer options
:output_hidden_states - when true, the model output includes all hidden states
:output_attentions - when true, the model output includes all attention weights
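These flags are global layer options, so they are set when the model is compiled rather than on the spec. A sketch, assuming the model was loaded as in the input example above and that the installed Axon version supports the :global_layer_options compile option:

```elixir
{:ok, %{model: model, params: params}} =
  Bumblebee.load_model({:hf, "Salesforce/blip-image-captioning-base"},
    module: Bumblebee.Vision.BlipVision,
    architecture: :base
  )

# Build a predict function that also returns hidden states and attentions.
{_init_fn, predict_fn} =
  Axon.build(model,
    global_layer_options: [output_hidden_states: true, output_attentions: true]
  )

input = %{"pixel_values" => Nx.broadcast(0.5, {1, 384, 384, 3})}
outputs = predict_fn.(params, input)
# outputs.hidden_states / outputs.attentions hold the per-block tensors
```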
Configuration
:image_size - the size of the input spatial dimensions. Defaults to 384
:num_channels - the number of channels in the input. Defaults to 3
:patch_size - the size of the patch spatial dimensions. Defaults to 16
:hidden_size - the dimensionality of hidden layers. Defaults to 768
:num_blocks - the number of Transformer blocks in the encoder. Defaults to 12
:num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12
:activation - the activation function. Defaults to :gelu
:attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0
:layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5
:initializer_scale - the standard deviation of the normal initializer used for initializing kernel parameters. Defaults to 0.02
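Configuration options can be overridden before the model is built: load the spec, update it with Bumblebee.configure/2, then pass it back when loading the model. A sketch; the checkpoint name and the particular option values are assumptions for illustration.

```elixir
# Load the stored configuration for the vision tower.
{:ok, spec} =
  Bumblebee.load_spec({:hf, "Salesforce/blip-image-captioning-base"},
    module: Bumblebee.Vision.BlipVision,
    architecture: :base
  )

# Override selected defaults (values here are illustrative).
spec = Bumblebee.configure(spec, attention_dropout_rate: 0.1, layer_norm_epsilon: 1.0e-6)

# Build the model from the updated spec.
{:ok, %{model: model, params: params}} =
  Bumblebee.load_model({:hf, "Salesforce/blip-image-captioning-base"}, spec: spec)
```

Note that options affecting parameter shapes (such as :hidden_size or :num_blocks) must match the checkpoint for pretrained weights to load.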