Bumblebee.Vision.ClipVision (Bumblebee v0.4.2)
The CLIP model for image encoding.
Architectures
* :base - the base image model (see the loading sketch below)
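As a minimal sketch, the vision encoder can be loaded on its own from a full CLIP checkpoint by pinning this module and architecture. The checkpoint name is an assumption for illustration; any CLIP checkpoint with a vision tower should behave the same way.

```elixir
# Load only the CLIP vision encoder (:base architecture) from a full
# CLIP checkpoint. The checkpoint name is illustrative.
{:ok, clip} =
  Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Vision.ClipVision,
    architecture: :base
  )
```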
Inputs
"pixel_values"
-{batch_size, image_size, image_size, num_channels}
Featurized image pixel values.
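A minimal sketch of running the model, reusing `clip` from the snippet above. In practice "pixel_values" would come from the checkpoint's paired featurizer (Bumblebee.load_featurizer/1 and Bumblebee.apply_featurizer/2); here a constant tensor stands in, just to show the expected shape.

```elixir
%{model: model, params: params, spec: spec} = clip

# Dummy batch with shape {batch_size, image_size, image_size, num_channels}.
inputs = %{
  "pixel_values" =>
    Nx.broadcast(0.5, {1, spec.image_size, spec.image_size, spec.num_channels})
}

outputs = Axon.predict(model, params, inputs)
# For the :base architecture, outputs.hidden_state holds the per-patch
# embeddings and outputs.pooled_state the pooled image embedding.
```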
Configuration
* :image_size - the size of the input spatial dimensions. Defaults to 224
* :num_channels - the number of channels in the input. Defaults to 3
* :patch_size - the size of the patch spatial dimensions. Defaults to 32
* :hidden_size - the dimensionality of hidden layers. Defaults to 768
* :num_blocks - the number of Transformer blocks in the encoder. Defaults to 12
* :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12
* :activation - the activation function. Defaults to :quick_gelu
* :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0
* :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5
* :output_hidden_states - whether the model should return all hidden states. Defaults to false
* :output_attentions - whether the model should return all attentions. Defaults to false
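These options can be overridden before loading the parameters. A sketch, assuming the same illustrative checkpoint as above: load the spec, adjust it with Bumblebee.configure/2, then pass it back to Bumblebee.load_model/2.

```elixir
{:ok, spec} =
  Bumblebee.load_spec({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Vision.ClipVision,
    architecture: :base
  )

# Request hidden states from all encoder blocks.
spec = Bumblebee.configure(spec, output_hidden_states: true)

{:ok, %{model: model, params: params}} =
  Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"}, spec: spec)
```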