Bumblebee.Vision.ClipVision (Bumblebee v0.5.3)
The CLIP model for image encoding.
Architectures
  * :base - the base image model

  * :for_embedding - the base model with a single projection layer on top. The head returns a vector embedded in the joint text-image CLIP space
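Either architecture can be selected when loading a pretrained model. Below is a minimal sketch; the checkpoint name is an assumption used for illustration and is not part of this module's documentation:

```elixir
# A minimal sketch, assuming the "openai/clip-vit-base-patch32" checkpoint
# is available on the Hugging Face Hub.
{:ok, model_info} =
  Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Vision.ClipVision,
    architecture: :for_embedding
  )

# model_info.model is the Axon model, model_info.params holds the parameters,
# and model_info.spec is the configuration struct described below.
```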
Inputs
"pixel_values"-{batch_size, image_size, image_size, num_channels}Featurized image pixel values.
Configuration
  * :image_size - the size of the input spatial dimensions. Defaults to 224

  * :num_channels - the number of channels in the input. Defaults to 3

  * :patch_size - the size of the patch spatial dimensions. Defaults to 32

  * :hidden_size - the dimensionality of hidden layers. Defaults to 768

  * :num_blocks - the number of Transformer blocks in the encoder. Defaults to 12

  * :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12

  * :projection_size - the dimensionality of the projection layer. Defaults to 512

  * :activation - the activation function. Defaults to :gelu_approx_sigmoid

  * :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0

  * :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5

  * :output_hidden_states - whether the model should return all hidden states. Defaults to false

  * :output_attentions - whether the model should return all attentions. Defaults to false
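These options can be overridden on the model spec before loading, for example to request hidden states and attentions. A sketch, again assuming the same checkpoint name:

```elixir
# Load the spec for the :base architecture and adjust configuration attributes.
{:ok, spec} =
  Bumblebee.load_spec({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Vision.ClipVision,
    architecture: :base
  )

spec = Bumblebee.configure(spec, output_hidden_states: true, output_attentions: true)

# Load the model with the customized spec.
{:ok, model_info} = Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"}, spec: spec)
```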