Bumblebee.Vision.ClipVision (Bumblebee v0.1.2)
The CLIP model for image encoding.
Architectures
* `:base` - the base image model
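
For example, the `:base` vision tower can typically be loaded straight from a full CLIP checkpoint on the Hugging Face Hub. This is a minimal sketch, assuming the `openai/clip-vit-base-patch32` checkpoint and assuming that `Bumblebee.load_model/2` accepts the `:module` and `:architecture` options to select the vision part of the combined model:

```elixir
# Sketch only: the checkpoint name and option handling are assumptions,
# not guaranteed by this page. Loads the CLIP vision tower as :base.
{:ok, clip} =
  Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"},
    module: Bumblebee.Vision.ClipVision,
    architecture: :base
  )

# clip.model  - the Axon graph
# clip.params - the pretrained parameters
# clip.spec   - the %Bumblebee.Vision.ClipVision{} configuration struct
```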
Inputs
"pixel_values"-{batch_size, image_size, image_size, num_channels}Featurized image pixel values.
Configuration
* `:image_size` - the size of the input spatial dimensions. Defaults to `224`
* `:num_channels` - the number of channels in the input. Defaults to `3`
* `:patch_size` - the size of the patch spatial dimensions. Defaults to `32`
* `:hidden_size` - the dimensionality of hidden layers. Defaults to `768`
* `:num_blocks` - the number of Transformer blocks in the encoder. Defaults to `12`
* `:num_attention_heads` - the number of attention heads for each attention layer in the encoder. Defaults to `12`
* `:activation` - the activation function. Defaults to `:quick_gelu`
* `:dropout_rate` - the dropout rate for the encoder. Defaults to `0.0`
* `:attention_dropout_rate` - the dropout rate for attention weights. Defaults to `0.0`
* `:layer_norm_epsilon` - the epsilon used by the layer normalization layers. Defaults to `1.0e-5`
* `:output_hidden_states` - whether the model should return all hidden states. Defaults to `false`
* `:output_attentions` - whether the model should return all attentions. Defaults to `false`
* `:num_labels` - the number of labels to use in the last layer for the classification task. Defaults to `2`
* `:id_to_label` - a map from class index to label. Defaults to `%{}`
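
Any of these options can be overridden before the model is built. A minimal sketch, assuming `Bumblebee.configure/2` accepts this module (or an existing spec struct) together with keyword overrides, and that `Bumblebee.build_model/1` turns the spec into an Axon model; the override values are arbitrary examples:

```elixir
# Sketch only: configure a fresh spec with non-default options and build
# the Axon graph from it. Values are illustrative, not recommendations.
spec =
  Bumblebee.configure(Bumblebee.Vision.ClipVision,
    output_hidden_states: true,
    dropout_rate: 0.1
  )

model = Bumblebee.build_model(spec)

# For a pretrained model the same overrides could be applied to the loaded
# spec instead, e.g. Bumblebee.configure(clip.spec, output_attentions: true).
```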