Bumblebee.Vision.ClipVision (Bumblebee v0.6.0)

The CLIP model for image encoding.

Architectures

  • :base - the base image model

  • :for_embedding - the base model with a single projection layer on top. The head returns a vector embedded in the joint text-image CLIP space (see the loading sketch below)
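
A model can be loaded from a full CLIP checkpoint under either architecture. Below is a minimal loading sketch; the "openai/clip-vit-base-patch32" repository is only an example checkpoint and the options follow Bumblebee.load_model/2:

    {:ok, %{model: model, params: params, spec: spec}} =
      Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"},
        # load only the vision tower from the full CLIP checkpoint
        module: Bumblebee.Vision.ClipVision,
        # pick :base or :for_embedding as listed above
        architecture: :for_embedding
      )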

Inputs

  • "pixel_values" - {batch_size, image_size, image_size, num_channels}

    Featurized image pixel values, typically produced with the checkpoint's paired featurizer (see the sketch below).
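
A sketch of preparing the pixel values and running the model, reusing model and params from the loading sketch above. The image variable is assumed to hold an image tensor (or any term the featurizer accepts), and the :embedding output field reflects the :for_embedding head:

    {:ok, featurizer} =
      Bumblebee.load_featurizer({:hf, "openai/clip-vit-base-patch32"})

    # resizes, crops and normalizes the image, returning the model inputs
    inputs = Bumblebee.apply_featurizer(featurizer, image)
    #=> %{"pixel_values" => #Nx.Tensor<f32[1][224][224][3] ...>}

    outputs = Axon.predict(model, params, inputs)
    outputs.embedding
    #=> image embedding in the joint CLIP space (with :for_embedding)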

Global layer options

  • :output_hidden_states - when true, the model output includes all hidden states

  • :output_attentions - when true, the model output includes all attention weights
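
Both options are set on the model specification before the model is built. A sketch, assuming these flags are accepted by Bumblebee.configure/2:

    {:ok, spec} =
      Bumblebee.load_spec({:hf, "openai/clip-vit-base-patch32"},
        module: Bumblebee.Vision.ClipVision
      )

    spec = Bumblebee.configure(spec, output_hidden_states: true, output_attentions: true)

    {:ok, %{model: model, params: params}} =
      Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"}, spec: spec)

    # after Axon.predict/3, outputs.hidden_states and outputs.attentions
    # hold one tensor per Transformer block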

Configuration

  • :image_size - the size of the input spatial dimensions. Defaults to 224

  • :num_channels - the number of channels in the input. Defaults to 3

  • :patch_size - the size of the patch spatial dimensions. Defaults to 32

  • :hidden_size - the dimensionality of hidden layers. Defaults to 768

  • :num_blocks - the number of Transformer blocks in the encoder. Defaults to 12

  • :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12

  • :projection_size - the dimensionality of the projection layer. Defaults to 512

  • :activation - the activation function. Defaults to :gelu_approx_sigmoid

  • :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0

  • :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5
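
These options can be overridden when building a model from scratch, for example for training. A sketch with illustrative (not recommended) values, assuming Bumblebee.configure/2 accepts the module directly:

    spec =
      Bumblebee.configure(Bumblebee.Vision.ClipVision,
        patch_size: 16,
        hidden_size: 1024,
        num_blocks: 24,
        num_attention_heads: 16,
        projection_size: 768
      )

    # builds an Axon model with freshly initialized parameters
    model = Bumblebee.build_model(spec)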