Bumblebee.Vision.ClipVision (Bumblebee v0.1.2)

The CLIP model for image encoding.

Architectures

  • :base - the base image model
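For example, the :base architecture can be loaded from a pretrained checkpoint with Bumblebee.load_model/2. A minimal sketch, assuming the openai/clip-vit-base-patch32 checkpoint on Hugging Face resolves to this module:

    # Load the vision part of CLIP; :module and :architecture pin the
    # loaded model to this module's :base architecture.
    {:ok, model_info} =
      Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"},
        module: Bumblebee.Vision.ClipVision,
        architecture: :base
      )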

Inputs

  • "pixel_values" - {batch_size, image_size, image_size, num_channels}

    Featurized image pixel values.
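A minimal sketch of running the model on an input of this shape, assuming model_info from the load step above and the default image_size of 224 and num_channels of 3:

    # Build a dummy input with batch_size 1; real pixel values would come
    # from a featurizer. Shape: {1, 224, 224, 3}.
    pixel_values = Nx.broadcast(0.5, {1, 224, 224, 3})

    outputs =
      Axon.predict(model_info.model, model_info.params, %{"pixel_values" => pixel_values})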

Configuration

  • :image_size - the size of the input spatial dimensions. Defaults to 224

  • :num_channels - the number of channels in the input. Defaults to 3

  • :patch_size - the size of the patch spatial dimensions. Defaults to 32

  • :hidden_size - the dimensionality of hidden layers. Defaults to 768

  • :num_blocks - the number of Transformer blocks in the encoder. Defaults to 12

  • :num_attention_heads - the number of attention heads for each attention layer in the encoder. Defaults to 12

  • :activation - the activation function. Defaults to :quick_gelu

  • :dropout_rate - the dropout rate for the encoder. Defaults to 0.0

  • :attention_dropout_rate - the dropout rate for attention weights. Defaults to 0.0

  • :layer_norm_epsilon - the epsilon used by the layer normalization layers. Defaults to 1.0e-5

  • :output_hidden_states - whether the model should return all hidden states. Defaults to false

  • :output_attentions - whether the model should return all attentions. Defaults to false

  • :num_labels - the number of labels to use in the last layer for the classification task. Defaults to 2

  • :id_to_label - a map from class index to label. Defaults to %{}
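These options can be overridden at load time by configuring the spec before loading the model. A minimal sketch using Bumblebee.load_spec/2 and Bumblebee.configure/2 (checkpoint name assumed as above):

    # Load the spec, override a configuration option, then load the
    # model with the updated spec.
    {:ok, spec} =
      Bumblebee.load_spec({:hf, "openai/clip-vit-base-patch32"},
        module: Bumblebee.Vision.ClipVision,
        architecture: :base
      )

    spec = Bumblebee.configure(spec, output_hidden_states: true)

    {:ok, model_info} =
      Bumblebee.load_model({:hf, "openai/clip-vit-base-patch32"}, spec: spec)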