Gemini.Types.Response.ContentEmbedding (GeminiEx v0.3.0)
A list of floats representing an embedding.
Embeddings are numerical representations of text that can be used for various purposes such as similarity comparison, clustering, and retrieval.
Fields
values: List of float values representing the embedding vector
Examples
%ContentEmbedding{
values: [0.123, -0.456, 0.789, 0.234, ...]
}
Summary
Functions
Calculates cosine similarity between two embeddings.
Gets the dimensionality of the embedding.
Calculates the dot product between two embeddings.
Calculates Euclidean distance between two embeddings.
Creates a new content embedding from API response data.
Gets the embedding values.
Calculates the L2 norm (Euclidean magnitude) of the embedding.
Normalizes the embedding to unit length (L2 norm = 1).
Types
@type t() :: %Gemini.Types.Response.ContentEmbedding{values: [float()]}
Functions
Calculates cosine similarity between two embeddings.
Cosine similarity measures the cosine of the angle between two vectors, ranging from -1 (opposite) to 1 (identical).
This metric focuses on direction rather than magnitude, making it ideal for semantic similarity. For best results with dimensions other than 3072, normalize embeddings first using normalize/1.
Parameters
embedding1: First embedding
embedding2: Second embedding
Returns
- Float value between -1.0 and 1.0, or {:error, reason} if embeddings have different dimensions
Examples
emb1 = %ContentEmbedding{values: [1.0, 0.0, 0.0]}
emb2 = %ContentEmbedding{values: [0.0, 1.0, 0.0]}
ContentEmbedding.cosine_similarity(emb1, emb2)
# => 0.0
# For best results with non-3072 dimensions, normalize first
norm1 = ContentEmbedding.normalize(emb1)
norm2 = ContentEmbedding.normalize(emb2)
ContentEmbedding.cosine_similarity(norm1, norm2)
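The arithmetic behind cosine similarity, cos(a, b) = dot(a, b) / (|a| * |b|), can be sketched in plain Elixir. This is a minimal standalone illustration with a hypothetical module name, not the library's actual implementation:

```elixir
defmodule CosineSketch do
  # Sum of pairwise products of the two vectors.
  def dot(a, b), do: Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()

  # L2 norm: square root of the dot product of a vector with itself.
  def norm(a), do: :math.sqrt(dot(a, a))

  def cosine(a, b), do: dot(a, b) / (norm(a) * norm(b))
end

CosineSketch.cosine([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# => 0.0 (orthogonal vectors)
```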
@spec dimensionality(t()) :: non_neg_integer()
Gets the dimensionality of the embedding.
Examples
embedding = %ContentEmbedding{values: [0.1, 0.2, 0.3]}
ContentEmbedding.dimensionality(embedding)
# => 3
Calculates the dot product between two embeddings.
The dot product is a fundamental vector operation used in many similarity metrics. For normalized vectors, the dot product equals the cosine similarity.
Parameters
embedding1: First embedding
embedding2: Second embedding
Returns
- Float value, or {:error, reason} if embeddings have different dimensions
Examples
emb1 = %ContentEmbedding{values: [1.0, 2.0, 3.0]}
emb2 = %ContentEmbedding{values: [4.0, 5.0, 6.0]}
ContentEmbedding.dot_product(emb1, emb2)
# => 32.0 (1*4 + 2*5 + 3*6)
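The equivalence noted above (for unit vectors, dot product equals cosine similarity) can be checked with a small standalone sketch; the module name is hypothetical and this is not the library's code:

```elixir
defmodule DotSketch do
  def dot(a, b), do: Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
  def norm(a), do: :math.sqrt(dot(a, a))

  # Scale each component by 1 / norm so the result has unit length.
  def normalize(a) do
    n = norm(a)
    Enum.map(a, &(&1 / n))
  end
end

a = DotSketch.normalize([1.0, 2.0, 3.0])
b = DotSketch.normalize([4.0, 5.0, 6.0])
DotSketch.dot(a, b)
# equals the cosine similarity of the original vectors: 32 / (sqrt(14) * sqrt(77))
```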
Calculates Euclidean distance between two embeddings.
Euclidean distance represents the straight-line distance between two points in multidimensional space. Unlike cosine similarity, it considers both direction and magnitude.
Parameters
embedding1: First embedding
embedding2: Second embedding
Returns
- Float value >= 0, or {:error, reason} if embeddings have different dimensions
Examples
emb1 = %ContentEmbedding{values: [0.0, 0.0]}
emb2 = %ContentEmbedding{values: [3.0, 4.0]}
ContentEmbedding.euclidean_distance(emb1, emb2)
# => 5.0
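The 3-4-5 example works out as sqrt((3-0)^2 + (4-0)^2) = sqrt(25) = 5. A minimal standalone sketch of that arithmetic (hypothetical module name, not the library's implementation):

```elixir
defmodule DistanceSketch do
  # Square root of the sum of squared component-wise differences.
  def euclidean(a, b) do
    Enum.zip(a, b)
    |> Enum.map(fn {x, y} -> (x - y) * (x - y) end)
    |> Enum.sum()
    |> :math.sqrt()
  end
end

DistanceSketch.euclidean([0.0, 0.0], [3.0, 4.0])
# => 5.0
```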
Creates a new content embedding from API response data.
Parameters
data
: Map containing the embedding values
Examples
ContentEmbedding.from_api_response(%{"values" => [0.1, 0.2, 0.3]})
Gets the embedding values.
Examples
embedding = %ContentEmbedding{values: [0.1, 0.2, 0.3]}
ContentEmbedding.get_values(embedding)
# => [0.1, 0.2, 0.3]
Calculates the L2 norm (Euclidean magnitude) of the embedding.
The norm represents the length of the vector in multidimensional space. For normalized embeddings, the norm should be 1.0.
Examples
embedding = %ContentEmbedding{values: [3.0, 4.0]}
ContentEmbedding.norm(embedding)
# => 5.0
normalized = ContentEmbedding.normalize(embedding)
ContentEmbedding.norm(normalized)
# => 1.0
Normalizes the embedding to unit length (L2 norm = 1).
Per the Gemini API specification, embeddings with dimensions other than 3072 should be normalized for accurate semantic similarity comparison.
The 3072-dimensional embeddings are already normalized by the API, but embeddings with other dimensions (768, 1536, etc.) need explicit normalization.
Examples
embedding = %ContentEmbedding{values: [3.0, 4.0]}
normalized = ContentEmbedding.normalize(embedding)
# => %ContentEmbedding{values: [0.6, 0.8]}
ContentEmbedding.norm(normalized)
# => 1.0
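Normalization divides each component by the vector's L2 norm; [3.0, 4.0] has norm 5.0, giving [0.6, 0.8]. A standalone sketch of that computation (hypothetical module name, not the library's implementation):

```elixir
defmodule NormalizeSketch do
  # Divide every component by the L2 norm so the result has length 1.
  def normalize(values) do
    n = values |> Enum.map(&(&1 * &1)) |> Enum.sum() |> :math.sqrt()
    Enum.map(values, &(&1 / n))
  end
end

NormalizeSketch.normalize([3.0, 4.0])
# => [0.6, 0.8]
```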