AI.Memory (fnord v0.8.82)
Pure functions for memory matching logic.
Memories are Bayesian-weighted patterns that fire automatic thoughts based on conversation context. Each memory stores a bag-of-words pattern and computes match probabilities against accumulated conversation tokens.
Summary
Functions
Clamps weight to valid range.
Computes the Bayesian match probability between accumulated conversation tokens and a memory's pattern tokens.
Computes the final score for a memory by combining match probability and weight. Weight is clamped to prevent runaway values.
Generates a slug from a label using Django/newspaper style.
Returns maximum allowed characters for memory label.
Merges new token frequencies into an existing accumulator.
Creates a new memory with default values.
Normalizes text into a bag-of-words with frequencies. Pipeline: lowercase -> split -> stem -> remove stopwords -> count frequencies.
Sublinearly increases token counts based on context tokens.
Updates a memory's pattern tokens by training with a new bag-of-words. Used for strengthen/weaken operations.
Trims accumulated tokens to top K by frequency to prevent unbounded growth.
Validates memory attributes. Returns {:ok, memory} or {:error, reason}.
Sublinearly decreases token counts based on context tokens.
Types
@type scope() :: :global | :project
@type t() :: %AI.Memory{
        children: [String.t()],
        created_at: String.t(),
        fire_count: non_neg_integer(),
        id: String.t(),
        label: String.t(),
        last_fired: String.t() | nil,
        parent_id: String.t() | nil,
        pattern_tokens: %{required(String.t()) => non_neg_integer()},
        response_template: String.t(),
        scope: scope(),
        slug: String.t(),
        success_count: non_neg_integer(),
        weight: float()
      }
Functions
Clamps weight to valid range.
@spec compute_match_probability(%{required(String.t()) => non_neg_integer()}, %{required(String.t()) => non_neg_integer()}) :: float()
Computes the Bayesian match probability between accumulated conversation tokens and a memory's pattern tokens.
Returns a score between 0.0 and 1.0 representing match confidence. Uses log probabilities with Laplace smoothing to avoid underflow.
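The technique named here can be sketched as a naive-Bayes log-likelihood with add-one (Laplace) smoothing. This is an illustration only, not `AI.Memory`'s actual implementation: the smoothing constants and the final normalization (averaging per pattern token, then exponentiating back out of log space) are assumptions.

```elixir
defmodule MatchSketch do
  # Sketch of Bayesian matching with Laplace smoothing (not the real
  # AI.Memory code; constants and normalization are assumed).
  def match_probability(_conversation, pattern) when map_size(pattern) == 0, do: 0.0

  def match_probability(conversation, pattern) do
    total = conversation |> Map.values() |> Enum.sum()
    vocab = map_size(conversation) + map_size(pattern)

    log_likelihood =
      Enum.reduce(pattern, 0.0, fn {token, _pat_count}, acc ->
        count = Map.get(conversation, token, 0)
        # Add-one smoothing keeps the log away from -infinity for unseen tokens.
        acc + :math.log((count + 1) / (total + vocab))
      end)

    # Average per pattern token, then map back from log space into (0, 1].
    :math.exp(log_likelihood / map_size(pattern))
  end
end
```

Working in log space and summing, rather than multiplying raw probabilities, is what avoids floating-point underflow when patterns contain many tokens.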
@spec compute_score(t(), %{required(String.t()) => non_neg_integer()}) :: float()
Computes the final score for a memory by combining match probability and weight. Weight is clamped to prevent runaway values.
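A minimal scoring sketch, assuming the score is simply the match probability multiplied by the clamped weight. The clamp bounds `0.1` and `10.0` are illustrative placeholders, not the library's actual range.

```elixir
defmodule ScoreSketch do
  # Assumed clamp bounds; the real valid range is not shown in the docs.
  @min_weight 0.1
  @max_weight 10.0

  def clamp_weight(w), do: w |> max(@min_weight) |> min(@max_weight)

  # Clamping before multiplying prevents a runaway weight from
  # dominating the ranking regardless of match quality.
  def score(match_probability, weight) do
    match_probability * clamp_weight(weight)
  end
end
```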
@spec debug(String.t()) :: :ok
Generates a slug from a label using Django/newspaper style:
- Lowercase
- Remove articles (a, an, the)
- Stem tokens
- Join with dashes
- Truncate to 50 characters
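The pipeline above can be sketched as follows. The tokenizing regex is an assumption, and the stemming step is stubbed out since the module's actual stemmer isn't shown.

```elixir
defmodule SlugSketch do
  # Django/newspaper-style slug: lowercase, drop articles, join with
  # dashes, truncate to 50 characters. Stemming is omitted here.
  @articles ~w(a an the)
  @max_len 50

  def slugify(label) do
    label
    |> String.downcase()
    |> String.split(~r/[^a-z0-9]+/, trim: true)
    |> Enum.reject(&(&1 in @articles))
    |> Enum.join("-")
    |> String.slice(0, @max_len)
  end
end
```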
@spec max_label_chars() :: non_neg_integer()
Returns maximum allowed characters for memory label.
@spec merge_tokens(%{required(String.t()) => non_neg_integer()}, %{required(String.t()) => non_neg_integer()}) :: %{required(String.t()) => non_neg_integer()}
Merges new token frequencies into an existing accumulator.
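A merge of two frequency maps is a natural fit for `Map.merge/3` with a resolver that sums counts when both maps contain the same token. A sketch, assuming that summing is the intended merge semantics:

```elixir
defmodule MergeSketch do
  # Keys unique to either map are kept as-is; shared keys sum their counts.
  def merge_tokens(accumulator, new_tokens) do
    Map.merge(accumulator, new_tokens, fn _token, a, b -> a + b end)
  end
end
```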
Creates a new memory with default values.
@spec normalize_to_tokens(String.t()) :: %{required(String.t()) => non_neg_integer()}
Normalizes text into a bag-of-words with frequencies. Pipeline: lowercase -> split -> stem -> remove stopwords -> count frequencies.
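The pipeline maps directly onto an Elixir pipe chain. In this sketch the stopword list and the stemmer are placeholders (a trivial trailing-"s" strip), not the library's own:

```elixir
defmodule NormalizeSketch do
  # lowercase -> split -> stem (stubbed) -> drop stopwords -> count.
  @stopwords ~w(the a an is are to of and in)

  def normalize(text) do
    text
    |> String.downcase()
    |> String.split(~r/[^a-z0-9]+/, trim: true)
    |> Enum.map(&stem/1)
    |> Enum.reject(&(&1 in @stopwords))
    |> Enum.frequencies()
  end

  # Trivial stand-in for a real stemmer such as Porter/Snowball.
  defp stem(word), do: String.trim_trailing(word, "s")
end
```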
@spec strengthen_tokens(%{required(String.t()) => number()}, %{required(String.t()) => number()}) :: %{required(String.t()) => number()}
Sublinearly increases token counts based on context tokens. For each {token, ctx_count} in context_tokens with ctx_count > 0:
- If token not in pattern_tokens: adds token with count equal to ctx_count.
- If token exists: increment = log10(1.0 + ctx_count); new count = old + increment.
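The two rules above can be sketched directly; this is an illustration of the stated behavior, not the module's actual code:

```elixir
defmodule StrengthenSketch do
  # New tokens enter at their context count; existing tokens grow
  # sublinearly by log10(1 + ctx_count), so repeated reinforcement
  # from a single noisy context cannot blow a count up linearly.
  def strengthen(pattern_tokens, context_tokens) do
    context_tokens
    |> Enum.filter(fn {_token, ctx_count} -> ctx_count > 0 end)
    |> Enum.reduce(pattern_tokens, fn {token, ctx_count}, acc ->
      case Map.fetch(acc, token) do
        {:ok, old} -> Map.put(acc, token, old + :math.log10(1.0 + ctx_count))
        :error -> Map.put(acc, token, ctx_count)
      end
    end)
  end
end
```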
Updates memory pattern tokens by training with new bag-of-words. Used for strengthen/weaken operations.
@spec trim_to_top_k(%{required(String.t()) => non_neg_integer()}, non_neg_integer()) :: %{required(String.t()) => non_neg_integer()}
Trims accumulated tokens to top K by frequency to prevent unbounded growth.
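A straightforward sketch of top-K trimming: sort by count descending, take K, rebuild the map. The tie-breaking order between equal counts is unspecified here, as it presumably is in the original.

```elixir
defmodule TrimSketch do
  # Keep only the K highest-frequency tokens; bounds accumulator growth.
  def trim_to_top_k(tokens, k) do
    tokens
    |> Enum.sort_by(fn {_token, count} -> count end, :desc)
    |> Enum.take(k)
    |> Map.new()
  end
end
```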
Validates memory attributes. Returns {:ok, memory} or {:error, reason}.
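The `{:ok, memory} | {:error, reason}` shape can be illustrated as below. The specific rules checked (non-empty label, label length cap) and the `64`-character limit are assumptions for the sketch, not `AI.Memory`'s actual validation logic.

```elixir
defmodule ValidateSketch do
  # Illustrative validation only; the real rules are not shown in the docs.
  @max_label_chars 64

  def validate(%{label: label} = memory) when is_binary(label) do
    cond do
      label == "" -> {:error, "label cannot be empty"}
      String.length(label) > @max_label_chars -> {:error, "label too long"}
      true -> {:ok, memory}
    end
  end

  def validate(_other), do: {:error, "label is required"}
end
```

Returning a tagged tuple lets callers pattern-match or chain validation in a `with` block rather than raising.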
@spec weaken_tokens(%{required(String.t()) => number()}, %{required(String.t()) => number()}) :: %{required(String.t()) => number()}
Sublinearly decreases token counts based on context tokens. For each {token, ctx_count} in context_tokens with ctx_count > 0:
- If token not in pattern_tokens: ignored.
- If token exists: decrement = log10(1.0 + ctx_count); new count = old - decrement; tokens with new count < 1.0 are removed.
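The decay rules above mirror the strengthen case and can be sketched the same way; again an illustration of the stated behavior, not the module's actual code:

```elixir
defmodule WeakenSketch do
  # Known tokens shrink sublinearly by log10(1 + ctx_count); tokens
  # falling below 1.0 are dropped; unknown tokens are ignored.
  def weaken(pattern_tokens, context_tokens) do
    context_tokens
    |> Enum.filter(fn {_token, ctx_count} -> ctx_count > 0 end)
    |> Enum.reduce(pattern_tokens, fn {token, ctx_count}, acc ->
      case Map.fetch(acc, token) do
        {:ok, old} ->
          new = old - :math.log10(1.0 + ctx_count)
          if new < 1.0, do: Map.delete(acc, token), else: Map.put(acc, token, new)

        :error ->
          acc
      end
    end)
  end
end
```

Dropping tokens below 1.0 (rather than letting them go negative) keeps weakening symmetric with strengthening: a token weakened enough simply leaves the pattern.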