AI.Memory (fnord v0.8.82)


Pure functions for memory matching logic.

Memories are Bayesian-weighted patterns that fire automatic thoughts based on conversation context. Each memory stores a bag-of-words pattern and computes match probabilities against accumulated conversation tokens.

Summary

Functions

clamp_weight(weight)
Clamps weight to valid range.

compute_match_probability(accumulated_tokens, pattern_tokens)
Computes the Bayesian match probability between accumulated conversation tokens and a memory's pattern tokens.

compute_score(memory, accumulated_tokens)
Computes the final score for a memory by combining match probability and weight. Weight is clamped to prevent runaway values.

generate_slug(label)
Generates a slug from a label using Django/newspaper style.

max_label_chars()
Returns the maximum number of characters allowed for a memory label.

merge_tokens(accumulator, new_tokens)
Merges new token frequencies into an existing accumulator.

new(attrs)
Creates a new memory with default values.

normalize_to_tokens(text)
Normalizes text into a bag-of-words with frequencies. Pipeline: lowercase -> split -> stem -> remove stopwords -> count frequencies.

strengthen_tokens(pattern_tokens, context_tokens)
Sublinearly increases token counts based on context tokens.

train(memory, match_input, weight_delta)
Updates a memory's pattern tokens by training against a new bag-of-words. Used for strengthen/weaken operations.

trim_to_top_k(tokens, k)
Trims accumulated tokens to top K by frequency to prevent unbounded growth.

validate(memory)
Validates memory attributes. Returns {:ok, memory} or {:error, reason}.

weaken_tokens(pattern_tokens, context_tokens)
Sublinearly decreases token counts based on context tokens.

Types

scope()

@type scope() :: :global | :project

t()

@type t() :: %AI.Memory{
  children: [String.t()],
  created_at: String.t(),
  fire_count: non_neg_integer(),
  id: String.t(),
  label: String.t(),
  last_fired: String.t() | nil,
  parent_id: String.t() | nil,
  pattern_tokens: %{required(String.t()) => non_neg_integer()},
  response_template: String.t(),
  scope: scope(),
  slug: String.t(),
  success_count: non_neg_integer(),
  weight: float()
}

Functions

clamp_weight(weight)

@spec clamp_weight(float()) :: float()

Clamps weight to valid range.

compute_match_probability(accumulated_tokens, pattern_tokens)

@spec compute_match_probability(
        %{required(String.t()) => non_neg_integer()},
        %{required(String.t()) => non_neg_integer()}
      ) :: float()

Computes the Bayesian match probability between accumulated conversation tokens and a memory's pattern tokens.

Returns a score between 0.0 and 1.0 representing match confidence. Uses log probabilities with Laplace smoothing to avoid underflow.
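As a rough sketch of the approach described above (this is an illustration, not fnord's actual implementation; the smoothing constants and normalization are assumptions), the log-probability accumulation might look like:

```elixir
defmodule MatchSketch do
  # Hypothetical Bayesian matching with Laplace (add-one) smoothing.
  # Constants and normalization are assumptions, not AI.Memory's code.
  def match_probability(accumulated, pattern)
      when map_size(accumulated) == 0 or map_size(pattern) == 0,
      do: 0.0

  def match_probability(accumulated, pattern) do
    vocab = map_size(pattern)
    total = pattern |> Map.values() |> Enum.sum()

    # Sum count * log P(token | pattern) rather than multiplying raw
    # probabilities, which would underflow for long conversations.
    log_p =
      Enum.reduce(accumulated, 0.0, fn {token, count}, acc ->
        p = (Map.get(pattern, token, 0) + 1) / (total + vocab)
        acc + count * :math.log(p)
      end)

    n = accumulated |> Map.values() |> Enum.sum()

    # Geometric mean of per-token probabilities, mapped back into (0.0, 1.0].
    :math.exp(log_p / n)
  end
end
```

Smoothing guarantees every token gets nonzero probability, so a few off-pattern tokens lower the score without zeroing it out.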

compute_score(memory, accumulated_tokens)

@spec compute_score(t(), %{required(String.t()) => non_neg_integer()}) :: float()

Computes the final score for a memory by combining match probability and weight. Weight is clamped to prevent runaway values.
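One plausible combination is a simple product of the clamped weight and the match probability; treat this as an illustrative guess (including the 0.0..2.0 clamp bounds) rather than the actual formula:

```elixir
defmodule ScoreSketch do
  # Hypothetical: assumes weights clamp to 0.0..2.0 and that the score is
  # clamped weight times match probability. AI.Memory's real formula and
  # bounds may differ.
  def clamp_weight(w), do: min(max(w, 0.0), 2.0)

  def compute_score(match_probability, weight) do
    clamp_weight(weight) * match_probability
  end
end
```

Clamping before multiplying is what keeps a repeatedly reinforced memory from drowning out every other candidate.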

debug(msg)

@spec debug(String.t()) :: :ok

generate_slug(label)

@spec generate_slug(String.t()) :: String.t()

Generates a slug from a label using Django/newspaper style:

  • Lowercase
  • Remove articles (a, an, the)
  • Stem tokens
  • Join with dashes
  • Truncate to 50 characters
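The steps above can be sketched as a single pipeline; `stem/1` here is an identity placeholder standing in for whatever stemmer fnord actually uses:

```elixir
defmodule SlugSketch do
  # Django/newspaper-style slug: lowercase, drop articles, stem, dash-join,
  # truncate to 50. stem/1 is a placeholder; the real stemmer will differ.
  defp stem(word), do: word

  def generate_slug(label) do
    label
    |> String.downcase()
    |> String.split(~r/[^a-z0-9]+/, trim: true)
    |> Enum.reject(&(&1 in ~w(a an the)))
    |> Enum.map(&stem/1)
    |> Enum.join("-")
    |> String.slice(0, 50)
  end
end
```

For example, "The Quick Brown Fox" would slugify to "quick-brown-fox" under this sketch.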

max_label_chars()

@spec max_label_chars() :: non_neg_integer()

Returns the maximum number of characters allowed for a memory label.

merge_tokens(accumulator, new_tokens)

@spec merge_tokens(
        %{required(String.t()) => non_neg_integer()},
        %{required(String.t()) => non_neg_integer()}
      ) :: %{required(String.t()) => non_neg_integer()}

Merges new token frequencies into an existing accumulator.
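Since both arguments are frequency maps, the merge is plausibly a `Map.merge/3` that sums counts for shared tokens (a sketch, not necessarily fnord's code):

```elixir
defmodule TokenMergeSketch do
  # Bag-of-words union: counts for tokens present in both maps are summed;
  # tokens unique to either side are kept as-is.
  def merge_tokens(accumulator, new_tokens) do
    Map.merge(accumulator, new_tokens, fn _token, a, b -> a + b end)
  end
end
```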

new(attrs)

@spec new(map()) :: t()

Creates a new memory with default values.

normalize_to_tokens(text)

@spec normalize_to_tokens(String.t()) :: %{required(String.t()) => non_neg_integer()}

Normalizes text into a bag-of-words with frequencies. Pipeline: lowercase -> split -> stem -> remove stopwords -> count frequencies.
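The pipeline maps naturally onto Elixir's `Enum` functions; `stopwords/0` and `stem/1` below are placeholders for fnord's actual stopword list and stemmer:

```elixir
defmodule NormalizeSketch do
  # Placeholder stopword list and identity stemmer; the real ones differ.
  defp stopwords, do: ~w(a an the and or of to in is)
  defp stem(word), do: word

  # lowercase -> split -> stem -> remove stopwords -> count frequencies
  def normalize_to_tokens(text) do
    text
    |> String.downcase()
    |> String.split(~r/\W+/, trim: true)
    |> Enum.map(&stem/1)
    |> Enum.reject(&(&1 in stopwords()))
    |> Enum.frequencies()
  end
end
```

Under this sketch, "The cat and the cat" normalizes to `%{"cat" => 2}`.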

strengthen_tokens(pattern_tokens, context_tokens)

@spec strengthen_tokens(
        %{required(String.t()) => number()},
        %{required(String.t()) => number()}
      ) :: %{required(String.t()) => number()}

Sublinearly increases token counts based on context tokens. For each {token, ctx_count} in context_tokens with ctx_count > 0:

  • If token is not in pattern_tokens: it is added with count equal to ctx_count.
  • If token exists: increment = log10(1.0 + ctx_count); new count = old + increment.

train(memory, match_input, weight_delta)

@spec train(t(), String.t(), float()) :: t()

Updates a memory's pattern tokens by training against a new bag-of-words. Used for strengthen/weaken operations.

trim_to_top_k(tokens, k)

@spec trim_to_top_k(
        %{required(String.t()) => non_neg_integer()},
        non_neg_integer()
      ) :: %{required(String.t()) => non_neg_integer()}

Trims accumulated tokens to top K by frequency to prevent unbounded growth.
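A straightforward realization sorts descending by count and keeps the first k (note the doc does not specify tie-breaking among equal counts, so this sketch leaves it unspecified too):

```elixir
defmodule TrimSketch do
  # Keep only the k highest-frequency tokens to bound accumulator growth.
  def trim_to_top_k(tokens, k) do
    tokens
    |> Enum.sort_by(fn {_token, count} -> count end, :desc)
    |> Enum.take(k)
    |> Map.new()
  end
end
```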

validate(memory)

@spec validate(t()) :: {:ok, t()} | {:error, String.t()}

Validates memory attributes. Returns {:ok, memory} or {:error, reason}.

weaken_tokens(pattern_tokens, context_tokens)

@spec weaken_tokens(
        %{required(String.t()) => number()},
        %{required(String.t()) => number()}
      ) :: %{required(String.t()) => number()}

Sublinearly decreases token counts based on context tokens. For each {token, ctx_count} in context_tokens with ctx_count > 0:

  • If token not in pattern_tokens: ignored.
  • If token exists: decrement = log10(1.0 + ctx_count); new count = old - decrement; tokens with new count < 1.0 are removed.
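Mirroring strengthen_tokens/2, the rules above can be sketched with the removal threshold made explicit (an illustration, not necessarily fnord's exact code):

```elixir
defmodule WeakenSketch do
  # Tokens decay by log10(1 + ctx_count); once a count falls below 1.0
  # the token is dropped from the pattern entirely.
  def weaken_tokens(pattern_tokens, context_tokens) do
    context_tokens
    |> Enum.filter(fn {_token, ctx_count} -> ctx_count > 0 end)
    |> Enum.reduce(pattern_tokens, fn {token, ctx_count}, acc ->
      case Map.fetch(acc, token) do
        # Tokens absent from the pattern are ignored.
        :error ->
          acc

        {:ok, old} ->
          new = old - :math.log10(1.0 + ctx_count)
          if new < 1.0, do: Map.delete(acc, token), else: Map.put(acc, token, new)
      end
    end)
  end
end
```

The sub-1.0 cutoff gives weakening a natural endpoint: a pattern token that keeps failing eventually disappears rather than lingering at a tiny weight.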