viva_glyph/association
Association - Hebbian learning for glyph-context mapping
VIVA learns which glyphs to “speak” in which situations through Hebbian association: “neurons that fire together wire together.”
Theory
Based on Hebbian learning (Hebb, 1949) with Oja’s rule (Oja, 1982):
- When context C and glyph G co-occur with positive outcome, strengthen C→G
- Connections decay over time without reinforcement
- Winner-takes-all selection for output
Oja’s Rule (LLM Consensus Validated 2025-01-24)
Δw = η(t) × y × (x - w × y)
where x is the presynaptic input (1.0 on co-occurrence), y is the postsynaptic activity, and w is the connection strength. η(t) decays as the association fires, producing memory consolidation:
η(t) = η₀ / (1 + fire_count / τ)
This provides:
- Automatic weight normalization (prevents explosion, w* → 1.0)
- Memory consolidation (old associations become “crystallized”)
- Dead neuron prevention (y = max(w, ε) with ε = 0.01)
- Competitive learning (stronger connections dominate)
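To make the two formulas above concrete, here is a minimal numeric sketch in Gleam. The helper names (effective_rate, oja_step) are hypothetical illustrations, not part of this module's API; the real update happens inside learn and strengthen_oja below.

import gleam/float
import gleam/int

// Sketch of eta(t) = eta0 / (1 + fire_count / tau)
// (the InverseDecay schedule).
pub fn effective_rate(eta0: Float, fire_count: Int, tau: Float) -> Float {
  eta0 /. { 1.0 +. int.to_float(fire_count) /. tau }
}

// One Oja step with co-occurrence input x = 1.0 and the
// dead-neuron guard y = max(w, 0.01).
pub fn oja_step(w: Float, eta: Float) -> Float {
  let y = float.max(w, 0.01)
  let x = 1.0
  w +. eta *. y *. { x -. w *. y }
}

// Example: eta0 = 0.1 after 50 firings with tau = 100.0 gives
// eta ≈ 0.067, so a weight of 0.5 moves to roughly 0.525.
pub fn example() -> Float {
  oja_step(0.5, effective_rate(0.1, 50, 100.0))
}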
Active Inference Interpretation
Weights w can be interpreted as log-prior probabilities:
P(Glyph | Context) ∝ exp(w)
Recall minimizes surprise: argmax_G P(G|C)
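As a sketch of this reading (hypothetical code, not part of the module), normalizing exp(w) over a context's weights yields the prior that recall maximizes:

import gleam/float
import gleam/list

// Softmax over connection weights: P(G|C) = exp(w_G) / Σ exp(w).
pub fn priors(weights: List(Float)) -> List(Float) {
  let exps = list.map(weights, float.exponential)
  let total = float.sum(exps)
  list.map(exps, fn(e) { e /. total })
}

Because exp is monotonic in w, the argmax of these priors is the same glyph that winner-takes-all recall selects.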
References
- Hebb, D. O. (1949). The Organization of Behavior. Wiley.
- Oja, E. (1982). Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15(3), 267-273.
- Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
Types
Association between context and glyph
pub type Association {
  Association(
    context: Int,
    glyph: glyph.Glyph,
    strength: Float,
    fire_count: Int,
  )
}
Constructors
- Association(context: Int, glyph: glyph.Glyph, strength: Float, fire_count: Int)

  Arguments
  - context: Context identifier
  - glyph: Associated glyph
  - strength: Connection strength in [0.0, 1.0]
  - fire_count: Number of times this association fired
Collection of learned associations
pub type AssociationMemory {
  AssociationMemory(
    associations: List(Association),
    learning_rate: Float,
    decay_rate: Float,
    prune_threshold: Float,
    learning_rule: LearningRule,
    learning_decay: LearningDecay,
  )
}
Constructors
- AssociationMemory(associations: List(Association), learning_rate: Float, decay_rate: Float, prune_threshold: Float, learning_rule: LearningRule, learning_decay: LearningDecay)

  Arguments
  - associations: All learned associations
  - learning_rate: Base learning rate (η₀)
  - decay_rate: Decay rate per tick
  - prune_threshold: Minimum strength before pruning
  - learning_rule: Learning rule (Classic or Oja)
  - learning_decay: Learning rate decay schedule for consolidation
Learning rate decay schedule for memory consolidation
pub type LearningDecay {
  NoDecay
  InverseDecay(tau: Float)
  InverseSqrtDecay(tau: Float)
}
Constructors
- NoDecay: no decay, η(t) = η₀ (plastic forever, “goldfish memory”)
- InverseDecay(tau: Float): inverse decay, η(t) = η₀ / (1 + t/τ) (gradual consolidation)
- InverseSqrtDecay(tau: Float): inverse square root decay, η(t) = η₀ / √(1 + t/τ) (slower consolidation)
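The schedules differ only in how t (the association's fire count) scales the base rate. A sketch of the documented formulas, using a hypothetical helper:

import gleam/float
import viva_glyph/association

// Evaluate a decay schedule at t = fire_count.
pub fn eta_at(
  decay: association.LearningDecay,
  eta0: Float,
  t: Float,
) -> Float {
  case decay {
    association.NoDecay -> eta0
    association.InverseDecay(tau) -> eta0 /. { 1.0 +. t /. tau }
    association.InverseSqrtDecay(tau) -> {
      let assert Ok(root) = float.square_root(1.0 +. t /. tau)
      eta0 /. root
    }
  }
}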
Learning rule selection
pub type LearningRule {
  ClassicHebbian
  OjaRule
}
Constructors
- ClassicHebbian: classic Hebbian, Δw = η × (1 - w)
- OjaRule: Oja’s rule, Δw = η × (x × y - w × y²) (more stable)
Values
pub fn consolidation_progress(
  assoc: Association,
  decay: LearningDecay,
) -> Float

Get consolidation progress in [0.0, 1.0]: 0.0 = fully plastic, 1.0 = fully crystallized.
pub fn contexts_for_glyph(
  memory: AssociationMemory,
  glyph: glyph.Glyph,
) -> List(Association)

Get the associations for a specific glyph (reverse lookup).
pub fn free_energy(
  memory: AssociationMemory,
  context: Int,
) -> Float

Calculate the free energy (expected surprise) for a context:

F = Σ P(G|C) × Surprise(G|C)

Lower F means more confident predictions.
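A sketch of the computation (hypothetical code, assuming P(G|C) is each positive strength normalized over the context, as surprise below defines it):

import gleam/float
import gleam/list

// F = Σ P(G|C) × -ln(P(G|C)): the entropy of the context's
// predictive distribution. One dominant strength gives low F;
// uniform strengths give high F. Strengths must be positive.
pub fn free_energy_sketch(strengths: List(Float)) -> Float {
  let total = float.sum(strengths)
  strengths
  |> list.map(fn(s) {
    let p = s /. total
    let assert Ok(log_p) = float.natural_logarithm(p)
    p *. float.negate(log_p)
  })
  |> float.sum
}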
pub fn is_crystallized(
  assoc: Association,
  decay: LearningDecay,
) -> Bool

Check whether an association is “crystallized” (a consolidated memory): its fire_count is high enough that the effective learning rate has fallen below 10% of the base rate.
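A small usage sketch combining both consolidation helpers (the tau value and context id are arbitrary; g stands in for a glyph value from viva_glyph/glyph):

import viva_glyph/association
import viva_glyph/glyph

pub fn check_consolidation(g: glyph.Glyph) -> Bool {
  let assoc = association.new_association(1, g)
  let schedule = association.InverseDecay(100.0)
  // A fresh association has fired zero times, so progress is ~0.0
  // and it should not yet be crystallized.
  let _progress = association.consolidation_progress(assoc, schedule)
  association.is_crystallized(assoc, schedule)
}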
pub fn learn(
  memory: AssociationMemory,
  context: Int,
  glyph: glyph.Glyph,
) -> AssociationMemory

Learn: reinforce a context-glyph association using the configured rule.
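A minimal end-to-end sketch (context id 42 and the repeat count are arbitrary; g stands in for a glyph from viva_glyph/glyph):

import gleam/option
import viva_glyph/association
import viva_glyph/glyph

pub fn learn_and_recall(g: glyph.Glyph) -> option.Option(glyph.Glyph) {
  association.new_memory()
  |> association.learn(42, g)
  |> association.learn(42, g)
  |> association.learn(42, g)
  |> association.recall(42)
}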
pub fn memory_with_config(
  learning_rate: Float,
  decay_rate: Float,
  prune_threshold: Float,
) -> AssociationMemory

Create an association memory with a custom configuration.
pub fn memory_with_consolidation(
  learning_rate: Float,
  decay_rate: Float,
  prune_threshold: Float,
  rule: LearningRule,
  consolidation: LearningDecay,
) -> AssociationMemory

Create a memory with full customization, including the consolidation schedule.
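A usage sketch with hypothetical parameter values (η₀ = 0.1, decay 0.01 per tick, pruning below 0.05, consolidation over roughly 100 firings):

import viva_glyph/association

pub fn configured_memory() -> association.AssociationMemory {
  association.memory_with_consolidation(
    0.1,
    0.01,
    0.05,
    association.OjaRule,
    association.InverseDecay(100.0),
  )
}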
pub fn memory_with_rule(
  learning_rate: Float,
  decay_rate: Float,
  prune_threshold: Float,
  rule: LearningRule,
) -> AssociationMemory

Create a memory with a specific learning rule.
pub fn new_association(
  context: Int,
  glyph: glyph.Glyph,
) -> Association

Create a new association.
pub fn new_memory() -> AssociationMemory
Create an empty association memory with the default configuration (Oja’s rule plus consolidation).
pub fn recall(
  memory: AssociationMemory,
  context: Int,
) -> option.Option(glyph.Glyph)

Find the best glyph for a context (winner-takes-all).
pub fn recall_all(
  memory: AssociationMemory,
  context: Int,
) -> List(Association)

Find all glyphs for a context, sorted by strength.
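A usage sketch taking the top candidates for a context (the cutoff of three is arbitrary):

import gleam/list
import viva_glyph/association

pub fn top_three(
  memory: association.AssociationMemory,
  context: Int,
) -> List(association.Association) {
  association.recall_all(memory, context)
  |> list.take(3)
}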
pub fn strengthen(assoc: Association, rate: Float) -> Association
Strengthen an association (classic Hebbian): Δw = η × (1 - w), which asymptotes to 1.0.
pub fn strengthen_oja(
  assoc: Association,
  rate: Float,
) -> Association

Strengthen using Oja’s rule (more stable): Δw = η × (x × y - w × y²)

Validated by DeepSeek R1-0528 (2025-01-24):
- Uses y = max(w, ε) with ε = 0.01 to prevent dead neurons
- Equilibrium: w* ≈ 0.995 (solving w² + 0.01w - 1 = 0)
- Preserves competitive dynamics while avoiding the w = 0 trap

The y² term provides automatic normalization and competitive learning.
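For example, with w = 0.5, η = 0.1, and x = 1.0: y = max(0.5, 0.01) = 0.5, so Δw = 0.1 × (1.0 × 0.5 - 0.5 × 0.5²) = 0.0375, moving the strength to 0.5375.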
pub fn strongest(
  memory: AssociationMemory,
  n: Int,
) -> List(Association)

Get the n strongest associations across all contexts.
pub fn surprise(
  memory: AssociationMemory,
  context: Int,
  g: glyph.Glyph,
) -> Float

Calculate the surprise for a glyph given a context:

Surprise = -ln(P(Glyph|Context)), where P ∝ strength

Lower surprise means a better prediction.
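For example, if a context holds two associations with strengths 0.8 and 0.2, the stronger glyph has P = 0.8 and surprise -ln(0.8) ≈ 0.22, while the weaker one has surprise -ln(0.2) ≈ 1.61.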
pub fn tick(memory: AssociationMemory) -> AssociationMemory
Tick: apply decay to all associations and prune the weak ones.
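A usage sketch advancing the memory several ticks (the helper name is hypothetical; each tick decays every association and prunes those below the threshold):

import gleam/list
import viva_glyph/association

pub fn advance(
  memory: association.AssociationMemory,
  ticks: Int,
) -> association.AssociationMemory {
  list.repeat(Nil, ticks)
  |> list.fold(memory, fn(m, _) { association.tick(m) })
}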