plasticity_hebbian (macula_tweann v0.18.1)


Basic Hebbian plasticity rule implementation.

This module implements the classic Hebbian learning rule, often summarized as "neurons that fire together wire together."

Theory

Hebbian learning was proposed by Donald Hebb in 1949 as a model for how neural connections are strengthened through experience. The basic rule is:

Δw_ij = η × pre_i × post_j

Where:

- Δw_ij is the weight change from neuron i to j
- η (eta) is the learning rate
- pre_i is the presynaptic (input) activation
- post_j is the postsynaptic (output) activation

Variants Implemented

1. **Basic Hebbian** (default): Δw = η × pre × post

2. **Bounded Hebbian** (with weight clamping): Δw = η × pre × post, clamped to [-1, 1]

3. **Oja's Rule** (with normalization): Δw = η × post × (pre - post × w). This prevents unbounded weight growth.
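The three update formulas can be written out as a standalone sketch (the module and function names here are hypothetical, for illustration only, and are not part of this library's API):

```erlang
%% hebbian_sketch.erl -- illustrative sketch of the three variants above.
%% Hypothetical module; not the library's actual implementation.
-module(hebbian_sketch).
-export([basic/4, bounded/6, oja/4]).

%% Basic Hebbian: Dw = Eta * Pre * Post
basic(W, Eta, Pre, Post) ->
    W + Eta * Pre * Post.

%% Bounded Hebbian: same update, clamped to [Min, Max]
bounded(W, Eta, Pre, Post, Min, Max) ->
    max(Min, min(Max, W + Eta * Pre * Post)).

%% Oja's rule: Dw = Eta * Post * (Pre - Post * W)
oja(W, Eta, Pre, Post) ->
    W + Eta * Post * (Pre - Post * W).
```

With W = 0.5, η = 0.01, pre = 0.8, post = 0.6, `basic/4` yields 0.5048 (matching the usage example below), while `oja/4` yields 0.503 because the forgetting term post × w = 0.3 is subtracted from pre before scaling.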

Usage

```erlang
Weight = {0.5, 0.0, 0.01, []},   % Initial weight spec
PreActivity = 0.8,
PostActivity = 0.6,
Reward = 0.0,                    % Not used in basic Hebbian

NewWeight = plasticity_hebbian:apply_rule(Weight, PreActivity, PostActivity, Reward).
%% => {0.5048, 0.0048, 0.01, []}
```

Configuration

The learning rate is stored in the weight_spec tuple (3rd element). Additional parameters can be stored in the parameter list (4th element):

- `{bounded, Min, Max}` - Clamp weights to [Min, Max]
- `{oja, true}` - Use Oja's normalized rule
- `{decay, Rate}` - Apply weight decay: w = w × (1 - decay)
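One way those parameters could drive the update is sketched below. This is a hypothetical standalone helper, not the module's actual internals; it assumes the tuple layout {Weight, LastDelta, LearningRate, Params} implied by the usage example above:

```erlang
%% hebbian_params_sketch.erl -- hypothetical sketch of parameter dispatch.
-module(hebbian_params_sketch).
-export([update/4]).

%% Weight spec assumed to be {W, LastDelta, Eta, Params}.
update({W, _LastDelta, Eta, Params}, Pre, Post, _Reward) ->
    %% Optional decay: shrink the weight toward zero first.
    Decayed = case lists:keyfind(decay, 1, Params) of
        {decay, Rate} -> W * (1.0 - Rate);
        false         -> W
    end,
    %% Choose the Hebbian term: Oja's normalized form or the basic rule.
    Dw = case lists:keyfind(oja, 1, Params) of
        {oja, true} -> Eta * Post * (Pre - Post * Decayed);
        _           -> Eta * Pre * Post
    end,
    %% Optional clamping to [Min, Max].
    NewW = case lists:keyfind(bounded, 1, Params) of
        {bounded, Min, Max} -> max(Min, min(Max, Decayed + Dw));
        false               -> Decayed + Dw
    end,
    {NewW, Dw, Eta, Params}.
```

With an empty parameter list this reproduces the basic-Hebbian result from the usage example: `update({0.5, 0.0, 0.01, []}, 0.8, 0.6, 0.0)` gives a new weight of 0.5048 with delta 0.0048.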

References

[1] Hebb, D. O. (1949). The Organization of Behavior. Wiley.

[2] Oja, E. (1982). "A simplified neuron model as a principal component analyzer." Journal of Mathematical Biology.

Summary

Functions

- apply_bounded/5 - Apply bounded Hebbian rule with explicit bounds.
- apply_oja/4 - Apply Oja's normalized Hebbian rule.
- apply_rule/4 - Apply the Hebbian learning rule to a weight.
- apply_with_decay/5 - Apply Hebbian with weight decay.
- description/0 - Return a description of this rule.
- init/1 - Initialize any state for this rule.
- name/0 - Return the rule name.
- reset/1 - Reset the rule state.

Types

weight_spec/0

-type weight_spec() :: plasticity:weight_spec().

Functions

apply_bounded(_, PreActivity, PostActivity, Min, Max)

-spec apply_bounded(weight_spec(), float(), float(), float(), float()) -> weight_spec().

Apply bounded Hebbian rule with explicit bounds.

apply_oja(_, PreActivity, PostActivity, Reward)

-spec apply_oja(weight_spec(), float(), float(), float()) -> weight_spec().

Apply Oja's normalized Hebbian rule.

Oja's rule includes a "forgetting" term that prevents unbounded weight growth, making it suitable for self-organizing maps and principal component analysis.
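The effect of the forgetting term can be seen numerically in a small standalone sketch (hypothetical module, with constant activities pre = post = 1.0): the basic rule grows the weight linearly forever, while Oja's rule settles at the fixed point w = pre/post = 1.0, where pre - post × w = 0.

```erlang
%% oja_growth_sketch.erl -- hypothetical comparison of weight growth
%% under basic Hebbian vs. Oja's rule (Eta = 0.1, Pre = Post = 1.0).
-module(oja_growth_sketch).
-export([run/0]).

step_basic(W) -> W + 0.1 * 1.0 * 1.0.            % grows without bound
step_oja(W)   -> W + 0.1 * 1.0 * (1.0 - 1.0 * W). % converges to 1.0

run() ->
    Steps = lists:seq(1, 100),
    Basic = lists:foldl(fun(_, W) -> step_basic(W) end, 0.5, Steps),
    Oja   = lists:foldl(fun(_, W) -> step_oja(W) end, 0.5, Steps),
    {Basic, Oja}.
```

After 100 updates the basic-Hebbian weight has reached roughly 10.5, while the Oja weight is indistinguishable from 1.0.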

apply_rule(_, PreActivity, PostActivity, Reward)

-spec apply_rule(weight_spec(), float(), float(), float()) -> weight_spec().

Apply the Hebbian learning rule to a weight.

This is the main callback that updates a single weight based on the activities of the pre and post neurons.

apply_with_decay(_, PreActivity, PostActivity, Reward, DecayRate)

-spec apply_with_decay(weight_spec(), float(), float(), float(), float()) -> weight_spec().

Apply Hebbian with weight decay.

Weight decay prevents weights from growing too large over time by slightly reducing all weights each update.
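As a formula, one plausible ordering (an assumption for this sketch; the module may combine the terms differently) is to shrink the weight first and then add the Hebbian term:

```erlang
%% decay_sketch.erl -- hypothetical standalone illustration of the
%% {decay, Rate} behavior; assumes decay is applied before the update.
-module(decay_sketch).
-export([hebb_decay/5]).

%% W' = W * (1 - DecayRate) + Eta * Pre * Post
hebb_decay(W, Eta, Pre, Post, DecayRate) ->
    W * (1.0 - DecayRate) + Eta * Pre * Post.
```

For W = 0.5, η = 0.01, pre = 0.8, post = 0.6 and a 1% decay, the weight becomes 0.495 + 0.0048 = 0.4998: the decay offsets most of the Hebbian gain.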

description()

-spec description() -> binary().

Return a description of this rule.

init(Params)

-spec init(map()) -> undefined.

Initialize any state for this rule.

For basic Hebbian, no state is needed.

name()

-spec name() -> atom().

Return the rule name.

reset(State)

-spec reset(term()) -> undefined.

Reset the rule state.