plasticity behaviour (macula_tweann v0.18.1)


Plasticity behavior module - defines the interface for learning rules.

This module provides a behavior (interface) for implementing different plasticity rules that enable neural networks to learn during operation. Unlike evolutionary weight changes, plasticity rules update weights based on neural activity patterns.

Theory

Plasticity refers to the brain's ability to modify its connections based on experience. The most fundamental rule is Hebbian learning: "neurons that fire together wire together" (Hebb, 1949).

Mathematically, basic Hebbian learning is: Δw_ij = η × pre_i × post_j

Where:

- Δw_ij is the change in the weight from neuron i to neuron j
- η is the learning rate
- pre_i is the presynaptic (input) activity
- post_j is the postsynaptic (output) activity
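The rule above can be sketched as a self-contained Erlang function (the module and function names here are illustrative, not part of this library):

```erlang
-module(hebbian_demo).
-export([delta/3]).

%% Basic Hebbian rule: DeltaW = Eta * Pre * Post.
%% Both activities high and same-signed => positive delta, connection strengthens.
-spec delta(float(), float(), float()) -> float().
delta(Eta, Pre, Post) ->
    Eta * Pre * Post.
```

For example, `delta(0.1, 0.8, 0.6)` yields a delta of about 0.048: both neurons are active together, so the weight grows.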

More sophisticated rules include:

- Oja's rule: adds weight normalization to prevent unbounded growth
- BCM rule: includes a sliding threshold for potentiation/depression
- STDP (spike-timing-dependent plasticity): considers the relative timing of pre- and postsynaptic spikes
- Modulated Hebbian: multiplies the Hebbian delta by a reward/punishment signal

Usage

Implement the behavior in a module:

-module(plasticity_hebbian).
-behaviour(plasticity).

-export([apply_rule/4, name/0, description/0]).

name() -> hebbian.

description() -> <<"Basic Hebbian learning rule">>.

%% The callback must return an updated weight_spec(), so fold the
%% computed delta back into the spec rather than returning a bare float.
apply_rule(WeightSpec, PreActivity, PostActivity, _Reward) ->
    Delta = plasticity:hebbian_delta(WeightSpec, PreActivity, PostActivity),
    NewWeight = plasticity:get_weight(WeightSpec) + Delta,
    plasticity:set_weight(plasticity:set_delta(WeightSpec, Delta), NewWeight).

Then use the plasticity module to apply rules:

NewWeights = plasticity:apply_to_network(hebbian, Weights, Activations, Reward)

References

[1] Hebb, D. O. (1949). The Organization of Behavior. Wiley.

[2] Oja, E. (1982). A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15(3).

[3] Bi, G., & Poo, M. (1998). Synaptic Modifications in Cultured Hippocampal Neurons. Journal of Neuroscience, 18(24).

Summary

Functions

apply_to_layer/4 - Apply a plasticity rule to all weights in a layer.

apply_to_network/4 - Apply a plasticity rule to an entire network's weights.

apply_to_weights/5 - Apply a plasticity rule to a single weight.

available_rules/0 - List available plasticity rules.

clamp_weight/3 - Clamp weight to stay within bounds.

get_delta/1 - Extract the delta weight from a weight_spec tuple.

get_learning_rate/1 - Extract the learning rate from a weight_spec tuple.

get_weight/1 - Extract the weight value from a weight_spec tuple.

hebbian_delta/3 - Calculate the Hebbian weight delta.

hebbian_delta/4 - Calculate the Hebbian delta with an explicit learning rate.

normalize_weight/2 - Normalize weight to prevent unbounded growth (Oja's modification).

rule_module/1 - Get the module implementing a plasticity rule.

set_delta/2 - Set the delta weight in a weight_spec tuple.

set_weight/2 - Set the weight value in a weight_spec tuple.

Types

layer_weights/0

-type layer_weights() :: [{SourceId :: term(), [weight_spec()]}].

weight_spec/0

-type weight_spec() ::
          {Weight :: float(), DeltaWeight :: float(), LearningRate :: float(), ParamList :: list()}.
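For illustration, here is a concrete weight_spec with positional access to its fields (the values are hypothetical, and `weight_spec_demo` is not part of this library):

```erlang
-module(weight_spec_demo).
-export([example/0]).

%% Build a hypothetical weight_spec and read its fields by position.
example() ->
    WS = {0.5, 0.0, 0.01, []},  % {Weight, DeltaWeight, LearningRate, ParamList}
    {Weight, DeltaWeight, LearningRate, _Params} = WS,
    {Weight, DeltaWeight, LearningRate}.
```

In real code the get_*/set_* helpers documented below should be preferred over positional matching, so the tuple layout can evolve without breaking callers.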

Callbacks

apply_rule/4

-callback apply_rule(Weight :: weight_spec(),
                     PreActivity :: float(),
                     PostActivity :: float(),
                     Reward :: float()) ->
                        weight_spec().

description/0

-callback description() -> binary().

init/1

(optional)
-callback init(Params :: map()) -> State :: term().

name/0

-callback name() -> atom().

reset/1

(optional)
-callback reset(State :: term()) -> State :: term().

Functions

apply_to_layer(RuleModule, LayerWeights, PreActivations, PostActivity)

-spec apply_to_layer(module(), layer_weights(), [float()], float()) -> layer_weights().

Apply a plasticity rule to all weights in a layer.

Given a layer's weight structure (list of {SourceId, [weights]}), applies the rule using the corresponding activations.
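The expected input shape can be sketched as follows (source ids and values are hypothetical, and `layer_demo` is illustrative, not part of this library):

```erlang
-module(layer_demo).
-export([inputs/0]).

%% Hypothetical inputs for apply_to_layer/4: two source neurons with one
%% weight_spec each, matching pre-synaptic activations, and the layer's
%% post-synaptic activity.
inputs() ->
    LayerWeights = [{neuron_a, [{0.5, 0.0, 0.01, []}]},
                    {neuron_b, [{-0.2, 0.0, 0.01, []}]}],
    PreActivations = [0.8, 0.3],
    PostActivity = 0.6,
    {LayerWeights, PreActivations, PostActivity}.
```

These terms would then be passed as plasticity:apply_to_layer(plasticity_hebbian, LayerWeights, PreActivations, PostActivity).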

apply_to_network(RuleAtom, AllWeights, AllActivations, Reward)

-spec apply_to_network(atom(), [[layer_weights()]], [[float()]], float()) -> [[layer_weights()]].

Apply a plasticity rule to an entire network's weights.

This is the main entry point for applying learning to a network. It takes all weights organized by layer, activations per layer, and an optional reward signal.
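As an illustration of what a network-level pass amounts to (a simplified sketch of the idea, not the library's implementation), the code below walks a nested weight structure and applies the basic Hebbian update to each weight_spec, assuming one post activity per layer:

```erlang
-module(network_demo).
-export([pass/3]).

%% Apply DeltaW = Eta * Pre * Post to one weight_spec.
update({W, _DW, Eta, Params}, Pre, Post) ->
    DW = Eta * Pre * Post,
    {W + DW, DW, Eta, Params}.

%% Walk layers: each layer pairs [{SourceId, [WeightSpec]}] with its
%% pre-activations and a single post activity (simplified for the sketch).
pass(Layers, PreActsPerLayer, PostPerLayer) ->
    [ [ {Src, [update(WS, Pre, Post) || WS <- Specs]}
        || {{Src, Specs}, Pre} <- lists:zip(Layer, PreActs) ]
      || {Layer, PreActs, Post} <- lists:zip3(Layers, PreActsPerLayer, PostPerLayer) ].
```

The real entry point, plasticity:apply_to_network/4, additionally dispatches on the rule atom and threads the reward signal through each rule application.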

apply_to_weights(RuleModule, Weight, PreActivity, PostActivity, Reward)

-spec apply_to_weights(module(), weight_spec(), float(), float(), float()) -> weight_spec().

Apply a plasticity rule to a single weight.

Takes a rule module, the current weight spec, pre/post activities, and an optional reward signal. Returns the updated weight spec.

available_rules()

-spec available_rules() -> [{atom(), binary()}].

List available plasticity rules.

clamp_weight(Weight, Min, Max)

-spec clamp_weight(float(), float(), float()) -> float().

Clamp weight to stay within bounds.
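A plausible sketch of such a clamp (illustrative, not the library source):

```erlang
-module(clamp_demo).
-export([clamp/3]).

%% Keep W within [Min, Max] using the auto-imported min/2 and max/2 BIFs.
clamp(W, Min, Max) ->
    max(Min, min(Max, W)).
```

For example, clamp(1.7, -1.0, 1.0) returns 1.0, and values already inside the bounds pass through unchanged.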

get_delta(WeightSpec)

-spec get_delta(weight_spec()) -> float().

Extract the delta weight from a weight_spec tuple.

get_learning_rate(WeightSpec)

-spec get_learning_rate(weight_spec()) -> float().

Extract the learning rate from a weight_spec tuple.

get_weight(WeightSpec)

-spec get_weight(weight_spec()) -> float().

Extract the weight value from a weight_spec tuple.

hebbian_delta(WeightSpecOrRate, PreActivity, PostActivity)

-spec hebbian_delta(weight_spec() | float(), float(), float()) -> float().

Calculate the Hebbian weight delta.

Basic Hebbian rule: Δw = η × pre × post

hebbian_delta(LearningRate, CurrentWeight, PreActivity, PostActivity)

-spec hebbian_delta(float(), float(), float(), float()) -> float().

Calculate the Hebbian delta with an explicit learning rate.

normalize_weight(Weight, Magnitude)

-spec normalize_weight(float(), float()) -> float().

Normalize weight to prevent unbounded growth (Oja's modification).

Applies: w' = w / ||w||
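A guarded sketch of the idea (the zero-magnitude guard is an assumption made for this sketch; the library may handle that case differently):

```erlang
-module(normalize_demo).
-export([normalize/2]).

%% w' = w / ||w||; leave the weight unchanged when the magnitude is
%% effectively zero, to avoid division by zero.
normalize(W, Magnitude) when Magnitude > 1.0e-12 ->
    W / Magnitude;
normalize(W, _Magnitude) ->
    W.
```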

rule_module(RuleAtom)

-spec rule_module(atom()) -> module().

Get the module implementing a plasticity rule.

set_delta(WeightSpec, NewDW)

-spec set_delta(weight_spec(), float()) -> weight_spec().

Set the delta weight in a weight_spec tuple.

set_weight(WeightSpec, NewW)

-spec set_weight(weight_spec(), float()) -> weight_spec().

Set the weight value in a weight_spec tuple.