v0.3.0 - Neural Core

Overview

This release refactors the core neural processing modules: plasticity.erl and neuron.erl. These modules implement learning rules and the fundamental neural computation unit. This completes the Foundation Phase.

Phase: Foundation | Duration: 2 weeks | Prerequisites: v0.2.0 (signal_aggregator and functions refactored)

Objectives

  1. Refactor plasticity.erl with clear learning rule implementations
  2. Refactor neuron.erl with documented state and transitions
  3. Rename all cryptic fields to descriptive names
  4. Complete Foundation Phase with comprehensive tests

Modules to Refactor

1. plasticity.erl (444 lines)

Analysis Reference: Section 3.5 of DXNN2_CODEBASE_ANALYSIS.md (lines 425-429)

Issues to Address:

  • Parameter format inconsistency between rules
  • Redundant code between Hebbian variants
  • Heavy nesting in weight update functions

Tasks:

  1. Add module documentation

    %% Module-level edoc comment (edoc has no @module tag; this block
    %% sits directly above -module(plasticity))
    %% @doc Synaptic plasticity learning rules for neural networks
    %%
    %% This module implements various learning rules that modify synaptic
    %% weights based on pre- and post-synaptic activity, simulating
    %% biological neural plasticity.
    %%
    %% == Learning Rules ==
    %% - none: No plasticity (static weights)
    %% - hebbian: Standard Hebbian learning ("fire together, wire together")
    %% - hebbian_w: Windowed Hebbian with decay
    %% - ojas: Oja's rule (normalized Hebbian)
    %% - ojas_w: Windowed Oja's rule
    %% - self_modulatory: Self-modulating plasticity
    %%
    %% == Weight Update Formula ==
    %% Delta_W = LearningRate * PlasticityRule(PreActivity, PostActivity, CurrentWeight)
  2. Consolidate redundant Hebbian variants

    %% Before: separate hebbian/1, hebbian_w/1 with duplicated logic
    %% After: single parametric function
    
    -spec hebbian_learning(Parameters, Inputs, Weights, Output) -> UpdatedWeights
        when Parameters :: plasticity_params(),
             Inputs :: [input_signal()],
             Weights :: [weighted_input()],
             Output :: [float()],
             UpdatedWeights :: [weighted_input()].
    
    hebbian_learning(Params, Inputs, Weights, Output) ->
        case Params of
            {hebbian, [LearningRate]} ->
                apply_hebbian(LearningRate, none, Inputs, Weights, Output);
            {hebbian_w, [LearningRate, WindowSize]} ->
                apply_hebbian(LearningRate, {window, WindowSize}, Inputs, Weights, Output)
        end.
  3. Add type specifications for all learning rules

    -type plasticity_rule() :: none | hebbian | hebbian_w | ojas | ojas_w |
                               self_modulatory | neuromodulated.
    -type plasticity_params() :: {plasticity_rule(), [float()]}.
    
    -spec apply_plasticity(plasticity_params(), [input_signal()],
                           [weighted_input()], [float()]) -> [weighted_input()].
  4. Document parameter conventions for each rule

    %% @doc Apply Hebbian learning rule
    %%
    %% Hebbian learning strengthens connections when pre and post
    %% neurons fire together: Delta_W = eta * pre * post
    %%
    %% Parameters:
    %%   - [LearningRate] for basic hebbian
    %%   - [LearningRate, WindowSize] for windowed hebbian
    %%
    %% The windowed variant includes a decay term to prevent
    %% unbounded weight growth.
  5. Reduce nesting in weight update functions

    • Extract helper functions for weight tuple manipulation
    • Use pattern matching instead of nested case
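The nesting reduction in task 5 can be shown schematically. The sketch below is not the current plasticity.erl source, only the shape of the transformation: an inner case collapses into function clauses with guards.

```erlang
%% Before (schematic): two levels of case nesting
%%   update(Rule, W, Delta) ->
%%       case Rule of
%%           hebbian -> case Delta of D when D > 0 -> W + D; D -> W + D * 0.5 end;
%%           none    -> W
%%       end.

%% After: one clause per rule, a guard instead of the inner case
update(none, W, _Delta) -> W;
update(hebbian, W, Delta) when Delta > 0 -> W + Delta;
update(hebbian, W, Delta) -> W + Delta * 0.5.
```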
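Task 4's parameter conventions also suggest a small validator. The function below is a hypothetical sketch, not an existing plasticity.erl export; only the rule names and parameter shapes come from the module documentation above.

```erlang
%% Hypothetical parameter validator; rule names follow the module doc,
%% but validate_params/1 itself is illustrative, not existing API.
-spec validate_params({atom(), [number()]}) -> ok | {error, term()}.
validate_params({none, []}) -> ok;
validate_params({hebbian, [Eta]}) when is_number(Eta), Eta > 0 -> ok;
validate_params({hebbian_w, [Eta, Window]})
  when is_number(Eta), Eta > 0, is_number(Window), Window > 0 -> ok;
validate_params({ojas, [Eta]}) when is_number(Eta), Eta > 0 -> ok;
validate_params({ojas_w, [Eta, Window]})
  when is_number(Eta), Eta > 0, is_number(Window), Window > 0 -> ok;
validate_params(Other) -> {error, {bad_plasticity_params, Other}}.
```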

2. neuron.erl (302 lines)

Analysis Reference: Section 3.2 of DXNN2_CODEBASE_ANALYSIS.md (lines 186-265)

Issues to Address:

  • Cryptic field names in state record (si_pidps_bl, etc.)
  • Macro abuse (?RO_SIGNAL)
  • Weight structure mystery
  • Hardcoded tanh for modulation (line 124)
  • Backup/restore logic unclear

Tasks:

  1. Rename state record fields (reference Section 6.1 naming table)

    %% Before
    -record(state,{
        id, cx_pid, af, aggrf, heredity_type,
        si_pids=[], si_pidps_bl=[], si_pidps_current=[],
        si_pidps_backup=[], mi_pids=[], mi_pidps_current=[],
        mi_pidps_backup=[], pf_current, pf_backup,
        output_pids=[], ro_pids=[]
    }).
    
    %% After
    -record(neuron_state, {
        id,
        cortex_process_id,
        activation_function,
        aggregation_function,
        heredity_type,                    % darwinian | lamarckian
        signal_integrator_pids = [],
        weighted_inputs_baseline = [],    % Original weights (was si_pidps_bl)
        weighted_inputs_current = [],     % Working weights
        weighted_inputs_backup = [],      % Saved for hill-climbing
        modulation_input_pids = [],
        modulation_inputs_current = [],
        modulation_inputs_backup = [],
        plasticity_function_current,
        plasticity_function_backup,
        output_destination_pids = [],
        recurrent_output_pids = []
    }).
  2. Remove the ?RO_SIGNAL macro and use a direct function call (lines 25, 52, and 211)

    %% Before (the macro silently captures AF and SI_PIdPs from the
    %% caller's scope, obscuring the data flow)
    -define(RO_SIGNAL, get_ROSig(AF, SI_PIdPs)).
    %% ... later ...
    Output = ?RO_SIGNAL,
    
    %% After
    Output = get_recurrent_output_signal(ActivationFunction, WeightedInputs),
  3. Document weight management lifecycle

    %% @doc Neuron weight management
    %%
    %% Neurons maintain three versions of weights:
    %% - baseline: Original weights from genotype (never modified)
    %% - current: Working weights (modified by plasticity)
    %% - backup: Saved current weights for hill-climbing restoration
    %%
    %% Weight lifecycle during tuning:
    %% 1. Initialize: baseline -> current
    %% 2. Perturb: add noise to current
    %% 3. Evaluate: run network
    %% 4. If better: current -> backup
    %% 5. If worse: backup -> current
    %% 6. After tuning (Lamarckian): current -> genotype
    %%
    %% Darwinian vs Lamarckian:
    %% - Darwinian: baseline is preserved across generations
    %% - Lamarckian: learned weights become new baseline
  4. Fix hardcoded tanh for modulation (line 124)

    %% Before
    MOutput = sat(functions:tanh(MAggregation_Product), ?SAT_LIMIT),
    
    %% After (configurable: ModulationActivation is bound to an atom
    %% naming a function in the functions module, so the call below is
    %% a dynamic Mod:Fun(Args) invocation)
    ModulationActivation = get_modulation_activation_function(State),
    MOutput = sat(functions:ModulationActivation(MAggregation_Product), ?SAT_LIMIT),
  5. Add comprehensive module documentation

    %% Module-level edoc comment placed above -module(neuron)
    %% @doc Neural processing unit - core computational element
    %%
    %% A neuron is an Erlang process that:
    %% - Receives input signals from sensors and other neurons
    %% - Aggregates inputs using a configurable function (dot product, etc.)
    %% - Applies activation function to produce output
    %% - Optionally applies plasticity rules to update weights
    %% - Forwards output to destination neurons/actuators
    %%
    %% == Process Lifecycle ==
    %% 1. gen/2 spawns neuron process
    %% 2. prep/1 initializes state and registers with exoself
    %% 3. loop receives and processes signals until termination
    %%
    %% == Message Protocol ==
    %% Receives: {SourcePid, forward, Signal}
    %% Sends: {self(), forward, Output} to output_destination_pids
  6. Extract weight perturbation to function (duplicated at lines 189-194, 289-295)

    %% Before: inline perturbation code duplicated at the sites listed above
    
    %% After: extracted function
    -spec perturb_weight(weight_spec(), perturbation_range()) -> weight_spec().
    perturb_weight({W, DW, LP, LPs}, Spread) ->
        NewDW = (rand:uniform() - 0.5) * Spread + DW * 0.5,  % Momentum
        NewW = sat(W + NewDW, ?SAT_LIMIT),
        {NewW, NewDW, LP, LPs}.
  7. Document saturation limits (lines 24-25)

    %% @doc Internal saturation limit for neuron signals
    %%
    %% Using pi*10 (~31.4) provides sufficient range for accumulated
    %% signals while preventing numerical overflow. This value was
    %% empirically determined to work well with typical network sizes.
    -define(SAT_LIMIT, math:pi() * 10).
    
    %% @doc Output saturation limit
    %%
    %% Outputs are saturated to [-1, 1] for consistent signal ranges
    %% across the network.
    -define(OUTPUT_SAT_LIMIT, 1).
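Task 3's lifecycle steps 4 and 5 reduce to two record updates. A minimal sketch, assuming the renamed #neuron_state{} record from task 1 (the function names accept_perturbation/1 and reject_perturbation/1 are illustrative, not existing neuron.erl API):

```erlang
-record(neuron_state, {weighted_inputs_current = [],
                       weighted_inputs_backup = []}).  % abridged

%% Step 4: the perturbation improved fitness - promote current to backup.
accept_perturbation(State = #neuron_state{weighted_inputs_current = Current}) ->
    State#neuron_state{weighted_inputs_backup = Current}.

%% Step 5: the perturbation hurt fitness - restore current from backup.
reject_perturbation(State = #neuron_state{weighted_inputs_backup = Backup}) ->
    State#neuron_state{weighted_inputs_current = Backup}.
```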
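For reference, the sat/2 clamp used with these limits can be written in three clauses. This is a sketch consistent with how sat/2 is used in the tasks above, not necessarily the existing implementation:

```erlang
%% Clamp Value into [-Limit, Limit]; used with ?SAT_LIMIT and
%% ?OUTPUT_SAT_LIMIT above.
-spec sat(number(), number()) -> number().
sat(Value, Limit) when Value > Limit -> Limit;
sat(Value, Limit) when Value < -Limit -> -Limit;
sat(Value, _Limit) -> Value.
```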

Tests to Write

plasticity_test.erl

-module(plasticity_test).
-include_lib("eunit/include/eunit.hrl").
-include("types.hrl").

%% ============================================================================
%% Hebbian learning tests
%% ============================================================================

hebbian_positive_correlation_test() ->
    %% When pre and post both positive, weight should increase
    Inputs = [{source1, [0.8]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, [0.1]}]}],
    Output = [0.9],
    Params = {hebbian, [0.1]},

    [{source1, [{NewW, _, _, _}]}] =
        plasticity:apply_plasticity(Params, Inputs, Weights, Output),

    ?assert(NewW > 0.5).

hebbian_negative_correlation_test() ->
    %% Opposite signs should decrease weight
    Inputs = [{source1, [0.8]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, [0.1]}]}],
    Output = [-0.9],
    Params = {hebbian, [0.1]},

    [{source1, [{NewW, _, _, _}]}] =
        plasticity:apply_plasticity(Params, Inputs, Weights, Output),

    ?assert(NewW < 0.5).

hebbian_learning_rate_effect_test() ->
    %% Higher learning rate should cause larger weight change
    Inputs = [{source1, [0.5]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, [0.1]}]}],
    Output = [0.5],

    [{source1, [{W_slow, _, _, _}]}] =
        plasticity:apply_plasticity({hebbian, [0.01]}, Inputs, Weights, Output),
    [{source1, [{W_fast, _, _, _}]}] =
        plasticity:apply_plasticity({hebbian, [0.1]}, Inputs, Weights, Output),

    ?assert(abs(W_fast - 0.5) > abs(W_slow - 0.5)).

%% ============================================================================
%% Oja's rule tests
%% ============================================================================

ojas_normalization_test() ->
    %% Oja's rule should normalize weights over time
    Inputs = [{source1, [1.0]}],
    InitialWeight = 0.5,
    Weights = [{source1, [{InitialWeight, 0.0, 0.1, [0.1]}]}],
    Output = [0.8],
    Params = {ojas, [0.1]},

    %% Oja's rule: dw = eta * y * (x - y*w)
    %% For normalization: this pushes w toward x/y
    [{source1, [{NewW, _, _, _}]}] =
        plasticity:apply_plasticity(Params, Inputs, Weights, Output),

    ?assert(is_float(NewW)).

%% ============================================================================
%% No plasticity tests
%% ============================================================================

none_plasticity_test() ->
    %% With no plasticity, weights should remain unchanged
    Inputs = [{source1, [0.8]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, [0.1]}]}],
    Output = [0.9],
    Params = {none, []},

    Result = plasticity:apply_plasticity(Params, Inputs, Weights, Output),

    ?assertEqual(Weights, Result).

%% ============================================================================
%% Edge case tests
%% ============================================================================

empty_inputs_test() ->
    %% Should handle empty input gracefully
    Result = plasticity:apply_plasticity({hebbian, [0.1]}, [], [], [0.0]),
    ?assertEqual([], Result).

zero_output_test() ->
    %% Zero output should result in no weight change (Hebbian)
    Inputs = [{source1, [0.5]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, []}]}],
    Output = [0.0],

    [{source1, [{NewW, _, _, _}]}] =
        plasticity:apply_plasticity({hebbian, [0.1]}, Inputs, Weights, Output),

    ?assertEqual(0.5, NewW).

neuron_test.erl

-module(neuron_test).
-include_lib("eunit/include/eunit.hrl").
-include("records.hrl").

%% ============================================================================
%% State initialization tests
%% ============================================================================

state_initialization_test() ->
    State = #neuron_state{
        id = {{1.0, 0.5}, neuron},
        activation_function = tanh,
        aggregation_function = dot_product,
        heredity_type = darwinian
    },
    ?assertEqual(tanh, State#neuron_state.activation_function),
    ?assertEqual(darwinian, State#neuron_state.heredity_type).

%% ============================================================================
%% Weight management tests
%% ============================================================================

weight_perturbation_test() ->
    %% Perturbed weight should be different but within range
    Original = {0.5, 0.0, 0.1, []},
    Spread = 0.1,
    {NewW, NewDW, LP, LPs} = neuron:perturb_weight(Original, Spread),

    ?assert(NewW /= 0.5 orelse NewDW /= 0.0),
    ?assert(abs(NewW) =< ?SAT_LIMIT),  % sat/2 clamps inclusively at the limit
    ?assertEqual(0.1, LP),
    ?assertEqual([], LPs).

weight_backup_restore_test() ->
    %% Test backup and restore cycle
    Current = [{source1, [{0.5, 0.0, 0.1, []}]}],
    Backup = [{source1, [{0.3, 0.0, 0.1, []}]}],

    State = #neuron_state{
        weighted_inputs_current = Current,
        weighted_inputs_backup = Backup
    },

    %% Simulate restore
    RestoredState = State#neuron_state{
        weighted_inputs_current = State#neuron_state.weighted_inputs_backup
    },

    ?assertEqual(Backup, RestoredState#neuron_state.weighted_inputs_current).

%% ============================================================================
%% Signal computation tests (unit tests for pure logic)
%% ============================================================================

forward_signal_computation_test() ->
    %% Test the pure computation part
    Inputs = [{source1, [0.5, 0.3]}],
    Weights = [{source1, [{0.4, 0.0, 0.1, []}, {0.6, 0.0, 0.1, []}]}],

    Aggregated = signal_aggregator:dot_product(Weights, Inputs),
    Output = functions:tanh(Aggregated),

    Expected = math:tanh(0.5*0.4 + 0.3*0.6),
    ?assert(abs(Output - Expected) < 0.0001).

saturation_test() ->
    %% Large inputs should be saturated
    LargeValue = 100.0,
    Saturated = functions:sat(LargeValue, ?SAT_LIMIT),
    ?assertEqual(?SAT_LIMIT, Saturated).

%% ============================================================================
%% Recurrent connection tests
%% ============================================================================

recurrent_output_format_test() ->
    %% Recurrent outputs should follow same format
    State = #neuron_state{
        recurrent_output_pids = [pid1, pid2]
    },
    ?assertEqual(2, length(State#neuron_state.recurrent_output_pids)).

%% ============================================================================
%% Process lifecycle tests (integration)
%% ============================================================================

neuron_spawn_test() ->
    %% Test that neuron can be spawned
    Config = test_helpers:create_test_neuron_config(),
    Pid = neuron:gen(self(), Config),
    ?assert(is_pid(Pid)),
    Pid ! terminate,
    ok.
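neuron_spawn_test assumes a test_helpers:create_test_neuron_config/0 helper. Its real shape must match whatever neuron:gen/2 accepts, so the following is only a placeholder sketch; every field name and value here is an assumption, chosen to mirror the renamed #neuron_state{} record.

```erlang
%% Placeholder test config for neuron_spawn_test; the map keys are
%% assumptions and must be reconciled with neuron:gen/2 during v0.3.0.
create_test_neuron_config() ->
    #{id => {{1.0, rand:uniform()}, neuron},
      activation_function => tanh,
      aggregation_function => dot_product,
      heredity_type => darwinian,
      weighted_inputs => [],
      output_destination_pids => []}.
```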

Documentation Requirements

Required Documentation

  1. plasticity.erl

    • Module overview with learning rule descriptions
    • Mathematical formulas for each rule
    • Parameter conventions
    • All function specs
  2. neuron.erl

    • Module overview with lifecycle
    • State record field documentation
    • Weight management explanation
    • Message protocol
    • All function specs

Documentation Checklist

  • [ ] Plasticity module documentation
  • [ ] All plasticity rules documented with formulas
  • [ ] Neuron module documentation
  • [ ] Neuron state record fully documented
  • [ ] Weight lifecycle explained
  • [ ] All function specs complete

Quality Gates

v0.3.0 Acceptance Criteria

  1. Test Coverage

    • [ ] plasticity.erl: 100% line coverage
    • [ ] neuron.erl: 90%+ line coverage
    • [ ] All tests pass
  2. Naming

    • [ ] All cryptic field names renamed in neuron
    • [ ] ?RO_SIGNAL macro removed
    • [ ] Consistent naming throughout
  3. Documentation

    • [ ] Weight management lifecycle documented
    • [ ] All plasticity rules have mathematical definitions
    • [ ] Saturation limits explained
  4. Code Quality

    • [ ] No duplicate weight perturbation code
    • [ ] Pattern matching style
    • [ ] Maximum 1 level nesting
  5. Static Analysis

    • [ ] Zero dialyzer warnings
    • [ ] All types properly specified

Known Limitations

  • Process interaction tests are basic (full integration in v0.4.0)
  • Substrate plasticity not addressed (later version)
  • Circuit activation functions not fully tested

Next Steps

After v0.3.0 completion:

  1. v0.4.0 will refactor cortex.erl and exoself.erl
  2. With the Foundation Phase complete, all core primitives are refactored
  3. The Structural Phase begins with lifecycle management

Implementation Notes

Consolidating Hebbian Variants

%% Unified implementation for hebbian and hebbian_w
apply_hebbian(LearningRate, WindowOption, Inputs, Weights, Output) ->
    update_weights(
        fun(Input, {W, DW, LP, LPs}) ->
            %% Hebbian rule: delta_w = eta * pre * post
            Delta = LearningRate * Input * hd(Output),
            NewW = case WindowOption of
                none -> W + Delta;
                {window, Size} -> W + Delta - (W * (1/Size))  % Decay
            end,
            {sat(NewW, ?SAT_LIMIT), DW, LP, LPs}
        end,
        Inputs,
        Weights
    ).
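The decay term in the windowed branch is what keeps weights bounded: with a constant delta, iterating w <- w + Delta - w/Size converges to the fixed point w* = Delta * Size. A small standalone numeric check (windowed_equilibrium/3 exists only for this illustration):

```erlang
%% Iterate w <- w + Delta - w/Size from 0.0; fixed point at Delta * Size.
windowed_equilibrium(Delta, Size, Steps) ->
    lists:foldl(fun(_, W) -> W + Delta - W / Size end,
                0.0,
                lists:seq(1, Steps)).
```

For Delta = 0.01 and Size = 10 this settles near 0.1, which is why the windowed variant cannot grow without bound while plain Hebbian can.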

Weight Update Helper

%% Generic weight update function
update_weights(UpdateFun, Inputs, Weights) ->
    lists:map(
        fun({SourceId, WeightSpecs}) ->
            %% lists:keyfind/3 returns false when SourceId is missing;
            %% the tuple match below then fails fast, surfacing any
            %% inconsistency between Inputs and Weights immediately
            {SourceId, InputSignals} = lists:keyfind(SourceId, 1, Inputs),
            UpdatedWeights = lists:zipwith(
                fun(Signal, WeightSpec) ->
                    UpdateFun(Signal, WeightSpec)
                end,
                InputSignals,
                WeightSpecs
            ),
            {SourceId, UpdatedWeights}
        end,
        Weights
    ).
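A quick usage sketch of update_weights/3, written as an eunit-style check. The Bump function and the concrete values are arbitrary illustrations; it assumes the update_weights/3 helper defined above.

```erlang
update_weights_usage_test() ->
    Inputs  = [{source1, [0.5]}],
    Weights = [{source1, [{0.4, 0.0, 0.1, []}]}],
    %% Arbitrary update rule: bump each weight by 0.1 * its input signal.
    Bump = fun(Signal, {W, DW, LP, LPs}) -> {W + 0.1 * Signal, DW, LP, LPs} end,
    [{source1, [{NewW, 0.0, 0.1, []}]}] = update_weights(Bump, Inputs, Weights),
    %% 0.4 + 0.1 * 0.5 is approximately 0.45
    true = abs(NewW - 0.45) < 1.0e-9.
```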

Dependencies

External Dependencies

  • Erlang rand module (for perturbation)

Internal Dependencies

  • v0.2.0: signal_aggregator, functions
  • v0.1.0: types.hrl, test infrastructure

Effort Estimate

Task                       Estimate
-------------------------  --------
Plasticity tests           2 days
Plasticity refactoring     1.5 days
Neuron tests               2 days
Neuron refactoring         2 days
Field renaming             1 day
Documentation              1.5 days
Total                      10 days

Risks

Risk                               Mitigation
---------------------------------  --------------------------
Field renaming breaks references   Systematic search-replace
Plasticity behavior changes        Tests verify formulas
Process tests flaky                Use synchronization

Version: 0.3.0 | Phase: Foundation | Status: Planned