v0.2.0 - Core Primitives

Overview

This release refactors the foundational mathematical modules that have no internal dependencies. These modules consist of pure functions, which makes them ideal for TDD, and they serve as building blocks for neural network computation.

Phase: Foundation
Duration: 1 week
Prerequisites: v0.1.0 (test infrastructure, type specifications)

Objectives

  1. Refactor signal_aggregator.erl with full test coverage
  2. Refactor functions.erl with documentation and tests
  3. Establish TDD workflow for subsequent releases
  4. Remove deprecated API usage in these modules

Modules to Refactor

1. signal_aggregator.erl (85 lines)

Analysis Reference: Section 3.5 of DXNN2_CODEBASE_ANALYSIS.md (lines 418-423)

Issues to Address:

  • Weight format {W,_PDW,_LP,_LPs} used but undocumented
  • Unused parameters in weight tuple
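
The undocumented tuple could be pinned down with a type in types.hrl. A sketch: the field names follow the documentation tasks in this plan, while the element types (in particular whether ParamList holds floats or richer terms) are assumptions to verify against the code.

```erlang
%% Sketch for types.hrl: the shape of the weight tuple. Field names follow
%% the analysis; element types are assumptions to confirm against the code.
-type weight_spec() :: {Weight       :: float(),
                        DeltaWeight  :: float(),
                        LearningRate :: float(),
                        ParamList    :: [term()]}.
```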

Tasks:

  1. Add comprehensive module documentation

    %% @doc Signal aggregation functions for neural computation
    %%
    %% This module provides functions that aggregate weighted inputs
    %% from multiple sources into a single scalar value for activation.
    %%
    %% == Aggregation Methods ==
    %% - dot_product: Standard weighted sum
    %% - mult_product: Multiplicative aggregation
    %% - diff_product: Differentiation-based aggregation
    %%
    %% == Weight Format ==
    %% Weights are provided as {Weight, DeltaWeight, LearningRate, ParamList}
    %% where only Weight is used for aggregation. Other fields support plasticity.
  2. Add type specifications to all functions

    -type weighted_input() :: {source_id(), [weight_spec()]}.
    -type input_signal() :: {source_id(), [float()]}.
    
    -spec dot_product([weighted_input()], [input_signal()]) -> float().
    -spec mult_product([weighted_input()], [input_signal()]) -> float().
    -spec diff_product([weighted_input()], [input_signal()]) -> float().
  3. Document weight tuple usage explicitly

    %% @doc Compute dot product of inputs and weights
    %%
    %% For each input source, multiplies each input signal component
    %% by its corresponding weight and sums the results.
    %%
    %% Weight tuple format: {W, DW, LP, LPs}
    %%   W  - Weight value (used for computation)
    %%   DW - Delta weight (ignored here, used by plasticity)
    %%   LP - Learning parameter (ignored here)
    %%   LPs - Parameter list (ignored here)
  4. Add function-level documentation with examples

    %% Example:
    %% ```
    %% Inputs = [{sensor1, [0.5, 0.3]}],
    %% Weights = [{sensor1, [{0.2, 0.0, 0.1, []}, {0.4, 0.0, 0.1, []}]}],
    %% Result = signal_aggregator:dot_product(Weights, Inputs).
    %% % Result = 0.5*0.2 + 0.3*0.4 = 0.22
    %% '''
  5. Clean up code style

    • Use list comprehensions where applicable
    • Ensure pattern matching style
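
As a sketch of the pattern-matching style in task 5, the inner weighted sum might destructure the weight tuple directly in a list comprehension (weighted_sum/2 is a hypothetical helper name, not necessarily one the module currently has):

```erlang
%% Sketch: match the weight tuple in the generator, using only W and
%% ignoring the plasticity fields, per the documented format.
weighted_sum(WeightSpecs, InputValues) ->
    lists:sum([W * I
               || {{W, _DW, _LP, _LPs}, I} <- lists:zip(WeightSpecs, InputValues)]).
```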

2. functions.erl (405 lines)

Analysis Reference: Section 3.5 of DXNN2_CODEBASE_ANALYSIS.md (lines 432-435)

Issues to Address:

  • Dead code (commented-out functions)
  • No documentation on activation functions

Tasks:

  1. Remove dead code

    • Delete all commented-out functions
    • Remove unused helper functions
  2. Add module documentation

    %% @doc Activation and utility functions for neural computation
    %%
    %% This module provides activation functions used by neurons to
    %% transform aggregated input signals into output signals.
    %%
    %% == Activation Functions ==
    %% - tanh: Hyperbolic tangent [-1, 1]
    %% - sin/cos: Sinusoidal functions [-1, 1]
    %% - gaussian: Bell curve centered at 0
    %% - sgn: Sign function {-1, 0, 1}
    %% - bin: Binary threshold {0, 1}
    %% - linear: Identity function
    %%
    %% == Utility Functions ==
    %% - sat: Saturation/clamping
    %% - scale: Input scaling
  3. Add type specifications

    -type activation_function() :: tanh | sin | cos | gaussian | sgn | bin |
                                   trinary | multiquadric | linear | step |
                                   absolute | {circuit, term()}.
    
    -spec tanh(float()) -> float().
    -spec sin(float()) -> float().
    -spec gaussian(float()) -> float().
    -spec sat(float(), float()) -> float().
  4. Document each activation function with mathematical definition

    %% @doc Hyperbolic tangent activation function
    %%
    %% Maps input to range [-1, 1] with smooth gradient.
    %% Mathematical definition: tanh(x) = (e^x - e^-x) / (e^x + e^-x)
    %%
    %% Properties:
    %% - Output range: [-1, 1]
    %% - tanh(0) = 0
    %% - Smooth derivative (good for learning)
    %%
    %% @param X the input signal value
    %% @returns output in range [-1, 1]
  5. Document saturation functions and magic numbers

    • Explain why pi*10 is used for saturation
    • Document sat_dzone (dead zone saturation)
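
A minimal sketch of the saturation helpers being documented here; the dead-zone variant's arity, argument order, and exact semantics are assumptions to confirm against functions.erl:

```erlang
%% Clamp Val into [-Limit, Limit].
sat(Val, Limit) when Val > Limit  -> Limit;
sat(Val, Limit) when Val < -Limit -> -Limit;
sat(Val, _Limit)                  -> Val.

%% Sketch of a dead-zone clamp: values inside (-DeadZone, DeadZone)
%% collapse to 0.0; everything else saturates as above.
sat_dzone(Val, _Limit, DeadZone) when Val > -DeadZone, Val < DeadZone -> 0.0;
sat_dzone(Val, Limit, _DeadZone) -> sat(Val, Limit).
```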

Tests to Write

signal_aggregator_test.erl

-module(signal_aggregator_test).
-include_lib("eunit/include/eunit.hrl").
-include("types.hrl").

%% ============================================================================
%% dot_product tests
%% ============================================================================

dot_product_single_input_test() ->
    %% Single input with single weight
    Inputs = [{source1, [1.0]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    ?assertEqual(0.5, Result).

dot_product_multiple_inputs_test() ->
    %% Multiple inputs from same source
    Inputs = [{source1, [0.5, 0.3]}],
    Weights = [{source1, [{0.2, 0.0, 0.1, []}, {0.4, 0.0, 0.1, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    Expected = 0.5 * 0.2 + 0.3 * 0.4,  % = 0.22
    ?assert(abs(Result - Expected) < 0.0001).

dot_product_multiple_sources_test() ->
    %% Multiple input sources
    Inputs = [
        {source1, [0.5]},
        {source2, [0.8]}
    ],
    Weights = [
        {source1, [{0.3, 0.0, 0.1, []}]},
        {source2, [{0.7, 0.0, 0.1, []}]}
    ],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    Expected = 0.5 * 0.3 + 0.8 * 0.7,  % = 0.71
    ?assert(abs(Result - Expected) < 0.0001).

dot_product_negative_weights_test() ->
    Inputs = [{source1, [1.0]}],
    Weights = [{source1, [{-0.5, 0.0, 0.1, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    ?assertEqual(-0.5, Result).

dot_product_empty_inputs_test() ->
    %% Edge case: no inputs
    Result = signal_aggregator:dot_product([], []),
    ?assertEqual(0.0, Result).

%% ============================================================================
%% mult_product tests
%% ============================================================================

mult_product_basic_test() ->
    Inputs = [{source1, [0.5, 0.4]}],
    Weights = [{source1, [{2.0, 0.0, 0.1, []}, {3.0, 0.0, 0.1, []}]}],
    Result = signal_aggregator:mult_product(Weights, Inputs),
    Expected = (0.5 * 2.0) * (0.4 * 3.0),  % = 1.2
    ?assert(abs(Result - Expected) < 0.0001).

mult_product_zero_input_test() ->
    %% Zero input should produce zero result
    Inputs = [{source1, [0.0, 0.5]}],
    Weights = [{source1, [{1.0, 0.0, 0.1, []}, {1.0, 0.0, 0.1, []}]}],
    Result = signal_aggregator:mult_product(Weights, Inputs),
    ?assertEqual(0.0, Result).

%% ============================================================================
%% diff_product tests
%% ============================================================================

diff_product_basic_test() ->
    %% Test differentiation aggregation
    Inputs = [{source1, [1.0, 0.5]}],
    Weights = [{source1, [{1.0, 0.0, 0.1, []}, {1.0, 0.0, 0.1, []}]}],
    Result = signal_aggregator:diff_product(Weights, Inputs),
    ?assert(is_float(Result)).

functions_test.erl

-module(functions_test).
-include_lib("eunit/include/eunit.hrl").

%% ============================================================================
%% Activation function tests
%% ============================================================================

tanh_zero_test() ->
    ?assertEqual(0.0, functions:tanh(0.0)).

tanh_positive_test() ->
    Result = functions:tanh(1.0),
    ?assert(Result > 0),
    ?assert(Result < 1).

tanh_negative_test() ->
    Result = functions:tanh(-1.0),
    ?assert(Result < 0),
    ?assert(Result > -1).

tanh_symmetry_test() ->
    PosResult = functions:tanh(0.5),
    NegResult = functions:tanh(-0.5),
    ?assert(abs(PosResult + NegResult) < 0.0001).

sin_zero_test() ->
    ?assertEqual(0.0, functions:sin(0.0)).

sin_pi_test() ->
    Result = functions:sin(math:pi()),
    ?assert(abs(Result) < 0.0001).

cos_zero_test() ->
    Result = functions:cos(0.0),
    ?assertEqual(1.0, Result).

gaussian_zero_test() ->
    %% Gaussian peaks at 0
    Result = functions:gaussian(0.0),
    ?assertEqual(1.0, Result).

gaussian_decay_test() ->
    %% Values away from 0 should be smaller
    AtZero = functions:gaussian(0.0),
    AtOne = functions:gaussian(1.0),
    ?assert(AtOne < AtZero).

sgn_positive_test() ->
    ?assertEqual(1, functions:sgn(0.5)).

sgn_negative_test() ->
    ?assertEqual(-1, functions:sgn(-0.5)).

sgn_zero_test() ->
    ?assertEqual(0, functions:sgn(0.0)).

bin_positive_test() ->
    ?assertEqual(1, functions:bin(0.5)).

bin_negative_test() ->
    ?assertEqual(0, functions:bin(-0.5)).

%% ============================================================================
%% Saturation tests
%% ============================================================================

sat_within_range_test() ->
    Result = functions:sat(0.5, 1.0),
    ?assertEqual(0.5, Result).

sat_exceeds_upper_test() ->
    Result = functions:sat(1.5, 1.0),
    ?assertEqual(1.0, Result).

sat_exceeds_lower_test() ->
    Result = functions:sat(-1.5, 1.0),
    ?assertEqual(-1.0, Result).

%% ============================================================================
%% Scale tests
%% ============================================================================

scale_basic_test() ->
    Result = functions:scale(0.5, 0.0, 1.0, 0.0, 10.0),
    ?assertEqual(5.0, Result).

scale_inverse_test() ->
    %% Reversed output range flips the mapping: 0.2 of [0,1] maps to 8.0 in [10,0]
    Result = functions:scale(0.2, 0.0, 1.0, 10.0, 0.0),
    ?assert(abs(Result - 8.0) < 0.0001).
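
The behaviors these tests pin down could be satisfied by sketches like the following (the real functions.erl may saturate inputs before the math calls, per the documentation tasks above):

```erlang
%% Sketches of the activation shapes exercised by the tests above.
tanh(X) -> math:tanh(X).
gaussian(X) -> math:exp(-X * X).   %% peaks at 1.0 when X = 0
sgn(X) when X > 0 -> 1;
sgn(X) when X < 0 -> -1;
sgn(_) -> 0.
bin(X) when X > 0 -> 1;
bin(_) -> 0.
```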

Documentation Requirements

Required Documentation

  1. signal_aggregator.erl

    • Module overview with architecture
    • Weight tuple format explanation
    • All function specs and docs
    • Usage examples
  2. functions.erl

    • Module overview with function categories
    • Mathematical definitions for all activations
    • Saturation limit explanations
    • All function specs and docs

Documentation Checklist

  • [ ] Module-level @doc for signal_aggregator
  • [ ] Module-level @doc for functions
  • [ ] Type specifications for all exports
  • [ ] Function documentation with @param, @returns
  • [ ] Examples for primary functions

Quality Gates

v0.2.0 Acceptance Criteria

  1. Test Coverage

    • [ ] signal_aggregator.erl: 100% line coverage
    • [ ] functions.erl: 100% line coverage
    • [ ] All tests pass
  2. Documentation

    • [ ] All public functions have @doc
    • [ ] All public functions have -spec
    • [ ] Weight tuple format documented
  3. Code Quality

    • [ ] No dead code in functions.erl
    • [ ] Pattern matching style used
    • [ ] List comprehensions where appropriate
  4. Static Analysis

    • [ ] Zero dialyzer warnings for both modules
    • [ ] Types match actual usage

Known Limitations

  • Only pure functions refactored
  • No process interaction yet
  • Plasticity not yet addressed (it consumes the extra fields of the weight tuple)

Next Steps

After v0.2.0 completion:

  1. v0.3.0 will refactor plasticity.erl and neuron.erl
  2. The weight tuple documentation from this release will guide the plasticity work
  3. The functions.erl tests serve as a model for the neuron tests

Implementation Notes

TDD Workflow Example

%% Step 1: Write failing test
dot_product_bias_test() ->
    Inputs = [{bias, [1.0]}],
    Weights = [{bias, [{0.5, 0.0, 0.0, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    ?assertEqual(0.5, Result).

%% Step 2: Run test (should fail if not implemented)
%% $ rebar3 eunit --module=signal_aggregator_test

%% Step 3: Implement minimal code to pass
dot_product([], _) -> 0.0;
dot_product([{SourceId, WeightSpecs} | Rest], Inputs) ->
    %% badmatch if SourceId has no corresponding input signal,
    %% surfacing wiring errors early
    {SourceId, InputValues} = lists:keyfind(SourceId, 1, Inputs),
    Sum = weighted_sum(WeightSpecs, InputValues),
    Sum + dot_product(Rest, Inputs).

%% Step 4: Run test (should pass)

%% Step 5: Refactor to improve clarity
dot_product(WeightedInputs, InputSignals) ->
    %% get_input/2 looks up the signal list for a source id
    lists:sum([
        weighted_sum(Weights, get_input(SourceId, InputSignals))
        || {SourceId, Weights} <- WeightedInputs
    ]).

Removing Dead Code

Process for cleaning functions.erl:

  1. Search for commented functions
  2. Check if any are referenced
  3. Remove if unused
  4. Run tests to verify
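
Step 1's search can be approximated with a grep heuristic for commented-out function heads (demonstrated here on a generated sample; on the real tree, point it at src/functions.erl):

```shell
# Build a small sample file, then grep for commented-out function heads:
# lines starting with % followed by what looks like Name(Args) ->
cat > /tmp/sample.erl <<'EOF'
%dead_fun(X) -> X * 2.
live_fun(X) -> X + 1.
EOF
grep -nE '^%+ *[a-z][a-zA-Z0-9_]*\(.*\) *->' /tmp/sample.erl
# prints: 1:%dead_fun(X) -> X * 2.
```

Any hits then go through steps 2-4 before deletion.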

Dependencies

External Dependencies

  • Erlang math library (for math:pi, etc.)

Internal Dependencies

  • v0.1.0: types.hrl for type specifications
  • v0.1.0: test infrastructure

Effort Estimate

Task                           Estimate
signal_aggregator tests        1 day
signal_aggregator refactoring  0.5 days
functions tests                1 day
functions refactoring          1 day
Dead code removal              0.5 days
Documentation                  1 day
Total                          5 days

Risks

Risk                        Mitigation
Missing edge cases          Property-based testing
Changing function behavior  Tests before refactoring
Type spec complexity        Start simple, refine
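
The property-based mitigation could be sketched with PropEr (assumed as a test-profile dependency; it is not among this release's listed dependencies):

```erlang
%% Property sketch: with all-zero weights, dot_product is 0.0 for any
%% input signals. Requires -include_lib("proper/include/proper.hrl").
prop_zero_weights() ->
    ?FORALL(Signals, non_empty(list(float())),
        begin
            Inputs  = [{s1, Signals}],
            Weights = [{s1, [{0.0, 0.0, 0.0, []} || _ <- Signals]}],
            abs(signal_aggregator:dot_product(Weights, Inputs)) < 1.0e-9
        end).
```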

Version: 0.2.0 | Phase: Foundation | Status: Planned