v0.2.0 - Core Primitives
Overview
This release refactors the foundational mathematical modules that have no internal dependencies. These are pure functions ideal for TDD and serve as building blocks for neural network computation.
Phase: Foundation Duration: 1 week Prerequisites: v0.1.0 (test infrastructure, type specifications)
Objectives
- Refactor `signal_aggregator.erl` with full test coverage
- Refactor `functions.erl` with documentation and tests
- Establish a TDD workflow for subsequent releases
- Remove deprecated API usage in these modules
Modules to Refactor
1. signal_aggregator.erl (85 lines)
Analysis Reference: Section 3.5 of DXNN2_CODEBASE_ANALYSIS.md (lines 418-423)
Issues to Address:
- Weight format `{W,_PDW,_LP,_LPs}` used but undocumented
- Unused parameters in the weight tuple
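As a concrete reference for the undocumented tuple, the sketch below shows an aggregation that reads only `W` from each `{W, DW, LP, LPs}` weight. It is Python rather than Erlang only so the semantics can be checked standalone; the "first field only" behavior is this plan's documented assumption, not a claim about the shipped code.

```python
# Reference sketch of the aggregation semantics described in this plan:
# each weight is a 4-tuple (W, DW, LP, LPs) of which only W participates
# in aggregation; the remaining fields exist to support plasticity.

def dot_product(weighted_inputs, input_signals):
    """Sum of w * x over every (source, component) pair."""
    signals = dict(input_signals)
    total = 0.0
    for source_id, weight_specs in weighted_inputs:
        xs = signals[source_id]
        # spec[0] is W; spec[1:] (DW, LP, LPs) are ignored here.
        total += sum(spec[0] * x for spec, x in zip(weight_specs, xs))
    return total

weights = [("sensor1", [(0.2, 0.0, 0.1, []), (0.4, 0.0, 0.1, [])])]
inputs = [("sensor1", [0.5, 0.3])]
print(round(dot_product(weights, inputs), 6))  # 0.22 = 0.5*0.2 + 0.3*0.4
```

The empty-input case returns 0.0, matching the `dot_product_empty_inputs_test` expectation below.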
Tasks:
Add comprehensive module documentation

```erlang
%% @doc Signal aggregation functions for neural computation.
%%
%% This module provides functions that aggregate weighted inputs
%% from multiple sources into a single scalar value for activation.
%%
%% == Aggregation Methods ==
%% - dot_product: Standard weighted sum
%% - mult_product: Multiplicative aggregation
%% - diff_product: Differentiation-based aggregation
%%
%% == Weight Format ==
%% Weights are provided as {Weight, DeltaWeight, LearningRate, ParamList},
%% where only Weight is used for aggregation. The other fields support
%% plasticity.
```

Add type specifications to all functions

```erlang
-type weighted_input() :: {source_id(), [weight_spec()]}.
-type input_signal()   :: {source_id(), [float()]}.

-spec dot_product([weighted_input()], [input_signal()]) -> float().
-spec mult_product([weighted_input()], [input_signal()]) -> float().
-spec diff_product([weighted_input()], [input_signal()]) -> float().
```

Document the weight tuple usage explicitly

```erlang
%% @doc Compute the dot product of inputs and weights.
%%
%% For each input source, multiplies each input signal component
%% by its corresponding weight and sums the results.
%%
%% Weight tuple format: {W, DW, LP, LPs}
%%   W   - Weight value (used for the computation)
%%   DW  - Delta weight (ignored here, used by plasticity)
%%   LP  - Learning parameter (ignored here)
%%   LPs - Parameter list (ignored here)
```

Add function-level documentation with examples

```erlang
%% Example:
%% ```
%% Inputs = [{sensor1, [0.5, 0.3]}],
%% Weights = [{sensor1, [{0.2, 0.0, 0.1, []}, {0.4, 0.0, 0.1, []}]}],
%% Result = signal_aggregator:dot_product(Weights, Inputs).
%% %% Result = 0.5*0.2 + 0.3*0.4 = 0.22
%% '''
```

Clean up code style
- Use list comprehensions where applicable
- Ensure pattern matching style
2. functions.erl (405 lines)
Analysis Reference: Section 3.5 of DXNN2_CODEBASE_ANALYSIS.md (lines 432-435)
Issues to Address:
- Dead code (commented-out functions)
- No documentation on activation functions
Tasks:
Remove dead code
- Delete all commented-out functions
- Remove unused helper functions
Add module documentation
Add module documentation

```erlang
%% @doc Activation and utility functions for neural computation.
%%
%% This module provides the activation functions used by neurons to
%% transform aggregated input signals into output signals.
%%
%% == Activation Functions ==
%% - tanh: Hyperbolic tangent, range [-1, 1]
%% - sin/cos: Sinusoidal functions, range [-1, 1]
%% - gaussian: Bell curve centered at 0
%% - sgn: Sign function, {-1, 0, 1}
%% - bin: Binary threshold, {0, 1}
%% - linear: Identity function
%%
%% == Utility Functions ==
%% - sat: Saturation/clamping
%% - scale: Input scaling
```

Add type specifications

```erlang
-type activation_function() :: tanh | sin | cos | gaussian | sgn | bin
                             | trinary | multiquadric | linear | step
                             | absolute | {circuit, term()}.

-spec tanh(float()) -> float().
-spec sin(float()) -> float().
-spec gaussian(float()) -> float().
-spec sat(float(), float()) -> float().
```

Document each activation function with its mathematical definition

```erlang
%% @doc Hyperbolic tangent activation function.
%%
%% Maps the input to the range [-1, 1] with a smooth gradient.
%% Mathematical definition: tanh(x) = (e^x - e^-x) / (e^x + e^-x)
%%
%% Properties:
%% - Output range: [-1, 1]
%% - tanh(0) = 0
%% - Smooth derivative (good for learning)
%%
%% @param X the input signal value
%% @returns output in the range [-1, 1]
```

Document saturation functions and magic numbers

- Explain why pi*10 is used as the saturation limit
- Document `sat_dzone` (dead-zone saturation)
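To pin down the definitions the documentation tasks above call for, here is a hedged Python reference of the activation and utility semantics. The `gaussian` form `exp(-x^2)` is one common convention consistent with the `gaussian_zero_test` expectation, the pi*10 limit is this plan's stated constant, and `scale`'s argument order is inferred from the scale tests in this document; all three should be confirmed against the actual module.

```python
import math

SAT_LIMIT = math.pi * 10  # limit named in this plan; its rationale is a doc task

def gaussian(x):
    """Bell curve centered at 0; gaussian(0) == 1.0 (assumed exp(-x^2) form)."""
    return math.exp(-x * x)

def sgn(x):
    """Sign function: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def bin_(x):  # trailing underscore avoids shadowing Python's built-in bin()
    """Binary threshold: 1 for positive input, else 0."""
    return 1 if x > 0 else 0

def sat(x, limit):
    """Clamp x into [-limit, +limit]."""
    return max(-limit, min(limit, x))

def scale(x, in_min, in_max, out_min, out_max):
    """Linearly map x from [in_min, in_max] onto [out_min, out_max]."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

print(gaussian(0.0), sgn(-0.5), sat(1.5, 1.0), scale(0.5, 0.0, 1.0, 0.0, 10.0))
```

Note that `scale` also covers the inverted-range case exercised by `scale_inverse_test`: mapping 0.5 from [0, 1] onto [10, 0] yields 5.0.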
Tests to Write
signal_aggregator_test.erl
```erlang
-module(signal_aggregator_test).
-include_lib("eunit/include/eunit.hrl").
-include("types.hrl").

%% ============================================================================
%% dot_product tests
%% ============================================================================

dot_product_single_input_test() ->
    %% Single input with a single weight
    Inputs = [{source1, [1.0]}],
    Weights = [{source1, [{0.5, 0.0, 0.1, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    ?assertEqual(0.5, Result).

dot_product_multiple_inputs_test() ->
    %% Multiple inputs from the same source
    Inputs = [{source1, [0.5, 0.3]}],
    Weights = [{source1, [{0.2, 0.0, 0.1, []}, {0.4, 0.0, 0.1, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    Expected = 0.5 * 0.2 + 0.3 * 0.4, % = 0.22
    ?assert(abs(Result - Expected) < 0.0001).

dot_product_multiple_sources_test() ->
    %% Multiple input sources
    Inputs = [
        {source1, [0.5]},
        {source2, [0.8]}
    ],
    Weights = [
        {source1, [{0.3, 0.0, 0.1, []}]},
        {source2, [{0.7, 0.0, 0.1, []}]}
    ],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    Expected = 0.5 * 0.3 + 0.8 * 0.7, % = 0.71
    ?assert(abs(Result - Expected) < 0.0001).

dot_product_negative_weights_test() ->
    Inputs = [{source1, [1.0]}],
    Weights = [{source1, [{-0.5, 0.0, 0.1, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    ?assertEqual(-0.5, Result).

dot_product_empty_inputs_test() ->
    %% Edge case: no inputs
    Result = signal_aggregator:dot_product([], []),
    ?assertEqual(0.0, Result).

%% ============================================================================
%% mult_product tests
%% ============================================================================

mult_product_basic_test() ->
    Inputs = [{source1, [0.5, 0.4]}],
    Weights = [{source1, [{2.0, 0.0, 0.1, []}, {3.0, 0.0, 0.1, []}]}],
    Result = signal_aggregator:mult_product(Weights, Inputs),
    Expected = (0.5 * 2.0) * (0.4 * 3.0), % = 1.2
    ?assert(abs(Result - Expected) < 0.0001).

mult_product_zero_input_test() ->
    %% A zero input should produce a zero result
    Inputs = [{source1, [0.0, 0.5]}],
    Weights = [{source1, [{1.0, 0.0, 0.1, []}, {1.0, 0.0, 0.1, []}]}],
    Result = signal_aggregator:mult_product(Weights, Inputs),
    ?assertEqual(0.0, Result).

%% ============================================================================
%% diff_product tests
%% ============================================================================

diff_product_basic_test() ->
    %% Test differentiation aggregation
    Inputs = [{source1, [1.0, 0.5]}],
    Weights = [{source1, [{1.0, 0.0, 0.1, []}, {1.0, 0.0, 0.1, []}]}],
    Result = signal_aggregator:diff_product(Weights, Inputs),
    ?assert(is_float(Result)).
```
functions_test.erl
```erlang
-module(functions_test).
-include_lib("eunit/include/eunit.hrl").

%% ============================================================================
%% Activation function tests
%% ============================================================================

tanh_zero_test() ->
    ?assertEqual(0.0, functions:tanh(0.0)).

tanh_positive_test() ->
    Result = functions:tanh(1.0),
    ?assert(Result > 0),
    ?assert(Result < 1).

tanh_negative_test() ->
    Result = functions:tanh(-1.0),
    ?assert(Result < 0),
    ?assert(Result > -1).

tanh_symmetry_test() ->
    PosResult = functions:tanh(0.5),
    NegResult = functions:tanh(-0.5),
    ?assert(abs(PosResult + NegResult) < 0.0001).

sin_zero_test() ->
    ?assertEqual(0.0, functions:sin(0.0)).

sin_pi_test() ->
    Result = functions:sin(math:pi()),
    ?assert(abs(Result) < 0.0001).

cos_zero_test() ->
    Result = functions:cos(0.0),
    ?assertEqual(1.0, Result).

gaussian_zero_test() ->
    %% The Gaussian peaks at 0
    Result = functions:gaussian(0.0),
    ?assertEqual(1.0, Result).

gaussian_decay_test() ->
    %% Values away from 0 should be smaller
    AtZero = functions:gaussian(0.0),
    AtOne = functions:gaussian(1.0),
    ?assert(AtOne < AtZero).

sgn_positive_test() ->
    ?assertEqual(1, functions:sgn(0.5)).

sgn_negative_test() ->
    ?assertEqual(-1, functions:sgn(-0.5)).

sgn_zero_test() ->
    ?assertEqual(0, functions:sgn(0.0)).

bin_positive_test() ->
    ?assertEqual(1, functions:bin(0.5)).

bin_negative_test() ->
    ?assertEqual(0, functions:bin(-0.5)).

%% ============================================================================
%% Saturation tests
%% ============================================================================

sat_within_range_test() ->
    Result = functions:sat(0.5, 1.0),
    ?assertEqual(0.5, Result).

sat_exceeds_upper_test() ->
    Result = functions:sat(1.5, 1.0),
    ?assertEqual(1.0, Result).

sat_exceeds_lower_test() ->
    Result = functions:sat(-1.5, 1.0),
    ?assertEqual(-1.0, Result).

%% ============================================================================
%% Scale tests
%% ============================================================================

scale_basic_test() ->
    Result = functions:scale(0.5, 0.0, 1.0, 0.0, 10.0),
    ?assertEqual(5.0, Result).

scale_inverse_test() ->
    Result = functions:scale(0.5, 0.0, 1.0, 10.0, 0.0),
    ?assertEqual(5.0, Result).
```
Documentation Requirements
Required Documentation
signal_aggregator.erl
- Module overview with architecture
- Weight tuple format explanation
- All function specs and docs
- Usage examples
functions.erl
- Module overview with function categories
- Mathematical definitions for all activations
- Saturation limit explanations
- All function specs and docs
Documentation Checklist
- [ ] Module-level @doc for signal_aggregator
- [ ] Module-level @doc for functions
- [ ] Type specifications for all exports
- [ ] Function documentation with @param, @returns
- [ ] Examples for primary functions
Quality Gates
v0.2.0 Acceptance Criteria
Test Coverage
- [ ] signal_aggregator.erl: 100% line coverage
- [ ] functions.erl: 100% line coverage
- [ ] All tests pass
Documentation
- [ ] All public functions have @doc
- [ ] All public functions have -spec
- [ ] Weight tuple format documented
Code Quality
- [ ] No dead code in functions.erl
- [ ] Pattern matching style used
- [ ] List comprehensions where appropriate
Static Analysis
- [ ] Zero dialyzer warnings for both modules
- [ ] Types match actual usage
Known Limitations
- Only pure functions refactored
- No process interaction yet
- Plasticity not yet addressed (it consumes the weight tuple format documented in this release)
Next Steps
After v0.2.0 completion:
- v0.3.0 will refactor `plasticity.erl` and `neuron.erl`
- The weight format documentation will be reused for plasticity
- The functions tests serve as a model for the neuron tests
Implementation Notes
TDD Workflow Example
```erlang
%% Step 1: Write a failing test
dot_product_bias_test() ->
    Inputs = [{bias, [1.0]}],
    Weights = [{bias, [{0.5, 0.0, 0.0, []}]}],
    Result = signal_aggregator:dot_product(Weights, Inputs),
    ?assertEqual(0.5, Result).

%% Step 2: Run the test (should fail if not yet implemented)
%% $ rebar3 eunit --module=signal_aggregator_test

%% Step 3: Implement minimal code to pass
dot_product([], _) -> 0.0;
dot_product([{SourceId, WeightSpecs} | Rest], Inputs) ->
    {SourceId, InputValues} = lists:keyfind(SourceId, 1, Inputs),
    Sum = weighted_sum(WeightSpecs, InputValues),
    Sum + dot_product(Rest, Inputs).

%% Step 4: Run the test (should pass)

%% Step 5: Refactor to improve clarity
dot_product(WeightedInputs, InputSignals) ->
    lists:sum([
        weighted_sum(Weights, get_input(SourceId, InputSignals))
     || {SourceId, Weights} <- WeightedInputs
    ]).
```
Removing Dead Code
Process for cleaning functions.erl:
- Search for commented functions
- Check if any are referenced
- Remove if unused
- Run tests to verify
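The search step above can be mechanized. The sketch below is a hypothetical helper, not the project's actual tooling: it flags whole-line `%` comments that look like code (containing `->` or `(`) while skipping doc comments such as `%% @doc`; each hit still needs the manual reference check before deletion.

```python
import re

def find_commented_functions(source_text):
    """Return (line_number, line) pairs for whole-line Erlang comments that
    look like commented-out code, skipping EDoc-style '@tag' comments."""
    hits = []
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        stripped = line.lstrip()
        if stripped.startswith("%") and ("->" in stripped or "(" in stripped):
            if not re.match(r"%+\s*@", stripped):  # keep '%% @doc ...' etc.
                hits.append((lineno, line))
    return hits

sample = """-module(functions).
%% @doc live code below
square(X) -> X * X.
%sqrt_old(X) -> math:sqrt(X).
"""
print(find_commented_functions(sample))  # flags only the %sqrt_old line
```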
Dependencies
External Dependencies
- Erlang math library (for math:pi, etc.)
Internal Dependencies
- v0.1.0: types.hrl for type specifications
- v0.1.0: test infrastructure
Effort Estimate
| Task | Estimate |
|---|---|
| signal_aggregator tests | 1 day |
| signal_aggregator refactoring | 0.5 days |
| functions tests | 1 day |
| functions refactoring | 1 day |
| Dead code removal | 0.5 days |
| Documentation | 1 day |
| Total | 5 days |
Risks
| Risk | Mitigation |
|---|---|
| Missing edge cases | Property-based testing |
| Changing function behavior | Tests before refactoring |
| Type spec complexity | Start simple, refine |
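The "property-based testing" mitigation can be prototyped without extra libraries. This Python sketch randomizes inputs and checks invariants analogous to the eunit tests; the properties themselves (tanh antisymmetry, bounded output, saturation limits) are standard mathematics, not claims about the Erlang module, and `sat` here is a local stand-in for `functions:sat/2`.

```python
import math
import random

random.seed(42)  # deterministic runs for CI

def sat(x, limit):
    """Local stand-in for the clamp behavior under test."""
    return max(-limit, min(limit, x))

for _ in range(1000):
    x = random.uniform(-100.0, 100.0)
    # Property 1: tanh is antisymmetric: tanh(-x) == -tanh(x)
    assert abs(math.tanh(-x) + math.tanh(x)) < 1e-12
    # Property 2: tanh output stays in [-1, 1]
    assert -1.0 <= math.tanh(x) <= 1.0
    # Property 3: sat never exceeds its limit (pi*10, per this plan)
    assert abs(sat(x, math.pi * 10)) <= math.pi * 10
print("all properties hold")
```

In Erlang, the same idea scales up with a PropEr or triq generator in place of the loop.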
Version: 0.2.0 Phase: Foundation Status: Planned