Bardo.Plasticity (Bardo v0.1.0)
Contains plasticity functions for neural network learning.
True learning is not achieved when a static NN is trained on some data set through destruction and recreation by the exoself based on its performance. True learning is the self-organization of the NN: the self-adaptation and change of the NN based on the information it is processing.
The learning rule, the way in which neurons adapt independently and their synaptic weights change based on each neuron's experience, is true learning, and that is neuroplasticity.
There are numerous plasticity rules, some more faithful to their biological counterparts than others, and some more efficient than their biological counterparts.
Note: The self_modulation_v1, self_modulation_v2, and self_modulation_v3 functions are all very similar, mainly differing in the parameter lists returned by the PlasticityFunctionName(neural_parameters) function. All three of these plasticity functions use the neuromodulation/5 function, which accepts the H, A, B, C, and D learning parameters and updates the synaptic weights of the neuron using the general Hebbian rule: Updated_Wi = Wi + H * (A * Ii * Output + B * Ii + C * Output + D).
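The general Hebbian rule above can be sketched as a stand-alone Elixir function (an illustrative sketch; the module and function names are hypothetical, not part of Bardo's API):

```elixir
defmodule GeneralHebbianSketch do
  # Generalized Hebbian update used by the self_modulation family:
  #   Updated_Wi = Wi + H * (A * Ii * Output + B * Ii + C * Output + D)
  # w: current synaptic weight, i: presynaptic input, output: neuron output,
  # h, a, b, c, d: the learning parameters.
  def update_weight(w, i, output, h, a, b, c, d) do
    w + h * (a * i * output + b * i + c * output + d)
  end
end

# With A = 1 and B = C = D = 0, this reduces to the plain Hebbian rule
# w' = w + h * i * output:
GeneralHebbianSketch.update_weight(0.5, 1.0, 0.8, 0.1, 1.0, 0.0, 0.0, 0.0)
```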
The self_modulation_v4 and self_modulation_v5 functions differ only in that the weight_parameters is a list of length 2, and the A parameter is no longer specified in the neural_parameters list but is instead calculated by a second, dedicated modulatory neuron.
The self_modulation_v6 function specifies the neural_parameters as an empty list, and the weight_parameters list is of length 5: one weight for each of the five embedded modulatory neurons.
Summary
Functions
Apply a plasticity function by name to get parameters.
Returns parameters for the hebbian learning rule.
Hebbian plasticity function with a global learning rate.
Returns parameters for the hebbian_w learning rule.
Hebbian plasticity function with weight-specific learning rates.
Returns parameters for the neuromodulation learning rule.
Neuromodulation plasticity function.
Returns a set of learning parameters needed by the none/4 plasticity function.
None plasticity function - no learning happens.
Returns parameters for the ojas learning rule.
Oja's plasticity function with a global learning rate.
Returns parameters for the ojas_w learning rule.
Oja's plasticity function with weight-specific learning rates.
Returns parameters for the self_modulation_v1 learning rule.
Self modulation plasticity function (version 1).
Returns parameters for the self_modulation_v2 learning rule.
Self modulation plasticity function (version 2).
Returns parameters for the self_modulation_v3 learning rule.
Self modulation plasticity function (version 3).
Returns parameters for the self_modulation_v4 learning rule.
Self modulation plasticity function (version 4).
Returns parameters for the self_modulation_v5 learning rule.
Self modulation plasticity function (version 5).
Returns parameters for the self_modulation_v6 learning rule.
Self modulation plasticity function (version 6).
Functions
Apply a plasticity function by name to get parameters.
This is a convenience function that routes to the appropriate plasticity function based on the provided name.
Returns parameters for the hebbian learning rule.
The parameter list for the standard hebbian learning rule is composed of a single parameter H: [H], used by the neuron for all its synaptic weights.
@spec hebbian([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Hebbian plasticity function with a global learning rate.
The function applies the hebbian learning rule to all weights using a single, neuron-wide learning rate.
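As an illustration, a neuron-wide Hebbian pass over a flat weight list might look like this (a minimal sketch with simplified shapes, not the actual InputPidPs structure):

```elixir
defmodule HebbianGlobalSketch do
  # Apply w' = w + h * i * output to every (input, weight) pair,
  # using the single neuron-wide learning rate h.
  def update(h, inputs, weights, output) do
    Enum.zip(inputs, weights)
    |> Enum.map(fn {i, w} -> w + h * i * output end)
  end
end
```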
Returns parameters for the hebbian_w learning rule.
The parameter list for the simple hebbian_w learning rule is composed of a single parameter H: [H], one for every synaptic weight of the neuron.
@spec hebbian_w(any(), [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Hebbian plasticity function with weight-specific learning rates.
The function operates on each InputPidP, applying the hebbian learning rule to each weight using its own specific learning rate.
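A simplified per-weight variant, where each weight carries its own learning rate (again an illustrative sketch with simplified shapes, not the real InputPidP layout):

```elixir
defmodule HebbianWSketch do
  # Each weight is paired with its own learning rate h; the pair is
  # preserved so the rate stays attached to its weight across updates.
  def update(inputs, weight_rate_pairs, output) do
    Enum.zip(inputs, weight_rate_pairs)
    |> Enum.map(fn {i, {w, h}} -> {w + h * i * output, h} end)
  end
end
```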
Returns parameters for the neuromodulation learning rule.
Neuromodulation is a form of heterosynaptic plasticity where the synaptic weights are changed due to the synaptic activity of other neurons.
@spec neuromodulation([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Neuromodulation plasticity function.
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.
Returns a set of learning parameters needed by the none/4 plasticity function.
Since this function specifies that the neuron has no plasticity, the parameter lists are empty. When executed with the {neuron_id, :mutate} parameter, the function exits, since there is nothing to mutate. The exit allows the neuroevolutionary system to try another mutation operator on the NN system.
None plasticity function - no learning happens.
Returns the original InputPidPs to the caller.
Returns parameters for the ojas learning rule.
The parameter list for Oja's learning rule is a list composed of a single parameter H: [H], used by the neuron for all its synaptic weights.
@spec ojas([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Oja's plasticity function with a global learning rate.
The function applies Oja's learning rule to all weights using a single, neuron-wide learning rate.
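Unlike the plain Hebbian rule, Oja's rule includes a decay term that keeps the weights bounded. A minimal sketch (illustrative names, simplified shapes):

```elixir
defmodule OjasSketch do
  # Oja's update: w' = w + h * output * (i - output * w).
  # The output^2 * w decay term normalizes the weights over time,
  # preventing the unbounded growth of the plain Hebbian rule.
  def update(h, inputs, weights, output) do
    Enum.zip(inputs, weights)
    |> Enum.map(fn {i, w} -> w + h * output * (i - output * w) end)
  end
end
```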
Returns parameters for the ojas_w learning rule.
The parameter list for Oja's learning rule is a list composed of a single parameter H: [H] per synaptic weight.
@spec ojas_w(any(), [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Oja's plasticity function with weight-specific learning rates.
The function operates on each InputPidP, applying Oja's learning rule to each weight using its own specific learning rate.
Returns parameters for the self_modulation_v1 learning rule.
Version 1: the secondary embedded neuron outputs only the H learning parameter; the parameter A is set to a predetermined constant value within the neural_parameters list, and B = C = D = 0.
@spec self_modulation_v1([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Self modulation plasticity function (version 1).
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.
Returns parameters for the self_modulation_v2 learning rule.
Version 2: A is generated randomly when generating the neural_parameters list, and B = C = D = 0.
@spec self_modulation_v2([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Self modulation plasticity function (version 2).
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.
Returns parameters for the self_modulation_v3 learning rule.
Version 3: B, C, and D are also generated randomly in the neural_parameters list.
@spec self_modulation_v3([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Self modulation plasticity function (version 3).
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.
Returns parameters for the self_modulation_v4 learning rule.
Version 4: the weight_parameters generates a list of length 2, allowing the neuron to have two embedded modulatory neurons, one outputting the parameter used for H and the other outputting the value used as A, with B = C = D = 0.
@spec self_modulation_v4([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Self modulation plasticity function (version 4).
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.
Returns parameters for the self_modulation_v5 learning rule.
Version 5: B, C, and D are generated randomly by the PlasticityFunctionName(neural_parameters) function.
@spec self_modulation_v5([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Self modulation plasticity function (version 5).
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.
Returns parameters for the self_modulation_v6 learning rule.
Version 6: the weight_parameters produces a list of length 5, allowing the neuron to have five embedded modulatory neurons whose outputs are used for H, A, B, C, and D.
@spec self_modulation_v6([float()], [{pid(), [float()]}], [{pid(), [{float(), [float()]}]}], [float()]) :: [{pid(), [{float(), [float()]}]}]
Self modulation plasticity function (version 6).
Updates the synaptic weights of the neuron using a modulated Hebbian learning rule.