Neat-Ex v1.3.0 Backprop

This module provides the means for backpropagation.

Example Usage:

iex> ann = Ann.newFeedforward([1, 2], [3], [3, 3]) # Neurons 1 and 2 are inputs, neuron 3 is the output, and there are two layers of 3 hidden neurons.
iex> patterns = [
...>   { %{1 => 0.0, 2 => 1.0},  %{3 => 1.0} },
...>   { %{1 => 1.0, 2 => 0.0},  %{3 => 1.0} },
...>   { %{1 => 0.0, 2 => 0.0},  %{3 => 0.0} },
...>   { %{1 => 1.0, 2 => 1.0},  %{3 => 0.0} }
...> ]
iex> {_ann, info} = Backprop.train ann, fn _ -> Enum.take_random(patterns, 2) end
iex> IO.puts "Steps taken: #{info.step}"
:ok
Options:

  learn_val (default: 5.0)
    Affects learning speed.

  terminate? (default: fn _ann, info -> info.step > 50_000 end)
    Takes two arguments, the latest ann (or sim) and the info map; if it returns true, training is terminated.

  sigmoid (default: &Backprop.sigmoid/1)
    The sigmoid function used.

  sigmoidPrime (default: &Backprop.sigmoidPrime/1)
    The derivative of the sigmoid function used.

  worker_count (default: 1)
    Number of workers for computing batch elements independently. (Should always be less than the batch size.)
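
For example, the defaults can be overridden by passing an options map as the third argument to train (this sketch reuses the ann and patterns bindings from the example above; the particular values are only illustrative):

iex> opts = %{learn_val: 3.0, terminate?: fn _ann, info -> info.adjError < 0.05 or info.step > 100_000 end}
iex> {_ann, _info} = Backprop.train ann, fn _ -> patterns end, opts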

The info map holds miscellaneous training state under the keys [:step, :adjError]. :step increments once per iteration, and :adjError is an adjusted, smoothed form of the error computed during backpropagation. These two values are intended for use in the :terminate? function, for determining how many patterns to train with at a given time, and for debugging.
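
For instance, because the pattern function receives the info map, it could grow the batch as training progresses (a rough sketch reusing the patterns binding above; the schedule itself is arbitrary):

iex> patternFun = fn info -> Enum.take_random(patterns, min(length(patterns), 2 + div(info.step, 10_000))) end
iex> {_ann, _info} = Backprop.train ann, patternFun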


Functions

customSigmoid(x, from, to, width)

Handy math formula. When x is 0 the return value is close to from; when x is width it is close to to. In between, the curve looks like a sigmoid, asymptoting gently toward either end.
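
A rough sketch of one way such a blend could be written (the logistic shape and the scaling constant here are assumptions, not the library's exact formula):

iex> custom_sigmoid = fn x, from, to, width ->
...>   t = 1.0 / (1.0 + :math.exp(-10.0 * (x / width - 0.5)))
...>   from + (to - from) * t
...> end
iex> custom_sigmoid.(0.0, -1.0, 1.0, 4.0)  # close to from (-1.0)
iex> custom_sigmoid.(4.0, -1.0, 1.0, 4.0)  # close to to (1.0)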

fillDefaults!(opts)

Populates the given map with the default options used in the Backprop module. Throws an error when given an invalid option. See the module description for details on available options.

getBackprop(id, backprop_map, feedforward_map, sim)

Fetches a value from the backprop_map if it exists; otherwise it updates the backprop_map. Returns {backprop_error, backprop_map}. Uses a strategy similar to getFeedforward, in that it evaluates dependencies recursively and forwards (instead of backwards), and then unfolds (which goes backwards).

getFeedforward(id, feedforward_map, sim, opts)

Fetches a value from the feedforward_map if it exists; otherwise it updates the feedforward_map. Returns {{left, right}, feedforward_map}. Left and right are the left/right hand values according to the back-propagation algorithm found here; the right side is the standard neuron value. This evaluates the feedforward portion of the back-propagation algorithm, but it does so backwards and recursively for dynamic and efficient dependency management, which means it works just as well for abnormal topologies. It starts at an output, evaluates dependencies, traces everything back to the inputs via recursion, and then unfolds to evaluate the feedforward left/right values. The feedforward_map collects all of this data, so when feedforward data is fetched it is either retrieved immediately or evaluated recursively, avoiding repeated computations.
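
The fetch-or-recursively-evaluate strategy described above can be illustrated generically (the module, function, and argument names below are hypothetical, not part of the library):

defmodule MemoFetchSketch do
  # deps maps each id to the ids it depends on; compute builds a value from an
  # id and its already-evaluated dependency values. The cache is threaded
  # through every call so each id is computed at most once.
  def fetch(id, cache, deps, compute) do
    case Map.fetch(cache, id) do
      {:ok, value} ->
        {value, cache}

      :error ->
        {dep_values, cache} =
          Enum.map_reduce(Map.get(deps, id, []), cache, fn dep_id, acc ->
            fetch(dep_id, acc, deps, compute)
          end)

        value = compute.(id, dep_values)
        {value, Map.put(cache, id, value)}
    end
  end
end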

getGradient(sim, inputs, outputs, opts)

Gets the gradient vector (as a map of the form %{conn_id => grad_value}) for a single training pattern using backpropagation. Returns {gradient_map, error_sum}.
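
Since each pattern yields its own gradient map, a batch update can be formed by combining the maps per connection id, for example by averaging (a sketch of the idea only; average_gradients is a hypothetical helper, not part of the library):

average_gradients = fn gradient_maps ->
  count = length(gradient_maps)

  gradient_maps
  |> Enum.reduce(%{}, fn grads, acc ->
    Map.merge(acc, grads, fn _conn_id, a, b -> a + b end)
  end)
  |> Map.new(fn {conn_id, sum} -> {conn_id, sum / count} end)
end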

sigmoid(x, c \\ 3)

The sigmoid function suggested for use in this backpropagation implementation. The c parameter affects the steepness of the sigmoid.

sigmoidPrime(x, c \\ 3)

The derivative of the suggested sigmoid function. The c parameter affects the steepness of the sigmoid.
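
If, as is common, the sigmoid is a steepness-scaled logistic (an assumption; check the source for the exact definition), the pair would look roughly like:

iex> sigmoid = fn x, c -> 1.0 / (1.0 + :math.exp(-c * x)) end
iex> sigmoid_prime = fn x, c -> s = sigmoid.(x, c); c * s * (1.0 - s) end
iex> sigmoid.(0.0, 3)
0.5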

train(ann, patternFun, opts \\ %{})

Returns {ann, info}. Takes a neural network (preferably with randomized weights), a function that takes one argument (the info map) and returns a list of training patterns, and an optional map of options (see the module description for details on the options and the info map).

trainSim(sim, patternFun, opts, info \\ %{step: 0, adjError: 1.0})

Takes a prepared simulation (used for collecting neuron dependency information), a patternFun, and an option map (see Backprop.train/3 for details).