Neat-Ex v1.3.0 GradAprox.NeuralTrainer

GradAprox.NeuralTrainer uses iterative gradient approximation and descent/ascent to optimize the weights of a neural network using only a fitness/error function. The function takes the ann as its argument and returns a value, which is then either minimized or maximized.
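The core idea can be sketched as follows. This is an illustrative sketch of finite-difference gradient ascent over a plain map of weights, not the library's actual internals; the `GradSketch` module and its function are invented for illustration (only the :learn_val and :delta defaults mirror the options documented below).

```elixir
# Illustrative sketch only -- not GradAprox.NeuralTrainer's internals.
# One step of gradient ascent via forward-difference approximation:
# each weight is bumped by `delta` to estimate its partial derivative,
# then nudged along the gradient, scaled by `learn_val`.
defmodule GradSketch do
  def ascend_step(weights, fitness, learn_val \\ 0.001, delta \\ 0.05) do
    base = fitness.(weights)

    Map.new(weights, fn {id, w} ->
      bumped = fitness.(Map.put(weights, id, w + delta))
      partial = (bumped - base) / delta
      # Subtracting instead of adding here would give descent (minimization).
      {id, w + learn_val * partial}
    end)
  end
end

# Maximizing f(w) = -(w - 2)^2 nudges the weight toward 2:
fitness = fn %{w: w} -> -(w - 2.0) * (w - 2.0) end
%{w: w1} = GradSketch.ascend_step(%{w: 0.0}, fitness)
```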

Example Usage

iex> defmodule XOR do
...>   def dataset(), do: [{{-1, -1}, -1}, {{1, -1}, 1}, {{-1, 1}, 1}, {{1, 1}, -1}]
...>   def fitness(ann, sample_size \\ 1) do
...>     sim = Ann.Simulation.new(ann)
...>     error = Enum.reduce Enum.take_random(dataset(), sample_size), 0, fn {{in1, in2}, out}, error ->
...>       result = Map.get(Ann.Simulation.eval(sim, %{1 => in1, 2 => in2}).data, 3, 0) #node 3 is the output node; 0 is the default
...>       error + abs(result - out)
...>     end
...>     min(:math.pow(8 - error, 2), 62.5)
...>   end
...> end
iex> ann = Ann.newFeedforward([1, 2], [3], [2])
iex> {ann, _data} = GradAprox.NeuralTrainer.maximize(ann, &XOR.fitness/1, %{learn_val: 0.001, terminate?: fn ann, _info -> XOR.fitness(ann, 4) >= 59 end})
iex> IO.puts Ann.json(ann) #display a json representation of the ANN.
:ok

Obviously this XOR example is using a dataset to evaluate fitness, so the Backprop module would be more effective for this task. However, the flexibility a fitness/error function provides (as opposed to being limited to using datasets) is powerful. Note that the main difference between this fitness function and the one used in the Neat example is the addition of Enum.take_random(dataset(), sample_size). It is good to add a random component to the fitness function, as this can pull the optimization process out of local extrema. NeuralTrainer pre-seeds :rand and :random consistently when necessary to ensure the gradient approximation process is not hindered by the randomness.
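For error-style objectives, minimize works the same way. A hypothetical variant of the example above (the error function and the termination threshold here are invented for illustration, and this sketch assumes the XOR module and Ann API shown above are available):

```elixir
# Hypothetical: minimize the total absolute error over the full XOR dataset.
error_fun = fn ann ->
  sim = Ann.Simulation.new(ann)
  Enum.reduce XOR.dataset(), 0, fn {{in1, in2}, out}, error ->
    result = Map.get(Ann.Simulation.eval(sim, %{1 => in1, 2 => in2}).data, 3, 0)
    error + abs(result - out)
  end
end

ann = Ann.newFeedforward([1, 2], [3], [2])
{ann, _info} = GradAprox.NeuralTrainer.minimize(ann, error_fun,
  %{learn_val: 0.001, terminate?: fn ann, _info -> error_fun.(ann) < 0.5 end})
```

Because the error function here scans the whole dataset rather than a random sample, every gradient estimate is deterministic; sampling, as in the maximize example, trades that determinism for a chance to escape local extrema.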

Summary

Functions

maximize(ann, fun, opts \\ %{})

Maximizes the return value of fun by modifying the weights of ann.

minimize(ann, fun, opts \\ %{})

Minimizes the return value of fun by modifying the weights of ann.

optimize(ann, fun, sign, opts \\ %{})

Maximizes or minimizes (if sign is 1 or -1, respectively) the return value of fun by modifying the weights of ann.

Functions

maximize(ann, fun, opts \\ %{})

Maximizes the return value of fun by modifying the weights of ann. The available opts are the same as for GradAprox, except for the addition of the :delta option, which defaults to 0.05 and is used to perturb the weights for the partial-derivative approximation. Returns the tuple {ann, info}. See the GradAprox module description for information on the info map.

minimize(ann, fun, opts \\ %{})

Minimizes the return value of fun by modifying the weights of ann. The available opts are the same as for GradAprox, except for the addition of the :delta option, which defaults to 0.05 and is used to perturb the weights for the partial-derivative approximation. Returns the tuple {ann, info}. See the GradAprox module description for information on the info map.

optimize(ann, fun, sign, opts \\ %{})

Maximizes or minimizes (if sign is 1 or -1, respectively) the return value of fun by modifying the weights of ann. The available opts are the same as for GradAprox, except for the addition of the :delta option, which defaults to 0.05 and is used to perturb the weights for the partial-derivative approximation. Returns the tuple {ann, info}. See the GradAprox module description for information on the info map.