GradAprox (Neat-Ex v1.3.0)

GradAprox uses iterative gradient approximation and descent/ascent to optimize a set of continuous (non-discrete) numeric parameters using only a fitness/error function. This function takes the parameter map as its argument and returns a numeric value, which is then either minimized or maximized.
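For intuition, each step estimates the gradient with finite differences: every parameter is nudged by its delta, and the resulting change in the function's output gives the slope along that parameter. A minimal sketch of the idea (an illustration only, not the library's actual implementation; approx_gradient is a hypothetical name):

  # Approximate the partial derivative of fun with respect to each parameter
  # by shifting it by its delta and measuring the change in fun's output.
  approx_gradient = fn params, deltas, fun ->
    base = fun.(params)
    Map.new(params, fn {name, val} ->
      slope = (fun.(Map.put(params, name, val + deltas[name])) - base) / deltas[name]
      {name, slope}  # forward-difference estimate of the partial derivative
    end)
  end

Descent then moves each parameter against its slope (ascent moves along it), scaled by learn_val.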

Example Usage:

iex> settings = {-5, 5, 0.001} #initial values range from -5 to 5, and will be modified by 0.001 for derivative approximation.
iex> {_params, info} = GradAprox.minimize %{x: settings, y: settings}, fn %{x: x, y: y} ->
...>   :math.sqrt(:math.pow(x - 18, 2) + :math.pow(y + 24, 2)) #distance from coordinates (18, -24)
...> end, %{learn_val: 0.1, terminate?: fn _, info -> info.value < 0.1 end}
iex> IO.puts "GradAprox took #{info.step} steps"
:ok
Options

  learn_val
    Default: 5.0
    Step size for optimization/learning: each parameter is adjusted by its approximated gradient multiplied by learn_val.

  terminate?
    Default: fn _params, info -> info.step > 50_000 end
    Takes 2 arguments: the latest map of params (or the ann if using NeuralTrainer) and the info map. If the function returns true, optimization is terminated.

The info map is a map of miscellaneous info with the keys [:step, :value]. :step increments once per iteration, and :value is the latest value returned by the fitness/error function. Both are intended for use in the :terminate? function and for debugging.
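For example, a terminate? option can combine both keys to stop on either convergence or a step budget (a sketch using only the documented keys):

  # Stop once the error is small enough, or give up after 10,000 steps.
  opts = %{terminate?: fn _params, info ->
    info.value < 0.01 or info.step > 10_000
  end}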

Summary

Functions

fillDefaults!(opts)
  Populates the given map with the default options used in the Optimize module.

maximize(paramSetup, fun, opts \\ %{})
  Finds parameters to maximize the output of fun.

minimize(paramSetup, fun, opts \\ %{})
  Finds parameters to minimize the output of fun.

optimize(paramSetup, fun, sign, opts \\ %{})
  Optimizes parameters generated from paramSetup via gradient descent (sign of -1) or ascent (sign of 1).

optimize(params, deltas, fun, sign, opts, step \\ 0)
  Like optimize/4, but allows manual control over initial values.

Functions

fillDefaults!(opts)

Populates the given map with the default options used in the Optimize module. Throws an error when given an invalid option. See the module description for details on available options.
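A quick sketch of typical use (assuming, as a hedge, that the merged option map is returned; the misspelled key is deliberate):

  opts = GradAprox.fillDefaults!(%{learn_val: 0.5})
  # opts keeps learn_val: 0.5 and gains the default :terminate? function
  GradAprox.fillDefaults!(%{learn_vall: 0.5})  # invalid option: throws an error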

maximize(paramSetup, fun, opts \\ %{})

Finds parameters to maximize the output of fun. Uses the optimize function with a sign of 1. See optimize for details on arguments. Returns the tuple {params, info}. See the module description for information on the info map.
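A sketch analogous to the minimize example above (the target function and constants are illustrative):

  # Maximize a downward-opening paraboloid; the peak is at (3, -1) with value 10.
  settings = {-10, 10, 0.001}
  {_params, info} = GradAprox.maximize %{a: settings, b: settings}, fn %{a: a, b: b} ->
    10 - :math.pow(a - 3, 2) - :math.pow(b + 1, 2)
  end, %{learn_val: 0.1, terminate?: fn _, info -> info.value > 9.99 end}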

minimize(paramSetup, fun, opts \\ %{})

Finds parameters to minimize the output of fun. Uses the optimize function with a sign of -1. See optimize for details on arguments. Returns the tuple {params, info}. See the module description for information on the info map.

optimize(paramSetup, fun, sign, opts \\ %{})

paramSetup should be of the format %{param_name: {min, max, delta}}. min and max are used as bounds for generating initial values, but parameter values may exceed these bounds during optimization. fun should take a map in the format %{name: val} and return a numeric value, which will be either minimized or maximized by changing the parameters (fun is usually the bottleneck of this system, so design it carefully). A sign of -1 is for gradient descent (minimization), and a sign of 1 is for gradient ascent (maximization). Returns the tuple {params, info}. See the module description for information on the info map.
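A sketch of a direct call, equivalent to using minimize (the target function is illustrative):

  # A sign of -1 performs gradient descent, here toward x = 2.
  {_params, info} = GradAprox.optimize %{x: {-5, 5, 0.001}}, fn %{x: x} ->
    :math.pow(x - 2, 2)
  end, -1, %{learn_val: 0.1}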

optimize(params, deltas, fun, sign, opts, step \\ 0)

minimize or maximize should be used instead. This function allows for manual control over initial values. params should be a map in the format %{param_name: init_val}, and deltas should be a map in the format %{param_name: delta}. Each delta is the amount a parameter is shifted per step when evaluating its partial derivative. Note that this is not the delta used to perform the actual parameter modification; the gradient multiplied by the learn_val (which can be changed in the opts map) is used for that. Returns the tuple {params, info}. See the module description for information on the info map.
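A sketch with explicit initial values (whether opts must first be passed through fillDefaults! is an assumption here; step is left at its default of 0):

  # Start x at 0.0 and y at 1.0 rather than generating random initial values.
  params = %{x: 0.0, y: 1.0}
  deltas = %{x: 0.001, y: 0.001}
  opts = GradAprox.fillDefaults!(%{learn_val: 0.1})
  {_params, info} = GradAprox.optimize params, deltas, fn %{x: x, y: y} ->
    :math.pow(x - 18, 2) + :math.pow(y + 24, 2)
  end, -1, opts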