Evision.ML.ANNMLP (Evision v0.1.38)

Summary

Types

t()

Type that represents an ML.ANNMLP struct.

Functions

  • calcError(self, data, test) - Computes error on the training or test dataset
  • calcError(self, data, test, opts) - Computes error on the training or test dataset
  • clear(self) - Clears the algorithm state
  • create() - Creates an empty model
  • empty(self)
  • getAnnealCoolingRatio(self)
  • getAnnealFinalT(self)
  • getAnnealInitialT(self)
  • getAnnealItePerStep(self)
  • getBackpropMomentumScale(self)
  • getBackpropWeightScale(self)
  • getDefaultName(self)
  • getLayerSizes(self)
  • getRpropDW0(self)
  • getRpropDWMax(self)
  • getRpropDWMin(self)
  • getRpropDWMinus(self)
  • getRpropDWPlus(self)
  • getTermCriteria(self)
  • getTrainMethod(self)
  • getVarCount(self) - Returns the number of variables in training samples
  • getWeights(self, layerIdx)
  • isClassifier(self) - Returns true if the model is a classifier
  • isTrained(self) - Returns true if the model is trained
  • load(filepath) - Loads and creates a serialized ANN from a file
  • predict(self, samples) - Predicts response(s) for the provided sample(s)
  • predict(self, samples, opts) - Predicts response(s) for the provided sample(s)
  • read(self, fn_) - Reads algorithm parameters from a file storage
  • save(self, filename) - Saves the algorithm to a file
  • setActivationFunction(self, type)
  • setActivationFunction(self, type, opts)
  • setAnnealCoolingRatio(self, val)
  • setAnnealFinalT(self, val)
  • setAnnealInitialT(self, val)
  • setAnnealItePerStep(self, val)
  • setBackpropMomentumScale(self, val)
  • setBackpropWeightScale(self, val)
  • setLayerSizes(self, layer_sizes)
  • setRpropDW0(self, val)
  • setRpropDWMax(self, val)
  • setRpropDWMin(self, val)
  • setRpropDWMinus(self, val)
  • setRpropDWPlus(self, val)
  • setTermCriteria(self, val)
  • setTrainMethod(self, method)
  • setTrainMethod(self, method, opts)
  • train(self, trainData) - Trains the statistical model
  • train(self, trainData, opts) - Trains the statistical model
  • train(self, samples, layout, responses) - Trains the statistical model
  • write(self, fs) - Stores algorithm parameters in a file storage
  • write(self, fs, name)

Types

@type t() :: %Evision.ML.ANNMLP{ref: reference()}

Type that represents an ML.ANNMLP struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

calcError(self, data, test)
@spec calcError(t(), Evision.ML.TrainData.t(), boolean()) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Computes error on the training or test dataset

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • data: Evision.ML.TrainData.t().

    the training data

  • test: bool.

    if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio entirely and pass test=false, so that the error is computed over the whole new set.

Return
  • retval: float

  • resp: Evision.Mat.t().

    the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as the percentage of misclassified samples (0%-100%).

Python prototype (for reference only):

calcError(data, test[, resp]) -> retval, resp
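A hedged sketch of evaluating a trained model, assuming Evision.ML.TrainData.create/3 and Evision.ML.TrainData.setTrainTestSplitRatio/2 mirror their OpenCV counterparts and return the updated TrainData struct; `samples` and `responses` are CV_32F row-sample matrices and 0 is the numeric value of cv::ml::ROW_SAMPLE.

```elixir
data =
  Evision.ML.TrainData.create(samples, 0, responses)
  |> Evision.ML.TrainData.setTrainTestSplitRatio(0.8)

# Error over the training split, then over the held-out test split.
{train_error, _resp} = Evision.ML.ANNMLP.calcError(model, data, false)
{test_error, _resp} = Evision.ML.ANNMLP.calcError(model, data, true)
```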
calcError(self, data, test, opts)
@spec calcError(
  t(),
  Evision.ML.TrainData.t(),
  boolean(),
  [{atom(), term()}, ...] | nil
) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Computes error on the training or test dataset

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • data: Evision.ML.TrainData.t().

    the training data

  • test: bool.

    if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio entirely and pass test=false, so that the error is computed over the whole new set.

Return
  • retval: float

  • resp: Evision.Mat.t().

    the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as the percentage of misclassified samples (0%-100%).

Python prototype (for reference only):

calcError(data, test[, resp]) -> retval, resp
@spec clear(t()) :: t() | {:error, String.t()}

Clears the algorithm state

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

Python prototype (for reference only):

clear() -> None
@spec create() :: t() | {:error, String.t()}

Creates an empty model

Return
  • retval: Evision.ML.ANNMLP.t()

Use StatModel::train to train the model, or Algorithm::load<ANN_MLP>(filename) to load a pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.

Python prototype (for reference only):

create() -> retval
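As a minimal configuration sketch (not the only valid setup): the integer arguments below are the OpenCV enum values for ANN_MLP::SIGMOID_SYM (1), ANN_MLP::RPROP (1) and TermCriteria COUNT + EPS (3), and Nx interop via Evision.Mat.from_nx/1 is assumed.

```elixir
alias Evision.ML.ANNMLP

# 2 input neurons, one hidden layer of 4 neurons, 1 output neuron (integer vector).
layer_sizes = Evision.Mat.from_nx(Nx.tensor([2, 4, 1], type: :s32))

model =
  ANNMLP.create()
  |> ANNMLP.setLayerSizes(layer_sizes)
  # 1 == ANN_MLP::SIGMOID_SYM; param1/param2 are the sigmoid's alpha and beta
  |> ANNMLP.setActivationFunction(1, param1: 1.0, param2: 1.0)
  # 1 == ANN_MLP::RPROP
  |> ANNMLP.setTrainMethod(1)
  # {type, maxCount, epsilon}; 3 == TermCriteria COUNT + EPS
  |> ANNMLP.setTermCriteria({3, 300, 1.0e-4})
```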
@spec empty(t()) :: boolean() | {:error, String.t()}

empty

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval
getAnnealCoolingRatio(self)
@spec getAnnealCoolingRatio(t()) :: number() | {:error, String.t()}

getAnnealCoolingRatio

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setAnnealCoolingRatio/2

Python prototype (for reference only):

getAnnealCoolingRatio() -> retval
@spec getAnnealFinalT(t()) :: number() | {:error, String.t()}

getAnnealFinalT

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setAnnealFinalT/2

Python prototype (for reference only):

getAnnealFinalT() -> retval
@spec getAnnealInitialT(t()) :: number() | {:error, String.t()}

getAnnealInitialT

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setAnnealInitialT/2

Python prototype (for reference only):

getAnnealInitialT() -> retval
getAnnealItePerStep(self)
@spec getAnnealItePerStep(t()) :: integer() | {:error, String.t()}

getAnnealItePerStep

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: int

@see setAnnealItePerStep/2

Python prototype (for reference only):

getAnnealItePerStep() -> retval
getBackpropMomentumScale(self)
@spec getBackpropMomentumScale(t()) :: number() | {:error, String.t()}

getBackpropMomentumScale

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setBackpropMomentumScale/2

Python prototype (for reference only):

getBackpropMomentumScale() -> retval
getBackpropWeightScale(self)
@spec getBackpropWeightScale(t()) :: number() | {:error, String.t()}

getBackpropWeightScale

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setBackpropWeightScale/2

Python prototype (for reference only):

getBackpropWeightScale() -> retval
@spec getDefaultName(t()) :: binary() | {:error, String.t()}

getDefaultName

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return

Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.

Python prototype (for reference only):

getDefaultName() -> retval
@spec getLayerSizes(t()) :: Evision.Mat.t() | {:error, String.t()}

getLayerSizes

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: Evision.Mat.t()

Integer vector specifying the number of neurons in each layer, including the input and output layers. The first element specifies the number of elements in the input layer, the last element the number of elements in the output layer. @see setLayerSizes/2

Python prototype (for reference only):

getLayerSizes() -> retval
@spec getRpropDW0(t()) :: number() | {:error, String.t()}

getRpropDW0

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setRpropDW0/2

Python prototype (for reference only):

getRpropDW0() -> retval
@spec getRpropDWMax(t()) :: number() | {:error, String.t()}

getRpropDWMax

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setRpropDWMax/2

Python prototype (for reference only):

getRpropDWMax() -> retval
@spec getRpropDWMin(t()) :: number() | {:error, String.t()}

getRpropDWMin

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setRpropDWMin/2

Python prototype (for reference only):

getRpropDWMin() -> retval
@spec getRpropDWMinus(t()) :: number() | {:error, String.t()}

getRpropDWMinus

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setRpropDWMinus/2

Python prototype (for reference only):

getRpropDWMinus() -> retval
@spec getRpropDWPlus(t()) :: number() | {:error, String.t()}

getRpropDWPlus

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: double

@see setRpropDWPlus/2

Python prototype (for reference only):

getRpropDWPlus() -> retval
@spec getTermCriteria(t()) :: {integer(), integer(), number()} | {:error, String.t()}

getTermCriteria

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: TermCriteria

@see setTermCriteria/2

Python prototype (for reference only):

getTermCriteria() -> retval
@spec getTrainMethod(t()) :: integer() | {:error, String.t()}

getTrainMethod

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: int

Returns current training method

Python prototype (for reference only):

getTrainMethod() -> retval
@spec getVarCount(t()) :: integer() | {:error, String.t()}

Returns the number of variables in training samples

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: int

Python prototype (for reference only):

getVarCount() -> retval
getWeights(self, layerIdx)
@spec getWeights(t(), integer()) :: Evision.Mat.t() | {:error, String.t()}

getWeights

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • layerIdx: int
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getWeights(layerIdx) -> retval
@spec isClassifier(t()) :: boolean() | {:error, String.t()}

Returns true if the model is a classifier

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: bool

Python prototype (for reference only):

isClassifier() -> retval
@spec isTrained(t()) :: boolean() | {:error, String.t()}

Returns true if the model is trained

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
Return
  • retval: bool

Python prototype (for reference only):

isTrained() -> retval
@spec load(binary()) :: t() | {:error, String.t()}

Loads and creates a serialized ANN from a file

Positional Arguments
  • filepath: String.

    path to serialized ANN

Return
  • retval: Evision.ML.ANNMLP.t()

Use ANN::save to serialize and store an ANN to disk. To load the ANN from that file again, call this function with the path to the file.

Python prototype (for reference only):

load(filepath) -> retval
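A round-trip sketch, assuming a writable path; save/2 is documented further below on this page.

```elixir
# Persist a trained network, then restore it from disk.
Evision.ML.ANNMLP.save(model, "ann_mlp.yml")
restored = Evision.ML.ANNMLP.load("ann_mlp.yml")
Evision.ML.ANNMLP.isTrained(restored)
#=> true
```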
@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Predicts response(s) for the provided sample(s)

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • samples: Evision.Mat.t().

    The input samples, floating-point matrix

Keyword Arguments
  • flags: int.

    The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Return
  • retval: float

  • results: Evision.Mat.t().

    The optional output matrix of results.

Python prototype (for reference only):

predict(samples[, results[, flags]]) -> retval, results
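For example (a sketch assuming a trained `model` and Nx interop via Evision.Mat.from_nx/1 and Evision.Mat.to_nx/1):

```elixir
# One row per sample, one column per input neuron, single-precision floats.
samples = Evision.Mat.from_nx(Nx.tensor([[0.0, 1.0], [1.0, 1.0]], type: :f32))

{_retval, results} = Evision.ML.ANNMLP.predict(model, samples)

# `results` holds one row of output-layer activations per input row.
results |> Evision.Mat.to_nx() |> Nx.to_flat_list()
```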
predict(self, samples, opts)
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Predicts response(s) for the provided sample(s)

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • samples: Evision.Mat.t().

    The input samples, floating-point matrix

Keyword Arguments
  • flags: int.

    The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Return
  • retval: float

  • results: Evision.Mat.t().

    The optional output matrix of results.

Python prototype (for reference only):

predict(samples[, results[, flags]]) -> retval, results
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}

Reads algorithm parameters from a file storage

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • fn_: Evision.FileNode.t()

Python prototype (for reference only):

read(fn_) -> None
@spec save(t(), binary()) :: t() | {:error, String.t()}

save

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • filename: String

Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

Python prototype (for reference only):

save(filename) -> None
setActivationFunction(self, type)
@spec setActivationFunction(t(), integer()) :: t() | {:error, String.t()}

setActivationFunction

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • type: int.

    The type of activation function. See ANN_MLP::ActivationFunctions.

Keyword Arguments
  • param1: double.

    The first parameter of the activation function, α. Default value is 0.

  • param2: double.

    The second parameter of the activation function, β. Default value is 0.

Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Python prototype (for reference only):

setActivationFunction(type[, param1[, param2]]) -> None
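For instance, selecting the symmetric sigmoid with alpha = beta = 1 (1 is the OpenCV value of ANN_MLP::SIGMOID_SYM; a sketch, not the only valid choice):

```elixir
# SIGMOID_SYM with alpha = 1.0 (param1) and beta = 1.0 (param2).
model = Evision.ML.ANNMLP.setActivationFunction(model, 1, param1: 1.0, param2: 1.0)
```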
setActivationFunction(self, type, opts)
@spec setActivationFunction(t(), integer(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

setActivationFunction

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • type: int.

    The type of activation function. See ANN_MLP::ActivationFunctions.

Keyword Arguments
  • param1: double.

    The first parameter of the activation function, α. Default value is 0.

  • param2: double.

    The second parameter of the activation function, β. Default value is 0.

Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Python prototype (for reference only):

setActivationFunction(type[, param1[, param2]]) -> None
setAnnealCoolingRatio(self, val)
@spec setAnnealCoolingRatio(t(), number()) :: t() | {:error, String.t()}

setAnnealCoolingRatio

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getAnnealCoolingRatio/1

Python prototype (for reference only):

setAnnealCoolingRatio(val) -> None
setAnnealFinalT(self, val)
@spec setAnnealFinalT(t(), number()) :: t() | {:error, String.t()}

setAnnealFinalT

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getAnnealFinalT/1

Python prototype (for reference only):

setAnnealFinalT(val) -> None
setAnnealInitialT(self, val)
@spec setAnnealInitialT(t(), number()) :: t() | {:error, String.t()}

setAnnealInitialT

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getAnnealInitialT/1

Python prototype (for reference only):

setAnnealInitialT(val) -> None
setAnnealItePerStep(self, val)
@spec setAnnealItePerStep(t(), integer()) :: t() | {:error, String.t()}

setAnnealItePerStep

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: int

@see getAnnealItePerStep/1

Python prototype (for reference only):

setAnnealItePerStep(val) -> None
setBackpropMomentumScale(self, val)
@spec setBackpropMomentumScale(t(), number()) :: t() | {:error, String.t()}

setBackpropMomentumScale

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getBackpropMomentumScale/1

Python prototype (for reference only):

setBackpropMomentumScale(val) -> None
setBackpropWeightScale(self, val)
@spec setBackpropWeightScale(t(), number()) :: t() | {:error, String.t()}

setBackpropWeightScale

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getBackpropWeightScale/1

Python prototype (for reference only):

setBackpropWeightScale(val) -> None
setLayerSizes(self, layer_sizes)
@spec setLayerSizes(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}

setLayerSizes

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • layer_sizes: Evision.Mat.t()

Integer vector specifying the number of neurons in each layer, including the input and output layers. The first element specifies the number of elements in the input layer, the last element the number of elements in the output layer. The default value is an empty Mat. @see getLayerSizes/1

Python prototype (for reference only):

setLayerSizes(_layer_sizes) -> None
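For example, a sketch that builds the layer description as an integer vector (Nx interop via Evision.Mat.from_nx/1 is assumed):

```elixir
# 3 inputs, two hidden layers with 8 and 4 neurons, 2 outputs.
layer_sizes = Evision.Mat.from_nx(Nx.tensor([3, 8, 4, 2], type: :s32))
model = Evision.ML.ANNMLP.setLayerSizes(model, layer_sizes)
```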
@spec setRpropDW0(t(), number()) :: t() | {:error, String.t()}

setRpropDW0

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getRpropDW0/1

Python prototype (for reference only):

setRpropDW0(val) -> None
setRpropDWMax(self, val)
@spec setRpropDWMax(t(), number()) :: t() | {:error, String.t()}

setRpropDWMax

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getRpropDWMax/1

Python prototype (for reference only):

setRpropDWMax(val) -> None
setRpropDWMin(self, val)
@spec setRpropDWMin(t(), number()) :: t() | {:error, String.t()}

setRpropDWMin

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getRpropDWMin/1

Python prototype (for reference only):

setRpropDWMin(val) -> None
setRpropDWMinus(self, val)
@spec setRpropDWMinus(t(), number()) :: t() | {:error, String.t()}

setRpropDWMinus

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getRpropDWMinus/1

Python prototype (for reference only):

setRpropDWMinus(val) -> None
setRpropDWPlus(self, val)
@spec setRpropDWPlus(t(), number()) :: t() | {:error, String.t()}

setRpropDWPlus

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: double

@see getRpropDWPlus/1

Python prototype (for reference only):

setRpropDWPlus(val) -> None
setTermCriteria(self, val)
@spec setTermCriteria(t(), {integer(), integer(), number()}) ::
  t() | {:error, String.t()}

setTermCriteria

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • val: TermCriteria

@see getTermCriteria/1

Python prototype (for reference only):

setTermCriteria(val) -> None
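The criteria are passed as the `{type, maxCount, epsilon}` tuple from the spec above; as a sketch (3 is the OpenCV value of TermCriteria COUNT + EPS):

```elixir
# Stop after 1000 iterations or once the per-iteration change drops below 1.0e-6.
model = Evision.ML.ANNMLP.setTermCriteria(model, {3, 1000, 1.0e-6})
```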
setTrainMethod(self, method)
@spec setTrainMethod(t(), integer()) :: t() | {:error, String.t()}

setTrainMethod

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • method: int.

    Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.

Keyword Arguments
  • param1: double.

    passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.

  • param2: double.

    passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.

Sets training method and common parameters.

Python prototype (for reference only):

setTrainMethod(method[, param1[, param2]]) -> None
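For example, to switch to backpropagation with explicit scales (0 is the OpenCV value of ANN_MLP::BACKPROP; a sketch only):

```elixir
# param1 goes to setBackpropWeightScale, param2 to setBackpropMomentumScale (see above).
model = Evision.ML.ANNMLP.setTrainMethod(model, 0, param1: 0.1, param2: 0.1)
```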
setTrainMethod(self, method, opts)
@spec setTrainMethod(t(), integer(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

setTrainMethod

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • method: int.

    Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.

Keyword Arguments
  • param1: double.

    passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.

  • param2: double.

    passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.

Sets training method and common parameters.

Python prototype (for reference only):

setTrainMethod(method[, param1[, param2]]) -> None
@spec train(t(), Evision.ML.TrainData.t()) :: boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • trainData: Evision.ML.TrainData.t().

    training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.

Keyword Arguments
  • flags: int.

    optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).

Return
  • retval: bool

Python prototype (for reference only):

train(trainData[, flags]) -> retval
train(self, trainData, opts)
@spec train(t(), Evision.ML.TrainData.t(), [{atom(), term()}, ...] | nil) ::
  boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • trainData: Evision.ML.TrainData.t().

    training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.

Keyword Arguments
  • flags: int.

    optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).

Return
  • retval: bool

Python prototype (for reference only):

train(trainData[, flags]) -> retval
train(self, samples, layout, responses)
@spec train(t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in()) ::
  boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • self: Evision.ML.ANNMLP.t()

  • samples: Evision.Mat.t().

    training samples

  • layout: int.

    See ml::SampleTypes.

  • responses: Evision.Mat.t().

    vector of responses associated with the training samples.

Return
  • retval: bool

Python prototype (for reference only):

train(samples, layout, responses) -> retval
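A sketch of training on the XOR truth table, assuming a `model` configured as in the create/0 example above and Nx interop via Evision.Mat.from_nx/1; 0 is the numeric value of cv::ml::ROW_SAMPLE.

```elixir
samples = Evision.Mat.from_nx(Nx.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], type: :f32))
responses = Evision.Mat.from_nx(Nx.tensor([[0], [1], [1], [0]], type: :f32))

# Per the spec above, train/4 returns a boolean; the underlying network object
# referenced by `model` is trained in place.
trained? = Evision.ML.ANNMLP.train(model, samples, 0, responses)
```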
@spec write(t(), Evision.FileStorage.t()) :: t() | {:error, String.t()}

Stores algorithm parameters in a file storage

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • fs: Evision.FileStorage.t()

Python prototype (for reference only):

write(fs) -> None
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}

write

Positional Arguments
  • self: Evision.ML.ANNMLP.t()
  • fs: Evision.FileStorage.t()
  • name: String

Has overloading in C++

Python prototype (for reference only):

write(fs, name) -> None