Evision.ML.ANNMLP (Evision v0.2.9)
Summary
Functions
Computes error on the training or test dataset
Computes error on the training or test dataset
Clears the algorithm state
Creates empty model
empty
getAnnealCoolingRatio
getAnnealFinalT
getAnnealInitialT
getAnnealItePerStep
getBackpropMomentumScale
getBackpropWeightScale
getDefaultName
getLayerSizes
getRpropDW0
getRpropDWMax
getRpropDWMin
getRpropDWMinus
getRpropDWPlus
getTermCriteria
getTrainMethod
Returns the number of variables in training samples
getWeights
Returns true if the model is classifier
Returns true if the model is trained
Loads and creates a serialized ANN from a file
Predicts response(s) for the provided sample(s)
Predicts response(s) for the provided sample(s)
Reads algorithm parameters from a file storage
save
setActivationFunction
setActivationFunction
setAnnealCoolingRatio
setAnnealFinalT
setAnnealInitialT
setAnnealItePerStep
setBackpropMomentumScale
setBackpropWeightScale
setLayerSizes
setRpropDW0
setRpropDWMax
setRpropDWMin
setRpropDWMinus
setRpropDWPlus
setTermCriteria
setTrainMethod
setTrainMethod
Trains the statistical model
Trains the statistical model
Trains the statistical model
Stores algorithm parameters in a file storage
write
Types
@type t() :: %Evision.ML.ANNMLP{ref: reference()}
Type that represents an ML.ANNMLP struct.
- ref: reference()
The underlying Erlang resource variable.
Functions
@spec calcError(t(), Evision.ML.TrainData.t(), boolean()) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Computes error on the training or test dataset
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- data:
Evision.ML.TrainData.t()
The training data.
- test:
bool
If true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want to skip setting a test subset via TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
Return
- retval:
float
- resp:
Evision.Mat.t()
The optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers it is the percentage of misclassified samples (0%-100%).
Python prototype (for reference only):
calcError(data, test[, resp]) -> retval, resp
@spec calcError( t(), Evision.ML.TrainData.t(), boolean(), [{atom(), term()}, ...] | nil ) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Computes error on the training or test dataset
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- data:
Evision.ML.TrainData.t()
The training data.
- test:
bool
If true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want to skip setting a test subset via TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
Return
- retval:
float
- resp:
Evision.Mat.t()
The optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers it is the percentage of misclassified samples (0%-100%).
Python prototype (for reference only):
calcError(data, test[, resp]) -> retval, resp
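A minimal evaluation sketch (hedged): it assumes model is an already trained Evision.ML.ANNMLP and train_data is an Evision.ML.TrainData built earlier, e.g. with Evision.ML.TrainData.create/3.

# Compute the error over the training subset (test = false); if no train/test
# split was configured, this covers the whole dataset.
{error, _responses} = Evision.ML.ANNMLP.calcError(model, train_data, false)
IO.puts("training error: #{error}")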
@spec clear(Keyword.t()) :: any() | {:error, String.t()}
@spec clear(t()) :: t() | {:error, String.t()}
Clears the algorithm state
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Python prototype (for reference only):
clear() -> None
Creates empty model
Return
- retval:
Evision.ML.ANNMLP.t()
Use StatModel::train to train the model, or Algorithm::load<ANN_MLP>(filename) to load a pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.
Python prototype (for reference only):
create() -> retval
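A minimal construction sketch (hedged; it assumes Evision.Mat.literal/2 builds a matrix from a nested list and a type atom):

# Create an empty MLP and define its topology before training:
# 2 inputs, one hidden layer of 4 neurons, 1 output (layer sizes are integers).
model = Evision.ML.ANNMLP.create()
layer_sizes = Evision.Mat.literal([[2, 4, 1]], :s32)
model = Evision.ML.ANNMLP.setLayerSizes(model, layer_sizes)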
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
empty
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
bool
Python prototype (for reference only):
empty() -> retval
@spec getAnnealCoolingRatio(Keyword.t()) :: any() | {:error, String.t()}
@spec getAnnealCoolingRatio(t()) :: number() | {:error, String.t()}
getAnnealCoolingRatio
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
Python prototype (for reference only):
getAnnealCoolingRatio() -> retval
@spec getAnnealFinalT(Keyword.t()) :: any() | {:error, String.t()}
@spec getAnnealFinalT(t()) :: number() | {:error, String.t()}
getAnnealFinalT
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setAnnealFinalT/2
Python prototype (for reference only):
getAnnealFinalT() -> retval
@spec getAnnealInitialT(Keyword.t()) :: any() | {:error, String.t()}
@spec getAnnealInitialT(t()) :: number() | {:error, String.t()}
getAnnealInitialT
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setAnnealInitialT/2
Python prototype (for reference only):
getAnnealInitialT() -> retval
@spec getAnnealItePerStep(Keyword.t()) :: any() | {:error, String.t()}
@spec getAnnealItePerStep(t()) :: integer() | {:error, String.t()}
getAnnealItePerStep
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
integer()
Python prototype (for reference only):
getAnnealItePerStep() -> retval
@spec getBackpropMomentumScale(Keyword.t()) :: any() | {:error, String.t()}
@spec getBackpropMomentumScale(t()) :: number() | {:error, String.t()}
getBackpropMomentumScale
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setBackpropMomentumScale/2
Python prototype (for reference only):
getBackpropMomentumScale() -> retval
@spec getBackpropWeightScale(Keyword.t()) :: any() | {:error, String.t()}
@spec getBackpropWeightScale(t()) :: number() | {:error, String.t()}
getBackpropWeightScale
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
Python prototype (for reference only):
getBackpropWeightScale() -> retval
@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}
getDefaultName
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
String
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.
Python prototype (for reference only):
getDefaultName() -> retval
@spec getLayerSizes(Keyword.t()) :: any() | {:error, String.t()}
@spec getLayerSizes(t()) :: Evision.Mat.t() | {:error, String.t()}
getLayerSizes
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
Evision.Mat.t()
Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. @see setLayerSizes/2
Python prototype (for reference only):
getLayerSizes() -> retval
@spec getRpropDW0(Keyword.t()) :: any() | {:error, String.t()}
@spec getRpropDW0(t()) :: number() | {:error, String.t()}
getRpropDW0
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setRpropDW0/2
Python prototype (for reference only):
getRpropDW0() -> retval
@spec getRpropDWMax(Keyword.t()) :: any() | {:error, String.t()}
@spec getRpropDWMax(t()) :: number() | {:error, String.t()}
getRpropDWMax
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setRpropDWMax/2
Python prototype (for reference only):
getRpropDWMax() -> retval
@spec getRpropDWMin(Keyword.t()) :: any() | {:error, String.t()}
@spec getRpropDWMin(t()) :: number() | {:error, String.t()}
getRpropDWMin
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setRpropDWMin/2
Python prototype (for reference only):
getRpropDWMin() -> retval
@spec getRpropDWMinus(Keyword.t()) :: any() | {:error, String.t()}
@spec getRpropDWMinus(t()) :: number() | {:error, String.t()}
getRpropDWMinus
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setRpropDWMinus/2
Python prototype (for reference only):
getRpropDWMinus() -> retval
@spec getRpropDWPlus(Keyword.t()) :: any() | {:error, String.t()}
@spec getRpropDWPlus(t()) :: number() | {:error, String.t()}
getRpropDWPlus
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
double
@see setRpropDWPlus/2
Python prototype (for reference only):
getRpropDWPlus() -> retval
@spec getTermCriteria(Keyword.t()) :: any() | {:error, String.t()}
@spec getTermCriteria(t()) :: {integer(), integer(), number()} | {:error, String.t()}
getTermCriteria
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
TermCriteria
@see setTermCriteria/2
Python prototype (for reference only):
getTermCriteria() -> retval
@spec getTrainMethod(Keyword.t()) :: any() | {:error, String.t()}
@spec getTrainMethod(t()) :: integer() | {:error, String.t()}
getTrainMethod
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
integer()
Returns the current training method
Python prototype (for reference only):
getTrainMethod() -> retval
@spec getVarCount(Keyword.t()) :: any() | {:error, String.t()}
@spec getVarCount(t()) :: integer() | {:error, String.t()}
Returns the number of variables in training samples
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
integer()
Python prototype (for reference only):
getVarCount() -> retval
@spec getWeights(t(), integer()) :: Evision.Mat.t() | {:error, String.t()}
getWeights
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- layerIdx:
integer()
Return
- retval:
Evision.Mat.t()
Python prototype (for reference only):
getWeights(layerIdx) -> retval
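A hedged usage sketch; trained_model is assumed to be an already trained network and the layer index is illustrative only.

# Fetch the weight matrix associated with layer index 1 of a trained network.
weights = Evision.ML.ANNMLP.getWeights(trained_model, 1)
IO.inspect(weights)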
@spec isClassifier(Keyword.t()) :: any() | {:error, String.t()}
@spec isClassifier(t()) :: boolean() | {:error, String.t()}
Returns true if the model is classifier
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
bool
Python prototype (for reference only):
isClassifier() -> retval
@spec isTrained(Keyword.t()) :: any() | {:error, String.t()}
@spec isTrained(t()) :: boolean() | {:error, String.t()}
Returns true if the model is trained
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
Return
- retval:
bool
Python prototype (for reference only):
isTrained() -> retval
@spec load(Keyword.t()) :: any() | {:error, String.t()}
@spec load(binary()) :: t() | {:error, String.t()}
Loads and creates a serialized ANN from a file
Positional Arguments
- filepath:
String
Path to the serialized ANN.
Return
- retval:
Evision.ML.ANNMLP.t()
Use ANN::save to serialize and store an ANN to disk. Load the ANN from this file again by calling this function with the path to the file.
Python prototype (for reference only):
load(filepath) -> retval
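A hedged save/load round-trip sketch; the file name "mlp.yml" is illustrative only.

# Persist a trained network to disk and restore it later.
Evision.ML.ANNMLP.save(trained_model, "mlp.yml")
restored = Evision.ML.ANNMLP.load("mlp.yml")
true = Evision.ML.ANNMLP.isTrained(restored)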
@spec predict(t(), Evision.Mat.maybe_mat_in()) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Predicts response(s) for the provided sample(s)
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- samples:
Evision.Mat
The input samples, a floating-point matrix.
Keyword Arguments
- flags:
integer()
The optional flags, model-dependent. See cv::ml::StatModel::Flags.
Return
- retval:
float
- results:
Evision.Mat.t()
The optional output matrix of results.
Python prototype (for reference only):
predict(samples[, results[, flags]]) -> retval, results
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{:flags, term()}] | nil) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Predicts response(s) for the provided sample(s)
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- samples:
Evision.Mat
The input samples, a floating-point matrix.
Keyword Arguments
- flags:
integer()
The optional flags, model-dependent. See cv::ml::StatModel::Flags.
Return
- retval:
float
- results:
Evision.Mat.t()
The optional output matrix of results.
Python prototype (for reference only):
predict(samples[, results[, flags]]) -> retval, results
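A minimal prediction sketch (hedged): it assumes the model was trained on 2-feature rows and that Evision.Mat.literal/2 builds a 32-bit float matrix from a nested list.

# One 2-feature sample as a single-row f32 matrix (values are illustrative).
sample = Evision.Mat.literal([[0.5, -1.0]], :f32)
{_retval, results} = Evision.ML.ANNMLP.predict(model, sample)
# results is an Evision.Mat with one row of network outputs per input row.
IO.inspect(results)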
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}
Reads algorithm parameters from a file storage
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- func:
Evision.FileNode
Python prototype (for reference only):
read(fn) -> None
save
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- filename:
String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Python prototype (for reference only):
save(filename) -> None
setActivationFunction
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- type:
integer()
The type of activation function. See ANN_MLP::ActivationFunctions.
Keyword Arguments
- param1:
double
The first parameter of the activation function, α. Default value is 0.
- param2:
double
The second parameter of the activation function, β. Default value is 0.
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
Python prototype (for reference only):
setActivationFunction(type[, param1[, param2]]) -> None
@spec setActivationFunction(t(), integer(), [param1: term(), param2: term()] | nil) :: t() | {:error, String.t()}
setActivationFunction
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- type:
integer()
The type of activation function. See ANN_MLP::ActivationFunctions.
Keyword Arguments
- param1:
double
The first parameter of the activation function, α. Default value is 0.
- param2:
double
The second parameter of the activation function, β. Default value is 0.
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
Python prototype (for reference only):
setActivationFunction(type[, param1[, param2]]) -> None
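A hedged configuration sketch; the integer 1 corresponds to ANN_MLP::SIGMOID_SYM in OpenCV's ActivationFunctions enum.

# Symmetric sigmoid activation (ANN_MLP::SIGMOID_SYM == 1) with alpha = beta = 1.0.
model = Evision.ML.ANNMLP.setActivationFunction(model, 1, param1: 1.0, param2: 1.0)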
setAnnealCoolingRatio
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
Python prototype (for reference only):
setAnnealCoolingRatio(val) -> None
setAnnealFinalT
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getAnnealFinalT/1
Python prototype (for reference only):
setAnnealFinalT(val) -> None
setAnnealInitialT
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getAnnealInitialT/1
Python prototype (for reference only):
setAnnealInitialT(val) -> None
setAnnealItePerStep
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
integer()
Python prototype (for reference only):
setAnnealItePerStep(val) -> None
setBackpropMomentumScale
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getBackpropMomentumScale/1
Python prototype (for reference only):
setBackpropMomentumScale(val) -> None
setBackpropWeightScale
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
Python prototype (for reference only):
setBackpropWeightScale(val) -> None
@spec setLayerSizes(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
setLayerSizes
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- layer_sizes:
Evision.Mat
Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. Default value is an empty Mat. @see getLayerSizes/1
Python prototype (for reference only):
setLayerSizes(_layer_sizes) -> None
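A hedged topology sketch, again assuming Evision.Mat.literal/2 builds an integer matrix from a nested list.

# 3 inputs, two hidden layers (8 and 4 neurons), 2 outputs.
sizes = Evision.Mat.literal([[3, 8, 4, 2]], :s32)
model = Evision.ML.ANNMLP.setLayerSizes(model, sizes)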
setRpropDW0
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getRpropDW0/1
Python prototype (for reference only):
setRpropDW0(val) -> None
setRpropDWMax
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getRpropDWMax/1
Python prototype (for reference only):
setRpropDWMax(val) -> None
setRpropDWMin
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getRpropDWMin/1
Python prototype (for reference only):
setRpropDWMin(val) -> None
setRpropDWMinus
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getRpropDWMinus/1
Python prototype (for reference only):
setRpropDWMinus(val) -> None
setRpropDWPlus
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
double
@see getRpropDWPlus/1
Python prototype (for reference only):
setRpropDWPlus(val) -> None
setTermCriteria
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- val:
TermCriteria
@see getTermCriteria/1
Python prototype (for reference only):
setTermCriteria(val) -> None
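A hedged sketch; per getTermCriteria/1 above, a TermCriteria is represented as a {type, maxCount, epsilon} tuple, and type 3 is OpenCV's TermCriteria::MAX_ITER (1) combined with TermCriteria::EPS (2).

# Stop after 1000 iterations or once the change drops below 1.0e-6.
model = Evision.ML.ANNMLP.setTermCriteria(model, {3, 1000, 1.0e-6})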
setTrainMethod
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- method:
integer()
Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
Keyword Arguments
- param1:
double
Passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.
- param2:
double
Passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.
Sets training method and common parameters.
Python prototype (for reference only):
setTrainMethod(method[, param1[, param2]]) -> None
@spec setTrainMethod(t(), integer(), [param1: term(), param2: term()] | nil) :: t() | {:error, String.t()}
setTrainMethod
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- method:
integer()
Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
Keyword Arguments
- param1:
double
Passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.
- param2:
double
Passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.
Sets training method and common parameters.
Python prototype (for reference only):
setTrainMethod(method[, param1[, param2]]) -> None
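A hedged sketch; the integer 0 corresponds to ANN_MLP::BACKPROP in OpenCV's TrainingMethods enum, and param1/param2 map to the backprop weight and momentum scales as described above.

# Backpropagation (ANN_MLP::BACKPROP == 0), weight scale 0.1, momentum scale 0.1.
model = Evision.ML.ANNMLP.setTrainMethod(model, 0, param1: 0.1, param2: 0.1)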
@spec train(t(), Evision.ML.TrainData.t()) :: boolean() | {:error, String.t()}
Trains the statistical model
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- trainData:
Evision.ML.TrainData.t()
Training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
- flags:
integer()
Optional flags, depending on the model. Some of the models can be updated with new training samples rather than completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Return
- retval:
bool
Python prototype (for reference only):
train(trainData[, flags]) -> retval
@spec train(t(), Evision.ML.TrainData.t(), [{:flags, term()}] | nil) :: boolean() | {:error, String.t()}
Trains the statistical model
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- trainData:
Evision.ML.TrainData.t()
Training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
- flags:
integer()
Optional flags, depending on the model. Some of the models can be updated with new training samples rather than completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Return
- retval:
bool
Python prototype (for reference only):
train(trainData[, flags]) -> retval
@spec train(t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in()) :: boolean() | {:error, String.t()}
Trains the statistical model
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- samples:
Evision.Mat
Training samples.
- layout:
integer()
See ml::SampleTypes.
- responses:
Evision.Mat
Vector of responses associated with the training samples.
Return
- retval:
bool
Python prototype (for reference only):
train(samples, layout, responses) -> retval
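Putting the pieces together, a hedged end-to-end sketch: the data and topology are toy values, Evision.Mat.literal/2 is assumed to build a matrix from a nested list and a type atom, layout 0 is OpenCV's ROW_SAMPLE (one sample per row), the activation 1 is ANN_MLP::SIGMOID_SYM, and the training method 1 is ANN_MLP::RPROP.

# Toy XOR-style training run; every numeric value below is illustrative.
samples   = Evision.Mat.literal([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], :f32)
responses = Evision.Mat.literal([[0.0], [1.0], [1.0], [0.0]], :f32)

model =
  Evision.ML.ANNMLP.create()
  |> Evision.ML.ANNMLP.setLayerSizes(Evision.Mat.literal([[2, 4, 1]], :s32))
  |> Evision.ML.ANNMLP.setActivationFunction(1)            # ANN_MLP::SIGMOID_SYM
  |> Evision.ML.ANNMLP.setTrainMethod(1)                   # ANN_MLP::RPROP
  |> Evision.ML.ANNMLP.setTermCriteria({3, 1000, 1.0e-6})  # MAX_ITER + EPS

true = Evision.ML.ANNMLP.train(model, samples, 0, responses)  # 0 = ROW_SAMPLE
{_ret, outputs} = Evision.ML.ANNMLP.predict(model, samples)
IO.inspect(outputs)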
@spec write(t(), Evision.FileStorage.t()) :: t() | {:error, String.t()}
Stores algorithm parameters in a file storage
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- fs:
Evision.FileStorage
Python prototype (for reference only):
write(fs) -> None
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}
write
Positional Arguments
- self:
Evision.ML.ANNMLP.t()
- fs:
Evision.FileStorage
- name:
String
Has overloading in C++
Python prototype (for reference only):
write(fs, name) -> None