Evision.ML.SVM (Evision v0.2.9)
Summary
Functions
Computes error on the training or test dataset
Computes error on the training or test dataset
Clears the algorithm state
create
empty
getC
getClassWeights
getCoef0
Retrieves the decision function
Retrieves the decision function
Generates a grid for SVM parameters.
getDefaultName
getDegree
getGamma
getKernelType
getNu
getP
Retrieves all the support vectors
getTermCriteria
getType
Retrieves all the uncompressed support vectors of a linear SVM
Returns the number of variables in training samples
Returns true if the model is a classifier
Returns true if the model is trained
Loads and creates a serialized SVM from a file
Predicts response(s) for the provided sample(s)
Predicts response(s) for the provided sample(s)
Reads algorithm parameters from a file storage
save
setC
setClassWeights
setCoef0
setDegree
setGamma
setKernel
setNu
setP
setTermCriteria
setType
Trains the statistical model
Trains the statistical model
Trains the statistical model
Trains an SVM with optimal parameters
Trains an SVM with optimal parameters
Stores algorithm parameters in a file storage
write
Types
@type t() :: %Evision.ML.SVM{ref: reference()}
Type that represents an ML.SVM struct.
- ref: reference()
The underlying Erlang resource variable.
Functions
@spec calcError(t(), Evision.ML.TrainData.t(), boolean()) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Computes error on the training or test dataset
Positional Arguments
- self:
Evision.ML.SVM.t()
- data:
Evision.ML.TrainData.t()
The training data.
- test:
bool
If true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want to not set the test subset at all with TrainData::setTrainTestSplitRatio and to specify test=false, so that the error is computed for the whole new set. Yes, this sounds a bit confusing.
Return
- retval:
float
- resp:
Evision.Mat.t()
The optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as a percentage of misclassified samples (0%-100%).
Python prototype (for reference only):
calcError(data, test[, resp]) -> retval, resp
@spec calcError( t(), Evision.ML.TrainData.t(), boolean(), [{atom(), term()}, ...] | nil ) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Computes error on the training or test dataset
Positional Arguments
- self:
Evision.ML.SVM.t()
- data:
Evision.ML.TrainData.t()
The training data.
- test:
bool
If true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want to not set the test subset at all with TrainData::setTrainTestSplitRatio and to specify test=false, so that the error is computed for the whole new set. Yes, this sounds a bit confusing.
Return
- retval:
float
- resp:
Evision.Mat.t()
The optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as a percentage of misclassified samples (0%-100%).
Python prototype (for reference only):
calcError(data, test[, resp]) -> retval, resp
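As a rough, end-to-end sketch (not taken verbatim from the Evision docs): it assumes Nx tensors are accepted wherever Evision.Mat.maybe_mat_in() is expected, that Evision.ML.TrainData.create/3 mirrors TrainData::create, and it passes the raw OpenCV enum values (0 for ml::ROW_SAMPLE, 100 for SVM::C_SVC, 2 for SVM::RBF) directly rather than named constants.

```elixir
# Toy 2-class problem: four samples with two float32 features each.
samples = Nx.tensor([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 0.9]], type: :f32)
labels = Nx.tensor([[0], [0], [1], [1]], type: :s32)

# 0 == cv::ml::ROW_SAMPLE (each row is one sample).
train_data = Evision.ML.TrainData.create(samples, 0, labels)

svm = Evision.ML.SVM.create()
svm = Evision.ML.SVM.setType(svm, 100)   # 100 == cv::ml::SVM::C_SVC
svm = Evision.ML.SVM.setKernel(svm, 2)   # 2 == cv::ml::SVM::RBF
true = Evision.ML.SVM.train(svm, train_data)

# test: false -> the error is computed over the training subset
# (here the whole set, since no train/test split was made).
{error_pct, _resp} = Evision.ML.SVM.calcError(svm, train_data, false)
IO.inspect(error_pct, label: "training error (%)")
```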
@spec clear(Keyword.t()) :: any() | {:error, String.t()}
@spec clear(t()) :: t() | {:error, String.t()}
Clears the algorithm state
Positional Arguments
- self:
Evision.ML.SVM.t()
Python prototype (for reference only):
clear() -> None
create
Return
- retval:
Evision.ML.SVM.t()
Creates an empty model. Use StatModel::train to train the model. Since SVM has several parameters, you may want to find the best parameters for your problem; this can be done with SVM::trainAuto.
Python prototype (for reference only):
create() -> retval
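A minimal configuration sketch; the setters documented below appear (per their specs) to return the updated model, so the result can be reassigned or piped. The integer values here are the raw OpenCV enums (100 for SVM::C_SVC, 0 for SVM::LINEAR) and are an assumption of this example rather than named Evision constants.

```elixir
svm = Evision.ML.SVM.create()

# Configure before training; see setType/2, setKernel/2, setC/2, etc.
svm = Evision.ML.SVM.setType(svm, 100)   # 100 == cv::ml::SVM::C_SVC
svm = Evision.ML.SVM.setKernel(svm, 0)   # 0 == cv::ml::SVM::LINEAR
svm = Evision.ML.SVM.setC(svm, 1.0)

# Then train with train/2 (TrainData) or train/4 (samples, layout, responses),
# or search for good parameters with trainAuto/4.
```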
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}
empty
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
bool
Python prototype (for reference only):
empty() -> retval
@spec getC(Keyword.t()) :: any() | {:error, String.t()}
@spec getC(t()) :: number() | {:error, String.t()}
getC
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
double
@see setC/2
Python prototype (for reference only):
getC() -> retval
@spec getClassWeights(Keyword.t()) :: any() | {:error, String.t()}
@spec getClassWeights(t()) :: Evision.Mat.t() | {:error, String.t()}
getClassWeights
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
Evision.Mat.t()
@see setClassWeights/2
Python prototype (for reference only):
getClassWeights() -> retval
@spec getCoef0(Keyword.t()) :: any() | {:error, String.t()}
@spec getCoef0(t()) :: number() | {:error, String.t()}
getCoef0
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
double
@see setCoef0/2
Python prototype (for reference only):
getCoef0() -> retval
@spec getDecisionFunction(t(), integer()) :: {number(), Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Retrieves the decision function
Positional Arguments
- self:
Evision.ML.SVM.t()
- i:
integer()
The index of the decision function. If the problem solved is regression, 1-class or 2-class classification, then there will be just one decision function and the index should always be 0. Otherwise, in the case of N-class classification, there will be N(N-1)/2 decision functions.
Return
- retval:
double
- alpha:
Evision.Mat.t()
The optional output vector of weights, corresponding to different support vectors. In the case of linear SVM all the alpha's will be 1's.
- svidx:
Evision.Mat.t()
The optional output vector of indices of support vectors within the matrix of support vectors (which can be retrieved by SVM::getSupportVectors). In the case of linear SVM each decision function consists of a single "compressed" support vector.
The method returns the rho parameter of the decision function, a scalar subtracted from the weighted sum of kernel responses.
Python prototype (for reference only):
getDecisionFunction(i[, alpha[, svidx]]) -> retval, alpha, svidx
@spec getDecisionFunction(t(), integer(), [{atom(), term()}, ...] | nil) :: {number(), Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Retrieves the decision function
Positional Arguments
- self:
Evision.ML.SVM.t()
- i:
integer()
The index of the decision function. If the problem solved is regression, 1-class or 2-class classification, then there will be just one decision function and the index should always be 0. Otherwise, in the case of N-class classification, there will be N(N-1)/2 decision functions.
Return
- retval:
double
- alpha:
Evision.Mat.t()
The optional output vector of weights, corresponding to different support vectors. In the case of linear SVM all the alpha's will be 1's.
- svidx:
Evision.Mat.t()
The optional output vector of indices of support vectors within the matrix of support vectors (which can be retrieved by SVM::getSupportVectors). In the case of linear SVM each decision function consists of a single "compressed" support vector.
The method returns the rho parameter of the decision function, a scalar subtracted from the weighted sum of kernel responses.
Python prototype (for reference only):
getDecisionFunction(i[, alpha[, svidx]]) -> retval, alpha, svidx
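For illustration, assuming svm is an already trained 2-class model (for example as in the calcError/3 sketch above) and that Evision.Mat.to_nx/1 is available to inspect the returned matrices:

```elixir
# A trained 2-class model has a single decision function at index 0.
{rho, _alpha, svidx} = Evision.ML.SVM.getDecisionFunction(svm, 0)

# rho is the scalar offset; svidx is returned as an Evision.Mat.
IO.inspect(rho, label: "rho")
IO.inspect(Evision.Mat.to_nx(svidx), label: "support vector indices")
```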
@spec getDefaultGridPtr(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultGridPtr(integer()) :: Evision.ML.ParamGrid.t() | {:error, String.t()}
Generates a grid for SVM parameters.
Positional Arguments
- param_id:
integer()
SVM parameter ID that must be one of the SVM::ParamTypes. The grid is generated for the parameter with this ID.
Return
- retval:
Evision.ML.ParamGrid.t()
The function generates a grid pointer for the specified parameter of the SVM algorithm. The grid may be passed to the function SVM::trainAuto.
Python prototype (for reference only):
getDefaultGridPtr(param_id) -> retval
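A hedged sketch: the value 0 is the raw OpenCV SVM::ParamTypes enum for SVM::C, and passing the resulting grid to trainAuto via its cgrid option is an assumption based on the trainAuto/5 spec further below.

```elixir
# cv::ml::SVM::ParamTypes: C = 0, GAMMA = 1, P = 2, NU = 3, COEF = 4, DEGREE = 5.
c_grid = Evision.ML.SVM.getDefaultGridPtr(0)

# The grid could then be handed to trainAuto/5, e.g. as the `cgrid:` option.
```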
@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}
getDefaultName
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
String
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.
Python prototype (for reference only):
getDefaultName() -> retval
@spec getDegree(Keyword.t()) :: any() | {:error, String.t()}
@spec getDegree(t()) :: number() | {:error, String.t()}
getDegree
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
double
@see setDegree/2
Python prototype (for reference only):
getDegree() -> retval
@spec getGamma(Keyword.t()) :: any() | {:error, String.t()}
@spec getGamma(t()) :: number() | {:error, String.t()}
getGamma
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
double
@see setGamma/2
Python prototype (for reference only):
getGamma() -> retval
@spec getKernelType(Keyword.t()) :: any() | {:error, String.t()}
@spec getKernelType(t()) :: integer() | {:error, String.t()}
getKernelType
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
integer()
Type of an SVM kernel. See SVM::KernelTypes. Default value is SVM::RBF.
Python prototype (for reference only):
getKernelType() -> retval
@spec getNu(Keyword.t()) :: any() | {:error, String.t()}
@spec getNu(t()) :: number() | {:error, String.t()}
getNu
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
double
@see setNu/2
Python prototype (for reference only):
getNu() -> retval
@spec getP(Keyword.t()) :: any() | {:error, String.t()}
@spec getP(t()) :: number() | {:error, String.t()}
getP
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
double
@see setP/2
Python prototype (for reference only):
getP() -> retval
@spec getSupportVectors(Keyword.t()) :: any() | {:error, String.t()}
@spec getSupportVectors(t()) :: Evision.Mat.t() | {:error, String.t()}
Retrieves all the support vectors
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
Evision.Mat.t()
The method returns all the support vectors as a floating-point matrix, where support vectors are stored as matrix rows.
Python prototype (for reference only):
getSupportVectors() -> retval
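For example, given a trained model, the row layout can be checked by converting the result to an Nx tensor (assuming Evision.Mat.to_nx/1):

```elixir
sv = Evision.ML.SVM.getSupportVectors(svm)

# One support vector per row: {number_of_support_vectors, number_of_features}.
sv |> Evision.Mat.to_nx() |> Nx.shape()
```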
@spec getTermCriteria(Keyword.t()) :: any() | {:error, String.t()}
@spec getTermCriteria(t()) :: {integer(), integer(), number()} | {:error, String.t()}
getTermCriteria
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
cv::TermCriteria
@see setTermCriteria/2
Python prototype (for reference only):
getTermCriteria() -> retval
@spec getType(Keyword.t()) :: any() | {:error, String.t()}
@spec getType(t()) :: integer() | {:error, String.t()}
getType
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
integer()
@see setType/2
Python prototype (for reference only):
getType() -> retval
@spec getUncompressedSupportVectors(Keyword.t()) :: any() | {:error, String.t()}
@spec getUncompressedSupportVectors(t()) :: Evision.Mat.t() | {:error, String.t()}
Retrieves all the uncompressed support vectors of a linear SVM
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
Evision.Mat.t()
The method returns all the uncompressed support vectors of a linear SVM that the compressed support vector, used for prediction, was derived from. They are returned in a floating-point matrix, where the support vectors are stored as matrix rows.
Python prototype (for reference only):
getUncompressedSupportVectors() -> retval
@spec getVarCount(Keyword.t()) :: any() | {:error, String.t()}
@spec getVarCount(t()) :: integer() | {:error, String.t()}
Returns the number of variables in training samples
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
integer()
Python prototype (for reference only):
getVarCount() -> retval
@spec isClassifier(Keyword.t()) :: any() | {:error, String.t()}
@spec isClassifier(t()) :: boolean() | {:error, String.t()}
Returns true if the model is a classifier
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
bool
Python prototype (for reference only):
isClassifier() -> retval
@spec isTrained(Keyword.t()) :: any() | {:error, String.t()}
@spec isTrained(t()) :: boolean() | {:error, String.t()}
Returns true if the model is trained
Positional Arguments
- self:
Evision.ML.SVM.t()
Return
- retval:
bool
Python prototype (for reference only):
isTrained() -> retval
@spec load(Keyword.t()) :: any() | {:error, String.t()}
@spec load(binary()) :: t() | {:error, String.t()}
Loads and creates a serialized SVM from a file
Positional Arguments
- filepath:
String
Path to the serialized SVM.
Return
- retval:
Evision.ML.SVM.t()
Use SVM::save to serialize and store an SVM to disk. Load the SVM from this file again by calling this function with the path to the file.
Python prototype (for reference only):
load(filepath) -> retval
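A possible save/load round trip, assuming a trained model bound to svm; the file name is arbitrary (OpenCV's FileStorage understands .xml, .yml/.yaml and .json extensions):

```elixir
# Persist a trained model to disk ...
Evision.ML.SVM.save(svm, "svm_model.xml")

# ... and restore it later.
svm2 = Evision.ML.SVM.load("svm_model.xml")
true = Evision.ML.SVM.isTrained(svm2)
```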
@spec predict(t(), Evision.Mat.maybe_mat_in()) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Predicts response(s) for the provided sample(s)
Positional Arguments
- self:
Evision.ML.SVM.t()
- samples:
Evision.Mat
The input samples, a floating-point matrix.
Keyword Arguments
- flags:
integer()
The optional flags, model-dependent. See cv::ml::StatModel::Flags.
Return
- retval:
float
- results:
Evision.Mat.t()
The optional output matrix of results.
Python prototype (for reference only):
predict(samples[, results[, flags]]) -> retval, results
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{:flags, term()}] | nil) :: {number(), Evision.Mat.t()} | {:error, String.t()}
Predicts response(s) for the provided sample(s)
Positional Arguments
- self:
Evision.ML.SVM.t()
- samples:
Evision.Mat
The input samples, a floating-point matrix.
Keyword Arguments
- flags:
integer()
The optional flags, model-dependent. See cv::ml::StatModel::Flags.
Return
- retval:
float
- results:
Evision.Mat.t()
The optional output matrix of results.
Python prototype (for reference only):
predict(samples[, results[, flags]]) -> retval, results
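A minimal prediction sketch, assuming a trained classifier bound to svm, that Nx tensors are accepted as input samples, and that Evision.Mat.to_nx/1 is available:

```elixir
# Two query samples, float32, with the same feature layout as the training data.
query = Nx.tensor([[0.05, 0.05], [0.95, 0.95]], type: :f32)

{_retval, results} = Evision.ML.SVM.predict(svm, query)

# One predicted response per input row.
Evision.Mat.to_nx(results)
```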
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}
Reads algorithm parameters from a file storage
Positional Arguments
- self:
Evision.ML.SVM.t()
- func:
Evision.FileNode
Python prototype (for reference only):
read(fn) -> None
save
Positional Arguments
- self:
Evision.ML.SVM.t()
- filename:
String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Python prototype (for reference only):
save(filename) -> None
setC
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
double
@see getC/1
Python prototype (for reference only):
setC(val) -> None
@spec setClassWeights(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
setClassWeights
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
Evision.Mat
@see getClassWeights/1
Python prototype (for reference only):
setClassWeights(val) -> None
setCoef0
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
double
@see getCoef0/1
Python prototype (for reference only):
setCoef0(val) -> None
setDegree
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
double
@see getDegree/1
Python prototype (for reference only):
setDegree(val) -> None
setGamma
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
double
@see getGamma/1
Python prototype (for reference only):
setGamma(val) -> None
setKernel
Positional Arguments
- self:
Evision.ML.SVM.t()
- kernelType:
integer()
Initialize with one of the predefined kernels. See SVM::KernelTypes.
Python prototype (for reference only):
setKernel(kernelType) -> None
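For reference, the raw OpenCV SVM::KernelTypes values can be passed directly; named Evision constants are not assumed here.

```elixir
# cv::ml::SVM::KernelTypes: LINEAR = 0, POLY = 1, RBF = 2, SIGMOID = 3, CHI2 = 4, INTER = 5.
svm = Evision.ML.SVM.setKernel(svm, 0)  # linear kernel

# POLY and SIGMOID kernels also rely on gamma/coef0/degree;
# see setGamma/2, setCoef0/2 and setDegree/2.
```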
setNu
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
double
@see getNu/1
Python prototype (for reference only):
setNu(val) -> None
setP
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
double
@see getP/1
Python prototype (for reference only):
setP(val) -> None
setTermCriteria
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
TermCriteria
@see getTermCriteria/1
Python prototype (for reference only):
setTermCriteria(val) -> None
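A sketch that assumes the setter accepts the same {type, maxCount, epsilon} tuple that getTermCriteria/1 returns, with OpenCV's TermCriteria flags COUNT = 1 and EPS = 2:

```elixir
# 3 == COUNT + EPS: stop after at most 1000 iterations or once accuracy reaches 1.0e-6.
svm = Evision.ML.SVM.setTermCriteria(svm, {3, 1000, 1.0e-6})
```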
setType
Positional Arguments
- self:
Evision.ML.SVM.t()
- val:
integer()
@see getType/1
Python prototype (for reference only):
setType(val) -> None
@spec train(t(), Evision.ML.TrainData.t()) :: boolean() | {:error, String.t()}
Trains the statistical model
Positional Arguments
- self:
Evision.ML.SVM.t()
- trainData:
Evision.ML.TrainData.t()
Training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
- flags:
integer()
Optional flags, depending on the model. Some of the models can be updated with new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Return
- retval:
bool
Python prototype (for reference only):
train(trainData[, flags]) -> retval
@spec train(t(), Evision.ML.TrainData.t(), [{:flags, term()}] | nil) :: boolean() | {:error, String.t()}
Trains the statistical model
Positional Arguments
- self:
Evision.ML.SVM.t()
- trainData:
Evision.ML.TrainData.t()
Training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
- flags:
integer()
Optional flags, depending on the model. Some of the models can be updated with new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Return
- retval:
bool
Python prototype (for reference only):
train(trainData[, flags]) -> retval
@spec train(t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in()) :: boolean() | {:error, String.t()}
Trains the statistical model
Positional Arguments
- self:
Evision.ML.SVM.t()
- samples:
Evision.Mat
Training samples.
- layout:
integer()
See ml::SampleTypes.
- responses:
Evision.Mat
Vector of responses associated with the training samples.
Return
- retval:
bool
Python prototype (for reference only):
train(samples, layout, responses) -> retval
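A sketch of the direct form without a TrainData object, assuming svm was created and configured as in the create/0 example, that Nx tensors are accepted for samples and responses, and that 0 is the raw ml::ROW_SAMPLE value:

```elixir
samples = Nx.tensor([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 0.9]], type: :f32)
responses = Nx.tensor([[0], [0], [1], [1]], type: :s32)

# 0 == cv::ml::ROW_SAMPLE - each row of `samples` is one training sample.
true = Evision.ML.SVM.train(svm, samples, 0, responses)
```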
@spec trainAuto( t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in() ) :: boolean() | {:error, String.t()}
Trains an SVM with optimal parameters
Positional Arguments
- self:
Evision.ML.SVM.t()
- samples:
Evision.Mat
Training samples.
- layout:
integer()
See ml::SampleTypes.
- responses:
Evision.Mat
Vector of responses associated with the training samples.
Keyword Arguments
- kFold:
integer()
Cross-validation parameter. The training set is divided into kFold subsets. One subset is used to test the model, the others form the train set. So, the SVM algorithm is executed kFold times.
- cgrid:
Evision.ML.ParamGrid.t()
Grid for C.
- gammaGrid:
Evision.ML.ParamGrid.t()
Grid for gamma.
- pGrid:
Evision.ML.ParamGrid.t()
Grid for p.
- nuGrid:
Evision.ML.ParamGrid.t()
Grid for nu.
- coeffGrid:
Evision.ML.ParamGrid.t()
Grid for coeff.
- degreeGrid:
Evision.ML.ParamGrid.t()
Grid for degree.
- balanced:
bool
If true and the problem is 2-class classification, then the method creates more balanced cross-validation subsets, that is, the proportions between classes in the subsets are close to the proportions in the whole training dataset.
Return
- retval:
bool
The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree. Parameters are considered optimal when the cross-validation estimate of the test set error is minimal. This function only makes use of SVM::getDefaultGrid for parameter optimization and thus only offers rudimentary parameter options. This function works for the classification (SVM::C_SVC or SVM::NU_SVC) as well as for the regression (SVM::EPS_SVR or SVM::NU_SVR). If it is SVM::ONE_CLASS, no optimization is made and the usual SVM with parameters specified in params is executed.
Python prototype (for reference only):
trainAuto(samples, layout, responses[, kFold[, Cgrid[, gammaGrid[, pGrid[, nuGrid[, coeffGrid[, degreeGrid[, balanced]]]]]]]]) -> retval
@spec trainAuto( t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in(), [ balanced: term(), cgrid: term(), coeffGrid: term(), degreeGrid: term(), gammaGrid: term(), kFold: term(), nuGrid: term(), pGrid: term() ] | nil ) :: boolean() | {:error, String.t()}
Trains an SVM with optimal parameters
Positional Arguments
- self:
Evision.ML.SVM.t()
- samples:
Evision.Mat
Training samples.
- layout:
integer()
See ml::SampleTypes.
- responses:
Evision.Mat
Vector of responses associated with the training samples.
Keyword Arguments
- kFold:
integer()
Cross-validation parameter. The training set is divided into kFold subsets. One subset is used to test the model, the others form the train set. So, the SVM algorithm is executed kFold times.
- cgrid:
Evision.ML.ParamGrid.t()
Grid for C.
- gammaGrid:
Evision.ML.ParamGrid.t()
Grid for gamma.
- pGrid:
Evision.ML.ParamGrid.t()
Grid for p.
- nuGrid:
Evision.ML.ParamGrid.t()
Grid for nu.
- coeffGrid:
Evision.ML.ParamGrid.t()
Grid for coeff.
- degreeGrid:
Evision.ML.ParamGrid.t()
Grid for degree.
- balanced:
bool
If true and the problem is 2-class classification, then the method creates more balanced cross-validation subsets, that is, the proportions between classes in the subsets are close to the proportions in the whole training dataset.
Return
- retval:
bool
The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree. Parameters are considered optimal when the cross-validation estimate of the test set error is minimal. This function only makes use of SVM::getDefaultGrid for parameter optimization and thus only offers rudimentary parameter options. This function works for the classification (SVM::C_SVC or SVM::NU_SVC) as well as for the regression (SVM::EPS_SVR or SVM::NU_SVR). If it is SVM::ONE_CLASS, no optimization is made and the usual SVM with parameters specified in params is executed.
Python prototype (for reference only):
trainAuto(samples, layout, responses[, kFold[, Cgrid[, gammaGrid[, pGrid[, nuGrid[, coeffGrid[, degreeGrid[, balanced]]]]]]]]) -> retval
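A hedged sketch, assuming samples and responses are row-major training tensors as in the train/4 example and reusing the keyword names from the spec (kFold, balanced); the grid options are left at their defaults:

```elixir
# Cross-validated search for C, gamma, etc. over the built-in default grids.
true =
  Evision.ML.SVM.trainAuto(svm, samples, 0, responses,
    kFold: 5,
    balanced: true
  )
```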
@spec write(t(), Evision.FileStorage.t()) :: t() | {:error, String.t()}
Stores algorithm parameters in a file storage
Positional Arguments
- self:
Evision.ML.SVM.t()
- fs:
Evision.FileStorage
Python prototype (for reference only):
write(fs) -> None
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}
write
Positional Arguments
- self:
Evision.ML.SVM.t()
- fs:
Evision.FileStorage
- name:
String
Has overloading in C++
Python prototype (for reference only):
write(fs, name) -> None