Evision.ML.SVM (Evision v0.2.9)

Summary

Types

t()

Type that represents an ML.SVM struct.

Functions

  • calcError/3, calcError/4 - Computes error on the training or test dataset
  • clear/1 - Clears the algorithm state
  • create/0 - Creates an empty model
  • getClassWeights/1
  • getDecisionFunction/2, getDecisionFunction/3 - Retrieves the decision function
  • getDefaultGridPtr/1 - Generates a grid for SVM parameters
  • getSupportVectors/1 - Retrieves all the support vectors
  • getTermCriteria/1
  • getUncompressedSupportVectors/1 - Retrieves all the uncompressed support vectors of a linear SVM
  • getVarCount/1 - Returns the number of variables in training samples
  • isClassifier/1 - Returns true if the model is a classifier
  • isTrained/1 - Returns true if the model is trained
  • load/1 - Loads and creates a serialized SVM from a file
  • predict/2, predict/3 - Predicts response(s) for the provided sample(s)
  • read/2 - Reads algorithm parameters from a file storage
  • setClassWeights/2
  • setTermCriteria/2
  • train/2, train/3, train/4 - Trains the statistical model
  • trainAuto/4, trainAuto/5 - Trains an SVM with optimal parameters
  • write/2, write/3 - Stores algorithm parameters in a file storage

Types

@type t() :: %Evision.ML.SVM{ref: reference()}

Type that represents an ML.SVM struct.

  • ref: reference()

    The underlying erlang resource variable.

Functions

@spec calcError(Keyword.t()) :: any() | {:error, String.t()}
calcError(self, data, test)
@spec calcError(t(), Evision.ML.TrainData.t(), boolean()) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Computes error on the training or test dataset

Positional Arguments
  • self: Evision.ML.SVM.t()

  • data: Evision.ML.TrainData.t().

    the training data

  • test: bool.

    if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset just to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.

Return
  • retval: float

  • resp: Evision.Mat.t().

    the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as a percentage of misclassified samples (0%-100%).

Python prototype (for reference only):

calcError(data, test[, resp]) -> retval, resp
calcError(self, data, test, opts)
@spec calcError(
  t(),
  Evision.ML.TrainData.t(),
  boolean(),
  [{atom(), term()}, ...] | nil
) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Computes error on the training or test dataset

Positional Arguments
  • self: Evision.ML.SVM.t()

  • data: Evision.ML.TrainData.t().

    the training data

  • test: bool.

    if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset just to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.

Return
  • retval: float

  • resp: Evision.Mat.t().

    the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as a percentage of misclassified samples (0%-100%).

Python prototype (for reference only):

calcError(data, test[, resp]) -> retval, resp
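
A minimal usage sketch in Elixir. It assumes `svm` is already trained and `train_data` is an Evision.ML.TrainData.t() (for example built with Evision.ML.TrainData.create/3); both names are illustrative.

    # `false` computes the error over the training subset; `true` over the test
    # subset defined via TrainData::setTrainTestSplitRatio (if any was set).
    {train_error, _resp} = Evision.ML.SVM.calcError(svm, train_data, false)
    {test_error, _resp} = Evision.ML.SVM.calcError(svm, train_data, true)
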
@spec clear(Keyword.t()) :: any() | {:error, String.t()}
@spec clear(t()) :: t() | {:error, String.t()}

Clears the algorithm state

Positional Arguments
  • self: Evision.ML.SVM.t()

Python prototype (for reference only):

clear() -> None
@spec create() :: t() | {:error, String.t()}

create

Return
  • retval: Evision.ML.SVM.t()

Creates an empty model. Use StatModel::train to train the model. Since SVM has several parameters, you may want to find the best parameters for your problem; this can be done with SVM::trainAuto.

Python prototype (for reference only):

create() -> retval
@spec create(Keyword.t()) :: any() | {:error, String.t()}
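
A minimal configuration sketch in Elixir. The integer arguments are the usual OpenCV enum values written out by hand as an assumption (SVM::C_SVC = 100, SVM::RBF = 2, TermCriteria COUNT + EPS = 3); if your Evision version exposes these through a constants module, prefer that.

    svm =
      Evision.ML.SVM.create()
      |> Evision.ML.SVM.setType(100)                        # assumed value of SVM::C_SVC
      |> Evision.ML.SVM.setKernel(2)                        # assumed value of SVM::RBF
      |> Evision.ML.SVM.setC(1.0)
      |> Evision.ML.SVM.setGamma(0.5)
      |> Evision.ML.SVM.setTermCriteria({3, 1000, 1.0e-6})  # {COUNT + EPS, maxCount, epsilon}
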
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}

empty

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval
@spec getC(Keyword.t()) :: any() | {:error, String.t()}
@spec getC(t()) :: number() | {:error, String.t()}

getC

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: double

@see setC/2

Python prototype (for reference only):

getC() -> retval
getClassWeights(named_args)
@spec getClassWeights(Keyword.t()) :: any() | {:error, String.t()}
@spec getClassWeights(t()) :: Evision.Mat.t() | {:error, String.t()}

getClassWeights

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: Evision.Mat.t()

@see setClassWeights/2

Python prototype (for reference only):

getClassWeights() -> retval
@spec getCoef0(Keyword.t()) :: any() | {:error, String.t()}
@spec getCoef0(t()) :: number() | {:error, String.t()}

getCoef0

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: double

@see setCoef0/2

Python prototype (for reference only):

getCoef0() -> retval
getDecisionFunction(named_args)
@spec getDecisionFunction(Keyword.t()) :: any() | {:error, String.t()}
getDecisionFunction(self, i)
@spec getDecisionFunction(t(), integer()) ::
  {number(), Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Retrieves the decision function

Positional Arguments
  • self: Evision.ML.SVM.t()

  • i: integer().

    the index of the decision function. If the problem solved is regression, 1-class or 2-class classification, then there will be just one decision function and the index should always be 0. Otherwise, in the case of N-class classification, there will be N(N-1)/2 decision functions.

Return
  • retval: double

  • alpha: Evision.Mat.t().

    the optional output vector for weights, corresponding to different support vectors. In the case of a linear SVM all the alphas will be 1.

  • svidx: Evision.Mat.t().

    the optional output vector of indices of support vectors within the matrix of support vectors (which can be retrieved by SVM::getSupportVectors). In the case of a linear SVM each decision function consists of a single "compressed" support vector.

The method returns the rho parameter of the decision function, a scalar subtracted from the weighted sum of kernel responses.

Python prototype (for reference only):

getDecisionFunction(i[, alpha[, svidx]]) -> retval, alpha, svidx
getDecisionFunction(self, i, opts)
@spec getDecisionFunction(t(), integer(), [{atom(), term()}, ...] | nil) ::
  {number(), Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}

Retrieves the decision function

Positional Arguments
  • self: Evision.ML.SVM.t()

  • i: integer().

    the index of the decision function. If the problem solved is regression, 1-class or 2-class classification, then there will be just one decision function and the index should always be 0. Otherwise, in the case of N-class classification, there will be N(N-1)/2 decision functions.

Return
  • retval: double

  • alpha: Evision.Mat.t().

    the optional output vector for weights, corresponding to different support vectors. In the case of a linear SVM all the alphas will be 1.

  • svidx: Evision.Mat.t().

    the optional output vector of indices of support vectors within the matrix of support vectors (which can be retrieved by SVM::getSupportVectors). In the case of a linear SVM each decision function consists of a single "compressed" support vector.

The method returns the rho parameter of the decision function, a scalar subtracted from the weighted sum of kernel responses.

Python prototype (for reference only):

getDecisionFunction(i[, alpha[, svidx]]) -> retval, alpha, svidx
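
A minimal sketch in Elixir, assuming `svm` is a trained 2-class classifier, so index 0 is the only decision function:

    {rho, alpha, svidx} = Evision.ML.SVM.getDecisionFunction(svm, 0)
    # `rho` is a float; `alpha` and `svidx` are Evision.Mat.t() holding the weights
    # and the indices into the matrix returned by getSupportVectors/1.
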
getDefaultGridPtr(named_args)
@spec getDefaultGridPtr(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultGridPtr(integer()) :: Evision.ML.ParamGrid.t() | {:error, String.t()}

Generates a grid for SVM parameters.

Positional Arguments
  • param_id: integer().

    SVM parameter ID that must be one of the SVM::ParamTypes. The grid is generated for the parameter with this ID.

Return
  • retval: Evision.ML.ParamGrid.t()

The function generates a grid pointer for the specified parameter of the SVM algorithm. The grid may be passed to the function SVM::trainAuto.

Python prototype (for reference only):

getDefaultGridPtr(param_id) -> retval
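
A minimal sketch in Elixir, assuming SVM::ParamTypes::C has its usual OpenCV value of 0; the returned grid can then be passed to trainAuto/5 (e.g. via the cgrid option):

    c_grid = Evision.ML.SVM.getDefaultGridPtr(0)  # 0 is assumed to be SVM::ParamTypes::C
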
getDefaultName(named_args)
@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}

getDefaultName

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: String

Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.

Python prototype (for reference only):

getDefaultName() -> retval
@spec getDegree(Keyword.t()) :: any() | {:error, String.t()}
@spec getDegree(t()) :: number() | {:error, String.t()}

getDegree

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: double

@see setDegree/2

Python prototype (for reference only):

getDegree() -> retval
@spec getGamma(Keyword.t()) :: any() | {:error, String.t()}
@spec getGamma(t()) :: number() | {:error, String.t()}

getGamma

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: double

@see setGamma/2

Python prototype (for reference only):

getGamma() -> retval
getKernelType(named_args)
@spec getKernelType(Keyword.t()) :: any() | {:error, String.t()}
@spec getKernelType(t()) :: integer() | {:error, String.t()}

getKernelType

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: integer()

Type of the SVM kernel. See SVM::KernelTypes. Default value is SVM::RBF.

Python prototype (for reference only):

getKernelType() -> retval
@spec getNu(Keyword.t()) :: any() | {:error, String.t()}
@spec getNu(t()) :: number() | {:error, String.t()}

getNu

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: double

@see setNu/2

Python prototype (for reference only):

getNu() -> retval
@spec getP(Keyword.t()) :: any() | {:error, String.t()}
@spec getP(t()) :: number() | {:error, String.t()}

getP

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: double

@see setP/2

Python prototype (for reference only):

getP() -> retval
getSupportVectors(named_args)
@spec getSupportVectors(Keyword.t()) :: any() | {:error, String.t()}
@spec getSupportVectors(t()) :: Evision.Mat.t() | {:error, String.t()}

Retrieves all the support vectors

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: Evision.Mat.t()

The method returns all the support vectors as a floating-point matrix, where support vectors are stored as matrix rows.

Python prototype (for reference only):

getSupportVectors() -> retval
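
A minimal sketch in Elixir, assuming `svm` is trained and that Evision.Mat.to_nx/1 is available to convert the result into an Nx tensor for inspection:

    sv = Evision.ML.SVM.getSupportVectors(svm)
    # Each row of `sv` is one support vector; the shape gives the count and dimensionality.
    sv |> Evision.Mat.to_nx() |> Nx.shape()
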
getTermCriteria(named_args)
@spec getTermCriteria(Keyword.t()) :: any() | {:error, String.t()}
@spec getTermCriteria(t()) :: {integer(), integer(), number()} | {:error, String.t()}

getTermCriteria

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: cv::TermCriteria

@see setTermCriteria/2

Python prototype (for reference only):

getTermCriteria() -> retval
@spec getType(Keyword.t()) :: any() | {:error, String.t()}
@spec getType(t()) :: integer() | {:error, String.t()}

getType

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: integer()

@see setType/2

Python prototype (for reference only):

getType() -> retval
getUncompressedSupportVectors(named_args)
@spec getUncompressedSupportVectors(Keyword.t()) :: any() | {:error, String.t()}
@spec getUncompressedSupportVectors(t()) :: Evision.Mat.t() | {:error, String.t()}

Retrieves all the uncompressed support vectors of a linear SVM

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: Evision.Mat.t()

The method returns all the uncompressed support vectors of a linear SVM that the compressed support vector, used for prediction, was derived from. They are returned in a floating-point matrix, where the support vectors are stored as matrix rows.

Python prototype (for reference only):

getUncompressedSupportVectors() -> retval
@spec getVarCount(Keyword.t()) :: any() | {:error, String.t()}
@spec getVarCount(t()) :: integer() | {:error, String.t()}

Returns the number of variables in training samples

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: integer()

Python prototype (for reference only):

getVarCount() -> retval
isClassifier(named_args)
@spec isClassifier(Keyword.t()) :: any() | {:error, String.t()}
@spec isClassifier(t()) :: boolean() | {:error, String.t()}

Returns true if the model is a classifier

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: bool

Python prototype (for reference only):

isClassifier() -> retval
@spec isTrained(Keyword.t()) :: any() | {:error, String.t()}
@spec isTrained(t()) :: boolean() | {:error, String.t()}

Returns true if the model is trained

Positional Arguments
  • self: Evision.ML.SVM.t()
Return
  • retval: bool

Python prototype (for reference only):

isTrained() -> retval
@spec load(Keyword.t()) :: any() | {:error, String.t()}
@spec load(binary()) :: t() | {:error, String.t()}

Loads and creates a serialized SVM from a file

Positional Arguments
  • filepath: String.

    path to the serialized SVM

Return
  • retval: Evision.ML.SVM.t()

Use SVM::save to serialize and store an SVM to disk. Load the SVM from this file again by calling this function with the path to the file.

Python prototype (for reference only):

load(filepath) -> retval
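
A minimal round-trip sketch in Elixir; the file path is hypothetical and `svm` is assumed to be a trained model:

    Evision.ML.SVM.save(svm, "/tmp/my_svm.xml")
    svm2 = Evision.ML.SVM.load("/tmp/my_svm.xml")
    true = Evision.ML.SVM.isTrained(svm2)
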
@spec predict(Keyword.t()) :: any() | {:error, String.t()}
@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Predicts response(s) for the provided sample(s)

Positional Arguments
  • self: Evision.ML.SVM.t()

  • samples: Evision.Mat.

    The input samples, floating-point matrix

Keyword Arguments
  • flags: integer().

    The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Return
  • retval: float

  • results: Evision.Mat.t().

    The optional output matrix of results.

Python prototype (for reference only):

predict(samples[, results[, flags]]) -> retval, results
predict(self, samples, opts)
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{:flags, term()}] | nil) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Predicts response(s) for the provided sample(s)

Positional Arguments
  • self: Evision.ML.SVM.t()

  • samples: Evision.Mat.

    The input samples, floating-point matrix

Keyword Arguments
  • flags: integer().

    The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Return
  • retval: float

  • results: Evision.Mat.t().

    The optional output matrix of results.

Python prototype (for reference only):

predict(samples[, results[, flags]]) -> retval, results
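
A minimal sketch in Elixir, assuming `svm` was trained on two float32 features and that Evision.Mat.from_nx_2d/1 is available for building the sample matrix from an Nx tensor:

    samples =
      Nx.tensor([[0.1, 0.2], [0.9, 0.8]], type: :f32)
      |> Evision.Mat.from_nx_2d()

    {_retval, results} = Evision.ML.SVM.predict(svm, samples)
    # `results` is an Evision.Mat.t() with one predicted response per input row.
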
@spec read(Keyword.t()) :: any() | {:error, String.t()}
@spec read(t(), Evision.FileNode.t()) :: t() | {:error, String.t()}

Reads algorithm parameters from a file storage

Positional Arguments
  • self: Evision.ML.SVM.t()
  • fn: Evision.FileNode.t()

Python prototype (for reference only):

read(fn) -> None
@spec save(Keyword.t()) :: any() | {:error, String.t()}
@spec save(t(), binary()) :: t() | {:error, String.t()}

save

Positional Arguments
  • self: Evision.ML.SVM.t()
  • filename: String

Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

Python prototype (for reference only):

save(filename) -> None
@spec setC(Keyword.t()) :: any() | {:error, String.t()}
@spec setC(t(), number()) :: t() | {:error, String.t()}

setC

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: double

@see getC/1

Python prototype (for reference only):

setC(val) -> None
setClassWeights(named_args)

@spec setClassWeights(Keyword.t()) :: any() | {:error, String.t()}

setClassWeights(self, val)
@spec setClassWeights(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}

setClassWeights

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: Evision.Mat

@see getClassWeights/1

Python prototype (for reference only):

setClassWeights(val) -> None
@spec setCoef0(Keyword.t()) :: any() | {:error, String.t()}
@spec setCoef0(t(), number()) :: t() | {:error, String.t()}

setCoef0

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: double

@see getCoef0/1

Python prototype (for reference only):

setCoef0(val) -> None
@spec setDegree(Keyword.t()) :: any() | {:error, String.t()}
@spec setDegree(t(), number()) :: t() | {:error, String.t()}

setDegree

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: double

@see getDegree/1

Python prototype (for reference only):

setDegree(val) -> None
@spec setGamma(Keyword.t()) :: any() | {:error, String.t()}
@spec setGamma(t(), number()) :: t() | {:error, String.t()}

setGamma

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: double

@see getGamma/1

Python prototype (for reference only):

setGamma(val) -> None
@spec setKernel(Keyword.t()) :: any() | {:error, String.t()}
setKernel(self, kernelType)
@spec setKernel(t(), integer()) :: t() | {:error, String.t()}

setKernel

Positional Arguments
  • self: Evision.ML.SVM.t()
  • kernelType: integer()

Initializes with one of the predefined kernels. See SVM::KernelTypes.

Python prototype (for reference only):

setKernel(kernelType) -> None
@spec setNu(Keyword.t()) :: any() | {:error, String.t()}
@spec setNu(t(), number()) :: t() | {:error, String.t()}

setNu

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: double

@see getNu/1

Python prototype (for reference only):

setNu(val) -> None
@spec setP(Keyword.t()) :: any() | {:error, String.t()}
@spec setP(t(), number()) :: t() | {:error, String.t()}

setP

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: double

@see getP/1

Python prototype (for reference only):

setP(val) -> None
setTermCriteria(named_args)

@spec setTermCriteria(Keyword.t()) :: any() | {:error, String.t()}

setTermCriteria(self, val)
@spec setTermCriteria(t(), {integer(), integer(), number()}) ::
  t() | {:error, String.t()}

setTermCriteria

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: TermCriteria

@see getTermCriteria/1

Python prototype (for reference only):

setTermCriteria(val) -> None
@spec setType(Keyword.t()) :: any() | {:error, String.t()}
@spec setType(t(), integer()) :: t() | {:error, String.t()}

setType

Positional Arguments
  • self: Evision.ML.SVM.t()
  • val: integer()

@see getType/1

Python prototype (for reference only):

setType(val) -> None
@spec train(Keyword.t()) :: any() | {:error, String.t()}
@spec train(t(), Evision.ML.TrainData.t()) :: boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • self: Evision.ML.SVM.t()

  • trainData: Evision.ML.TrainData.t().

    training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.

Keyword Arguments
  • flags: integer().

    optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).

Return
  • retval: bool

Python prototype (for reference only):

train(trainData[, flags]) -> retval
train(self, trainData, opts)
@spec train(t(), Evision.ML.TrainData.t(), [{:flags, term()}] | nil) ::
  boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • self: Evision.ML.SVM.t()

  • trainData: Evision.ML.TrainData.t().

    training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.

Keyword Arguments
  • flags: integer().

    optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).

Return
  • retval: bool

Python prototype (for reference only):

train(trainData[, flags]) -> retval
train(self, samples, layout, responses)
@spec train(t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in()) ::
  boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • self: Evision.ML.SVM.t()

  • samples: Evision.Mat.

    training samples

  • layout: integer().

    See ml::SampleTypes.

  • responses: Evision.Mat.

    vector of responses associated with the training samples.

Return
  • retval: bool

Python prototype (for reference only):

train(samples, layout, responses) -> retval
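
A minimal end-to-end sketch in Elixir for the samples/layout/responses variant. It reuses a configured `svm` (see create/0 above), assumes Evision.Mat.from_nx_2d/1 for building matrices from Nx tensors, and assumes cv::ml::ROW_SAMPLE has its usual value of 0:

    # Hypothetical toy dataset: four 2-D points with labels 0 and 1.
    samples =
      Nx.tensor([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 0.9]], type: :f32)
      |> Evision.Mat.from_nx_2d()

    responses =
      Nx.tensor([[0], [0], [1], [1]], type: :s32)
      |> Evision.Mat.from_nx_2d()

    # 0 is assumed to be cv::ml::ROW_SAMPLE (each row is one sample).
    # train/4 returns true on success, or {:error, reason}.
    true = Evision.ML.SVM.train(svm, samples, 0, responses)
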
@spec trainAuto(Keyword.t()) :: any() | {:error, String.t()}
trainAuto(self, samples, layout, responses)
@spec trainAuto(
  t(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  Evision.Mat.maybe_mat_in()
) ::
  boolean() | {:error, String.t()}

Trains an %SVM with optimal parameters

Positional Arguments
  • self: Evision.ML.SVM.t()

  • samples: Evision.Mat.

    training samples

  • layout: integer().

    See ml::SampleTypes.

  • responses: Evision.Mat.

    vector of responses associated with the training samples.

Keyword Arguments
  • kFold: integer().

    Cross-validation parameter. The training set is divided into kFold subsets. One subset is used to test the model, the others form the train set. So, the SVM algorithm is executed kFold times.

  • cgrid: Evision.ML.ParamGrid.t().

    grid for C

  • gammaGrid: Evision.ML.ParamGrid.t().

    grid for gamma

  • pGrid: Evision.ML.ParamGrid.t().

    grid for p

  • nuGrid: Evision.ML.ParamGrid.t().

    grid for nu

  • coeffGrid: Evision.ML.ParamGrid.t().

    grid for coeff

  • degreeGrid: Evision.ML.ParamGrid.t().

    grid for degree

  • balanced: bool.

    If true and the problem is 2-class classification, then the method creates more balanced cross-validation subsets, that is, the proportions between classes in the subsets are close to the proportions in the whole training dataset.

Return
  • retval: bool

The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree. Parameters are considered optimal when the cross-validation estimate of the test set error is minimal. This function only makes use of SVM::getDefaultGrid for parameter optimization and thus only offers rudimentary parameter options. This function works for classification (SVM::C_SVC or SVM::NU_SVC) as well as for regression (SVM::EPS_SVR or SVM::NU_SVR). If it is SVM::ONE_CLASS, no optimization is made and the usual SVM with parameters specified in params is executed.

Python prototype (for reference only):

trainAuto(samples, layout, responses[, kFold[, Cgrid[, gammaGrid[, pGrid[, nuGrid[, coeffGrid[, degreeGrid[, balanced]]]]]]]]) -> retval
trainAuto(self, samples, layout, responses, opts)
@spec trainAuto(
  t(),
  Evision.Mat.maybe_mat_in(),
  integer(),
  Evision.Mat.maybe_mat_in(),
  [
    balanced: term(),
    cgrid: term(),
    coeffGrid: term(),
    degreeGrid: term(),
    gammaGrid: term(),
    kFold: term(),
    nuGrid: term(),
    pGrid: term()
  ]
  | nil
) :: boolean() | {:error, String.t()}

Trains an %SVM with optimal parameters

Positional Arguments
  • self: Evision.ML.SVM.t()

  • samples: Evision.Mat.

    training samples

  • layout: integer().

    See ml::SampleTypes.

  • responses: Evision.Mat.

    vector of responses associated with the training samples.

Keyword Arguments
  • kFold: integer().

    Cross-validation parameter. The training set is divided into kFold subsets. One subset is used to test the model, the others form the train set. So, the SVM algorithm is executed kFold times.

  • cgrid: Evision.ML.ParamGrid.t().

    grid for C

  • gammaGrid: Evision.ML.ParamGrid.t().

    grid for gamma

  • pGrid: Evision.ML.ParamGrid.t().

    grid for p

  • nuGrid: Evision.ML.ParamGrid.t().

    grid for nu

  • coeffGrid: Evision.ML.ParamGrid.t().

    grid for coeff

  • degreeGrid: Evision.ML.ParamGrid.t().

    grid for degree

  • balanced: bool.

    If true and the problem is 2-class classification, then the method creates more balanced cross-validation subsets, that is, the proportions between classes in the subsets are close to the proportions in the whole training dataset.

Return
  • retval: bool

The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree. Parameters are considered optimal when the cross-validation estimate of the test set error is minimal. This function only makes use of SVM::getDefaultGrid for parameter optimization and thus only offers rudimentary parameter options. This function works for classification (SVM::C_SVC or SVM::NU_SVC) as well as for regression (SVM::EPS_SVR or SVM::NU_SVR). If it is SVM::ONE_CLASS, no optimization is made and the usual SVM with parameters specified in params is executed.

Python prototype (for reference only):

trainAuto(samples, layout, responses[, kFold[, Cgrid[, gammaGrid[, pGrid[, nuGrid[, coeffGrid[, degreeGrid[, balanced]]]]]]]]) -> retval
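
A minimal sketch in Elixir, reusing the `samples` and `responses` from the train/4 example above and the assumed ROW_SAMPLE value of 0; trainAuto/5 searches the default parameter grids itself:

    # Cross-validated search over the default grids; returns true on success.
    true = Evision.ML.SVM.trainAuto(svm, samples, 0, responses, kFold: 5, balanced: true)
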
@spec write(Keyword.t()) :: any() | {:error, String.t()}
@spec write(t(), Evision.FileStorage.t()) :: t() | {:error, String.t()}

Stores algorithm parameters in a file storage

Positional Arguments
  • self: Evision.ML.SVM.t()
  • fs: Evision.FileStorage.t()

Python prototype (for reference only):

write(fs) -> None
@spec write(t(), Evision.FileStorage.t(), binary()) :: t() | {:error, String.t()}

write

Positional Arguments
  • self: Evision.ML.SVM.t()
  • fs: Evision.FileStorage.t()
  • name: String

Has overloading in C++

Python prototype (for reference only):

write(fs, name) -> None