Scholar.Linear.SVM (Scholar v0.3.1)

Support Vector Machine linear classifier.

It uses the One-vs-Rest strategy to handle both binary and multiclass classification. Training is performed with stochastic gradient descent by default, but any other optimizer available in Polaris can be used instead, which makes it similar to scikit-learn's SGDClassifier [1].

On average it is slower than algorithms based on quadratic programming and the kernel trick (LIBSVM [2]) or on coordinate descent (LIBLINEAR [3]). It also cannot use different kernels as LIBSVM can, but any optimizer available in Polaris can be plugged in.
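For example, a non-default Polaris optimizer can be passed through the :optimizer option documented below. A minimal sketch, assuming Polaris.Optimizers.adam/1 returns the {init, update} pair that the option accepts:

x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
y = Nx.tensor([1, 0, 1])
# Train with Adam instead of the default :sgd optimizer
Scholar.Linear.SVM.fit(x, y, num_classes: 2, optimizer: Polaris.Optimizers.adam(learning_rate: 0.01))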

Time complexity is $O(N \cdot K \cdot I \cdot C)$, where $N$ is the number of samples, $K$ is the number of features, $I$ is the number of iterations, and $C$ is the number of classes.

[1] - https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
[2] - https://www.csie.ntu.edu.tw/~cjlin/libsvm/
[3] - https://www.csie.ntu.edu.tw/~cjlin/liblinear/

Summary

Functions

fit(x, y, opts \\ [])

Fits an SVM model for sample inputs x and sample targets y.

predict(model, x)

Makes predictions with the given model on inputs x.

Functions

fit(x, y, opts \\ [])

Fits an SVM model for sample inputs x and sample targets y.

Options

  • :num_classes (pos_integer/0) - Required. Number of classes contained in the input tensors.

  • :iterations (pos_integer/0) - number of iterations of gradient descent performed inside SVM. The default value is 1000.

  • :learning_loop_unroll (boolean/0) - If true, the learning loop is unrolled. The default value is false.

  • :optimizer - The optimizer name or {init, update} pair of functions (see Polaris.Optimizers for more details). The default value is :sgd.

  • :eps (float/0) - The convergence tolerance. If abs(loss) < size(x) * :eps, the algorithm is considered to have converged. The default value is 1.0e-8.

  • :loss_fn - The loss function used in the algorithm. It should take two arguments, y_predicted and y_true. If not provided, it defaults to the hinge loss without regularization (a sketch of a custom loss follows this list). The default value is nil.
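A hedged sketch of supplying a custom loss (here a squared hinge), based only on what the option above states, namely that the function receives the predicted and true values as tensors; how the targets are encoded inside the training loop is left to the library:

x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
y = Nx.tensor([1, 0, 1])

# Squared hinge: mean(max(0, 1 - y_true * y_pred)^2); assumes margin-style targets
squared_hinge = fn y_pred, y_true ->
  1
  |> Nx.subtract(Nx.multiply(y_true, y_pred))
  |> Nx.max(0)
  |> Nx.pow(2)
  |> Nx.mean()
end

Scholar.Linear.SVM.fit(x, y, num_classes: 2, loss_fn: squared_hinge)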

Return Values

The function returns a struct with the following fields:

  • :coefficients - Coefficients of the features in the decision function.

  • :bias - Bias added to the decision function (a sketch of how both fields can be used follows this list).
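A sketch of reading these fields back after fitting, assuming, as an illustration rather than a statement about the internals, that the One-vs-Rest decision values are x · coefficientsᵀ + bias with one row of coefficients and one bias per class:

x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
y = Nx.tensor([1, 0, 1])
model = Scholar.Linear.SVM.fit(x, y, num_classes: 2)

# Per-class decision values, shape {num_samples, num_classes}
decision = Nx.add(Nx.dot(x, Nx.transpose(model.coefficients)), model.bias)

# The class with the largest decision value
Nx.argmax(decision, axis: 1)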

Examples

iex> x = Nx.tensor([[1.0, 2.0, 2.1], [3.0, 2.0, 1.4], [4.0, 7.0, 5.3], [3.0, 4.0, 6.3]])
iex> y = Nx.tensor([1, 0, 1, 1])
iex> Scholar.Linear.SVM.fit(x, y, num_classes: 2)
%Scholar.Linear.SVM{
  coefficients: Nx.tensor(
    [
      [1.6899993419647217, 1.4599995613098145, 1.322001338005066],
      [1.4799995422363281, 1.9599990844726562, 2.0080013275146484]
    ]
  ),
  bias: Nx.tensor(
    [0.23000003397464752, 0.4799998104572296]
  )
}
hinge_loss(y_pred, ys, opts \\ [])
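This is the loss fit falls back to when no :loss_fn is given (see the :loss_fn option above). As background, the standard unregularized hinge loss for a prediction $\hat{y}$ and a target $y \in \{-1, 1\}$ is $\max(0, 1 - y \hat{y})$.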

predict(model, x)

Makes predictions with the given model on inputs x.

Examples

iex> x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
iex> y = Nx.tensor([1, 0, 1])
iex> model = Scholar.Linear.SVM.fit(x, y, num_classes: 2)
iex> Scholar.Linear.SVM.predict(model, Nx.tensor([[-3.0, 5.0]]))
#Nx.Tensor<
  s64[1]
  [1]
>
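Since the classifier is One-vs-Rest, the same calls extend to more than two classes. A sketch with hypothetical data (outputs omitted, not a verified doctest):

x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0], [6.0, 1.0]])
y = Nx.tensor([0, 1, 2, 1])
model = Scholar.Linear.SVM.fit(x, y, num_classes: 3)
Scholar.Linear.SVM.predict(model, Nx.tensor([[2.0, 3.0]]))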