Scholar.Metrics.Regression (Scholar v0.2.1)

Regression Metric functions.

Metrics are used to measure and compare the performance of any kind of regressor in easy-to-understand terms.

All of the functions in this module are implemented as numerical functions and can be JIT or AOT compiled with any supported Nx compiler.
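For instance, a metric can be wrapped with Nx.Defn.jit/2 and evaluated through a compiler backend. The snippet below is a minimal sketch; the EXLA compiler is only an assumption and requires the :exla dependency in your project.

y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])

# JIT-compile the metric; swap EXLA for any supported Nx compiler (assumed dependency).
jitted_mse = Nx.Defn.jit(&Scholar.Metrics.Regression.mean_square_error/2, compiler: EXLA)
jitted_mse.(y_true, y_pred)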

Summary

Functions

explained_variance_score(y_true, y_pred, opts \\ [])
Explained variance regression score function.

max_residual_error(y_true, y_pred)
Calculates the maximum residual error.

mean_absolute_error(y_true, y_pred)
Calculates the mean absolute error of predictions with respect to targets.

mean_absolute_percentage_error(y_true, y_pred)
Calculates the mean absolute percentage error of predictions with respect to targets. If y_true values are equal to or close to zero, it returns an arbitrarily large value.

mean_square_error(y_true, y_pred)
Calculates the mean square error of predictions with respect to targets.

mean_square_log_error(y_true, y_pred)
Calculates the mean square logarithmic error of predictions with respect to targets.

r2_score(y_true, y_pred, opts \\ [])
Calculates the $R^2$ score of predictions with respect to targets.

Functions

explained_variance_score(y_true, y_pred, opts \\ [])

Explained variance regression score function.

Best possible score is 1.0, lower values are worse.
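The score follows the usual definition $1 - \frac{Var(y - \hat{y})}{Var(y)}$. The snippet below is an illustrative sketch of that definition using plain Nx primitives, not the module's internal implementation.

y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])

# 1 - Var(y - y_hat) / Var(y), computed with Nx aggregates
residual = Nx.subtract(y_true, y_pred)
Nx.subtract(1.0, Nx.divide(Nx.variance(residual), Nx.variance(y_true)))
# => ~0.9572, matching the first example below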

Options

  • :force_finite (boolean/0) - Flag indicating whether NaN and -Inf scores resulting from constant data should be replaced with real numbers (1.0 if the prediction is perfect, 0.0 otherwise). The default value is true.

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7], type: {:f, 32})
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8], type: {:f, 32})
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.9571734666824341
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f64
  0.0
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f64
  -Inf
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f32
  NaN
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f32
  1.0
>

max_residual_error(y_true, y_pred)

Calculates the maximum residual error.

The residual error is defined as $|y - \hat{y}|$, where $y$ is a true value and $\hat{y}$ is a predicted value. This function returns the maximum residual error over all samples in the input: $\max_i |y_i - \hat{y_i}|$. For perfect predictions, the maximum residual error is 0.0.
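The call below recomputes the example result with plain Nx primitives; it is an illustrative sketch rather than the function's internal implementation.

y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.5])

# maximum over |y_i - y_hat_i|
y_true |> Nx.subtract(y_pred) |> Nx.abs() |> Nx.reduce_max()
# => 1.5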

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7])
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8.5])
iex> Scholar.Metrics.Regression.max_residual_error(y_true, y_pred)
#Nx.Tensor<
  f32
  1.5
>

mean_absolute_error(y_true, y_pred)

Calculates the mean absolute error of predictions with respect to targets.

$$MAE = \frac{\sum_{i=1}^{n} |\hat{y_i} - y_i|}{n}$$
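The formula translates directly into Nx primitives; the sketch below is illustrative and not the module's internal implementation.

y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])

# mean over |y_hat_i - y_i|
y_pred |> Nx.subtract(y_true) |> Nx.abs() |> Nx.mean()
# => 0.5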

Examples

iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
iex> Scholar.Metrics.Regression.mean_absolute_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.5
>

mean_absolute_percentage_error(y_true, y_pred)

Calculates the mean absolute percentage error of predictions with respect to targets. If y_true values are equal to or close to zero, it returns an arbitrarily large value.

$$MAPE = \frac{\sum_{i=1}^{n} \frac{|\hat{y_i} - y_i|}{\max(\epsilon, |y_i|)}}{n}$$
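A hand-rolled version of the formula is sketched below. The epsilon used here is the float32 machine epsilon, an assumption consistent with the second example; the exact value used internally may differ.

eps = 1.1920929e-7  # assumed float32 machine epsilon
y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])

# mean over |y_hat_i - y_i| / max(eps, |y_i|)
y_pred
|> Nx.subtract(y_true)
|> Nx.abs()
|> Nx.divide(Nx.max(Nx.abs(y_true), eps))
|> Nx.mean()
# => ~0.3274, matching the first example below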

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7])
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8])
iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.3273809552192688
>

iex> y_true = Nx.tensor([1.0, 0.0, 2.4, 7.0])
iex> y_pred = Nx.tensor([1.2, 0.1, 2.4, 8.0])
iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred)
#Nx.Tensor<
  f32
  209715.28125
>

mean_square_error(y_true, y_pred)

Calculates the mean square error of predictions with respect to targets.

$$MSE = \frac{\sum_{i=1}^{n} (\hat{y_i} - y_i)^2}{n}$$
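The sketch below recomputes the formula with plain Nx calls; it is illustrative only, not the module's internal implementation.

y_true = Nx.tensor([[0.0, 2.0], [0.5, 0.0]])
y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])

# mean over (y_hat_i - y_i)^2
diff = Nx.subtract(y_pred, y_true)
diff |> Nx.multiply(diff) |> Nx.mean()
# => 0.5625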

Examples

iex> y_true = Nx.tensor([[0.0, 2.0], [0.5, 0.0]])
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
iex> Scholar.Metrics.Regression.mean_square_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.5625
>

mean_square_log_error(y_true, y_pred)

Calculates the mean square logarithmic error of predictions with respect to targets.

$$MSLE = \frac{\sum_{i=1}^{n} (\log(\hat{y_i} + 1) - \log(y_i + 1))^2}{n}$$
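The sketch below spells out the formula with plain Nx calls; it is illustrative, not the module's internal implementation.

y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])

# mean over (log(y_hat_i + 1) - log(y_i + 1))^2
diff = Nx.subtract(Nx.log(Nx.add(y_pred, 1)), Nx.log(Nx.add(y_true, 1)))
diff |> Nx.multiply(diff) |> Nx.mean()
# => ~0.2402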

Examples

iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
iex> Scholar.Metrics.Regression.mean_square_log_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.24022650718688965
>

r2_score(y_true, y_pred, opts \\ [])

Calculates the $R^2$ score of predictions with respect to targets.

$$R^2 = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2}$$
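The formula can be recomputed by hand as below. This is an illustrative sketch, not the module's internal implementation (in particular, it ignores the :force_finite handling described next).

y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])

# residual sum of squares and total sum of squares
residual = Nx.subtract(y_true, y_pred)
ss_res = Nx.sum(Nx.multiply(residual, residual))
deviation = Nx.subtract(y_true, Nx.mean(y_true))
ss_tot = Nx.sum(Nx.multiply(deviation, deviation))
Nx.subtract(1.0, Nx.divide(ss_res, ss_tot))
# => ~0.9486, matching the first example below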

Options

  • :force_finite (boolean/0) - Flag indicating whether NaN and -Inf scores resulting from constant data should be replaced with real numbers (1.0 if the prediction is perfect, 0.0 otherwise). The default value is true.

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7], type: {:f, 32})
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8], type: {:f, 32})
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.9486081600189209
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f64
  0.0
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f64
  -Inf
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f32
  NaN
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f32
  1.0
>