Scholar.Metrics.Regression (Scholar v0.3.1)

Regression Metric functions.

Metrics are used to measure and compare the performance of any kind of regressor in easy-to-understand terms.

All of the functions in this module are implemented as numerical functions and can be JIT or AOT compiled with any supported Nx compiler.
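
For example, a metric can be wrapped with Nx.Defn.jit/2 like any other numerical function. The sketch below assumes the optional EXLA compiler is installed as a dependency; any supported Nx compiler can be passed the same way.

y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])
mae = Nx.Defn.jit(&Scholar.Metrics.Regression.mean_absolute_error/2, compiler: EXLA)
mae.(y_true, y_pred)
# => an f32 tensor holding 0.5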

Summary

Functions

$D^2$ regression score function, fraction of absolute error explained.

$D^2$ regression score function, fraction of pinball loss explained.

$D^2$ regression score function, fraction of Tweedie deviance explained.

Explained variance regression score function.

Calculates the maximum residual error.

Calculates the mean absolute error of predictions with respect to targets.

Calculates the mean absolute percentage error of predictions with respect to targets. If y_true values are equal to or close to zero, it returns an arbitrarily large value.

Calculates the mean Gamma deviance of predictions with respect to targets.

Calculates the mean pinball loss to evaluate predictive performance of quantile regression models.

Calculates the mean Poisson deviance of predictions with respect to targets.

Calculates the mean square error of predictions with respect to targets.

Calculates the mean square logarithmic error of predictions with respect to targets.

Calculates the mean Tweedie deviance of predictions with respect to targets. Includes the Gaussian, Poisson, Gamma and inverse-Gaussian families as special cases.

Similar to mean_tweedie_deviance/3 but raises RuntimeError if the inputs cannot be used with the given power argument.

Calculates the $R^2$ score of predictions with respect to targets.

Functions

d2_absolute_error_score(y_true, y_pred, opts \\ [])

$D^2$ regression score function, fraction of absolute error explained.

The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A model that always uses the empirical median of y_true as a constant prediction, disregarding the input features, gets a $D^2$ score of 0.0.
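
In other words (a sketch of the customary formulation, consistent with the doctest values below): $D^2 = 1 - \frac{MAE(y, \hat{y})}{MAE(y, \tilde{y})}$, where $\tilde{y}$ is the median of y_true. For y_true = [3, -0.5, 2, 7] and y_pred = [2.5, 0.0, 2, 8] the median is 2.5, so

$$ D^2 = 1 - \frac{(0.5 + 0.5 + 0 + 1)/4}{(0.5 + 3 + 0.5 + 4.5)/4} = 1 - \frac{0.5}{2.125} \approx 0.7647 $$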

Options

  • :multioutput - Defines how scores for multiple outputs are aggregated. Pass :raw_values to get one score per output (see the last example below); otherwise the per-output scores are averaged into a single value.

Return Values

Returns a float or a tensor of floats.

Examples

iex> y_true = Nx.tensor([1, 2, 3])
iex> y_pred = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
#Nx.Tensor<
  f32
  1.0
>
iex> y_true = Nx.tensor([1, 2, 3])
iex> y_pred = Nx.tensor([2, 2, 2])
iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.0
>
iex> y_true = Nx.tensor([1, 2, 3])
iex> y_pred = Nx.tensor([3, 2, 1])
iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
#Nx.Tensor<
  f32
  -1.0
>
iex> y_true = Nx.tensor([3, -0.5, 2, 7])
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8])
iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.7647058963775635
>
iex> y_true = Nx.tensor([[0.5, 1], [-1, 1], [7, -6]])
iex> y_pred = Nx.tensor([[0, 2], [-1, 2], [8, -5]])
iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.6919642686843872
>
iex> y_true = Nx.tensor([[0.5, 1], [-1, 1], [7, -6]])
iex> y_pred = Nx.tensor([[0, 2], [-1, 2], [8, -5]])
iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred, multioutput: :raw_values)
#Nx.Tensor<
  f32[2]
  [0.8125, 0.5714285373687744]
>

d2_absolute_error_score_n(y_true, y_pred, opts \\ [])

d2_pinball_score(y_true, y_pred, opts \\ [])

$D^2$ regression score function, fraction of pinball loss explained.

The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A model that always uses the empirical alpha-quantile of y_true as a constant prediction, disregarding the input features, gets a $D^2$ score of 0.0.
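
In other words (a sketch of the customary formulation, consistent with the doctests below): $D^2 = 1 - \frac{\text{pinball}(y, \hat{y})}{\text{pinball}(y, q_\alpha)}$, where $q_\alpha$ is the empirical alpha-quantile of y_true. For the first example below (alpha defaults to 0.5, so $q_\alpha = 2$):

$$ D^2 = 1 - \frac{(0 + 0.5 + 0)/3}{(0.5 + 0 + 0.5)/3} = 1 - \frac{1/6}{1/3} = 0.5 $$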

Options

  • :alpha - The quantile level used by the underlying pinball loss. The default value is 0.5 (the median), as in the first example below.

Return Values

Returns a float or a tensor of floats.

Examples

iex> y_true = Nx.tensor([1, 2, 3])
iex> y_pred = Nx.tensor([1, 3, 3])
iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.5
>
iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_pred, alpha: 0.9)
#Nx.Tensor<
  f32
  0.7727271914482117
>
iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_true, alpha: 0.1)
#Nx.Tensor<
  f32
  1.0
>

d2_tweedie_score(y_true, y_pred, power)

$D^2$ regression score function, fraction of Tweedie deviance explained.

The best possible score is 1.0; lower values are worse, and the score can also be negative.

Since it uses the mean Tweedie deviance, it also includes the Gaussian, Poisson, Gamma and inverse-Gaussian distribution families as special cases.
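
In other words (a sketch of the customary formulation, consistent with the doctest below): $D^2 = 1 - \frac{D_p(y, \hat{y})}{D_p(y, \bar{y})}$, where $D_p$ is the mean Tweedie deviance at the given power and $\bar{y}$ is the mean of y_true. For the example below (power 1, $\bar{y} = 1.4$), $D_p(y, \hat{y}) \approx 0.1841$ (see mean_tweedie_deviance/3) and $D_p(y, \bar{y}) \approx 0.2716$, so

$$ D^2 \approx 1 - \frac{0.1841}{0.2716} \approx 0.322 $$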

Examples

iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
iex> Scholar.Metrics.Regression.d2_tweedie_score(y_true, y_pred, 1)
#Nx.Tensor<
  f32
  0.32202935218811035
>

explained_variance_score(y_true, y_pred, opts \\ [])

Explained variance regression score function.

The best possible score is 1.0; lower values are worse.
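
The score follows the standard definition (a sketch, consistent with the doctests below): $EV = 1 - \frac{Var(y - \hat{y})}{Var(y)}$, using the population variance. For the first example below, $Var(y - \hat{y}) = 0.3125$ and $Var(y) \approx 7.2969$, so

$$ EV = 1 - \frac{0.3125}{7.2969} \approx 0.9572 $$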

Options

  • :force_finite (boolean/0) - Flag indicating if NaN and -Inf scores resulting from constant data should be replaced with real numbers (1.0 if the prediction is perfect, 0.0 otherwise). The default value is true.

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7], type: {:f, 32})
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8], type: {:f, 32})
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.9571734666824341
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f64
  0.0
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f64
  -Inf
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f32
  NaN
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f32
  1.0
>

max_residual_error(y_true, y_pred)

Calculates the maximum residual error.

The residual error is defined as $|y - \hat{y}|$ where $y$ is a true value and $\hat{y}$ is a predicted value. This function returns the maximum residual error over all samples in the input: $\max(|y_i - \hat{y_i}|)$. For perfect predictions, the maximum residual error is 0.0.
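
Using the example below, the residuals are 0.5, 0.5, 0 and 1.5, so

$$ \max(0.5,\ 0.5,\ 0,\ 1.5) = 1.5 $$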

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7])
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8.5])
iex> Scholar.Metrics.Regression.max_residual_error(y_true, y_pred)
#Nx.Tensor<
  f32
  1.5
>

mean_absolute_error(y_true, y_pred)

Calculates the mean absolute error of predictions with respect to targets.

$$ MAE = \frac{\sum_{i=1}^{n} |\hat{y_i} - y_i|}{n} $$
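
For the example below:

$$ MAE = \frac{|1 - 0| + |1 - 1| + |1 - 0| + |0 - 0|}{4} = \frac{2}{4} = 0.5 $$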

Examples

iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
iex> Scholar.Metrics.Regression.mean_absolute_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.5
>

mean_absolute_percentage_error(y_true, y_pred)

Calculates the mean absolute percentage error of predictions with respect to targets. If y_true values are equal to or close to zero, it returns an arbitrarily large value.

$$ MAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{|\hat{y_i} - y_i|}{\max(\epsilon, |y_i|)} $$
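
For the first example below (note the denominator uses the absolute true values, clipped at $\epsilon$):

$$ MAPE = \frac{1}{4}\left(\frac{0.5}{3} + \frac{0.5}{0.5} + \frac{0}{2} + \frac{1}{7}\right) \approx 0.3274 $$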

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7])
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8])
iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.3273809552192688
>

iex> y_true = Nx.tensor([1.0, 0.0, 2.4, 7.0])
iex> y_pred = Nx.tensor([1.2, 0.1, 2.4, 8.0])
iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred)
#Nx.Tensor<
  f32
  209715.28125
>

mean_gamma_deviance(y_true, y_pred)

Calculates the mean Gamma deviance of predictions with respect to targets.
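
The Gamma deviance is the Tweedie deviance at power 2 (see mean_tweedie_deviance/3 below), i.e. for each sample

$$ d(y, \mu) = 2\left(\log(\mu/y) + y/\mu - 1\right) $$

averaged over all samples.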

Examples

iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
iex> Scholar.Metrics.Regression.mean_gamma_deviance(y_true, y_pred)
#Nx.Tensor<
  f32
  0.115888312458992
>

mean_pinball_loss(y_true, y_pred, opts \\ [])

Calculates the mean pinball loss to evaluate predictive performance of quantile regression models.

$$ \text{pinball}(y, \hat{y}) = \frac{1}{n} \sum_{i=1}^{n} \alpha \max(y_i - \hat{y_i}, 0) + (1 - \alpha) \max(\hat{y_i} - y_i, 0) $$

The residual error is defined as $|y - \hat{y}|$ where $y$ is a true value and $\hat{y}$ is a predicted value.
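
For the first example below (alpha defaults to 0.5 and every prediction overshoots its target by exactly 1):

$$ \text{pinball}(y, \hat{y}) = \frac{1}{3}\left(0.5 \cdot 1 + 0.5 \cdot 1 + 0.5 \cdot 1\right) = 0.5 $$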

Options

  • :alpha - The quantile level of the pinball loss. The default value is 0.5.

  • :multioutput - Defines how losses for multiple outputs are aggregated. Pass :raw_values to get one loss per output (see the second example below).

Examples

iex> y_true = Nx.tensor([1, 2, 3])
iex> y_pred = Nx.tensor([2, 3, 4])
iex> Scholar.Metrics.Regression.mean_pinball_loss(y_true, y_pred)
#Nx.Tensor<
  f32
  0.5
>
iex> y_true = Nx.tensor([[1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 1]])
iex> y_pred = Nx.tensor([[0, 0, 0, 1], [1, 0, 1, 1], [0, 0, 0, 1]])
iex> Scholar.Metrics.Regression.mean_pinball_loss(y_true, y_pred, alpha: 0.5, multioutput: :raw_values)
#Nx.Tensor<
  f32[4]
  [0.5, 0.3333333432674408, 0.0, 0.0]
>

mean_poisson_deviance(y_true, y_pred)

Calculates the mean Poisson deviance of predictions with respect to targets.
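
The Poisson deviance is the Tweedie deviance at power 1 (see mean_tweedie_deviance/3 below), i.e. for each sample

$$ d(y, \mu) = 2\left(y \log(y/\mu) + \mu - y\right) $$

averaged over all samples; accordingly, the example below matches the mean_tweedie_deviance/3 example with power 1.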

Examples

iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
iex> Scholar.Metrics.Regression.mean_poisson_deviance(y_true, y_pred)
#Nx.Tensor<
  f32
  0.18411168456077576
>

mean_square_error(y_true, y_pred)

Calculates the mean square error of predictions with respect to targets.

$$ MSE = \frac{\sum_{i=1}^{n} (\hat{y_i} - y_i)^2}{n} $$
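
For the example below:

$$ MSE = \frac{(1 - 0)^2 + (1 - 2)^2 + (1 - 0.5)^2 + (0 - 0)^2}{4} = \frac{2.25}{4} = 0.5625 $$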

Examples

iex> y_true = Nx.tensor([[0.0, 2.0], [0.5, 0.0]])
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
iex> Scholar.Metrics.Regression.mean_square_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.5625
>

mean_square_log_error(y_true, y_pred)

Calculates the mean square logarithmic error of predictions with respect to targets.

$$ MSLE = \frac{\sum_{i=1}^{n} (\log(\hat{y_i} + 1) - \log(y_i + 1))^2}{n} $$
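
For the example below, only the two entries where the prediction is 1.0 and the target is 0.0 contribute:

$$ MSLE = \frac{2(\log 2)^2}{4} \approx 0.2402 $$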

Examples

iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
iex> Scholar.Metrics.Regression.mean_square_log_error(y_true, y_pred)
#Nx.Tensor<
  f32
  0.24022650718688965
>

mean_tweedie_deviance(y_true, y_pred, power)

Calculates the mean Tweedie deviance of predictions with respect to targets. Includes the Gaussian, Poisson, Gamma and inverse-Gaussian families as special cases.

$$ d(y,\mu) = \begin{cases} (y-\mu)^2, & \text{for }p=0 \\ 2(y \log(y/\mu) + \mu - y), & \text{for }p=1 \\ 2(\log(\mu/y) + y/\mu - 1), & \text{for }p=2 \\ 2\left(\frac{\max(y,0)^{2-p}}{(1-p)(2-p)}-\frac{y\mu^{1-p}}{1-p}+\frac{\mu^{2-p}}{2-p}\right), & \text{otherwise} \end{cases} $$
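
As a quick sanity check (a sketch, not one of the module's doctests): with power 0 the deviance above reduces to the squared error, so it should agree with mean_square_error/2 on the same inputs.

y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])
Scholar.Metrics.Regression.mean_tweedie_deviance(y_true, y_pred, 0)
Scholar.Metrics.Regression.mean_square_error(y_true, y_pred)
# => both return an f32 tensor holding 0.375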

Examples

iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
iex> Scholar.Metrics.Regression.mean_tweedie_deviance(y_true, y_pred, 1)
#Nx.Tensor<
  f32
  0.18411168456077576
>

mean_tweedie_deviance!(y_true, y_pred, power)

Similar to mean_tweedie_deviance/3 but raises RuntimeError if the inputs cannot be used with the given power argument.

Note: This function cannot be used in defn.
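
A hypothetical failure case (a sketch under the usual Tweedie domain assumptions, e.g. power >= 2 requiring strictly positive targets; not one of the module's doctests):

y_true = Nx.tensor([0.0, 1.0, 2.0])
y_pred = Nx.tensor([1.0, 1.0, 2.0])
Scholar.Metrics.Regression.mean_tweedie_deviance!(y_true, y_pred, 2)
# => ** (RuntimeError) ...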

Examples

iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
iex> Scholar.Metrics.Regression.mean_tweedie_deviance!(y_true, y_pred, 1)
#Nx.Tensor<
  f32
  0.18411168456077576
>

r2_score(y_true, y_pred, opts \\ [])

Calculates the $R^2$ score of predictions with respect to targets.

$$ R^2 = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2} $$
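
For the first example below ($\bar{y} = 2.875$):

$$ R^2 = 1 - \frac{0.25 + 0.25 + 0 + 1}{29.1875} = 1 - \frac{1.5}{29.1875} \approx 0.9486 $$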

Options

  • :force_finite (boolean/0) - Flag indicating if NaN and -Inf scores resulting from constant data should be replaced with real numbers (1.0 if the prediction is perfect, 0.0 otherwise). The default value is true.

Examples

iex> y_true = Nx.tensor([3, -0.5, 2, 7], type: {:f, 32})
iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8], type: {:f, 32})
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred)
#Nx.Tensor<
  f32
  0.9486081600189209
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f64
  0.0
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f64
  -Inf
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: false)
#Nx.Tensor<
  f32
  NaN
>

iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: true)
#Nx.Tensor<
  f32
  1.0
>