# `Scholar.Metrics.Regression`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L1)

Regression Metric functions.

Metrics are used to measure and compare the performance
of any kind of regressor in easy-to-understand terms.

All of the functions in this module are implemented as
numerical functions and can be JIT or AOT compiled with
any supported `Nx` compiler.
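
For instance, a metric can be wrapped in `Nx.Defn.jit/2` like any other
numerical function. A minimal sketch, assuming the optional `EXLA` compiler
is available as a dependency:

    y_true = Nx.tensor([3.0, -0.5, 2.0, 7.0])
    y_pred = Nx.tensor([2.5, 0.0, 2.0, 8.0])

    # JIT-compile the metric once and reuse the compiled function
    jit_mae =
      Nx.Defn.jit(
        fn a, b -> Scholar.Metrics.Regression.mean_absolute_error(a, b) end,
        compiler: EXLA
      )

    jit_mae.(y_true, y_pred)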

# `d2_absolute_error_score`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L793)

`D^2` regression score function, fraction of absolute error explained.

Best possible score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A model that always uses the empirical median of `y_true`
as constant prediction, disregarding the input features,
gets a `D^2` score of 0.0.

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Return Values

It returns a scalar float tensor or, when `:axes` is given, a tensor of floats.

## Examples

    iex> y_true = Nx.tensor([1, 2, 3])
    iex> y_pred = Nx.tensor([1, 2, 3])
    iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      1.0
    >
    iex> y_true = Nx.tensor([1, 2, 3])
    iex> y_pred = Nx.tensor([2, 2, 2])
    iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.0
    >
    iex> y_true = Nx.tensor([1, 2, 3])
    iex> y_pred = Nx.tensor([3, 2, 1])
    iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      -1.0
    >
    iex> y_true = Nx.tensor([3, -0.5, 2, 7])
    iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8])
    iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.7647058963775635
    >
    iex> y_true = Nx.tensor([[0.5, 1], [-1, 1], [7, -6]], type: {:f, 64})
    iex> y_pred = Nx.tensor([[0, 2], [-1, 2], [8, -5]], type: {:f, 64})
    iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred)
    #Nx.Tensor<
      f64
      0.6919642857142856
    >
    iex> y_true = Nx.tensor([[0.5, 1], [-1, 1], [7, -6]])
    iex> y_pred = Nx.tensor([[0, 2], [-1, 2], [8, -5]])
    iex> Scholar.Metrics.Regression.d2_absolute_error_score(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.8125, 0.5714285373687744]
    >

# `d2_absolute_error_score_n`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L801)

# `d2_pinball_score`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L864)

`D^2` regression score function, fraction of pinball loss explained.

Best possible score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A model that always uses the empirical alpha-quantile of
`y_true` as constant prediction, disregarding the input features,
gets a `D^2` score of 0.0.
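
For instance, a constant prediction equal to the empirical median (the
0.5-quantile of `y_true`) scores exactly 0.0 with the default `alpha` of 0.5.
A minimal sketch:

    y_true = Nx.tensor([1, 2, 3])
    y_pred = Nx.tensor([2, 2, 2])

    # constant prediction at the 0.5-quantile of y_true, so the score is 0.0
    Scholar.Metrics.Regression.d2_pinball_score(y_true, y_pred, alpha: 0.5)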

## Options

* `:alpha` - The quantile level of the pinball loss. The default value is `0.5`.

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Return Values

It returns a scalar float tensor or, when `:axes` is given, a tensor of floats.

## Examples

    iex> y_true = Nx.tensor([1, 2, 3])
    iex> y_pred = Nx.tensor([1, 3, 3])
    iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.5
    >
    iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_pred, alpha: 0.9)
    #Nx.Tensor<
      f32
      0.7727271914482117
    >
    iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_true, alpha: 0.1)
    #Nx.Tensor<
      f32
      1.0
    >
    iex> y_true = Nx.tensor([[0.5, 1], [-1, 1], [7, -6]])
    iex> y_pred = Nx.tensor([[0, 2], [-1, 2], [8, -5]])
    iex> Scholar.Metrics.Regression.d2_pinball_score(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.8125, 0.5714285373687744]
    >

# `d2_tweedie_score`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L598)

$D^2$ regression score function, fraction of Tweedie
deviance explained.

Best possible score is 1.0; lower values are worse, and the score
can also be negative.

Since it uses the mean Tweedie deviance, it also includes
the Gaussian, Poisson, Gamma and inverse-Gaussian
distribution families as special cases.
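
The `power` argument selects the family. A minimal sketch with illustrative
values, assuming strictly positive targets and predictions:

    y_true = Nx.tensor([1.0, 1.0, 2.0, 2.0, 3.0])
    y_pred = Nx.tensor([1.5, 1.0, 2.0, 2.5, 2.5])

    # power 0 corresponds to the Gaussian (squared error) deviance,
    # power 1 to Poisson and power 2 to Gamma
    Scholar.Metrics.Regression.d2_tweedie_score(y_true, y_pred, 0)
    Scholar.Metrics.Regression.d2_tweedie_score(y_true, y_pred, 1)
    Scholar.Metrics.Regression.d2_tweedie_score(y_true, y_pred, 2)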

## Examples

    iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
    iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
    iex> Scholar.Metrics.Regression.d2_tweedie_score(y_true, y_pred, 1)
    #Nx.Tensor<
      f32
      0.32202935218811035
    >

# `explained_variance_score`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L535)

Explained variance regression score function.

Best possible score is 1.0, lower values are worse.
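
Concretely, the score is the fraction of the variance of `y_true` explained by
the predictions:

$$EV = 1 - \frac{Var(y - \hat{y})}{Var(y)}$$

so, unlike `r2_score`, a systematic constant offset between predictions and
targets does not by itself lower the score.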

## Options

* `:force_finite` (`t:boolean/0`) - Flag indicating if NaN and -Inf scores resulting from constant data should be replaced with real numbers
  (1.0 if the prediction is perfect, 0.0 otherwise). The default value is `true`.

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([3, -0.5, 2, 7], type: {:f, 32})
    iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8], type: {:f, 32})
    iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.9571734666824341
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
    iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: true)
    #Nx.Tensor<
      f64
      0.0
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
    iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: false)
    #Nx.Tensor<
      f64
      -Inf
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
    iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: false)
    #Nx.Tensor<
      f32
      NaN
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
    iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, force_finite: true)
    #Nx.Tensor<
      f32
      1.0
    >

    iex> y_true = Nx.tensor([[3, -0.5], [2, 7]], type: {:f, 32})
    iex> y_pred = Nx.tensor([[2.5, 0.0], [2, 8]], type: {:f, 32})
    iex> Scholar.Metrics.Regression.explained_variance_score(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.75, 0.995555579662323]
    >

# `max_residual_error`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L636)

Calculates the maximum residual error.

The residual error is defined as $$|y - \hat{y}|$$ where $y$ is a true value
and $\hat{y}$ is a predicted value.
This function returns the maximum residual error over all samples in the
input: $\max(|y_i - \hat{y_i}|)$. For perfect predictions, the maximum
residual error is `0.0`.

## Examples

    iex> y_true = Nx.tensor([3, -0.5, 2, 7])
    iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8.5])
    iex> Scholar.Metrics.Regression.max_residual_error(y_true, y_pred)
    #Nx.Tensor<
      f32
      1.5
    >

# `mean_absolute_error`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L70)

Calculates the mean absolute error of predictions
with respect to targets.

$$MAE = \frac{\sum_{i=1}^{n} |\hat{y_i} - y_i|}{n}$$
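
Worked out for the first example below:

$$MAE = \frac{|1 - 0| + |1 - 1| + |1 - 0| + |0 - 0|}{4} = \frac{2}{4} = 0.5$$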

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
    iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
    iex> Scholar.Metrics.Regression.mean_absolute_error(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.5
    >
    iex> Scholar.Metrics.Regression.mean_absolute_error(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [1.0, 0.0]
    >

# `mean_absolute_error_n`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L74)

# `mean_absolute_percentage_error`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L178)

Calculates the mean absolute percentage error of predictions
with respect to targets. If the values of `y_true` are equal to or close
to zero, it returns an arbitrarily large value.

$$MAPE = \frac{\sum_{i=1}^{n} \frac{|\hat{y_i} - y_i|}{\max(\epsilon, |y_i|)}}{n}$$
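
Worked out for the first example below, where all `y_true` entries are away
from zero:

$$MAPE = \frac{\frac{0.5}{3} + \frac{0.5}{0.5} + \frac{0}{2} + \frac{1}{7}}{4} \approx 0.3274$$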

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([3, -0.5, 2, 7])
    iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8])
    iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.3273809552192688
    >

    iex> y_true = Nx.tensor([1.0, 0.0, 2.4, 7.0])
    iex> y_pred = Nx.tensor([1.2, 0.1, 2.4, 8.0])
    iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred)
    #Nx.Tensor<
      f32
      209715.28125
    >
    iex> y_true = Nx.tensor([[0.5, 1], [-1, 1], [7, -6]])
    iex> y_pred = Nx.tensor([[0, 2], [-1, 2], [8, -5]])
    iex> Scholar.Metrics.Regression.mean_absolute_percentage_error(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.380952388048172, 0.7222222685813904]
    >

# `mean_gamma_deviance`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L397)

Calculates the mean Gamma deviance of predictions
with respect to targets.

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
    iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
    iex> Scholar.Metrics.Regression.mean_gamma_deviance(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.115888312458992
    >
    iex> y_true = Nx.tensor([[1, 1, 1, 1, 1], [2, 2, 1, 3, 1]], type: :u32)
    iex> y_pred = Nx.tensor([[2, 2, 1, 1, 2], [2, 2, 1, 3, 1]], type: :u32)
    iex> Scholar.Metrics.Regression.mean_gamma_deviance(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[5]
      [0.1931471824645996, 0.1931471824645996, 0.0, 0.0, 0.1931471824645996]
    >

# `mean_pinball_loss`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L702)

Calculates the mean pinball loss to evaluate predictive performance of quantile regression models.

$$pinball(y, \hat{y}) = \frac{1}{n} \sum_{i=1}^{n} \alpha \max(y_i - \hat{y_i}, 0) +
(1 - \alpha) \max(\hat{y_i} - y_i, 0)$$

The residual error is defined as $$|y - \hat{y}|$$ where $y$ is a true value
and $\hat{y}$ is a predicted value.
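
With the default `alpha` of 0.5 the pinball loss reduces to half of the mean
absolute error, which can be checked directly. A minimal sketch:

    y_true = Nx.tensor([1, 2, 3])
    y_pred = Nx.tensor([2, 3, 4])

    # both expressions evaluate to 0.5: half of the MAE of 1.0
    Scholar.Metrics.Regression.mean_pinball_loss(y_true, y_pred, alpha: 0.5)
    Nx.divide(Scholar.Metrics.Regression.mean_absolute_error(y_true, y_pred), 2)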

## Options

* `:alpha` - The quantile level of the pinball loss. The default value is `0.5`.

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([1, 2, 3])
    iex> y_pred = Nx.tensor([2, 3, 4])
    iex> Scholar.Metrics.Regression.mean_pinball_loss(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.5
    >
    iex> y_true = Nx.tensor([[1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 1]])
    iex> y_pred = Nx.tensor([[0, 0, 0, 1], [1, 0, 1, 1], [0, 0, 0, 1]])
    iex> Scholar.Metrics.Regression.mean_pinball_loss(y_true, y_pred, alpha: 0.5, axes: [0])
    #Nx.Tensor<
      f32[4]
      [0.5, 0.3333333432674408, 0.0, 0.0]
    >

# `mean_poisson_deviance`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L368)

Calculates the mean Poisson deviance of predictions
with respect to targets.

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
    iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
    iex> Scholar.Metrics.Regression.mean_poisson_deviance(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.18411168456077576
    >

    iex> y_true = Nx.tensor([[1, 1, 1, 1], [1, 2, 2, 1]], type: :u32)
    iex> y_pred = Nx.tensor([[2, 2, 1, 1], [2, 2, 2, 1]], type: :u32)
    iex> Scholar.Metrics.Regression.mean_poisson_deviance(y_true, y_pred, axes: [1])
    #Nx.Tensor<
      f32[2]
      [0.3068528175354004, 0.1534264087677002]
    >

# `mean_square_error`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L107)

Calculates the mean square error of predictions
with respect to targets.

$$MSE = \frac{\sum_{i=1}^{n} (\hat{y_i} - y_i)^2}{n}$$
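
Worked out for the first example below:

$$MSE = \frac{(1 - 0)^2 + (1 - 2)^2 + (1 - 0.5)^2 + (0 - 0)^2}{4} = \frac{2.25}{4} = 0.5625$$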

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([[0.0, 2.0], [0.5, 0.0]])
    iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
    iex> Scholar.Metrics.Regression.mean_square_error(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.5625
    >
    iex> Scholar.Metrics.Regression.mean_square_error(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.625, 0.5]
    >

# `mean_square_log_error`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L136)

Calculates the mean square logarithmic error of predictions
with respect to targets.

$$MSLE = \frac{\sum_{i=1}^{n} (\log(\hat{y_i} + 1) - \log(y_i + 1))^2}{n}$$
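
In the first example below only the two entries with a target of 0.0 and a
prediction of 1.0 contribute, each adding $(\log 2 - \log 1)^2 \approx 0.4805$:

$$MSLE = \frac{2 \cdot (\log 2)^2}{4} \approx 0.2402$$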

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([[0.0, 1.0], [0.0, 0.0]])
    iex> y_pred = Nx.tensor([[1.0, 1.0], [1.0, 0.0]])
    iex> Scholar.Metrics.Regression.mean_square_log_error(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.24022650718688965
    >
    iex> Scholar.Metrics.Regression.mean_square_log_error(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.4804530143737793, 0.0]
    >

# `mean_tweedie_deviance`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L235)

Calculates the mean Tweedie deviance of predictions
with respect to targets. Includes the Gaussian, Poisson,
Gamma and inverse-Gaussian families as special cases.

$$d(y,\mu) =
\begin{cases}
(y-\mu)^2, & \text{for }p=0\\\\
2(y \log(y/\mu) + \mu - y), & \text{for }p=1\\\\
2(\log(\mu/y) + y/\mu - 1), & \text{for }p=2\\\\
2\left(\frac{\max(y,0)^{2-p}}{(1-p)(2-p)}-\frac{y\mu^{1-p}}{1-p}+\frac{\mu^{2-p}}{2-p}\right), & \text{otherwise}
\end{cases}$$
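
For `power` equal to 0 the deviance is the squared error, so the mean Tweedie
deviance coincides with `mean_square_error`. A minimal sketch:

    y_true = Nx.tensor([1.0, 2.0, 3.0, 4.0])
    y_pred = Nx.tensor([1.5, 2.0, 2.5, 4.5])

    # both expressions yield the same value
    Scholar.Metrics.Regression.mean_tweedie_deviance(y_true, y_pred, 0)
    Scholar.Metrics.Regression.mean_square_error(y_true, y_pred)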

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
    iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
    iex> Scholar.Metrics.Regression.mean_tweedie_deviance(y_true, y_pred, 1)
    #Nx.Tensor<
      f32
      0.18411168456077576
    >
    iex> y_true = Nx.tensor([[1, 1, 1, 1], [1, 2, 2, 1]], type: :u32)
    iex> y_pred = Nx.tensor([[2, 2, 1, 1], [2, 2, 2, 1]], type: :u32)
    iex> Scholar.Metrics.Regression.mean_tweedie_deviance(y_true, y_pred, 1, axes: [0])
    #Nx.Tensor<
      f32[4]
      [0.6137056350708008, 0.3068528175354004, 0.0, 0.0]
    >

# `mean_tweedie_deviance!`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L259)

Similar to `mean_tweedie_deviance/3` but raises `RuntimeError` if the
inputs cannot be used with the given power argument.

Note: This function cannot be used in `defn`.

## Options

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([1, 1, 1, 1, 1, 2, 2, 1, 3, 1], type: :u32)
    iex> y_pred = Nx.tensor([2, 2, 1, 1, 2, 2, 2, 1, 3, 1], type: :u32)
    iex> Scholar.Metrics.Regression.mean_tweedie_deviance!(y_true, y_pred, 1)
    #Nx.Tensor<
      f32
      0.18411168456077576
    >

# `quantile`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L906)

# `r2_score`
[🔗](https://github.com/elixir-nx/scholar/blob/main/lib/scholar/metrics/regression.ex#L462)

Calculates the $R^2$ score of predictions with respect to targets.

$$R^2 = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2}$$
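
Worked out for the first example below, with $\bar{y} = 2.875$:

$$R^2 = 1 - \frac{0.25 + 0.25 + 0 + 1}{29.1875} \approx 0.9486$$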

## Options

* `:force_finite` (`t:boolean/0`) - Flag indicating if NaN and -Inf scores resulting from constant data should be replaced with real numbers
  (1.0 if the prediction is perfect, 0.0 otherwise). The default value is `true`.

* `:axes` - Axes to calculate the distance over. By default the distance
  is calculated between the whole tensors.

## Examples

    iex> y_true = Nx.tensor([3, -0.5, 2, 7], type: {:f, 32})
    iex> y_pred = Nx.tensor([2.5, 0.0, 2, 8], type: {:f, 32})
    iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred)
    #Nx.Tensor<
      f32
      0.9486081600189209
    >

    iex> y_true = Nx.tensor([[3, -0.5], [2, 7]], type: {:f, 32})
iex> y_pred = Nx.tensor([[2.5, 0.0], [2, 8]], type: {:f, 32})
    iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, axes: [0])
    #Nx.Tensor<
      f32[2]
      [0.6800000071525574, 0.9559956192970276]
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
    iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: true)
    #Nx.Tensor<
      f64
      0.0
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0], type: :f64)
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0 + 1.0e-8], type: :f64)
    iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: false)
    #Nx.Tensor<
      f64
      -Inf
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
    iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: false)
    #Nx.Tensor<
      f32
      NaN
    >

    iex> y_true = Nx.tensor([-2.0, -2.0, -2.0])
    iex> y_pred = Nx.tensor([-2.0, -2.0, -2.0])
    iex> Scholar.Metrics.Regression.r2_score(y_true, y_pred, force_finite: true)
    #Nx.Tensor<
      f32
      1.0
    >

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
