Scholar.Linear.RidgeRegression (Scholar v0.3.1)
Linear least squares with $L_2$ regularization.
Minimizes the objective function: $$ \|y - Xw\|^2_2 + \alpha\|w\|^2_2 $$
Where:

- $X$ is the input data matrix
- $y$ is the vector of input targets
- $w$ is the vector of model weights
- $\alpha$ is the parameter that controls the strength of regularization
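For reference, when the intercept is disabled this objective admits the standard closed-form solution (a textbook result, not stated explicitly on this page):

```latex
w = (X^\top X + \alpha I)^{-1} X^\top y
```

The `:cholesky` solver described below solves exactly this linear system via `Nx.LinAlg.solve`, while `:svd` reaches the same solution through a singular value decomposition of $X$.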
Time complexity is $O(N^2)$ for the `:cholesky` solver and $O(N^2(K + N))$ for the `:svd` solver, where $N$ is the number of observations and $K$ is the number of features.
Summary

Functions

- Fits a Ridge regression model for sample inputs `x` and sample targets `y`.
- Makes predictions with the given model on input `x`.
Functions

Fits a Ridge regression model for sample inputs `x` and sample targets `y`.
Options

- `:sample_weights` - The weights for each observation. If not provided, all observations are assigned equal weight.

- `:fit_intercept?` (`boolean/0`) - If set to `true`, the model will fit the intercept. Otherwise, the intercept is set to `0.0`. The intercept is an independent term in a linear model. Specifically, it is the expected mean value of targets for a zero-vector input. The default value is `true`.

- `:solver` - Solver to use in the computational routines:
  - `:svd` - Uses a singular value decomposition of $X$ to compute the Ridge coefficients. In particular, it is more stable for singular matrices than `:cholesky`, at the cost of being slower.
  - `:cholesky` - Uses the standard `Nx.LinAlg.solve` function to obtain a closed-form solution.

  The default value is `:svd`.

- `:alpha` - Constant that multiplies the $L_2$ term, controlling regularization strength. `:alpha` must be a non-negative float, i.e. in $[0, \infty)$. If `:alpha` is set to `0.0`, the objective is ordinary least squares regression; in that case, for numerical reasons, you should use `Scholar.Linear.LinearRegression` instead. The default value is `1.0`.
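A short usage sketch with the options documented above (the exact fitted values are omitted here, since they depend on the chosen `:alpha` and solver):

```elixir
x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
y = Nx.tensor([4.0, 3.0, -1.0])

# A larger :alpha shrinks the coefficients more strongly toward zero;
# :cholesky uses the closed-form solve instead of the default :svd.
model =
  Scholar.Linear.RidgeRegression.fit(x, y, alpha: 10.0, solver: :cholesky)
```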
Return Values

The function returns a struct with the following parameters:

- `:coefficients` - Estimated coefficients for the linear regression problem.
- `:intercept` - Independent term in the linear model.
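Since the return value is a plain struct, both parameters can be read with ordinary field access, as in this sketch:

```elixir
model =
  Scholar.Linear.RidgeRegression.fit(
    Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]]),
    Nx.tensor([4.0, 3.0, -1.0])
  )

# One coefficient per feature, plus the fitted intercept.
model.coefficients
model.intercept
```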
Examples
iex> x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
iex> y = Nx.tensor([4.0, 3.0, -1.0])
iex> Scholar.Linear.RidgeRegression.fit(x, y)
%Scholar.Linear.RidgeRegression{
coefficients: Nx.tensor(
[-0.4237867593765259, -0.6891377568244934]
),
intercept: Nx.tensor(
5.6569366455078125
)
}
Makes predictions with the given model on input `x`.
Examples
iex> x = Nx.tensor([[1.0, 2.0], [3.0, 2.0], [4.0, 7.0]])
iex> y = Nx.tensor([4.0, 3.0, -1.0])
iex> model = Scholar.Linear.RidgeRegression.fit(x, y)
iex> Scholar.Linear.RidgeRegression.predict(model, Nx.tensor([[2.0, 1.0]]))
Nx.tensor(
[4.120225429534912]
)