Nx.LinAlg (Nx v0.8.0)
Nx conveniences for linear algebra.
This module can be used in defn.
Summary
Functions
Returns the adjoint of a given tensor.
Performs a Cholesky decomposition of a batch of square matrices.
Calculates the determinant of batched square matrices.
Calculates the eigenvalues and eigenvectors of batched Hermitian 2-D matrices.
Inverts a batch of square matrices.
Returns the least-squares solution to a linear matrix equation Ax = b.
Calculates the A = PLU decomposition of batched square 2-D matrices A.
Produces the tensor taken to the given power by matrix dot-product.
Returns the matrix rank of an input M × N matrix using the Singular Value Decomposition method.
Calculates the p-norm of a tensor.
Calculates the Moore-Penrose inverse, or the pseudoinverse, of a matrix.
Calculates the QR decomposition of a tensor with shape {..., M, N}.
Solves the system AX = B.
Calculates the Singular Value Decomposition of batched 2-D matrices.
Solves the equation a x = b for x, assuming a is a batch of triangular matrices. Can also solve x a = b for x; see the :left_side option below.
Functions
Returns the adjoint of a given tensor.
If the input tensor is real, it transposes its two innermost axes.
If the input tensor is complex, it additionally applies Nx.conjugate/1 to it.
Examples
iex> Nx.LinAlg.adjoint(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
s64[2][2]
[
[1, 3],
[2, 4]
]
>
iex> Nx.LinAlg.adjoint(Nx.tensor([[1, Complex.new(0, 2)], [3, Complex.new(0, -4)]]))
#Nx.Tensor<
c64[2][2]
[
[1.0-0.0i, 3.0-0.0i],
[0.0-2.0i, 0.0+4.0i]
]
>
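For real tensors, the result is the same as a plain transpose of the two innermost axes. A quick sketch of that equivalence (not one of the doctests above):

# Sketch: for a real 2-D tensor, adjoint/1 agrees with Nx.transpose/1.
t = Nx.tensor([[1, 2], [3, 4]])
Nx.all(Nx.equal(Nx.LinAlg.adjoint(t), Nx.transpose(t)))
# returns a u8 scalar equal to 1, i.e. the two results match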
Performs a Cholesky decomposition of a batch of square matrices.
The matrices must be positive-definite, and either Hermitian if complex or symmetric if real. The default backend raises an error if those conditions are not met; other backends may produce undefined behaviour.
Examples
iex> Nx.LinAlg.cholesky(Nx.tensor([[20.0, 17.6], [17.6, 16.0]]))
#Nx.Tensor<
f32[2][2]
[
[4.4721360206604, 0.0],
[3.9354796409606934, 0.7155418395996094]
]
>
iex> Nx.LinAlg.cholesky(Nx.tensor([[[2.0, 3.0], [3.0, 5.0]], [[1.0, 0.0], [0.0, 1.0]]]))
#Nx.Tensor<
f32[2][2][2]
[
[
[1.4142135381698608, 0.0],
[2.1213204860687256, 0.7071064710617065]
],
[
[1.0, 0.0],
[0.0, 1.0]
]
]
>
iex> t = Nx.tensor([
...> [6.0, 3.0, 4.0, 8.0],
...> [3.0, 6.0, 5.0, 1.0],
...> [4.0, 5.0, 10.0, 7.0],
...> [8.0, 1.0, 7.0, 25.0]
...> ])
iex> Nx.LinAlg.cholesky(t)
#Nx.Tensor<
f32[4][4]
[
[2.4494898319244385, 0.0, 0.0, 0.0],
[1.2247447967529297, 2.1213202476501465, 0.0, 0.0],
[1.6329931020736694, 1.41421377658844, 2.309401035308838, 0.0],
[3.265986204147339, -1.4142134189605713, 1.5877134799957275, 3.132491111755371]
]
>
iex> Nx.LinAlg.cholesky(Nx.tensor([[1.0, Complex.new(0, -2)], [Complex.new(0, 2), 5.0]]))
#Nx.Tensor<
c64[2][2]
[
[1.0+0.0i, 0.0+0.0i],
[0.0+2.0i, 1.0+0.0i]
]
>
iex> t = Nx.tensor([[[2.0, 3.0], [3.0, 5.0]], [[1.0, 0.0], [0.0, 1.0]]]) |> Nx.vectorize(x: 2)
iex> Nx.LinAlg.cholesky(t)
#Nx.Tensor<
vectorized[x: 2]
f32[2][2]
[
[
[1.4142135381698608, 0.0],
[2.1213204860687256, 0.7071064710617065]
],
[
[1.0, 0.0],
[0.0, 1.0]
]
]
>
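As a sanity check, the factor satisfies l . adjoint(l) ≈ a. A sketch of that verification (not one of the doctests above):

# Sketch: reconstructing the input from its Cholesky factor.
a = Nx.tensor([[20.0, 17.6], [17.6, 16.0]])
l = Nx.LinAlg.cholesky(a)
l |> Nx.dot(Nx.LinAlg.adjoint(l))
# ≈ [[20.0, 17.6], [17.6, 16.0]], up to floating-point rounding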
Calculates the determinant of batched square matrices.
Examples
For 2x2 and 3x3 matrices, the results are given by closed formulas:
iex> Nx.LinAlg.determinant(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
f32
-2.0
>
iex> Nx.LinAlg.determinant(Nx.tensor([[1.0, 2.0, 3.0], [1.0, -2.0, 3.0], [7.0, 8.0, 9.0]]))
#Nx.Tensor<
f32
48.0
>
When there are linearly dependent rows or columns, the determinant is 0:
iex> Nx.LinAlg.determinant(Nx.tensor([[1.0, 0.0], [3.0, 0.0]]))
#Nx.Tensor<
f32
0.0
>
iex> Nx.LinAlg.determinant(Nx.tensor([[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0], [4.0, 5.0, 6.0]]))
#Nx.Tensor<
f32
0.0
>
The determinant can also be calculated for matrices larger than 3x3:
iex> Nx.LinAlg.determinant(Nx.tensor([
...> [1, 0, 0, 0],
...> [0, 1, 2, 3],
...> [0, 1, -2, 3],
...> [0, 7, 8, 9.0]
...> ]))
#Nx.Tensor<
f32
48.0
>
iex> Nx.LinAlg.determinant(Nx.tensor([
...> [0, 0, 0, 0, -1],
...> [0, 1, 2, 3, 0],
...> [0, 1, -2, 3, 0],
...> [0, 7, 8, 9, 0],
...> [1, 0, 0, 0, 0]
...> ]))
#Nx.Tensor<
f32
48.0
>
iex> Nx.LinAlg.determinant(Nx.tensor([
...> [[2, 4, 6, 7], [5, 1, 8, 8], [1, 7, 3, 1], [3, 9, 2, 4]],
...> [[2, 5, 1, 3], [4, 1, 7, 9], [6, 8, 3, 2], [7, 8, 1, 4]]
...> ]))
#Nx.Tensor<
f32[2]
[630.0, 630.0]
>
iex> t = Nx.tensor([[[1, 0], [0, 2]], [[3, 0], [0, 4]]]) |> Nx.vectorize(x: 2)
iex> Nx.LinAlg.determinant(t)
#Nx.Tensor<
vectorized[x: 2]
f32
[2.0, 12.0]
>
If the axes are named, their names are not preserved in the output:
iex> two_by_two = Nx.tensor([[1, 2], [3, 4]], names: [:x, :y])
iex> Nx.LinAlg.determinant(two_by_two)
#Nx.Tensor<
f32
-2.0
>
iex> three_by_three = Nx.tensor([[1.0, 2.0, 3.0], [1.0, -2.0, 3.0], [7.0, 8.0, 9.0]], names: [:x, :y])
iex> Nx.LinAlg.determinant(three_by_three)
#Nx.Tensor<
f32
48.0
>
Also supports complex inputs:
iex> t = Nx.tensor([[1, 0, 0], [0, Complex.new(0, 2), 0], [0, 0, 3]])
iex> Nx.LinAlg.determinant(t)
#Nx.Tensor<
c64
0.0+6.0i
>
iex> t = Nx.tensor([[0, 0, 0, 1], [0, Complex.new(0, 2), 0, 0], [0, 0, 3, 0], [1, 0, 0, 0]])
iex> Nx.LinAlg.determinant(t)
#Nx.Tensor<
c64
-0.0-6.0i
>
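The determinant is multiplicative, det(a . b) == det(a) * det(b), which can double as a quick numerical check. A sketch (standard linear algebra, not one of the doctests above):

# Sketch: multiplicativity of the determinant.
a = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])
b = Nx.tensor([[2.0, 0.0], [1.0, 2.0]])
Nx.LinAlg.determinant(Nx.dot(a, b))
# ≈ -8.0, the same (up to rounding) as
Nx.multiply(Nx.LinAlg.determinant(a), Nx.LinAlg.determinant(b))
# since det(a) = -2.0 and det(b) = 4.0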
Calculates the eigenvalues and eigenvectors of batched Hermitian 2-D matrices.
It returns {eigenvals, eigenvecs}.
Options
  * :max_iter - integer. Defaults to 1_000. Maximum number of iterations before stopping the decomposition.

  * :eps - float. Defaults to 1.0e-4. Tolerance applied during the decomposition.
Note not all options apply to all backends, as backends may have specific optimizations that render these mechanisms unnecessary.
Examples
iex> {eigenvals, eigenvecs} = Nx.LinAlg.eigh(Nx.tensor([[1, 0], [0, 2]]))
iex> Nx.round(eigenvals)
#Nx.Tensor<
f32[2]
[1.0, 2.0]
>
iex> eigenvecs
#Nx.Tensor<
f32[2][2]
[
[1.0, 0.0],
[0.0, 1.0]
]
>
iex> {eigenvals, eigenvecs} = Nx.LinAlg.eigh(Nx.tensor([[0, 1, 2], [1, 0, 2], [2, 2, 3]]))
iex> Nx.round(eigenvals)
#Nx.Tensor<
f32[3]
[5.0, -1.0, -1.0]
>
iex> eigenvecs
#Nx.Tensor<
f32[3][3]
[
[0.4075949788093567, 0.9131628274917603, 0.0],
[0.40837883949279785, -0.18228201568126678, 0.8944271802902222],
[0.8167576789855957, -0.36456403136253357, -0.4472135901451111]
]
>
iex> {eigenvals, eigenvecs} = Nx.LinAlg.eigh(Nx.tensor([[[2, 5],[5, 6]], [[1, 0], [0, 4]]]))
iex> Nx.round(eigenvals)
#Nx.Tensor<
f32[2][2]
[
[9.0, -1.0],
[1.0, 4.0]
]
>
iex> eigenvecs
#Nx.Tensor<
f32[2][2][2]
[
[
[0.5612090229988098, -0.8276740908622742],
[0.8276740908622742, 0.5612090229988098]
],
[
[1.0, 0.0],
[0.0, 1.0]
]
]
>
iex> t = Nx.tensor([[[2, 5],[5, 6]], [[1, 0], [0, 4]]]) |> Nx.vectorize(x: 2)
iex> {eigenvals, eigenvecs} = Nx.LinAlg.eigh(t)
iex> Nx.round(eigenvals)
#Nx.Tensor<
vectorized[x: 2]
f32[2]
[
[9.0, -1.0],
[1.0, 4.0]
]
>
iex> eigenvecs
#Nx.Tensor<
vectorized[x: 2]
f32[2][2]
[
[
[0.5612090229988098, -0.8276740908622742],
[0.8276740908622742, 0.5612090229988098]
],
[
[1.0, 0.0],
[0.0, 1.0]
]
]
>
Error cases
iex> Nx.LinAlg.eigh(Nx.tensor([[1, 2, 3], [4, 5, 6]]))
** (ArgumentError) tensor must be a square matrix or a batch of square matrices, got shape: {2, 3}
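The decomposition can be verified from its defining property: assuming the usual convention that eigenvectors are stored as columns, a . eigenvecs ≈ eigenvecs . diag(eigenvals). A sketch of that check (not one of the doctests above):

# Sketch: verifying a . v ≈ v . diag(λ) for the first example above.
a = Nx.tensor([[1.0, 0.0], [0.0, 2.0]])
{eigenvals, eigenvecs} = Nx.LinAlg.eigh(a)
lhs = Nx.dot(a, eigenvecs)
rhs = Nx.dot(eigenvecs, Nx.make_diagonal(eigenvals))
# lhs ≈ rhs, up to floating-point rounding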
Inverts a batch of square matrices.
For non-square matrices, use pinv/2 for pseudo-inverse calculations.
Examples
iex> a = Nx.tensor([[1, 2, 1, 1], [0, 1, 0, 1], [0, 0, 1, 1], [0 , 0, 0, 1]])
iex> a_inv = Nx.LinAlg.invert(a)
#Nx.Tensor<
f32[4][4]
[
[1.0, -2.0, -1.0, 2.0],
[0.0, 1.0, 0.0, -1.0],
[0.0, 0.0, 1.0, -1.0],
[0.0, 0.0, 0.0, 1.0]
]
>
iex> Nx.dot(a, a_inv)
#Nx.Tensor<
f32[4][4]
[
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]
]
>
iex> Nx.dot(a_inv, a)
#Nx.Tensor<
f32[4][4]
[
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]
]
>
iex> a = Nx.tensor([[[1, 2], [0, 1]], [[1, 1], [0, 1]]])
iex> a_inv = Nx.LinAlg.invert(a)
#Nx.Tensor<
f32[2][2][2]
[
[
[1.0, -2.0],
[0.0, 1.0]
],
[
[1.0, -1.0],
[0.0, 1.0]
]
]
>
iex> Nx.dot(a, [2], [0], a_inv, [1], [0])
#Nx.Tensor<
f32[2][2][2]
[
[
[1.0, 0.0],
[0.0, 1.0]
],
[
[1.0, 0.0],
[0.0, 1.0]
]
]
>
iex> Nx.dot(a_inv, [2], [0], a, [1], [0])
#Nx.Tensor<
f32[2][2][2]
[
[
[1.0, 0.0],
[0.0, 1.0]
],
[
[1.0, 0.0],
[0.0, 1.0]
]
]
>
If a singular matrix is passed, the inversion fails silently and the result is filled with NaNs:
iex> Nx.LinAlg.invert(Nx.tensor([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1]]))
#Nx.Tensor<
f32[4][4]
[
[NaN, NaN, NaN, NaN],
[NaN, NaN, NaN, NaN],
[NaN, NaN, NaN, NaN],
[NaN, NaN, NaN, NaN]
]
>
Error cases
iex> Nx.LinAlg.invert(Nx.tensor([[3, 0, 0, 0], [2, 1, 0, 0]]))
** (ArgumentError) invert/1 expects a square matrix or a batch of square matrices, got tensor with shape: {2, 4}
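When the inverse is only needed to solve a linear system, calling solve/2 directly is generally preferable to materializing the inverse, for both speed and numerical stability. A sketch (standard numerical practice, not one of the doctests above):

# Sketch: prefer solve/2 over invert/1 followed by a dot-product.
a = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])
b = Nx.tensor([5.0, 6.0])
x = Nx.LinAlg.solve(a, b)
# ≈ Nx.dot(Nx.LinAlg.invert(a), b), without forming the inverse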
Returns the least-squares solution to a linear matrix equation Ax = b.
Examples
iex> Nx.LinAlg.least_squares(Nx.tensor([[1, 2], [2, 3]]), Nx.tensor([1, 2]))
#Nx.Tensor<
f32[2]
[1.0000004768371582, -2.665601925855299e-7]
>
iex> Nx.LinAlg.least_squares(Nx.tensor([[0, 1], [1, 1], [2, 1], [3, 1]]), Nx.tensor([-1, 0.2, 0.9, 2.1]))
#Nx.Tensor<
f32[2]
[0.9966151118278503, -0.947966456413269]
>
iex> Nx.LinAlg.least_squares(Nx.tensor([[1, 2, 3], [4, 5, 6]]), Nx.tensor([1, 2]))
#Nx.Tensor<
f32[3]
[-0.05534052848815918, 0.1111316829919815, 0.27760395407676697]
>
Error cases
iex> Nx.LinAlg.least_squares(Nx.tensor([1, 2, 3]), Nx.tensor([1, 2]))
** (ArgumentError) tensor of 1st argument must have rank 2, got rank 1 with shape {3}
iex> Nx.LinAlg.least_squares(Nx.tensor([[1, 2], [2, 3]]), Nx.tensor([[1, 2], [3, 4]]))
** (ArgumentError) tensor of 2nd argument must have rank 1, got rank 2 with shape {2, 2}
iex> Nx.LinAlg.least_squares(Nx.tensor([[1, Complex.new(0, 2)], [3, Complex.new(0, -4)]]), Nx.tensor([1, 2]))
** (ArgumentError) Nx.LinAlg.least_squares/2 is not yet implemented for complex inputs
iex> Nx.LinAlg.least_squares(Nx.tensor([[1, 2], [2, 3]]), Nx.tensor([1, 2, 3]))
** (ArgumentError) the number of rows of the matrix as the 1st argument and the number of columns of the vector as the 2nd argument must be the same, got 1st argument shape {2, 2} and 2nd argument shape {3}
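For a full-column-rank matrix, the least-squares solution coincides with the normal-equations solution x = (aᵀa)⁻¹ aᵀ b. A sketch checking this against the second example above (standard linear algebra, not one of the doctests):

# Sketch: normal equations for a full-column-rank a.
a = Nx.tensor([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])
b = Nx.tensor([-1.0, 0.2, 0.9, 2.1])
at = Nx.LinAlg.adjoint(a)
at |> Nx.dot(a) |> Nx.LinAlg.invert() |> Nx.dot(at) |> Nx.dot(b)
# ≈ [0.9966..., -0.9479...], matching least_squares/2 above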
Calculates the A = PLU decomposition of batched square 2-D matrices A.
Options
  * :eps - Rounding error threshold that can be applied during the factorization.
Examples
iex> {p, l, u} = Nx.LinAlg.lu(Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
iex> p
#Nx.Tensor<
s64[3][3]
[
[0, 0, 1],
[0, 1, 0],
[1, 0, 0]
]
>
iex> l
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 0.0],
[0.5714285969734192, 1.0, 0.0],
[0.1428571492433548, 2.0, 1.0]
]
>
iex> u
#Nx.Tensor<
f32[3][3]
[
[7.0, 8.0, 9.0],
[0.0, 0.4285714328289032, 0.8571428656578064],
[0.0, 0.0, 0.0]
]
>
iex> p |> Nx.dot(l) |> Nx.dot(u)
#Nx.Tensor<
f32[3][3]
[
[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]
]
>
iex> {p, l, u} = Nx.LinAlg.lu(Nx.tensor([[1, 0, 1], [-1, 0, -1], [1, 1, 1]]))
iex> p
#Nx.Tensor<
s64[3][3]
[
[1, 0, 0],
[0, 0, 1],
[0, 1, 0]
]
>
iex> l
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 0.0],
[1.0, 1.0, 0.0],
[-1.0, 0.0, 1.0]
]
>
iex> u
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 1.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 0.0]
]
>
iex> p |> Nx.dot(l) |> Nx.dot(u)
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 1.0],
[-1.0, 0.0, -1.0],
[1.0, 1.0, 1.0]
]
>
iex> {p, l, u} = Nx.LinAlg.lu(Nx.tensor([[[9, 8, 7], [6, 5, 4], [3, 2, 1]], [[-1, 0, -1], [1, 0, 1], [1, 1, 1]]]))
iex> p
#Nx.Tensor<
s64[2][3][3]
[
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]
],
[
[1, 0, 0],
[0, 0, 1],
[0, 1, 0]
]
]
>
iex> l
#Nx.Tensor<
f32[2][3][3]
[
[
[1.0, 0.0, 0.0],
[0.6666666865348816, 1.0, 0.0],
[0.3333333432674408, 2.0, 1.0]
],
[
[1.0, 0.0, 0.0],
[-1.0, 1.0, 0.0],
[-1.0, 0.0, 1.0]
]
]
>
iex> u
#Nx.Tensor<
f32[2][3][3]
[
[
[9.0, 8.0, 7.0],
[0.0, -0.3333333432674408, -0.6666666865348816],
[0.0, 0.0, 0.0]
],
[
[-1.0, 0.0, -1.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 0.0]
]
]
>
iex> p |> Nx.dot([2], [0], l, [1], [0]) |> Nx.dot([2], [0], u, [1], [0])
#Nx.Tensor<
f32[2][3][3]
[
[
[9.0, 8.0, 7.0],
[6.0, 5.0, 4.0],
[3.0, 2.0, 1.0]
],
[
[-1.0, 0.0, -1.0],
[1.0, 0.0, 1.0],
[1.0, 1.0, 1.0]
]
]
>
iex> t = Nx.tensor([[[9, 8, 7], [6, 5, 4], [3, 2, 1]], [[-1, 0, -1], [1, 0, 1], [1, 1, 1]]]) |> Nx.vectorize(x: 2)
iex> {p, l, u} = Nx.LinAlg.lu(t)
iex> p
#Nx.Tensor<
vectorized[x: 2]
s64[3][3]
[
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]
],
[
[1, 0, 0],
[0, 0, 1],
[0, 1, 0]
]
]
>
iex> l
#Nx.Tensor<
vectorized[x: 2]
f32[3][3]
[
[
[1.0, 0.0, 0.0],
[0.6666666865348816, 1.0, 0.0],
[0.3333333432674408, 2.0, 1.0]
],
[
[1.0, 0.0, 0.0],
[-1.0, 1.0, 0.0],
[-1.0, 0.0, 1.0]
]
]
>
iex> u
#Nx.Tensor<
vectorized[x: 2]
f32[3][3]
[
[
[9.0, 8.0, 7.0],
[0.0, -0.3333333432674408, -0.6666666865348816],
[0.0, 0.0, 0.0]
],
[
[-1.0, 0.0, -1.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 0.0]
]
]
>
Error cases
iex> Nx.LinAlg.lu(Nx.tensor([[1, 1, 1, 1], [-1, 4, 4, -1], [4, -2, 2, 0]]))
** (ArgumentError) tensor must be a square matrix or a batch of square matrices, got shape: {3, 4}
Produces the tensor taken to the given power by matrix dot-product.
The input must be a tensor of batched square matrices and an integer power. The output is a tensor with the same dimensions as the input tensor.
The dot-products are unrolled inside defn.
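As an illustration, a power can be computed with a logarithmic number of dot-products via exponentiation by squaring. The sketch below shows the idea in plain Elixir for non-negative powers; it is illustrative and not necessarily this function's exact implementation (negative powers additionally invert the matrix first):

# Sketch: matrix power via exponentiation by squaring.
defmodule PowerSketch do
  # O(log n) dot-products for a square matrix `a` and integer n >= 0.
  def pow(a, 0), do: Nx.eye(Nx.axis_size(a, 0))
  def pow(a, n) when rem(n, 2) == 0, do: square(pow(a, div(n, 2)))
  def pow(a, n), do: Nx.dot(a, pow(a, n - 1))

  defp square(m), do: Nx.dot(m, m)
end

# PowerSketch.pow(Nx.tensor([[1, 2], [3, 4]]), 6)
# gives [[5743, 8370], [12555, 18298]], matching the doctest below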
Examples
iex> Nx.LinAlg.matrix_power(Nx.tensor([[1, 2], [3, 4]]), 0)
#Nx.Tensor<
s64[2][2]
[
[1, 0],
[0, 1]
]
>
iex> Nx.LinAlg.matrix_power(Nx.tensor([[1, 2], [3, 4]]), 6)
#Nx.Tensor<
s64[2][2]
[
[5743, 8370],
[12555, 18298]
]
>
iex> Nx.LinAlg.matrix_power(Nx.eye(3), 65535)
#Nx.Tensor<
s64[3][3]
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]
]
>
iex> Nx.LinAlg.matrix_power(Nx.tensor([[1, 2], [3, 4]]), -1)
#Nx.Tensor<
f32[2][2]
[
[-2.000000476837158, 1.0000003576278687],
[1.5000004768371582, -0.5000002384185791]
]
>
iex> Nx.LinAlg.matrix_power(Nx.iota({2, 2, 2}), 3)
#Nx.Tensor<
s64[2][2][2]
[
[
[6, 11],
[22, 39]
],
[
[514, 615],
[738, 883]
]
]
>
iex> Nx.LinAlg.matrix_power(Nx.iota({2, 2, 2}), -3)
#Nx.Tensor<
f32[2][2][2]
[
[
[-4.875, 1.375],
[2.75, -0.75]
],
[
[-110.37397766113281, 76.8742904663086],
[92.24915313720703, -64.2494125366211]
]
]
>
iex> Nx.LinAlg.matrix_power(Nx.tensor([[1, 2], [3, 4], [5, 6]]), 1)
** (ArgumentError) matrix_power/2 expects a square matrix or a batch of square matrices, got tensor with shape: {3, 2}
Returns the matrix rank of an input M × N matrix using the Singular Value Decomposition method.
Approximates the number of linearly independent rows by calculating the number of singular values greater than eps * max(singular values) * max(M, N).
This criterion also appears in Numerical Recipes, in the discussion of SVD solutions for linear least squares [1].
[1] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, “Numerical Recipes (3rd edition)”, Cambridge University Press, 2007, page 795.
Options
  * :eps - Rounding error threshold below which values are assumed to be 0. Defaults to 1.0e-7.
Examples
iex> Nx.LinAlg.matrix_rank(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
u64
2
>
iex> Nx.LinAlg.matrix_rank(Nx.tensor([[1, 1, 1, 1], [1, 1, 1, 1], [1, 2, 3, 4]]))
#Nx.Tensor<
u64
2
>
iex> Nx.LinAlg.matrix_rank(Nx.tensor([[1, 1, 1], [2, 2, 2], [8, 9, 10], [-2, 1, 5]]))
#Nx.Tensor<
u64
3
>
Error cases
iex> Nx.LinAlg.matrix_rank(Nx.tensor([1, 2, 3]))
** (ArgumentError) tensor must have rank 2, got rank 1 with shape {3}
iex> Nx.LinAlg.matrix_rank(Nx.tensor([[1, Complex.new(0, 2)], [3, Complex.new(0, -4)]]))
** (ArgumentError) Nx.LinAlg.matrix_rank/2 is not yet implemented for complex inputs
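The documented criterion can be reproduced directly from svd/2. A sketch assuming the default eps of 1.0e-7 (not one of the doctests above):

# Sketch: counting singular values above eps * max(s) * max(M, N).
t = Nx.tensor([[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 3.0, 4.0]])
{_u, s, _vt} = Nx.LinAlg.svd(t)
eps = 1.0e-7
threshold = eps * Nx.to_number(Nx.reduce_max(s)) * 4  # max(M, N) = 4
s |> Nx.greater(threshold) |> Nx.sum()
# should return 2, matching matrix_rank above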
Calculates the p-norm of a tensor.
For the 0-norm, the norm is the number of non-zero elements in the tensor.
Options
  * :axes - defines the axes upon which the norm will be calculated. Applies only to the 2-norm for 2-D tensors. Defaults to nil.

  * :keep_axes - whether the calculation axes should be kept with length 1. Defaults to false.

  * :ord - defines which norm will be calculated according to the table below. Defaults to nil.
| ord        | 2-D                         | 1-D                           |
| ---------- | --------------------------- | ----------------------------- |
| nil        | Frobenius norm              | 2-norm                        |
| :nuclear   | Nuclear norm                | -                             |
| :frobenius | Frobenius norm              | -                             |
| :inf       | max(sum(abs(x), axes: [1])) | max(abs(x))                   |
| :neg_inf   | min(sum(abs(x), axes: [1])) | min(abs(x))                   |
| 0          | -                           | Number of non-zero elements   |
| 1          | max(sum(abs(x), axes: [0])) | as below                      |
| -1         | min(sum(abs(x), axes: [0])) | as below                      |
| 2          | 2-norm                      | as below                      |
| -2         | smallest singular value     | as below                      |
| other      | -                           | pow(sum(pow(abs(x), p)), 1/p) |
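The last row of the table can be checked by hand. A sketch for ord: 3 on a 1-D tensor (not one of the doctests below):

# Sketch: the general p-norm formula pow(sum(pow(abs(x), p)), 1/p).
x = Nx.tensor([3.0, -4.0])
x |> Nx.abs() |> Nx.pow(3) |> Nx.sum() |> Nx.pow(1 / 3)
# ≈ 4.4979..., the same as Nx.LinAlg.norm(x, ord: 3)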
Examples
Vector norms
iex> Nx.LinAlg.norm(Nx.tensor([3, 4]))
#Nx.Tensor<
f32
5.0
>
iex> Nx.LinAlg.norm(Nx.tensor([3, 4]), ord: 1)
#Nx.Tensor<
f32
7.0
>
iex> Nx.LinAlg.norm(Nx.tensor([3, -4]), ord: :inf)
#Nx.Tensor<
f32
4.0
>
iex> Nx.LinAlg.norm(Nx.tensor([3, -4]), ord: :neg_inf)
#Nx.Tensor<
f32
3.0
>
iex> Nx.LinAlg.norm(Nx.tensor([3, -4, 0, 0]), ord: 0)
#Nx.Tensor<
f32
2.0
>
Matrix norms
iex> Nx.LinAlg.norm(Nx.tensor([[3, -1], [2, -4]]), ord: -1)
#Nx.Tensor<
f32
5.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[3, -2], [2, -4]]), ord: 1)
#Nx.Tensor<
f32
6.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[3, -2], [2, -4]]), ord: :neg_inf)
#Nx.Tensor<
f32
5.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[3, -2], [2, -4]]), ord: :inf)
#Nx.Tensor<
f32
6.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[3, 0], [0, -4]]), ord: :frobenius)
#Nx.Tensor<
f32
5.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[1, 0, 0], [0, -4, 0], [0, 0, 9]]), ord: :nuclear)
#Nx.Tensor<
f32
14.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[1, 0, 0], [0, -4, 0], [0, 0, 9]]), ord: -2)
#Nx.Tensor<
f32
1.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[3, 0], [0, -4]]))
#Nx.Tensor<
f32
5.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[3, 4], [0, -4]]), axes: [1])
#Nx.Tensor<
f32[2]
[5.0, 4.0]
>
iex> Nx.LinAlg.norm(Nx.tensor([[Complex.new(0, 3), 4], [4, 0]]), axes: [0])
#Nx.Tensor<
f32[2]
[5.0, 4.0]
>
iex> Nx.LinAlg.norm(Nx.tensor([[Complex.new(0, 3), 0], [4, 0]]), ord: :neg_inf)
#Nx.Tensor<
f32
3.0
>
iex> Nx.LinAlg.norm(Nx.tensor([[0, 0], [0, 0]]))
#Nx.Tensor<
f32
0.0
>
Error cases
iex> Nx.LinAlg.norm(Nx.tensor([3, 4]), ord: :frobenius)
** (ArgumentError) expected a 2-D tensor for ord: :frobenius, got a 1-D tensor
Calculates the Moore-Penrose inverse, or the pseudoinverse, of a matrix.
Options
  * :eps - Rounding error threshold below which values are assumed to be 0. Defaults to 1.0e-10.
Examples
Scalar case:
iex> Nx.LinAlg.pinv(2)
#Nx.Tensor<
f32
0.5
>
iex> Nx.LinAlg.pinv(0)
#Nx.Tensor<
f32
0.0
>
Vector case:
iex> Nx.LinAlg.pinv(Nx.tensor([0, 1, 2]))
#Nx.Tensor<
f32[3]
[0.0, 0.20000000298023224, 0.4000000059604645]
>
iex> Nx.LinAlg.pinv(Nx.tensor([0, 0, 0]))
#Nx.Tensor<
f32[3]
[0.0, 0.0, 0.0]
>
Matrix case:
iex> Nx.LinAlg.pinv(Nx.tensor([[1, 1], [3, 4]]))
#Nx.Tensor<
f32[2][2]
[
[3.9924824237823486, -1.0052783489227295],
[-3.0051186084747314, 1.0071179866790771]
]
>
iex> Nx.LinAlg.pinv(Nx.tensor([[0.5, 0], [0, 1], [0.5, 0]]))
#Nx.Tensor<
f32[2][3]
[
[0.9999999403953552, 0.0, 0.9999998807907104],
[0.0, 1.0, 0.0]
]
>
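The result can be checked against the defining Moore-Penrose property a . pinv(a) . a ≈ a. A sketch (not one of the doctests above):

# Sketch: verifying the Moore-Penrose property for the last example.
a = Nx.tensor([[0.5, 0.0], [0.0, 1.0], [0.5, 0.0]])
a_pinv = Nx.LinAlg.pinv(a)
a |> Nx.dot(a_pinv) |> Nx.dot(a)
# ≈ a, up to floating-point rounding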
Calculates the QR decomposition of a tensor with shape {..., M, N}.
Options
  * :mode - Can be one of :reduced or :complete. Defaults to :reduced. For the following, K = min(M, N):

    * :reduced - returns q and r with shapes {..., M, K} and {..., K, N}

    * :complete - returns q and r with shapes {..., M, M} and {..., M, N}

  * :eps - Rounding error threshold that can be applied during the triangularization. Defaults to 1.0e-10.
Examples
iex> {q, r} = Nx.LinAlg.qr(Nx.tensor([[-3, 2, 1], [0, 1, 1], [0, 0, -1]]))
iex> q
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
>
iex> r
#Nx.Tensor<
f32[3][3]
[
[-3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, -1.0]
]
>
iex> t = Nx.tensor([[3, 2, 1], [0, 1, 1], [0, 0, 1]])
iex> {q, r} = Nx.LinAlg.qr(t)
iex> q
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
>
iex> r
#Nx.Tensor<
f32[3][3]
[
[3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, 1.0]
]
>
iex> {qs, rs} = Nx.LinAlg.qr(Nx.tensor([[[-3, 2, 1], [0, 1, 1], [0, 0, -1]],[[3, 2, 1], [0, 1, 1], [0, 0, 1]]]))
iex> qs
#Nx.Tensor<
f32[2][3][3]
[
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
],
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
]
>
iex> rs
#Nx.Tensor<
f32[2][3][3]
[
[
[-3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, -1.0]
],
[
[3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, 1.0]
]
]
>
iex> t = Nx.tensor([[3, 2, 1], [0, 1, 1], [0, 0, 1], [0, 0, 1]], type: :f32)
iex> {q, r} = Nx.LinAlg.qr(t, mode: :reduced)
iex> q
#Nx.Tensor<
f32[4][3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 0.7071068286895752],
[0.0, 0.0, 0.7071067690849304]
]
>
iex> r
#Nx.Tensor<
f32[3][3]
[
[3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, 1.4142136573791504]
]
>
iex> t = Nx.tensor([[3, 2, 1], [0, 1, 1], [0, 0, 1], [0, 0, 0]], type: :f32)
iex> {q, r} = Nx.LinAlg.qr(t, mode: :complete)
iex> q
#Nx.Tensor<
f32[4][4]
[
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]
]
>
iex> r
#Nx.Tensor<
f32[4][3]
[
[3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, 1.0],
[0.0, 0.0, 0.0]
]
>
iex> t = Nx.tensor([[[-3, 2, 1], [0, 1, 1], [0, 0, -1]],[[3, 2, 1], [0, 1, 1], [0, 0, 1]]]) |> Nx.vectorize(x: 2)
iex> {qs, rs} = Nx.LinAlg.qr(t)
iex> qs
#Nx.Tensor<
vectorized[x: 2]
f32[3][3]
[
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
],
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
]
>
iex> rs
#Nx.Tensor<
vectorized[x: 2]
f32[3][3]
[
[
[-3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, -1.0]
],
[
[3.0, 2.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, 1.0]
]
]
>
Error cases
iex> Nx.LinAlg.qr(Nx.tensor([1, 2, 3, 4, 5]))
** (ArgumentError) tensor must have at least rank 2, got rank 1 with shape {5}
iex> t = Nx.tensor([[-3, 2, 1], [0, 1, 1], [0, 0, -1]])
iex> Nx.LinAlg.qr(t, mode: :error_test)
** (ArgumentError) invalid :mode received. Expected one of [:reduced, :complete], received: :error_test
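The factors can be verified from the defining properties: q has orthonormal columns and q . r reconstructs the input. A sketch using the reduced-mode example above (not one of the doctests):

# Sketch: verifying the QR factors.
t = Nx.tensor([[3.0, 2.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
{q, r} = Nx.LinAlg.qr(t)
Nx.dot(Nx.LinAlg.adjoint(q), q)  # ≈ 3x3 identity
Nx.dot(q, r)                     # ≈ t, up to floating-point rounding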
Solves the system AX = B.
A must have shape {..., n, n} and B must have shape {..., n, m} or {..., n}.
X has the same shape as B.
Examples
iex> a = Nx.tensor([[1, 3, 2, 1], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
iex> Nx.LinAlg.solve(a, Nx.tensor([-3, 0, 4, -2])) |> Nx.round()
#Nx.Tensor<
f32[4]
[1.0, -2.0, 3.0, -4.0]
>
iex> a = Nx.tensor([[1, 0, 1], [1, 1, 0], [1, 1, 1]], type: :f64)
iex> Nx.LinAlg.solve(a, Nx.tensor([0, 2, 1])) |> Nx.round()
#Nx.Tensor<
f64[3]
[1.0, 1.0, -1.0]
>
iex> a = Nx.tensor([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
iex> b = Nx.tensor([[2, 2, 3], [2, 2, 4], [2, 0, 1]])
iex> Nx.LinAlg.solve(a, b) |> Nx.round()
#Nx.Tensor<
f32[3][3]
[
[1.0, 2.0, 3.0],
[1.0, 0.0, 1.0],
[1.0, 0.0, 0.0]
]
>
iex> a = Nx.tensor([[[14, 10], [9, 9]], [[4, 11], [2, 3]]])
iex> b = Nx.tensor([[[2, 4], [3, 2]], [[1, 5], [-3, -1]]])
iex> Nx.LinAlg.solve(a, b) |> Nx.round()
#Nx.Tensor<
f32[2][2][2]
[
[
[0.0, 0.0],
[1.0, 0.0]
],
[
[-4.0, -3.0],
[1.0, 1.0]
]
]
>
iex> a = Nx.tensor([[[1, 1], [0, 1]], [[2, 0], [0, 2]]]) |> Nx.vectorize(x: 2)
iex> b = Nx.tensor([[[2, 1], [5, -1]]]) |> Nx.vectorize(x: 1, y: 2)
iex> Nx.LinAlg.solve(a, b)
#Nx.Tensor<
vectorized[x: 2][y: 2]
f32[2]
[
[
[1.0, 1.0],
[6.0, -1.0]
],
[
[1.0, 0.5],
[2.5, -0.5]
]
]
>
If the axes are named, their names are not preserved in the output:
iex> a = Nx.tensor([[1, 0, 1], [1, 1, 0], [1, 1, 1]], names: [:x, :y])
iex> Nx.LinAlg.solve(a, Nx.tensor([0, 2, 1], names: [:z])) |> Nx.round()
#Nx.Tensor<
f32[3]
[1.0, 1.0, -1.0]
>
Error cases
iex> Nx.LinAlg.solve(Nx.tensor([[1, 0], [0, 1]]), Nx.tensor([4, 2, 4, 2]))
** (ArgumentError) `b` tensor has incompatible dimensions, expected {2, 2} or {2}, got: {4}
iex> Nx.LinAlg.solve(Nx.tensor([[3, 0, 0, 0], [2, 1, 0, 0], [1, 1, 1, 1]]), Nx.tensor([4]))
** (ArgumentError) `a` tensor has incompatible dimensions, expected a square matrix or a batch of square matrices, got: {3, 4}
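A solution can always be checked by substituting it back into the system. A sketch using the first example above (not one of the doctests):

# Sketch: checking a solution of a . x = b by substitution.
a = Nx.tensor([[1.0, 3.0, 2.0, 1.0], [2.0, 1.0, 0.0, 0.0], [1.0, 0.0, 1.0, 0.0], [1.0, 1.0, 1.0, 1.0]])
b = Nx.tensor([-3.0, 0.0, 4.0, -2.0])
x = Nx.LinAlg.solve(a, b)
Nx.dot(a, x)
# ≈ b, up to floating-point rounding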
Calculates the Singular Value Decomposition of batched 2-D matrices.
It returns {u, s, vt} where the elements of s are sorted from highest to lowest.
Options
  * :max_iter - integer. Defaults to 100. Maximum number of iterations before stopping the decomposition.

  * :full_matrices? - boolean. Defaults to true. If true, u and vt have shapes {M, M} and {N, N} respectively. Otherwise, the shapes are {M, K} and {K, N}, where K = min(M, N).
Note not all options apply to all backends, as backends may have specific optimizations that render these mechanisms unnecessary.
Examples
iex> {u, s, vt} = Nx.LinAlg.svd(Nx.tensor([[1, 0, 0], [0, 1, 0], [0, 0, -1]]))
iex> u
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, -1.0]
]
>
iex> s
#Nx.Tensor<
f32[3]
[1.0, 1.0, 1.0]
>
iex> vt
#Nx.Tensor<
f32[3][3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
>
iex> {u, s, vt} = Nx.LinAlg.svd(Nx.tensor([[2, 0, 0], [0, 3, 0], [0, 0, -1], [0, 0, 0]]))
iex> u
#Nx.Tensor<
f32[4][4]
[
[0.0, 0.9999999403953552, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, -1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]
]
>
iex> s
#Nx.Tensor<
f32[3]
[3.0, 1.9999998807907104, 1.0]
>
iex> vt
#Nx.Tensor<
f32[3][3]
[
[0.0, 1.0, 0.0],
[1.0, 0.0, 0.0],
[0.0, 0.0, 1.0]
]
>
iex> {u, s, vt} = Nx.LinAlg.svd(Nx.tensor([[2, 0, 0], [0, 3, 0], [0, 0, -1], [0, 0, 0]]), full_matrices?: false)
iex> u
#Nx.Tensor<
f32[4][3]
[
[0.0, 0.9999999403953552, 0.0],
[1.0, 0.0, 0.0],
[0.0, 0.0, -1.0],
[0.0, 0.0, 0.0]
]
>
iex> s
#Nx.Tensor<
f32[3]
[3.0, 1.9999998807907104, 1.0]
>
iex> vt
#Nx.Tensor<
f32[3][3]
[
[0.0, 1.0, 0.0],
[1.0, 0.0, 0.0],
[0.0, 0.0, 1.0]
]
>
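With full_matrices?: false, the factors reconstruct the input directly as u . diag(s) . vt. A sketch of that check (not one of the doctests above):

# Sketch: reconstructing the input from its reduced SVD factors.
t = Nx.tensor([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, -1.0], [0.0, 0.0, 0.0]])
{u, s, vt} = Nx.LinAlg.svd(t, full_matrices?: false)
u |> Nx.dot(Nx.make_diagonal(s)) |> Nx.dot(vt)
# ≈ t, up to floating-point rounding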
Solves the equation a x = b for x, assuming a is a batch of triangular matrices. Can also solve x a = b for x; see the :left_side option below.
b must either be a batch of square matrices with the same dimensions as a, or a batch of 1-D tensors with as many rows as a. Batch dimensions of a and b must be the same.
Options
The following options are defined in order of precedence:

  * :transform_a - Defines op(a), depending on its value. Can be one of:

    * :none -> op(a) = a

    * :transpose -> op(a) = transpose(a)

    Defaults to :none.

  * :lower - When true, defines the a matrix as lower triangular. If false, a is upper triangular. Defaults to true.

  * :left_side - When true, solves the system as op(A).X = B. Otherwise, solves X.op(A) = B. Defaults to true.
Examples
iex> a = Nx.tensor([[3, 0, 0, 0], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([4, 2, 4, 2]))
#Nx.Tensor<
f32[4]
[1.3333333730697632, -0.6666666865348816, 2.6666667461395264, -1.3333333730697632]
>
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 1, 1]], type: :f64)
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([1, 2, 1]))
#Nx.Tensor<
f64[3]
[1.0, 1.0, -1.0]
>
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [0, 1, 1]])
iex> b = Nx.tensor([[1, 2, 3], [2, 2, 4], [2, 0, 1]])
iex> Nx.LinAlg.triangular_solve(a, b)
#Nx.Tensor<
f32[3][3]
[
[1.0, 2.0, 3.0],
[1.0, 0.0, 1.0],
[1.0, 0.0, 0.0]
]
>
iex> a = Nx.tensor([[1, 1, 1, 1], [0, 1, 0, 1], [0, 0, 1, 2], [0, 0, 0, 3]])
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([2, 4, 2, 4]), lower: false)
#Nx.Tensor<
f32[4]
[-1.3333333730697632, 2.6666667461395264, -0.6666666865348816, 1.3333333730697632]
>
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 2, 1]])
iex> b = Nx.tensor([[0, 2, 1], [1, 1, 0], [3, 3, 1]])
iex> Nx.LinAlg.triangular_solve(a, b, left_side: false)
#Nx.Tensor<
f32[3][3]
[
[-1.0, 0.0, 1.0],
[0.0, 1.0, 0.0],
[1.0, 1.0, 1.0]
]
>
iex> a = Nx.tensor([[1, 1, 1], [0, 1, 1], [0, 0, 1]], type: :f64)
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([1, 2, 1]), transform_a: :transpose, lower: false)
#Nx.Tensor<
f64[3]
[1.0, 1.0, -1.0]
>
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 1, 1]], type: :f64)
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([1, 2, 1]), transform_a: :none)
#Nx.Tensor<
f64[3]
[1.0, 1.0, -1.0]
>
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 2, 1]])
iex> b = Nx.tensor([[0, 1, 3], [2, 1, 3]])
iex> Nx.LinAlg.triangular_solve(a, b, left_side: false)
#Nx.Tensor<
f32[2][3]
[
[2.0, -5.0, 3.0],
[4.0, -5.0, 3.0]
]
>
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 2, 1]])
iex> b = Nx.tensor([[0, 2], [3, 0], [0, 0]])
iex> Nx.LinAlg.triangular_solve(a, b, left_side: true)
#Nx.Tensor<
f32[3][2]
[
[0.0, 2.0],
[3.0, -2.0],
[-6.0, 2.0]
]
>
iex> a = Nx.tensor([
...> [1, 0, 0],
...> [1, Complex.new(0, 1), 0],
...> [Complex.new(0, 1), 1, 1]
...>])
iex> b = Nx.tensor([1, -1, Complex.new(3, 3)])
iex> Nx.LinAlg.triangular_solve(a, b)
#Nx.Tensor<
c64[3]
[1.0+0.0i, 0.0+2.0i, 3.0+0.0i]
>
iex> a = Nx.tensor([[[1, 0], [2, 3]], [[4, 0], [5, 6]]])
iex> b = Nx.tensor([[2, -1], [3, 7]])
iex> Nx.LinAlg.triangular_solve(a, b)
#Nx.Tensor<
f32[2][2]
[
[2.0, -1.6666666269302368],
[0.75, 0.5416666865348816]
]
>
iex> a = Nx.tensor([[[1, 1], [0, 1]], [[2, 0], [0, 2]]]) |> Nx.vectorize(x: 2)
iex> b = Nx.tensor([[[2, 1], [5, -1]]]) |> Nx.vectorize(x: 1, y: 2)
iex> Nx.LinAlg.triangular_solve(a, b, lower: false)
#Nx.Tensor<
vectorized[x: 2][y: 2]
f32[2]
[
[
[1.0, 1.0],
[6.0, -1.0]
],
[
[1.0, 0.5],
[2.5, -0.5]
]
]
>
Error cases
iex> Nx.LinAlg.triangular_solve(Nx.tensor([[3, 0, 0, 0], [2, 1, 0, 0]]), Nx.tensor([4, 2, 4, 2]))
** (ArgumentError) triangular_solve/3 expected a square matrix or a batch of square matrices, got tensor with shape: {2, 4}
iex> Nx.LinAlg.triangular_solve(Nx.tensor([[3, 0, 0, 0], [2, 1, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1]]), Nx.tensor([4]))
** (ArgumentError) incompatible dimensions for a and b on triangular solve
iex> Nx.LinAlg.triangular_solve(Nx.tensor([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1]]), Nx.tensor([4, 2, 4, 2]))
** (ArgumentError) can't solve for singular matrix
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 1, 1]], type: :f64)
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([1, 2, 1]), transform_a: :conjugate)
** (ArgumentError) complex numbers not supported yet
iex> a = Nx.tensor([[1, 0, 0], [1, 1, 0], [1, 1, 1]], type: :f64)
iex> Nx.LinAlg.triangular_solve(a, Nx.tensor([1, 2, 1]), transform_a: :other)
** (ArgumentError) invalid value for :transform_a option, expected :none, :transpose, or :conjugate, got: :other
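A common use of triangular_solve/3 is to finish off an LU factorization: once a = p . l . u, the system a . x = b reduces to two triangular solves. A sketch combining it with lu/2 (standard forward/backward substitution, not one of the doctests above):

# Sketch: solving a . x = b via lu/2 plus two triangular solves.
a = Nx.tensor([[4.0, 3.0], [6.0, 3.0]])
b = Nx.tensor([10.0, 12.0])
{p, l, u} = Nx.LinAlg.lu(a)
# p . l . u = a and p is a permutation, so l . u . x = transpose(p) . b
pb = Nx.dot(Nx.LinAlg.adjoint(p), b)
y = Nx.LinAlg.triangular_solve(l, pb, lower: true)
x = Nx.LinAlg.triangular_solve(u, y, lower: false)
# x ≈ Nx.LinAlg.solve(a, b)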