Nx (Nx v0.9.1)
Numerical Elixir.
The Nx library is a collection of functions and data types to work with Numerical Elixir. This module defines the main entry point for building and working with said data structures. For example, to create an n-dimensional tensor, do:
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> Nx.shape(t)
{2, 2}
Nx also provides so-called numerical definitions under the Nx.Defn module. They are a subset of Elixir tailored for numerical computations. For example, defn overrides Elixir's default operators so they are tensor-aware:
defn softmax(t) do
Nx.exp(t) / Nx.sum(Nx.exp(t))
end
Code inside defn functions can also be given to custom compilers, which can compile said functions just-in-time (JIT) to run on the CPU or on the GPU.
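As a sketch of how this fits together, a defn function can be compiled on demand with Nx.Defn.jit/2. The module name MyMath below is hypothetical, and the EXLA compiler mentioned in the comment is an assumption that requires an extra dependency; without it, Nx's default evaluator is used:

```elixir
defmodule MyMath do
  import Nx.Defn

  # Tensor-aware arithmetic: / and Nx.exp operate element-wise.
  defn softmax(t) do
    Nx.exp(t) / Nx.sum(Nx.exp(t))
  end
end

# JIT-compile the function. With the optional EXLA dependency
# installed, a custom compiler could be passed instead:
#   Nx.Defn.jit(&MyMath.softmax/1, compiler: EXLA)
softmax_jit = Nx.Defn.jit(&MyMath.softmax/1)
softmax_jit.(Nx.tensor([1.0, 2.0, 3.0]))
```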
References
Here is a general outline of the main references in this library:
- For an introduction, see our Intro to Nx guide
- This module provides the main API for working with tensors
- Nx.Defn provides numerical definitions, CPU/GPU compilation, gradients, and more
- Nx.LinAlg provides functions related to linear algebra
- Nx.Constants declares many constants commonly used in numerical code
Continue reading this documentation for an overview of creating, broadcasting, and accessing/slicing Nx tensors.
Creating tensors
The main APIs for creating tensors are tensor/2, from_binary/2, iota/2, eye/2, and broadcast/3.
The tensor types can be one of:
- unsigned integers (u2, u4, u8, u16, u32, u64)
- signed integers (s2, s4, s8, s16, s32, s64)
- floats (f8, f16, f32, f64)
- brain floats (bf16)
- complex numbers (c64, c128)
The types are tracked as tuples:
iex> Nx.tensor([1, 2, 3], type: {:f, 32})
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
But a shortcut atom notation is also available:
iex> Nx.tensor([1, 2, 3], type: :f32)
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
The tensor dimensions can also be named, via the :names option available to all creation functions:
iex> Nx.iota({2, 3}, names: [:x, :y])
#Nx.Tensor<
s32[x: 2][y: 3]
[
[0, 1, 2],
[3, 4, 5]
]
>
Finally, for creating vectors and matrices, a sigil notation is available:
iex> import Nx, only: :sigils
iex> ~VEC[1 2 3]f32
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
iex> import Nx, only: :sigils
iex> ~MAT'''
...> 1 2 3
...> 4 5 6
...> '''s32
#Nx.Tensor<
s32[2][3]
[
[1, 2, 3],
[4, 5, 6]
]
>
All other APIs accept exclusively numbers or tensors, unless explicitly noted otherwise.
Broadcasting
Broadcasting allows operations between two tensors of different shapes by expanding one or both shapes so they match. Most often, though, operations happen between tensors of the same shape:
iex> a = Nx.tensor([1, 2, 3])
iex> b = Nx.tensor([10, 20, 30])
iex> Nx.add(a, b)
#Nx.Tensor<
s32[3]
[11, 22, 33]
>
Now let's imagine you want to multiply a large tensor of dimensions 1000x1000x1000 by 2. If you had to create a similarly large tensor only to perform this operation, it would be inefficient. Therefore, you can simply multiply this large tensor by the scalar 2, and Nx will propagate its dimensions at the time the operation happens, without allocating a large intermediate tensor:
iex> Nx.multiply(Nx.tensor([1, 2, 3]), 2)
#Nx.Tensor<
s32[3]
[2, 4, 6]
>
In practice, broadcasting is not restricted only to scalars; it is a general algorithm that applies to all dimensions of a tensor. When broadcasting, Nx compares the shapes of the two tensors, starting with the trailing ones, such that:
- If the dimensions have equal size, then they are compatible
- If one of the dimensions has size 1, it is "broadcast" to match the dimension of the other

In case one tensor has more dimensions than the other, the missing dimensions are considered to be of size one. Here are some examples of how broadcast would work when multiplying two tensors with the following shapes:
s32[3] * s32
#=> s32[3]
s32[255][255][3] * s32[3]
#=> s32[255][255][3]
s32[2][1] * s32[1][2]
#=> s32[2][2]
s32[5][1][4][1] * s32[3][4][5]
#=> s32[5][3][4][5]
If any of the dimensions do not match or are not 1, an error is raised.
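As a concrete sketch of the s32[2][1] * s32[1][2] case above, multiplying a column by a row broadcasts both size-1 dimensions:

```elixir
# A {2, 1} column times a {1, 2} row broadcasts to {2, 2}:
# each size-1 dimension is expanded to match the other tensor.
a = Nx.tensor([[1], [2]])   # shape {2, 1}
b = Nx.tensor([[10, 20]])   # shape {1, 2}
Nx.multiply(a, b)
#=> #Nx.Tensor<
#     s32[2][2]
#     [
#       [10, 20],
#       [20, 40]
#     ]
#   >
```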
Access syntax (slicing)
Nx tensors implement Elixir's access syntax. This allows developers to slice tensors and easily access sub-dimensions and values.
Access accepts integers:
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[0]
#Nx.Tensor<
s32[2]
[1, 2]
>
iex> t[1]
#Nx.Tensor<
s32[2]
[3, 4]
>
iex> t[1][1]
#Nx.Tensor<
s32
4
>
If a negative index is given, it accesses the element from the back:
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[-1][-1]
#Nx.Tensor<
s32
4
>
Out-of-bounds access will raise:
iex> Nx.tensor([1, 2])[2]
** (ArgumentError) index 2 is out of bounds for axis 0 in shape {2}
iex> Nx.tensor([1, 2])[-3]
** (ArgumentError) index -3 is out of bounds for axis 0 in shape {2}
The index can also be another tensor. If the tensor is a scalar, it must be a value between 0 and the dimension size, and it behaves the same as an integer. Out-of-bounds dynamic indices are always clamped to the tensor dimensions:
iex> two = Nx.tensor(2)
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[two][two]
#Nx.Tensor<
s32
4
>
For example, a minus_one dynamic index will be clamped to zero:
iex> minus_one = Nx.tensor(-1)
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[minus_one][minus_one]
#Nx.Tensor<
s32
1
>
A multi-dimensional tensor uses its values to fetch the leading dimension of the tensor, placing them within the shape of the indexing tensor. It is equivalent to take/3:
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[Nx.tensor([1, 0])]
#Nx.Tensor<
s32[2][2]
[
[3, 4],
[1, 2]
]
>
The example shows how the retrieved indices are nested within the accessed shape, and that you may also access repeated indices:
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[Nx.tensor([[1, 0, 1]])]
#Nx.Tensor<
s32[1][3][2]
[
[
[3, 4],
[1, 2],
[3, 4]
]
]
>
Access also accepts ranges. Ranges in Elixir are inclusive:
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[0..1]
#Nx.Tensor<
s32[2][2]
[
[1, 2],
[3, 4]
]
>
Ranges can receive negative positions, and they will read from the back. In such cases, the range step must be explicitly given, and the right side of the range must be equal to or greater than the left side:
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[1..-2//1]
#Nx.Tensor<
s32[2][2]
[
[3, 4],
[5, 6]
]
>
As you can see, accessing with a range does not eliminate the accessed axis. This means that, if you try to cascade ranges, you will always be filtering the highest dimension:
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[1..-1//1] # Drop the first "row"
#Nx.Tensor<
s32[3][2]
[
[3, 4],
[5, 6],
[7, 8]
]
>
iex> t[1..-1//1][1..-1//1] # Drop the first "row" twice
#Nx.Tensor<
s32[2][2]
[
[5, 6],
[7, 8]
]
>
Therefore, if you want to slice across multiple dimensions, you can wrap the ranges in a list:
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[[1..-1//1, 1..-1//1]] # Drop the first "row" and the first "column"
#Nx.Tensor<
s32[3][1]
[
[4],
[6],
[8]
]
>
You can also use .. as the full-slice range, which means you want to keep a given dimension as is:
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[[.., 1..-1//1]] # Drop only the first "column"
#Nx.Tensor<
s32[4][1]
[
[2],
[4],
[6],
[8]
]
>
You can mix both ranges and integers in the list too:
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
iex> t[[1..2, 2]]
#Nx.Tensor<
s32[2]
[6, 9]
>
If the list has fewer elements than axes, the remaining dimensions are returned in full:
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
iex> t[[1..2]]
#Nx.Tensor<
s32[2][3]
[
[4, 5, 6],
[7, 8, 9]
]
>
The access syntax also pairs nicely with named tensors. By using named tensors, you can pass only the axis you want to slice, leaving the other axes intact:
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], names: [:x, :y])
iex> t[x: 1..2]
#Nx.Tensor<
s32[x: 2][y: 3]
[
[4, 5, 6],
[7, 8, 9]
]
>
iex> t[x: 1..2, y: 0..1]
#Nx.Tensor<
s32[x: 2][y: 2]
[
[4, 5],
[7, 8]
]
>
iex> t[x: 1, y: 0..1]
#Nx.Tensor<
s32[y: 2]
[4, 5]
>
For more complex slicing rules, including strides, you can always fall back to Nx.slice/4.
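As a sketch, Nx.slice/4 takes explicit start indices and lengths, and accepts a :strides option for skipping elements within the sliced span:

```elixir
t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Take a 3x3 span starting at [0, 0], then keep every other
# element along both axes via :strides, yielding a 2x2 result.
Nx.slice(t, [0, 0], [3, 3], strides: [2, 2])
```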
Backends
The Nx library has built-in support for multiple backends. A tensor is always handled by a backend, the default backend being Nx.BinaryBackend, which means the tensor is allocated as a binary within the Erlang VM.
Most often backends are used to provide a completely different implementation of tensor operations, often accelerated to the GPU. In such cases, you want to guarantee all tensors are allocated in the new backend. This can be done by configuring your runtime:
# config/runtime.exs
import Config
config :nx, default_backend: EXLA.Backend
In your notebooks and on Mix.install/2, you might:
Mix.install(
[
{:nx, ">= 0.0.0"}
],
config: [nx: [default_backend: EXLA.Backend]]
)
Or by calling Nx.global_default_backend/1 (less preferable):
Nx.global_default_backend(EXLA.Backend)
To pass options to the backend, replace EXLA.Backend with {EXLA.Backend, client: :cuda} or similar. See the documentation for EXLA and Torchx for installation and GPU support.
To implement your own backend, check the Nx.Tensor behaviour.
Summary
Guards
Checks whether the value is a valid numerical value.
Functions: Aggregates
Returns a scalar tensor of value 1 if all of the tensor values are not zero. Otherwise the value is 0.
Returns a scalar tensor of value 1 if all element-wise values are within tolerance of b. Otherwise returns value 0.
Returns a scalar tensor of value 1 if any of the tensor values are not zero. Otherwise the value is 0.
Returns the indices of the maximum values.
Returns the indices of the minimum values.
A shortcut to covariance/3 with either opts or mean as second argument.
Computes the covariance matrix of the input tensor.
Returns the logarithm of the sum of the exponentials of tensor elements.
Returns the mean for the tensor.
Returns the median for the tensor.
Returns the mode of a tensor.
Returns the product for the tensor.
Reduces over a tensor with the given accumulator.
Returns the maximum values of the tensor.
Returns the minimum values of the tensor.
Finds the standard deviation of a tensor.
Returns the sum for the tensor.
Finds the variance of a tensor.
Returns the weighted mean for the tensor and the weights.
Functions: Backend
Copies data to the given backend.
Deallocates data in a device.
Transfers data to the given backend.
Gets the default backend for the current process.
Sets the given backend as default in the current process.
Sets the default backend globally.
Invokes the given function temporarily setting backend as the default backend.
Functions: Conversion
Deserializes a serialized representation of a tensor or a container with the given options.
Loads a .npy file into a tensor.
Loads a .npz archive into a list of tensors.
Serializes the given tensor or container of tensors to iodata.
Converts the underlying tensor to a stream of tensor batches.
Returns the underlying tensor as a binary.
Returns the underlying tensor as a flat list.
Returns a heatmap struct with the tensor data.
Converts the tensor into a list reflecting its structure.
Returns the underlying tensor as a number.
Converts a tensor (or tuples and maps of tensors) to tensor templates.
Converts a data structure into a tensor.
Functions: Creation
Short-hand function for creating tensor of type bf16.
Creates the identity matrix of size n.
Short-hand function for creating tensor of type f8.
Short-hand function for creating tensor of type f16.
Short-hand function for creating tensor of type f32.
Short-hand function for creating tensor of type f64.
Creates a one-dimensional tensor from a binary with the given type.
Creates an Nx-tensor from an already-allocated memory space.
Creates a tensor with the given shape which increments along the provided axis. You may optionally provide dimension names.
Creates a tensor of shape {n} with linearly spaced samples between start and stop.
Creates a diagonal tensor from a 1D tensor.
Puts the individual values from a 1D diagonal into the diagonal indices of the given 2D tensor.
Short-hand function for creating tensor of type s2.
Short-hand function for creating tensor of type s4.
Short-hand function for creating tensor of type s8.
Short-hand function for creating tensor of type s16.
Short-hand function for creating tensor of type s32.
Short-hand function for creating tensor of type s64.
A convenient ~MAT sigil for building matrices (two-dimensional tensors).
A convenient ~VEC sigil for building vectors (one-dimensional tensors).
Extracts the diagonal of batched matrices.
Creates a tensor template.
Builds a tensor.
Returns an Nx.Pointer that represents either a local pointer or an IPC handle for the given tensor.
An array with ones at and below the given diagonal and zeros elsewhere.
Lower triangle of a matrix.
Upper triangle of an array.
Short-hand function for creating tensor of type u2.
Short-hand function for creating tensor of type u4.
Short-hand function for creating tensor of type u8.
Short-hand function for creating tensor of type u16.
Short-hand function for creating tensor of type u32.
Short-hand function for creating tensor of type u64.
Functions: Cumulative
Returns the cumulative maximum of elements along an axis.
Returns the cumulative minimum of elements along an axis.
Returns the cumulative product of elements along an axis.
Returns the cumulative sum of elements along an axis.
Functions: Element-wise
Computes the absolute value of each element in the tensor.
Calculates the inverse cosine of each element in the tensor.
Calculates the inverse hyperbolic cosine of each element in the tensor.
Element-wise addition of two tensors.
Calculates the inverse sine of each element in the tensor.
Calculates the inverse hyperbolic sine of each element in the tensor.
Element-wise arc tangent of two tensors.
Calculates the inverse tangent of each element in the tensor.
Calculates the inverse hyperbolic tangent of each element in the tensor.
Element-wise bitwise AND of two tensors.
Applies bitwise not to each element in the tensor.
Element-wise bitwise OR of two tensors.
Element-wise bitwise XOR of two tensors.
Calculates the cube root of each element in the tensor.
Calculates the ceil of each element in the tensor.
Clips the values of the tensor on the closed interval [min, max].
Constructs a complex tensor from two equally-shaped tensors.
Calculates the complex conjugate of each element in the tensor.
Calculates the cosine of each element in the tensor.
Calculates the hyperbolic cosine of each element in the tensor.
Counts the number of leading zeros of each element in the tensor.
Element-wise division of two tensors.
Element-wise equality comparison of two tensors.
Calculates the error function of each element in the tensor.
Calculates the inverse error function of each element in the tensor.
Calculates the one minus error function of each element in the tensor.
Calculates the exponential of each element in the tensor.
Calculates the exponential minus one of each element in the tensor.
Replaces every value in tensor with value.
Calculates the floor of each element in the tensor.
Element-wise greater than comparison of two tensors.
Element-wise greater than or equal comparison of two tensors.
Returns the imaginary component of each entry in a complex tensor as a floating point tensor.
Determines if each element in tensor is Inf or -Inf.
Determines if each element in tensor is a NaN.
Element-wise left shift of two tensors.
Element-wise less than comparison of two tensors.
Element-wise less than or equal comparison of two tensors.
Calculates the natural log plus one of each element in the tensor.
Calculates the element-wise logarithm of a tensor with base 2.
Calculates the element-wise logarithm of a tensor with base 10.
Calculates the natural log of each element in the tensor.
Calculates the element-wise logarithm of a tensor with given base.
Element-wise logical and of two tensors.
Element-wise logical not a tensor.
Element-wise logical or of two tensors.
Element-wise logical xor of two tensors.
Element-wise maximum of two tensors.
Element-wise minimum of two tensors.
Element-wise multiplication of two tensors.
Negates each element in the tensor.
Element-wise not-equal comparison of two tensors.
Calculates the complex phase angle of each element in the tensor. $$\operatorname{phase}(z) = \operatorname{atan2}(b, a), \quad z = a + bi \in \mathbb{C}$$
Computes the bitwise population count of each element in the tensor.
Element-wise power of two tensors.
Element-wise integer division of two tensors.
Returns the real component of each entry in a complex tensor as a floating point tensor.
Element-wise remainder of two tensors.
Element-wise right shift of two tensors.
Calculates the round (away from zero) of each element in the tensor.
Calculates the reverse square root of each element in the tensor.
Constructs a tensor from two tensors, based on a predicate.
Calculates the sigmoid of each element in the tensor.
Computes the sign of each element in the tensor.
Calculates the sine of each element in the tensor.
Calculates the hyperbolic sine of each element in the tensor.
Calculates the square root of each element in the tensor.
Element-wise subtraction of two tensors.
Calculates the tangent of each element in the tensor.
Calculates the hyperbolic tangent of each element in the tensor.
Functions: Indexed
Builds a new tensor by taking individual values from the original tensor at the given indices.
Performs an indexed add operation on the target tensor, adding the updates into the corresponding indices positions.
Puts individual values from updates into the given tensor at the corresponding indices.
Puts the given slice into the given tensor at the given start_indices.
Slices a tensor from start_indices with lengths.
Slices a tensor along the given axis.
Split a tensor into train and test subsets.
Takes and concatenates slices along an axis.
Takes the values from a tensor given an indices tensor, along the specified axis.
Functions: N-dim
Sorts the tensor along the given axis according to the given direction and returns the corresponding indices of the original tensor in the new sorted positions.
Concatenates tensors along the given axis.
Computes an n-D convolution (where n >= 3) as used in neural networks.
Calculate the n-th discrete difference along the given axis.
Returns the dot product of two tensors.
Computes the generalized dot product between two tensors, given the contracting axes.
Computes the generalized dot product between two tensors, given the contracting and batch axes.
Calculates the 2D DFT of the given tensor.
Calculates the DFT of the given tensor.
Calculates the Inverse 2D DFT of the given tensor.
Calculates the Inverse DFT of the given tensor.
Computes the outer product of two tensors.
Reverses the tensor in the given dimensions.
Sorts the tensor along the given axis according to the given direction.
Stacks a list of tensors with the same shape along a new axis.
Returns a tuple of {values, indices} for the top k values in the last dimension of the tensor.
Functions: Shape
Returns all of the axes in a tensor.
Returns the index of the given axis in the tensor.
Returns the size of a given axis of a tensor.
Returns the bit size of the data in the tensor computed from its shape and type.
Broadcasts tensor to the given broadcast_shape.
Returns the byte size of the data in the tensor computed from its shape and type.
Checks if two tensors have the same shape, type, and compatible names.
Returns the number of elements in the tensor (including vectorized axes).
Flattens a n-dimensional tensor to a 1-dimensional tensor.
Returns all of the names in a tensor.
Adds a new axis of size 1 with optional name.
Pads a tensor with a given value.
Returns the rank of a tensor.
Pads a tensor of rank 1 or greater along the given axes through periodic reflections.
Adds (or overrides) the given names to the tensor.
Changes the shape of a tensor.
Returns the shape of the tensor as a tuple.
Returns the number of elements in the tensor.
Squeezes the given size 1 dimensions out of the tensor.
Creates a new tensor by repeating the input tensor along the given axes.
Transposes a tensor to the given axes.
Functions: Vectorization
Broadcasts vectorized axes, ensuring they end up with the same final size.
Transforms a vectorized tensor back into a regular tensor.
Reshapes input tensors so that they are all vectorized with the same vectors.
Changes the disposition of the vectorized axes of a tensor or Nx.Container.
Transforms a tensor into a vectorized tensor.
Functions: Type
Changes the type of a tensor.
Changes the type of a tensor, using a bitcast.
Returns the type of the tensor.
Functions: Window
Returns the maximum over each window of size window_dimensions in the given tensor, producing a tensor that contains the same number of elements as valid positions of the window.
Averages over each window of size window_dimensions in the given tensor, producing a tensor that contains the same number of elements as valid positions of the window.
Returns the minimum over each window of size window_dimensions in the given tensor, producing a tensor that contains the same number of elements as valid positions of the window.
Returns the product over each window of size window_dimensions in the given tensor, producing a tensor that contains the same number of elements as valid positions of the window.
Reduces over each window of size dimensions in the given tensor, producing a tensor that contains the same number of elements as valid positions of the window.
Performs a window_reduce to select the maximum index in each window of the input tensor, and scatters the source tensor to the corresponding maximum indices in the output tensor.
Performs a window_reduce to select the minimum index in each window of the input tensor, and scatters the source tensor to the corresponding minimum indices in the output tensor.
Sums over each window of size window_dimensions in the given tensor, producing a tensor that contains the same number of elements as valid positions of the window.
Guards
Functions: Aggregates
Returns a scalar tensor of value 1 if all of the tensor values are not zero. Otherwise the value is 0.
If the :axes option is given, it aggregates over the given dimensions, effectively removing them. axes: [0] implies aggregating over the highest order dimension and so forth. If the axis is negative, then it counts the axis from the back. For example, axes: [-1] will always aggregate all rows.
You may optionally set :keep_axes to true, which will retain the rank of the input tensor by setting the reduced axes to size 1.
Examples
iex> Nx.all(Nx.tensor([0, 1, 2]))
#Nx.Tensor<
u8
0
>
iex> Nx.all(Nx.tensor([[-1, 0, 1], [2, 3, 4]], names: [:x, :y]), axes: [:x])
#Nx.Tensor<
u8[y: 3]
[1, 0, 1]
>
iex> Nx.all(Nx.tensor([[-1, 0, 1], [2, 3, 4]], names: [:x, :y]), axes: [:y])
#Nx.Tensor<
u8[x: 2]
[0, 1]
>
Keeping axes
iex> Nx.all(Nx.tensor([[-1, 0, 1], [2, 3, 4]], names: [:x, :y]), axes: [:y], keep_axes: true)
#Nx.Tensor<
u8[x: 2][y: 1]
[
[0],
[1]
]
>
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[0, 1], [1, 1]]), :x)
iex> Nx.all(t, axes: [0], keep_axes: true)
#Nx.Tensor<
vectorized[x: 2]
u8[1]
[
[0],
[1]
]
>
iex> t = Nx.vectorize(Nx.tensor([1, 0]), :x)
iex> Nx.all(t)
#Nx.Tensor<
vectorized[x: 2]
u8
[1, 0]
>
Returns a scalar tensor of value 1 if all element-wise values are within tolerance of b. Otherwise returns value 0.
You may set the absolute tolerance, :atol, and relative tolerance, :rtol. Given tolerances, this function returns 1 if absolute(a - b) <= (atol + rtol * absolute(b)) is true for all elements of a and b.
Options
- :rtol - relative tolerance between numbers, as described above. Defaults to 1.0e-5
- :atol - absolute tolerance between numbers, as described above. Defaults to 1.0e-8
- :equal_nan - if false, NaN will always compare as false. Otherwise NaN will only equal NaN. Defaults to false
Examples
iex> Nx.all_close(Nx.tensor([1.0e10, 1.0e-7]), Nx.tensor([1.00001e10, 1.0e-8]))
#Nx.Tensor<
u8
0
>
iex> Nx.all_close(Nx.tensor([1.0e-8, 1.0e-8]), Nx.tensor([1.0e-8, 1.0e-9]))
#Nx.Tensor<
u8
1
>
NaN by definition isn't equal to itself, so this implementation also considers all NaNs different from each other by default:
iex> Nx.all_close(Nx.tensor(:nan), Nx.tensor(:nan))
#Nx.Tensor<
u8
0
>
iex> Nx.all_close(Nx.tensor(:nan), Nx.tensor(0))
#Nx.Tensor<
u8
0
>
We can change this behavior with the :equal_nan option:
iex> t = Nx.tensor([:nan, 1])
iex> Nx.all_close(t, t, equal_nan: true) # nan == nan -> true
#Nx.Tensor<
u8
1
>
iex> Nx.all_close(t, t, equal_nan: false) # nan == nan -> false, default behavior
#Nx.Tensor<
u8
0
>
Infinities behave as expected, being "close" to themselves but not to other numbers:
iex> Nx.all_close(Nx.tensor(:infinity), Nx.tensor(:infinity))
#Nx.Tensor<
u8
1
>
iex> Nx.all_close(Nx.tensor(:infinity), Nx.tensor(:neg_infinity))
#Nx.Tensor<
u8
0
>
iex> Nx.all_close(Nx.tensor(1.0e30), Nx.tensor(:infinity))
#Nx.Tensor<
u8
0
>
Vectorized tensors
Vectorized inputs have their vectorized axes broadcast together before calculations are performed.
iex> x = Nx.tensor([0, 1]) |> Nx.vectorize(:x)
iex> Nx.all_close(x, x)
#Nx.Tensor<
vectorized[x: 2]
u8
[1, 1]
>
iex> x = Nx.tensor([0, 1, 2]) |> Nx.vectorize(:x)
iex> y = Nx.tensor([0, 1]) |> Nx.vectorize(:y)
iex> Nx.all_close(x, y)
#Nx.Tensor<
vectorized[x: 3][y: 2]
u8
[
[1, 0],
[0, 1],
[0, 0]
]
>
Returns a scalar tensor of value 1 if any of the tensor values are not zero. Otherwise the value is 0.
If the :axes option is given, it aggregates over the given dimensions, effectively removing them. axes: [0] implies aggregating over the highest order dimension and so forth. If the axis is negative, then it counts the axis from the back. For example, axes: [-1] will always aggregate all rows.
You may optionally set :keep_axes to true, which will retain the rank of the input tensor by setting the reduced axes to size 1.
Examples
iex> Nx.any(Nx.tensor([0, 1, 2]))
#Nx.Tensor<
u8
1
>
iex> Nx.any(Nx.tensor([[0, 1, 0], [0, 1, 2]], names: [:x, :y]), axes: [:x])
#Nx.Tensor<
u8[y: 3]
[0, 1, 1]
>
iex> Nx.any(Nx.tensor([[0, 1, 0], [0, 1, 2]], names: [:x, :y]), axes: [:y])
#Nx.Tensor<
u8[x: 2]
[1, 1]
>
Keeping axes
iex> Nx.any(Nx.tensor([[0, 1, 0], [0, 1, 2]], names: [:x, :y]), axes: [:y], keep_axes: true)
#Nx.Tensor<
u8[x: 2][y: 1]
[
[1],
[1]
]
>
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[0, 1], [0, 0]]), :x)
iex> Nx.any(t, axes: [0], keep_axes: true)
#Nx.Tensor<
vectorized[x: 2]
u8[1]
[
[1],
[0]
]
>
Returns the indices of the maximum values.
Options
- :axis - the axis to aggregate on. If no axis is given, returns the index of the absolute maximum value in the tensor.
- :keep_axis - whether or not to keep the reduced axis with a size of 1. Defaults to false.
- :tie_break - how to break ties. One of :high or :low. The default behavior is to always return the lower index.
- :type - the type of the resulting tensor. Defaults to :s32.
Examples
iex> Nx.argmax(4)
#Nx.Tensor<
s32
0
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]])
iex> Nx.argmax(t)
#Nx.Tensor<
s32
10
>
If a tensor of floats is given, it still returns integers:
iex> Nx.argmax(Nx.tensor([2.0, 4.0]))
#Nx.Tensor<
s32
1
>
If the tensor includes any NaNs, returns the index of any of them (NaNs are not equal, hence tie-break does not apply):
iex> Nx.argmax(Nx.tensor([2.0, :nan, 4.0]))
#Nx.Tensor<
s32
1
>
Aggregating over an axis
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]])
iex> Nx.argmax(t, axis: 0)
#Nx.Tensor<
s32[2][3]
[
[1, 0, 0],
[1, 1, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :y)
#Nx.Tensor<
s32[x: 2][z: 3]
[
[0, 0, 0],
[0, 1, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :z)
#Nx.Tensor<
s32[x: 2][y: 2]
[
[0, 2],
[0, 1]
]
>
Tie breaks
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, tie_break: :low, axis: :y)
#Nx.Tensor<
s32[x: 2][z: 3]
[
[0, 0, 0],
[0, 1, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, tie_break: :high, axis: :y, type: :u32)
#Nx.Tensor<
u32[x: 2][z: 3]
[
[0, 0, 1],
[0, 1, 1]
]
>
Keep axis
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :y, keep_axis: true)
#Nx.Tensor<
s32[x: 2][y: 1][z: 3]
[
[
[0, 0, 0]
],
[
[0, 1, 0]
]
]
>
Vectorized tensors
iex> v = Nx.tensor([[1, 2, 3], [6, 5, 4]]) |> Nx.vectorize(:x)
iex> Nx.argmax(v)
#Nx.Tensor<
vectorized[x: 2]
s32
[2, 0]
>
iex> Nx.argmax(v, axis: 0)
#Nx.Tensor<
vectorized[x: 2]
s32
[2, 0]
>
iex> Nx.argmax(v, keep_axis: true)
#Nx.Tensor<
vectorized[x: 2]
s32[1]
[
[2],
[0]
]
>
Returns the indices of the minimum values.
Options
- :axis - the axis to aggregate on. If no axis is given, returns the index of the absolute minimum value in the tensor.
- :keep_axis - whether or not to keep the reduced axis with a size of 1. Defaults to false.
- :tie_break - how to break ties. One of :high or :low. The default behavior is to always return the lower index.
- :type - the type of the resulting tensor. Defaults to :s32.
Examples
iex> Nx.argmin(4)
#Nx.Tensor<
s32
0
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]])
iex> Nx.argmin(t)
#Nx.Tensor<
s32
4
>
If a tensor of floats is given, it still returns integers:
iex> Nx.argmin(Nx.tensor([2.0, 4.0]))
#Nx.Tensor<
s32
0
>
If the tensor includes any NaNs, returns the index of any of them (NaNs are not equal, hence tie-break does not apply):
iex> Nx.argmin(Nx.tensor([2.0, :nan, 4.0]))
#Nx.Tensor<
s32
1
>
Aggregating over an axis
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]])
iex> Nx.argmin(t, axis: 0)
#Nx.Tensor<
s32[2][3]
[
[0, 0, 0],
[0, 0, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: 1)
#Nx.Tensor<
s32[x: 2][z: 3]
[
[1, 1, 0],
[1, 0, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: :z)
#Nx.Tensor<
s32[x: 2][y: 2]
[
[1, 1],
[1, 2]
]
>
Tie breaks
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, tie_break: :low, axis: :y)
#Nx.Tensor<
s32[x: 2][z: 3]
[
[1, 1, 0],
[1, 0, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, tie_break: :high, axis: :y, type: :u32)
#Nx.Tensor<
u32[x: 2][z: 3]
[
[1, 1, 1],
[1, 0, 1]
]
>
Keep axis
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: :y, keep_axis: true)
#Nx.Tensor<
s32[x: 2][y: 1][z: 3]
[
[
[1, 1, 0]
],
[
[1, 0, 0]
]
]
>
Vectorized tensors
iex> v = Nx.tensor([[1, 2, 3], [6, 5, 4]]) |> Nx.vectorize(:x)
iex> Nx.argmin(v)
#Nx.Tensor<
vectorized[x: 2]
s32
[0, 2]
>
iex> Nx.argmin(v, axis: 0)
#Nx.Tensor<
vectorized[x: 2]
s32
[0, 2]
>
iex> Nx.argmin(v, keep_axis: true)
#Nx.Tensor<
vectorized[x: 2]
s32[1]
[
[0],
[2]
]
>
@spec covariance(tensor :: Nx.Tensor.t(), opts :: Keyword.t()) :: Nx.Tensor.t()
@spec covariance(tensor :: Nx.Tensor.t(), mean :: Nx.Tensor.t()) :: Nx.Tensor.t()
A shortcut to covariance/3 with either opts or mean as second argument.
@spec covariance(tensor :: Nx.Tensor.t(), mean :: Nx.Tensor.t(), opts :: Keyword.t()) :: Nx.Tensor.t()
Computes the covariance matrix of the input tensor.
The covariance of two random variables X and Y is calculated as $Cov(X, Y) = \frac{1}{N}\sum_{i=0}^{N-1}{(X_i - \overline{X}) * (Y_i - \overline{Y})}$.
The tensor must be at least of rank 2, with shape {n, d}
. Any additional
dimension will be treated as batch dimensions.
The column mean can be provided as the second argument and it must be
a tensor of shape {..., d}
, where the batch shape is broadcastable with
that of the input tensor. If not provided, the mean is estimated using Nx.mean/2
.
If the :ddof
(delta degrees of freedom) option is given, the divisor n - ddof
is used for the sum of the products.
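The calculation above can be sketched directly with Nx primitives. This is a conceptual illustration, not the library's implementation:

```elixir
# Conceptual sketch of covariance for an {n, d} tensor:
# center the columns, then average the products of the centered data.
t = Nx.tensor([[1, 2], [3, 4], [5, 6]])
{n, _d} = Nx.shape(t)

centered = Nx.subtract(t, Nx.mean(t, axes: [0]))

cov =
  centered
  |> Nx.transpose()
  |> Nx.dot(centered)
  |> Nx.divide(n)
```

With `ddof: 1`, the divisor would be `n - 1` instead of `n`.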
Examples
iex> Nx.covariance(Nx.tensor([[1, 2], [3, 4], [5, 6]]))
#Nx.Tensor<
f32[2][2]
[
[2.6666667461395264, 2.6666667461395264],
[2.6666667461395264, 2.6666667461395264]
]
>
iex> Nx.covariance(Nx.tensor([[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]]]))
#Nx.Tensor<
f32[2][2][2]
[
[
[2.6666667461395264, 2.6666667461395264],
[2.6666667461395264, 2.6666667461395264]
],
[
[2.6666667461395264, 2.6666667461395264],
[2.6666667461395264, 2.6666667461395264]
]
]
>
iex> Nx.covariance(Nx.tensor([[1, 2], [3, 4], [5, 6]]), ddof: 1)
#Nx.Tensor<
f32[2][2]
[
[4.0, 4.0],
[4.0, 4.0]
]
>
iex> Nx.covariance(Nx.tensor([[1, 2], [3, 4], [5, 6]]), Nx.tensor([4, 3]))
#Nx.Tensor<
f32[2][2]
[
[3.6666667461395264, 1.6666666269302368],
[1.6666666269302368, 3.6666667461395264]
]
>
Returns the logarithm of the sum of the exponentials of tensor elements.
If the :axes
option is given, it aggregates over
the given dimensions, effectively removing them.
axes: [0]
implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, axes: [-1]
will always aggregate all rows.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.
Exponentials can be scaled before summation via the :exp_scaling_factor
option. It must be of the same shape
as the input tensor or broadcastable to it.
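Conceptually, logsumexp is `log(sum(exp(t)))`, computed in a numerically stable way. A naive sketch (not the library's implementation) and its stabilized variant:

```elixir
t = Nx.tensor([1, 2, 3, 4, 5, 6])

# Naive formulation: can overflow for large entries.
naive = t |> Nx.exp() |> Nx.sum() |> Nx.log()

# Stable formulation: subtract the maximum before exponentiating.
m = Nx.reduce_max(t)
stable = t |> Nx.subtract(m) |> Nx.exp() |> Nx.sum() |> Nx.log() |> Nx.add(m)
```

Both agree for small inputs; the stable form also behaves well when entries are large enough to overflow exp.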
Examples
iex> Nx.logsumexp(Nx.tensor([1, 2, 3, 4, 5, 6]))
#Nx.Tensor<
f32
6.456193447113037
>
iex> Nx.logsumexp(Nx.tensor([1, 2, 3, 4, 5, 6]), exp_scaling_factor: 0.5)
#Nx.Tensor<
f32
5.7630462646484375
>
iex> t = Nx.tensor([1, 2, 3, 4, 5, 6])
iex> a = Nx.tensor([-1, -1, -1, 1, 1, 1])
iex> Nx.logsumexp(t, exp_scaling_factor: a)
#Nx.Tensor<
f32
6.356536865234375
>
iex> Nx.logsumexp(Nx.tensor([[1, 2], [3, 4], [5, 6]]))
#Nx.Tensor<
f32
6.456193447113037
>
Aggregating over an axis
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6]], names: [:x, :y])
iex> Nx.logsumexp(t, axes: [:x])
#Nx.Tensor<
f32[y: 2]
[5.1429314613342285, 6.1429314613342285]
>
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6]], names: [:x, :y])
iex> Nx.logsumexp(t, axes: [:y])
#Nx.Tensor<
f32[x: 3]
[2.3132617473602295, 4.31326150894165, 6.31326150894165]
>
iex> t = Nx.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]], names: [:x, :y, :z])
iex> Nx.logsumexp(t, axes: [:x, :z])
#Nx.Tensor<
f32[y: 2]
[6.331411361694336, 8.331411361694336]
>
Keeping axes
iex> t = Nx.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]], names: [:x, :y, :z])
iex> Nx.logsumexp(t, axes: [:x, :z], keep_axes: true)
#Nx.Tensor<
f32[x: 1][y: 2][z: 1]
[
[
[6.331411361694336],
[8.331411361694336]
]
]
>
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[1, 2], [3, 4], [5, 6]]), :x)
iex> Nx.logsumexp(t, axes: [0], keep_axes: true)
#Nx.Tensor<
vectorized[x: 3]
f32[1]
[
[2.3132617473602295],
[4.31326150894165],
[6.31326150894165]
]
>
Returns the mean for the tensor.
If the :axes
option is given, it aggregates over
that dimension, effectively removing it. axes: [0]
implies aggregating over the highest order dimension
and so forth. If the axis is negative, then counts
the axis from the back. For example, axes: [-1]
will
always aggregate all rows.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the averaged
axes to size 1.
Examples
iex> Nx.mean(Nx.tensor(42))
#Nx.Tensor<
f32
42.0
>
iex> Nx.mean(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
f32
2.0
>
Aggregating over an axis
iex> Nx.mean(Nx.tensor([1, 2, 3]), axes: [0])
#Nx.Tensor<
f32
2.0
>
iex> Nx.mean(Nx.tensor([1, 2, 3], type: :u8, names: [:x]), axes: [:x])
#Nx.Tensor<
f32
2.0
>
iex> t = Nx.tensor(Nx.iota({2, 2, 3}), names: [:x, :y, :z])
iex> Nx.mean(t, axes: [:x])
#Nx.Tensor<
f32[y: 2][z: 3]
[
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0]
]
>
iex> t = Nx.tensor(Nx.iota({2, 2, 3}), names: [:x, :y, :z])
iex> Nx.mean(t, axes: [:x, :z])
#Nx.Tensor<
f32[y: 2]
[4.0, 7.0]
>
iex> t = Nx.tensor(Nx.iota({2, 2, 3}), names: [:x, :y, :z])
iex> Nx.mean(t, axes: [-1])
#Nx.Tensor<
f32[x: 2][y: 2]
[
[1.0, 4.0],
[7.0, 10.0]
]
>
Keeping axes
iex> t = Nx.tensor(Nx.iota({2, 2, 3}), names: [:x, :y, :z])
iex> Nx.mean(t, axes: [-1], keep_axes: true)
#Nx.Tensor<
f32[x: 2][y: 2][z: 1]
[
[
[1.0],
[4.0]
],
[
[7.0],
[10.0]
]
]
>
Vectorized tensors
iex> t = Nx.iota({2, 5}, vectorized_axes: [x: 2])
iex> Nx.mean(t)
#Nx.Tensor<
vectorized[x: 2]
f32
[4.5, 4.5]
>
iex> Nx.mean(t, axes: [0])
#Nx.Tensor<
vectorized[x: 2]
f32[5]
[
[2.5, 3.5, 4.5, 5.5, 6.5],
[2.5, 3.5, 4.5, 5.5, 6.5]
]
>
iex> Nx.mean(t, axes: [1])
#Nx.Tensor<
vectorized[x: 2]
f32[2]
[
[2.0, 7.0],
[2.0, 7.0]
]
>
Returns the median for the tensor.
The median is the value in the middle of a data set.
If the :axis
option is given, it aggregates over
that dimension, effectively removing it. axis: 0
implies aggregating over the highest order dimension
and so forth. If the axis is negative, then the axis will
be counted from the back. For example, axis: -1
will
always aggregate over the last dimension.
You may optionally set :keep_axis
to true, which will
retain the rank of the input tensor by setting the reduced
axis to size 1.
Examples
iex> Nx.median(Nx.tensor(42))
#Nx.Tensor<
s32
42
>
iex> Nx.median(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32
2
>
iex> Nx.median(Nx.tensor([1, 2]))
#Nx.Tensor<
f32
1.5
>
iex> Nx.median(Nx.iota({2, 3, 3}))
#Nx.Tensor<
f32
8.5
>
Aggregating over an axis
iex> Nx.median(Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y]), axis: 0)
#Nx.Tensor<
f32[y: 3]
[2.5, 3.5, 4.5]
>
iex> Nx.median(Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y]), axis: :y)
#Nx.Tensor<
s32[x: 2]
[2, 5]
>
iex> t = Nx.tensor(Nx.iota({2, 2, 3}), names: [:x, :y, :z])
iex> Nx.median(t, axis: :x)
#Nx.Tensor<
f32[y: 2][z: 3]
[
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0]
]
>
iex> t = Nx.tensor([[[1, 2, 2], [3, 4, 2]], [[4, 5, 2], [7, 9, 2]]])
iex> Nx.median(t, axis: -1)
#Nx.Tensor<
s32[2][2]
[
[2, 3],
[4, 7]
]
>
Keeping axis
iex> t = Nx.tensor([[[1, 2, 2], [3, 4, 2]], [[4, 5, 2], [7, 9, 2]]])
iex> Nx.median(t, axis: -1, keep_axis: true)
#Nx.Tensor<
s32[2][2][1]
[
[
[2],
[3]
],
[
[4],
[7]
]
]
>
Vectorized tensors
For vectorized inputs, :axis
refers to the
non-vectorized shape:
iex> Nx.median(Nx.tensor([[1, 2, 3], [4, 5, 6]]) |> Nx.vectorize(:x), axis: 0)
#Nx.Tensor<
vectorized[x: 2]
s32
[2, 5]
>
Returns the mode of a tensor.
The mode is the value that appears most often.
If the :axis
option is given, it aggregates over
that dimension, effectively removing it. axis: 0
implies aggregating over the highest order dimension
and so forth. If the axis is negative, then the axis will
be counted from the back. For example, axis: -1
will
always aggregate over the last dimension.
You may optionally set :keep_axis
to true, which will
retain the rank of the input tensor by setting the reduced
axis to size 1.
Examples
iex> Nx.mode(Nx.tensor(42))
#Nx.Tensor<
s32
42
>
iex> Nx.mode(Nx.tensor([[1]]))
#Nx.Tensor<
s32
1
>
iex> Nx.mode(Nx.tensor([1, 2, 2, 3, 5]))
#Nx.Tensor<
s32
2
>
iex> Nx.mode(Nx.tensor([[1, 2, 2, 3, 5], [1, 1, 76, 8, 1]]))
#Nx.Tensor<
s32
1
>
Aggregating over an axis
iex> Nx.mode(Nx.tensor([[1, 2, 2, 3, 5], [1, 1, 76, 8, 1]]), axis: 0)
#Nx.Tensor<
s32[5]
[1, 1, 2, 3, 1]
>
iex> Nx.mode(Nx.tensor([[1, 2, 2, 3, 5], [1, 1, 76, 8, 1]]), axis: 1)
#Nx.Tensor<
s32[2]
[2, 1]
>
iex> Nx.mode(Nx.tensor([[[1]]]), axis: 1)
#Nx.Tensor<
s32[1][1]
[
[1]
]
>
Keeping axis
iex> Nx.mode(Nx.tensor([[1, 2, 2, 3, 5], [1, 1, 76, 8, 1]]), axis: 1, keep_axis: true)
#Nx.Tensor<
s32[2][1]
[
[2],
[1]
]
>
Vectorized tensors
For vectorized tensors, :axis
refers to the non-vectorized shape:
iex> t = Nx.tensor([[[1, 2, 2, 3, 5], [1, 1, 76, 8, 1]], [[1, 2, 2, 2, 5], [5, 2, 2, 2, 1]]]) |> Nx.vectorize(:x)
iex> Nx.mode(t, axis: 0)
#Nx.Tensor<
vectorized[x: 2]
s32[5]
[
[1, 1, 2, 3, 1],
[1, 2, 2, 2, 1]
]
>
iex> Nx.mode(t, axis: 1)
#Nx.Tensor<
vectorized[x: 2]
s32[2]
[
[2, 1],
[2, 2]
]
>
Returns the product for the tensor.
If the :axes
option is given, it aggregates over
the given dimensions, effectively removing them.
axes: [0]
implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, axes: [-1]
will always aggregate all rows.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the multiplied
axes to size 1.
Examples
By default the product always returns a scalar:
iex> Nx.product(Nx.tensor(42))
#Nx.Tensor<
s32
42
>
iex> Nx.product(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32
6
>
iex> Nx.product(Nx.tensor([[1.0, 2.0], [3.0, 4.0]]))
#Nx.Tensor<
f32
24.0
>
Giving a tensor with low precision casts it to a higher precision to make sure the product does not overflow:
iex> Nx.product(Nx.tensor([[10, 20], [30, 40]], type: :u8, names: [:x, :y]))
#Nx.Tensor<
u32
240000
>
iex> Nx.product(Nx.tensor([[10, 20], [30, 40]], type: :s8, names: [:x, :y]))
#Nx.Tensor<
s32
240000
>
Aggregating over an axis
iex> Nx.product(Nx.tensor([1, 2, 3]), axes: [0])
#Nx.Tensor<
s32
6
>
Same tensor over different axes combinations:
iex> t = Nx.iota({2, 2, 3}, names: [:x, :y, :z])
iex> Nx.product(t, axes: [:x])
#Nx.Tensor<
s32[y: 2][z: 3]
[
[0, 7, 16],
[27, 40, 55]
]
>
iex> Nx.product(t, axes: [:y])
#Nx.Tensor<
s32[x: 2][z: 3]
[
[0, 4, 10],
[54, 70, 88]
]
>
iex> Nx.product(t, axes: [:x, :z])
#Nx.Tensor<
s32[y: 2]
[0, 59400]
>
iex> Nx.product(t, axes: [:z])
#Nx.Tensor<
s32[x: 2][y: 2]
[
[0, 60],
[336, 990]
]
>
iex> Nx.product(t, axes: [-3])
#Nx.Tensor<
s32[y: 2][z: 3]
[
[0, 7, 16],
[27, 40, 55]
]
>
Keeping axes
iex> t = Nx.iota({2, 2, 3}, names: [:x, :y, :z])
iex> Nx.product(t, axes: [:z], keep_axes: true)
#Nx.Tensor<
s32[x: 2][y: 2][z: 1]
[
[
[0],
[60]
],
[
[336],
[990]
]
]
>
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[1, 2], [3, 4]]), :x)
iex> Nx.product(t, axes: [0], keep_axes: true)
#Nx.Tensor<
vectorized[x: 2]
s32[1]
[
[2],
[12]
]
>
Errors
iex> Nx.product(Nx.tensor([[1, 2]]), axes: [2])
** (ArgumentError) given axis (2) invalid for shape with rank 2
Reduces over a tensor with the given accumulator.
The given fun
will receive two tensors and it must
return the reduced value.
The tensor may be reduced in parallel: the reducer function can be called with its arguments in any order, the initial accumulator may be given multiple times, and the result may be non-deterministic. Therefore, the reduction function should be associative (or as close as possible to associativity, considering that floats themselves are not strictly associative).
By default, it reduces all dimensions of the tensor and
return a scalar. If the :axes
option is given, it
aggregates over multiple dimensions, effectively removing
them. axes: [0]
implies aggregating over the highest
order dimension and so forth. If the axis is negative,
then counts the axis from the back. For example,
axes: [-1]
will always aggregate all rows.
The type of the returned tensor will be computed based on
the given tensor and the initial value. For example,
a tensor of integers with a float accumulator will be
cast to float, as done by most binary operators. You can
also pass a :type
option to change this behaviour.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.
Limitations
Given this function relies on anonymous functions, it
may not be available or efficient on all Nx backends.
Therefore, you should avoid using reduce/4
whenever
possible. Instead, use functions such as sum/2
, reduce_max/2
,
all/1
, and so forth.
Inside defn
, consider using Nx.Defn.Kernel.while/4
instead.
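As a sketch of the while/4 alternative, here is a sequential loop written inside defn. The module name is illustrative:

```elixir
defmodule MyMath do
  import Nx.Defn

  # A sequential reduction expressed with while/4 rather than reduce/4.
  # The loop state is a tuple whose shapes must stay constant across iterations.
  defn factorial(x) do
    {acc, _x} =
      while {acc = 1, x}, x > 1 do
        {acc * x, x - 1}
      end

    acc
  end
end
```

`MyMath.factorial(Nx.tensor(5))` evaluates the loop entirely inside the compiled computation.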
Examples
iex> Nx.reduce(Nx.tensor(42), 0, fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32
42
>
iex> Nx.reduce(Nx.tensor([1, 2, 3]), 0, fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32
6
>
iex> Nx.reduce(Nx.tensor([[1.0, 2.0], [3.0, 4.0]]), 0, fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
f32
10.0
>
Aggregating over axes
iex> t = Nx.tensor([1, 2, 3], names: [:x])
iex> Nx.reduce(t, 0, [axes: [:x]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32
6
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:x]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32[y: 2][z: 3]
[
[8, 10, 12],
[14, 16, 18]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:y]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32[x: 2][z: 3]
[
[5, 7, 9],
[17, 19, 21]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:x, 2]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32[y: 2]
[30, 48]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [-1]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32[x: 2][y: 2]
[
[6, 15],
[24, 33]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:x], keep_axes: true], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s32[x: 1][y: 2][z: 3]
[
[
[8, 10, 12],
[14, 16, 18]
]
]
>
Vectorized tensors
Only the tensor
argument can be vectorized. The normal behavior of reduce/4
is applied to each corresponding entry. :axes
refers to the
non-vectorized shape.
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[10, 20, 30], [40, 50, 60]]]) |> Nx.vectorize(:x)
iex> Nx.reduce(t, 10, [axes: [1]], &Nx.add/2)
#Nx.Tensor<
vectorized[x: 2]
s32[2]
[
[16, 25],
[70, 160]
]
>
Returns the maximum values of the tensor.
If the :axes
option is given, it aggregates over
the given dimensions, effectively removing them.
axes: [0]
implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, axes: [-1]
will always aggregate all rows.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.
Examples
iex> Nx.reduce_max(Nx.tensor(42))
#Nx.Tensor<
s32
42
>
iex> Nx.reduce_max(Nx.tensor(42.0))
#Nx.Tensor<
f32
42.0
>
iex> Nx.reduce_max(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32
3
>
Aggregating over an axis
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_max(t, axes: [:x])
#Nx.Tensor<
s32[y: 3]
[3, 1, 4]
>
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_max(t, axes: [:y])
#Nx.Tensor<
s32[x: 2]
[4, 2]
>
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_max(t, axes: [:x, :z])
#Nx.Tensor<
s32[y: 2]
[4, 8]
>
Keeping axes
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_max(t, axes: [:x, :z], keep_axes: true)
#Nx.Tensor<
s32[x: 1][y: 2][z: 1]
[
[
[4],
[8]
]
]
>
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[1, 2], [3, 4]]), :x)
iex> Nx.reduce_max(t, axes: [0], keep_axes: true)
#Nx.Tensor<
vectorized[x: 2]
s32[1]
[
[2],
[4]
]
>
Returns the minimum values of the tensor.
If the :axes
option is given, it aggregates over
the given dimensions, effectively removing them.
axes: [0]
implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, axes: [-1]
will always aggregate all rows.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.
Examples
iex> Nx.reduce_min(Nx.tensor(42))
#Nx.Tensor<
s32
42
>
iex> Nx.reduce_min(Nx.tensor(42.0))
#Nx.Tensor<
f32
42.0
>
iex> Nx.reduce_min(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32
1
>
Aggregating over an axis
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_min(t, axes: [:x])
#Nx.Tensor<
s32[y: 3]
[2, 1, 1]
>
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_min(t, axes: [:y])
#Nx.Tensor<
s32[x: 2]
[1, 1]
>
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_min(t, axes: [:x, :z])
#Nx.Tensor<
s32[y: 2]
[1, 3]
>
Keeping axes
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_min(t, axes: [:x, :z], keep_axes: true)
#Nx.Tensor<
s32[x: 1][y: 2][z: 1]
[
[
[1],
[3]
]
]
>
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[1, 2], [3, 4]]), :x)
iex> Nx.reduce_min(t, axes: [0], keep_axes: true)
#Nx.Tensor<
vectorized[x: 2]
s32[1]
[
[1],
[3]
]
>
@spec standard_deviation(tensor :: Nx.Tensor.t(), opts :: Keyword.t()) :: Nx.Tensor.t()
Finds the standard deviation of a tensor.
The standard deviation is taken as the square root of the variance.
If the :ddof
(delta degrees of freedom) option is given, the divisor
n - ddof
is used to calculate the variance. See variance/2
.
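Since the standard deviation is defined as the square root of the variance, the two functions relate directly. A minimal sketch:

```elixir
t = Nx.tensor([[1, 2], [3, 4]])

# Equivalent to Nx.standard_deviation(t):
Nx.sqrt(Nx.variance(t))

# With delta degrees of freedom, the option is forwarded to the variance:
Nx.sqrt(Nx.variance(t, ddof: 1))
```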
Examples
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
f32
1.1180340051651
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), ddof: 1)
#Nx.Tensor<
f32
1.29099440574646
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [10, 20]]), axes: [0])
#Nx.Tensor<
f32[2]
[4.5, 9.0]
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [10, 20]]), axes: [1])
#Nx.Tensor<
f32[2]
[0.5, 5.0]
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [10, 20]]), axes: [0], ddof: 1)
#Nx.Tensor<
f32[2]
[6.363961219787598, 12.727922439575195]
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [10, 20]]), axes: [1], ddof: 1)
#Nx.Tensor<
f32[2]
[0.7071067690849304, 7.071067810058594]
>
Keeping axes
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [10, 20]]), keep_axes: true)
#Nx.Tensor<
f32[1][1]
[
[7.628073215484619]
]
>
Vectorized tensors
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [0, 4]]) |> Nx.vectorize(:x))
#Nx.Tensor<
vectorized[x: 2]
f32
[0.5, 2.0]
>
Returns the sum for the tensor.
If the :axes
option is given, it aggregates over
the given dimensions, effectively removing them.
axes: [0]
implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, axes: [-1]
will always aggregate all rows.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the summed
axes to size 1.
Examples
By default the sum always returns a scalar:
iex> Nx.sum(Nx.tensor(42))
#Nx.Tensor<
s32
42
>
iex> Nx.sum(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32
6
>
iex> Nx.sum(Nx.tensor([[1.0, 2.0], [3.0, 4.0]]))
#Nx.Tensor<
f32
10.0
>
Giving a tensor with low precision casts it to a higher precision to make sure the sum does not overflow:
iex> Nx.sum(Nx.tensor([[101, 102], [103, 104]], type: :s8))
#Nx.Tensor<
s32
410
>
iex> Nx.sum(Nx.tensor([[101, 102], [103, 104]], type: :s16))
#Nx.Tensor<
s32
410
>
Aggregating over an axis
iex> Nx.sum(Nx.tensor([1, 2, 3]), axes: [0])
#Nx.Tensor<
s32
6
>
Same tensor over different axes combinations:
iex> t = Nx.iota({2, 2, 3}, names: [:x, :y, :z])
iex> Nx.sum(t, axes: [:x])
#Nx.Tensor<
s32[y: 2][z: 3]
[
[6, 8, 10],
[12, 14, 16]
]
>
iex> Nx.sum(t, axes: [:y])
#Nx.Tensor<
s32[x: 2][z: 3]
[
[3, 5, 7],
[15, 17, 19]
]
>
iex> Nx.sum(t, axes: [:z])
#Nx.Tensor<
s32[x: 2][y: 2]
[
[3, 12],
[21, 30]
]
>
iex> Nx.sum(t, axes: [:x, :z])
#Nx.Tensor<
s32[y: 2]
[24, 42]
>
iex> Nx.sum(t, axes: [-3])
#Nx.Tensor<
s32[y: 2][z: 3]
[
[6, 8, 10],
[12, 14, 16]
]
>
Keeping axes
iex> t = Nx.tensor([[1, 2], [3, 4]], names: [:x, :y])
iex> Nx.sum(t, axes: [:x], keep_axes: true)
#Nx.Tensor<
s32[x: 1][y: 2]
[
[4, 6]
]
>
Vectorized tensors
iex> t = Nx.tensor([[[[1, 2]], [[3, 4]]], [[[5, 6]], [[7, 8]]]]) |> Nx.vectorize(:x) |> Nx.vectorize(:y)
#Nx.Tensor<
vectorized[x: 2][y: 2]
s32[1][2]
[
[
[
[1, 2]
],
[
[3, 4]
]
],
[
[
[5, 6]
],
[
[7, 8]
]
]
]
>
iex> Nx.sum(t)
#Nx.Tensor<
vectorized[x: 2][y: 2]
s32
[
[3, 7],
[11, 15]
]
>
iex> Nx.sum(t, axes: [0])
#Nx.Tensor<
vectorized[x: 2][y: 2]
s32[2]
[
[
[1, 2],
[3, 4]
],
[
[5, 6],
[7, 8]
]
]
>
Errors
iex> Nx.sum(Nx.tensor([[1, 2]]), axes: [2])
** (ArgumentError) given axis (2) invalid for shape with rank 2
@spec variance(tensor :: Nx.Tensor.t(), opts :: Keyword.t()) :: Nx.Tensor.t()
Finds the variance of a tensor.
The variance is the average of the squared deviations from the mean.
The mean is typically calculated as sum(tensor) / n
, where n
is the total number
of elements. If, however, :ddof
(delta degrees of freedom) is specified, the
divisor n - ddof
is used instead.
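The definition can be sketched with basic Nx operations. This is for intuition only, not the library's implementation:

```elixir
t = Nx.tensor([[1, 2], [3, 4]])
m = Nx.mean(t)

# Mean of squared deviations from the mean (ddof: 0, the default):
sq_dev = t |> Nx.subtract(m) |> Nx.pow(2)
Nx.divide(Nx.sum(sq_dev), Nx.size(t))

# With ddof: 1, the divisor becomes n - 1:
Nx.divide(Nx.sum(sq_dev), Nx.size(t) - 1)
```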
Examples
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
f32
1.25
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), ddof: 1)
#Nx.Tensor<
f32
1.6666666269302368
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [0])
#Nx.Tensor<
f32[2]
[1.0, 1.0]
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [1])
#Nx.Tensor<
f32[2]
[0.25, 0.25]
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [0], ddof: 1)
#Nx.Tensor<
f32[2]
[2.0, 2.0]
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [1], ddof: 1)
#Nx.Tensor<
f32[2]
[0.5, 0.5]
>
Keeping axes
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [1], keep_axes: true)
#Nx.Tensor<
f32[2][1]
[
[0.25],
[0.25]
]
>
Vectorized tensors
iex> Nx.variance(Nx.tensor([[1, 2], [0, 4]]) |> Nx.vectorize(:x))
#Nx.Tensor<
vectorized[x: 2]
f32
[0.25, 4.0]
>
Returns the weighted mean for the tensor and the weights.
If the :axes
option is given, it aggregates over
those dimensions, effectively removing them. axes: [0]
implies aggregating over the highest order dimension
and so forth. If the axes are negative, then the axes will
be counted from the back. For example, axes: [-1]
will
always aggregate over the last dimension.
You may optionally set :keep_axes
to true, which will
retain the rank of the input tensor by setting the averaged
axes to size 1.
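Conceptually, the weighted mean is the sum of element-wise products divided by the sum of the weights. A minimal sketch (not the library's implementation, which also handles axes and broadcasting):

```elixir
t = Nx.tensor([1, 2, 3])
w = Nx.tensor([3, 2, 1])

# Equivalent to Nx.weighted_mean(t, w) for the full-tensor case:
Nx.divide(Nx.sum(Nx.multiply(t, w)), Nx.sum(w))
```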
Examples
iex> Nx.weighted_mean(Nx.tensor(42), Nx.tensor(2))
#Nx.Tensor<
f32
42.0
>
iex> Nx.weighted_mean(Nx.tensor([1, 2, 3]), Nx.tensor([3, 2, 1]))
#Nx.Tensor<
f32
1.6666666269302368
>
Aggregating over axes
iex> Nx.weighted_mean(Nx.tensor([1, 2, 3], names: [:x]), Nx.tensor([4, 5, 6]), axes: [0])
#Nx.Tensor<
f32
2.133333444595337
>
iex> Nx.weighted_mean(Nx.tensor([1, 2, 3], type: :u8, names: [:x]), Nx.tensor([1, 3, 5]), axes: [:x])
#Nx.Tensor<
f32
2.444444417953491
>
iex> t = Nx.iota({3, 4})
iex> weights = Nx.tensor([1, 2, 3, 4])
iex> Nx.weighted_mean(t, weights, axes: [1])
#Nx.Tensor<
f32[3]
[2.0, 6.0, 10.0]
>
iex> t = Nx.iota({2, 4, 4, 1})
iex> weights = Nx.broadcast(2, {4, 4})
iex> Nx.weighted_mean(t, weights, axes: [1, 2])
#Nx.Tensor<
f32[2][1]
[
[7.5],
[23.5]
]
>
Keeping axes
iex> t = Nx.tensor(Nx.iota({2, 2, 3}), names: [:x, :y, :z])
iex> weights = Nx.tensor([[[0, 1, 2], [1, 1, 0]], [[-1, 1, -1], [1, 1, -1]]])
iex> Nx.weighted_mean(t, weights, axes: [-1], keep_axes: true)
#Nx.Tensor<
f32[x: 2][y: 2][z: 1]
[
[
[1.6666666269302368],
[3.5]
],
[
[7.0],
[8.0]
]
]
>
Vectorized tensors
iex> t = Nx.tensor([[1, 2, 3], [1, 1, 1]]) |> Nx.vectorize(:x)
#Nx.Tensor<
vectorized[x: 2]
s32[3]
[
[1, 2, 3],
[1, 1, 1]
]
>
iex> w = Nx.tensor([[1, 1, 1], [0, 0, 1]]) |> Nx.vectorize(:y)
#Nx.Tensor<
vectorized[y: 2]
s32[3]
[
[1, 1, 1],
[0, 0, 1]
]
>
iex> Nx.weighted_mean(t, w)
#Nx.Tensor<
vectorized[x: 2][y: 2]
f32
[
[2.0, 3.0],
[1.0, 1.0]
]
>
Functions: Backend
Copies data to the given backend.
If a backend is not given, Nx.Tensor
is used, which means
the given tensor backend will pick the most appropriate
backend to copy the data to.
Note this function keeps the data in the original backend.
Therefore, use this function with care, as it may duplicate
large amounts of data across backends. Generally speaking,
you may want to use backend_transfer/2
, unless you explicitly
want to copy the data.
Note:
- Nx.default_backend/1 does not affect the behaviour of this function.
- This function cannot be used in defn.
Examples
iex> Nx.backend_copy(Nx.tensor([[1, 2, 3], [4, 5, 6]]))
#Nx.Tensor<
s32[2][3]
[
[1, 2, 3],
[4, 5, 6]
]
>
Deallocates data in a device.
It returns either :ok
or :already_deallocated
.
Note: This function cannot be used in defn
.
backend_transfer(tensor_or_container, backend \\ Nx.BinaryBackend)
Transfers data to the given backend.
This operation can be seen as an equivalent to backend_copy/3
followed by a backend_deallocate/1
on the initial tensor:
new_tensor = Nx.backend_copy(old_tensor, new_backend)
Nx.backend_deallocate(old_tensor)
If a backend is not given, Nx.Tensor
is used, which means
the given tensor backend will pick the most appropriate
backend to transfer to.
For Elixir's builtin tensor, transferring to another backend
will call new_backend.from_binary(tensor, binary, opts)
.
Transferring from a mutable backend, such as GPU memory,
implies the data is copied from the GPU to the Erlang VM
and then deallocated from the device.
Note:
- Nx.default_backend/1 does not affect the behaviour of this function.
- This function cannot be used in defn.
Examples
Transfer a tensor to an EXLA device backend, stored in the GPU:
device_tensor = Nx.backend_transfer(tensor, {EXLA.Backend, client: :cuda})
Transfer the device tensor back to an Elixir tensor:
tensor = Nx.backend_transfer(device_tensor)
Gets the default backend for the current process.
Note: This function cannot be used in defn
.
Sets the given backend
as default in the current process.
The default backend is stored only in the process dictionary.
This means if you start a separate process, such as Task
,
the default backend must be set on the new process too.
For this reason, this function is mostly used in scripts and tests. In your applications, prefer to set the backend in your config files:
config :nx, :default_backend, {EXLA.Backend, device: :cuda}
In your notebooks and on Mix.install/2
, you might:
Mix.install(
[
{:nx, ">= 0.0.0"}
],
config: [nx: [default_backend: {EXLA.Backend, device: :cuda}]]
)
Or use Nx.global_default_backend/1
as it changes the
default backend on all processes.
The function returns the value that was previously set as backend.
Note: This function cannot be used in defn
.
Examples
Nx.default_backend({EXLA.Backend, device: :cuda})
#=> {Nx.BinaryBackend, []}
Sets the default backend globally.
Avoid calling this function at runtime. It is mostly useful in scripts or notebooks to set a default.
If you need to configure a global default backend in your
applications, it is generally preferred to do so in your
config/*.exs
files:
config :nx, :default_backend, {EXLA.Backend, []}
In your notebooks and on Mix.install/2
, you might:
Mix.install(
[
{:nx, ">= 0.0.0"}
],
config: [nx: [default_backend: {EXLA.Backend, device: :cuda}]]
)
The function returns the value that was previously set as global backend.
Invokes the given function temporarily setting backend
as the
default backend.
Functions: Conversion
Deserializes a serialized representation of a tensor or a container with the given options.
It is the opposite of Nx.serialize/2
.
Note: This function cannot be used in defn
.
Examples
iex> a = Nx.tensor([1, 2, 3])
iex> serialized_a = Nx.serialize(a)
iex> Nx.deserialize(serialized_a)
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
iex> container = {Nx.vectorize(Nx.tensor([1, 2, 3]), :x), %{b: Nx.tensor([4, 5, 6])}}
iex> serialized_container = Nx.serialize(container)
iex> {a, %{b: b}} = Nx.deserialize(serialized_container)
iex> a
#Nx.Tensor<
vectorized[x: 3]
s32
[1, 2, 3]
>
iex> b
#Nx.Tensor<
s32[3]
[4, 5, 6]
>
@spec load_numpy!(data :: binary()) :: Nx.Tensor.t()
Loads a .npy
file into a tensor.
An .npy
file stores a single array created from Python's
NumPy library. This function can be useful for loading data
originally created or intended to be loaded from NumPy into
Elixir.
This function will raise if the archive or any of its contents are invalid.
Note: This function cannot be used in defn
.
Examples
"array.npy"
|> File.read!()
|> Nx.load_numpy!()
#=>
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
@spec load_numpy_archive!(data :: binary()) :: [{name :: binary(), Nx.Tensor.t()}]
Loads a .npz
archive into a list of tensors.
An .npz
file is a zipped, possibly compressed
archive containing multiple .npy
files.
It returns a list of two-element tuples, where
the tensor name comes first and the tensor itself
comes second. The list is returned in the same order
as in the archive. Use Map.new/1
afterwards if
you want to access the tensors by name.
It will raise if the archive or any of its contents are invalid.
Note: This function cannot be used in defn
.
Examples
"archive.npz"
|> File.read!()
|> Nx.load_numpy_archive!()
#=>
[
{"foo",
#Nx.Tensor<
s32[3]
[1, 2, 3]
>},
{"bar",
#Nx.Tensor<
f64[5]
[-1.0, -0.5, 0.0, 0.5, 1.0]
>}
]
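As suggested above, piping through Map.new/1 gives name-based access to the loaded tensors. The filename is illustrative:

```elixir
tensors =
  "archive.npz"
  |> File.read!()
  |> Nx.load_numpy_archive!()
  |> Map.new()

# Look up a tensor by its name in the archive:
tensors["foo"]
```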
Serializes the given tensor or container of tensors to iodata.
You may pass any tensor or Nx.Container
to serialization.
Unlike other functions in this module, Nx.LazyContainer
values cannot be serialized: they must be explicitly converted
to tensors first (lazy containers do not preserve
their shape).
opts
controls the serialization options. For example, you can choose
to compress the given tensor or container of tensors by passing a
compression level:
Nx.serialize(tensor, compressed: 9)
Compression level corresponds to compression options in :erlang.term_to_iovec/2
.
iodata
is a list of binaries that can be written to any io device,
such as a file or a socket. You can ensure the result is a binary by
calling IO.iodata_to_binary/1
.
Note: This function cannot be used in defn
.
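Since the result is iodata, it can be written to a file and read back. The filename is illustrative:

```elixir
t = Nx.tensor([1, 2, 3])

# Persist the serialized tensor (File.write!/2 accepts iodata directly):
File.write!("tensor.nx", Nx.serialize(t))

# Restore it later:
"tensor.nx" |> File.read!() |> Nx.deserialize()
```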
Examples
iex> a = Nx.tensor([1, 2, 3])
iex> serialized_a = Nx.serialize(a)
iex> Nx.deserialize(serialized_a)
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
iex> container = {Nx.tensor([1, 2, 3]), %{b: Nx.tensor([4, 5, 6])}}
iex> serialized_container = Nx.serialize(container)
iex> {a, %{b: b}} = Nx.deserialize(serialized_container)
iex> a
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
iex> b
#Nx.Tensor<
s32[3]
[4, 5, 6]
>
Converts the underlying tensor to a stream of tensor batches.
The first dimension (axis 0) is divided by batch_size
.
In case the dimension cannot be evenly divided by
batch_size
, you may specify what to do with leftover
data using :leftover
. :leftover
must be one of :repeat
or :discard
. :repeat
repeats the first n
values to
make the last batch match the desired batch size. :discard
discards excess elements.
Note: This function cannot be used in defn
.
Examples
In the examples below we immediately pipe to Enum.to_list/1
for convenience, but in practice you want to lazily traverse
the batches to avoid allocating multiple tensors at once in
certain backends:
iex> [first, second] = Nx.to_batched(Nx.iota({2, 2, 2}), 1) |> Enum.to_list()
iex> first
#Nx.Tensor<
s32[1][2][2]
[
[
[0, 1],
[2, 3]
]
]
>
iex> second
#Nx.Tensor<
s32[1][2][2]
[
[
[4, 5],
[6, 7]
]
]
>
If the batch size would result in uneven batches, you can repeat or discard excess data. By default, we repeat:
iex> [first, second, third] = Nx.to_batched(Nx.iota({5, 2}, names: [:x, :y]), 2) |> Enum.to_list()
iex> first
#Nx.Tensor<
s32[x: 2][y: 2]
[
[0, 1],
[2, 3]
]
>
iex> second
#Nx.Tensor<
s32[x: 2][y: 2]
[
[4, 5],
[6, 7]
]
>
iex> third
#Nx.Tensor<
s32[x: 2][y: 2]
[
[8, 9],
[0, 1]
]
>
But you can also discard:
iex> [first, second] = Nx.to_batched(Nx.iota({5, 2}, names: [:x, :y]), 2, leftover: :discard) |> Enum.to_list()
iex> first
#Nx.Tensor<
s32[x: 2][y: 2]
[
[0, 1],
[2, 3]
]
>
iex> second
#Nx.Tensor<
s32[x: 2][y: 2]
[
[4, 5],
[6, 7]
]
>
Vectorized tensors
Similarly to to_list/1
and to_binary/1
, to_batched/2
will
ignore vectorization to perform calculations. Because the output
still contains tensors, however, they will still be vectorized.
iex> t = Nx.iota({2, 2, 2}) |> Nx.vectorize(x: 2)
iex> [first, second] = Nx.to_batched(t, 1) |> Enum.to_list()
iex> first
#Nx.Tensor<
vectorized[x: 1]
s32[2][2]
[
[
[0, 1],
[2, 3]
]
]
>
iex> second
#Nx.Tensor<
vectorized[x: 1]
s32[2][2]
[
[
[4, 5],
[6, 7]
]
]
>
iex> t = Nx.iota({2, 2, 2}) |> Nx.vectorize(x: 2, y: 2)
iex> [first, second] = Nx.to_batched(t, 1) |> Enum.to_list()
iex> first
#Nx.Tensor<
vectorized[x: 1][y: 2]
s32[2]
[
[
[0, 1],
[2, 3]
]
]
>
iex> second
#Nx.Tensor<
vectorized[x: 1][y: 2]
s32[2]
[
[
[4, 5],
[6, 7]
]
]
>
Same rules about uneven batches still apply:
iex> t = Nx.iota({5, 2}, names: [:x, :y]) |> Nx.vectorize(:x)
iex> [first, second, third] = Nx.to_batched(t, 2) |> Enum.to_list()
iex> first
#Nx.Tensor<
vectorized[x: 2]
s32[y: 2]
[
[0, 1],
[2, 3]
]
>
iex> second
#Nx.Tensor<
vectorized[x: 2]
s32[y: 2]
[
[4, 5],
[6, 7]
]
>
iex> third
#Nx.Tensor<
vectorized[x: 2]
s32[y: 2]
[
[8, 9],
[0, 1]
]
>
Because we're dealing with vectorized tensors, a vectorized scalar tensor can also be batched.
iex> t = Nx.tensor([1, 2, 3]) |> Nx.vectorize(:x)
iex> [first, second] = t |> Nx.to_batched(2) |> Enum.to_list()
iex> first
#Nx.Tensor<
vectorized[x: 2]
s32
[1, 2]
>
iex> second
#Nx.Tensor<
vectorized[x: 2]
s32
[3, 1]
>
Returns the underlying tensor as a binary.
It returns the in-memory binary representation of the tensor in a row-major fashion. The binary is in the system endianness, which has to be taken into account if the binary is meant to be serialized to other systems.
Note: This function cannot be used in defn.
Potentially expensive operation
Converting a tensor to a binary can potentially be a very expensive operation, as it may copy a GPU tensor fully to the machine memory.
Binaries vs bitstrings
If a tensor of type u2/u4/s2/s4 is given to this function, this function may not return a binary (where the number of bits is divisible by 8) but rather a bitstring (where the number of bits may not be divisible by 8).
Options
:limit - limit the number of entries represented in the binary
Examples
iex> Nx.to_binary(1)
<<1::32-native>>
iex> Nx.to_binary(Nx.tensor([1.0, 2.0, 3.0]))
<<1.0::float-32-native, 2.0::float-32-native, 3.0::float-32-native>>
iex> Nx.to_binary(Nx.tensor([1.0, 2.0, 3.0]), limit: 2)
<<1.0::float-32-native, 2.0::float-32-native>>
Vectorized tensors
to_binary/2
disregards the vectorized axes before calculating the data to be returned:
iex> Nx.to_binary(Nx.vectorize(Nx.tensor([[1, 2], [3, 4]]), :x))
<<1::32-native, 2::32-native, 3::32-native, 4::32-native>>
iex> Nx.to_binary(Nx.vectorize(Nx.tensor([1, 2, 3]), :x), limit: 2)
<<1::32-native, 2::32-native>>
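Since to_binary/2 emits bytes in the system endianness, data meant to cross machine boundaries should be re-encoded explicitly. A plain-Elixir sketch of the difference (no Nx involved):

```elixir
# A 32-bit integer has different byte layouts per endianness:
little = <<1::32-little>>
big = <<1::32-big>>

# <<1::32-native>> matches one of the two, depending on the host CPU:
native_is_one_of_them = <<1::32-native>> in [little, big]
```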
Returns the underlying tensor as a flat list.
Negative infinity (-Inf), infinity (Inf), and "not a number" (NaN)
will be represented by the atoms :neg_infinity, :infinity, and :nan
respectively.
Note: This function cannot be used in defn.
Examples
iex> Nx.to_flat_list(1)
[1]
iex> Nx.to_flat_list(Nx.tensor([1.0, 2.0, 3.0]))
[1.0, 2.0, 3.0]
iex> Nx.to_flat_list(Nx.tensor([1.0, 2.0, 3.0]), limit: 2)
[1.0, 2.0]
Non-finite numbers are returned as atoms:
iex> t = Nx.tensor([:neg_infinity, :nan, :infinity])
iex> Nx.to_flat_list(t)
[:neg_infinity, :nan, :infinity]
Vectorized tensors
to_flat_list/2 disregards the vectorized axes before calculating the data to be returned. Like to_binary/1, :limit refers to the flattened devectorized data.
iex> t = Nx.vectorize(Nx.tensor([[1], [2], [3], [4]]), :x)
iex> Nx.to_flat_list(t)
[1, 2, 3, 4]
iex> Nx.to_flat_list(t, limit: 2)
[1, 2]
Returns a heatmap struct with the tensor data.
On terminals, coloring is done via ANSI colors. If ANSI is not enabled, the tensor is normalized to show numbers between 0 and 9.
Terminal coloring
Coloring is enabled by default on most Unix terminals. It is also available on Windows consoles from Windows 10, although it must be explicitly enabled for the current user in the registry by running the following command:
reg add HKCU\Console /v VirtualTerminalLevel /t REG_DWORD /d 1
After running the command above, you must restart your current console.
Options
:ansi_enabled - forces ANSI to be enabled or disabled. Defaults to IO.ANSI.enabled?/0
:ansi_whitespace - which whitespace character to use when printing. By default it uses "\u3000", a full-width whitespace that often prints more precise shapes
Converts the tensor into a list reflecting its structure.
Negative infinity (-Inf), infinity (Inf), and "not a number" (NaN)
will be represented by the atoms :neg_infinity, :infinity, and :nan
respectively.
If a scalar tensor is given, it raises; use to_number/1 instead.
Note: This function cannot be used in defn.
Examples
iex> Nx.iota({2, 3}) |> Nx.to_list()
[
[0, 1, 2],
[3, 4, 5]
]
iex> Nx.tensor(123) |> Nx.to_list()
** (ArgumentError) cannot convert a scalar tensor to a list, got: #Nx.Tensor<
s32
123
>
Vectorized tensors
to_list/1 disregards the vectorized axes before calculating the data to be returned. The special case below shows that a vectorized tensor with an inner scalar shape is still converted to a list accordingly:
iex> %{shape: {}} = t = Nx.vectorize(Nx.tensor([1, 2, 3]), :x)
iex> Nx.to_list(t) # recall that normally, shape == {} would raise!
[1, 2, 3]
Returns the underlying tensor as a number.
Negative infinity (-Inf), infinity (Inf), and "not a number" (NaN)
will be represented by the atoms :neg_infinity, :infinity, and :nan
respectively.
If the tensor has a dimension or is vectorized, it raises.
Note: This function cannot be used in defn.
Examples
iex> Nx.to_number(1)
1
iex> Nx.to_number(Nx.tensor([1.0, 2.0, 3.0]))
** (ArgumentError) cannot convert tensor of shape {3} to number
iex> Nx.to_number(Nx.vectorize(Nx.tensor([1]), :x))
** (ArgumentError) cannot convert vectorized tensor with axes [x: 1] and shape {} to number
Converts a tensor (or tuples and maps of tensors) to tensor templates.
Templates are useful when you need to pass types and shapes to operations and the data is not yet available.
For convenience, this function accepts tensors and any container (such as maps and tuples as defined by the Nx.LazyContainer protocol) and recursively converts all tensors to templates.
Examples
iex> Nx.iota({2, 3}) |> Nx.to_template()
#Nx.Tensor<
s32[2][3]
Nx.TemplateBackend
>
iex> {int, float} = Nx.to_template({1, 2.0})
iex> int
#Nx.Tensor<
s32
Nx.TemplateBackend
>
iex> float
#Nx.Tensor<
f32
Nx.TemplateBackend
>
Note, however, that it is impossible to perform any operation on a tensor template:
iex> t = Nx.iota({2, 3}) |> Nx.to_template()
iex> Nx.abs(t)
** (RuntimeError) cannot perform operations on a Nx.TemplateBackend tensor
To build a template from scratch, use template/3
.
Converts a data structure into a tensor.
This function only converts types which are automatically cast to tensors throughout the Nx API: numbers, complex numbers, tensors themselves, and implementations of Nx.LazyContainer (and Nx.Container).
If your goal is to create tensors from lists, see tensor/2.
If you want to create a tensor from a binary, see from_binary/3.
If you want to convert a data structure with several tensors at once into a single one, see stack/2 or concatenate/2 instead.
Functions: Creation
Short-hand function for creating a tensor of type bf16.
This is just an alias for Nx.tensor(tensor, type: :bf16).
Creates the identity matrix of size n.
Options
:type - the type of the tensor
:names - the names of the tensor dimensions
:backend - the backend to allocate the tensor on. It is either an atom or a tuple in the shape {backend, options}. This option is ignored inside defn
:vectorized_axes - a keyword list of axis_name: axis_size. If given, the resulting tensor will be vectorized accordingly. Vectorization is not supported via tensor inputs.
Examples
iex> Nx.eye(2)
#Nx.Tensor<
s32[2][2]
[
[1, 0],
[0, 1]
]
>
iex> Nx.eye(3, type: :f32, names: [:height, :width])
#Nx.Tensor<
f32[height: 3][width: 3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
>
The first argument can also be a shape of a matrix:
iex> Nx.eye({1, 2})
#Nx.Tensor<
s32[1][2]
[
[1, 0]
]
>
The shape can also represent a tensor batch. In this case, the last two axes will represent the same identity matrix.
iex> Nx.eye({2, 4, 3})
#Nx.Tensor<
s32[2][4][3]
[
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[0, 0, 0]
],
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[0, 0, 0]
]
]
>
Vectorized tensors
If given, vectorized axes are added as leading dimensions to the tensor, effectively broadcasting the base shape along them.
iex> Nx.eye({3}, vectorized_axes: [x: 1, y: 2])
#Nx.Tensor<
vectorized[x: 1][y: 2]
s32[3]
[
[
[1, 0, 0],
[1, 0, 0]
]
]
>
iex> Nx.eye({2, 3}, vectorized_axes: [x: 2])
#Nx.Tensor<
vectorized[x: 2]
s32[2][3]
[
[
[1, 0, 0],
[0, 1, 0]
],
[
[1, 0, 0],
[0, 1, 0]
]
]
>
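As a plain-Elixir sketch (not the Nx implementation), the square case of eye/2 amounts to a nested comprehension placing 1 on the main diagonal:

```elixir
# Build an n-by-n identity matrix as a list of lists.
eye = fn n ->
  for i <- 0..(n - 1) do
    for j <- 0..(n - 1), do: if(i == j, do: 1, else: 0)
  end
end
```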
Short-hand function for creating a tensor of type f8.
This is just an alias for Nx.tensor(tensor, type: :f8).
Short-hand function for creating a tensor of type f16.
This is just an alias for Nx.tensor(tensor, type: :f16).
Short-hand function for creating a tensor of type f32.
This is just an alias for Nx.tensor(tensor, type: :f32).
Short-hand function for creating a tensor of type f64.
This is just an alias for Nx.tensor(tensor, type: :f64).
Creates a one-dimensional tensor from a binary with the given type.
If the binary size does not match its type, an error is raised.
Examples
iex> Nx.from_binary(<<1, 2, 3, 4>>, :s8)
#Nx.Tensor<
s8[4]
[1, 2, 3, 4]
>
The atom notation for types is also supported:
iex> Nx.from_binary(<<12.3::float-64-native>>, :f64)
#Nx.Tensor<
f64[1]
[12.3]
>
An error is raised for incompatible sizes:
iex> Nx.from_binary(<<1, 2, 3, 4>>, :f64)
** (ArgumentError) binary does not match the given size
Options
:backend - the backend to allocate the tensor on. It is either an atom or a tuple in the shape {backend, options}. This option is ignored inside defn
Creates an Nx tensor from an already-allocated memory space.
This function should be used with caution, as it can lead to segmentation faults.
The backend argument is either the backend module (such as Nx.BinaryBackend) or a tuple of {module, keyword()} with specific backend configuration.
pointer is the corresponding value that would be returned from a call to get_pointer/2.
Options
Besides the options listed below, all other options are forwarded to the underlying implementation.
:names - refer to tensor/2
Examples
pointer = %Nx.Pointer{kind: :local, address: 1234}
Nx.from_pointer(MyBackend, pointer, {:s, 32}, {1, 3})
#Nx.Tensor<
s32[1][3]
[
[10, 20, 30]
]
>
pointer = %Nx.Pointer{kind: :ipc, handle: "some-ipc-handle"}
Nx.from_pointer({MyBackend, some: :opt}, pointer, {:s, 32}, {1, 3}, names: [nil, :col])
#Nx.Tensor<
s32[1][col: 3]
[
[10, 20, 30]
]
>
Creates a tensor with the given shape, which increments along the provided axis. You may optionally provide dimension names.
If no axis is provided, the index counts up at each element.
If a tensor or a number is given, the shape and names are taken from the tensor.
Options
:type - the type of the tensor
:axis - an axis to repeat the iota over
:names - the names of the tensor dimensions
:backend - the backend to allocate the tensor on. It is either an atom or a tuple in the shape {backend, options}. This option is ignored inside defn
:vectorized_axes - a keyword list of axis_name: axis_size. If given, the resulting tensor will be vectorized accordingly. Vectorization is not supported via tensor inputs.
Examples
iex> Nx.iota({})
#Nx.Tensor<
s32
0
>
iex> Nx.iota({5})
#Nx.Tensor<
s32[5]
[0, 1, 2, 3, 4]
>
iex> Nx.iota({3, 2, 3}, names: [:batch, :height, :width])
#Nx.Tensor<
s32[batch: 3][height: 2][width: 3]
[
[
[0, 1, 2],
[3, 4, 5]
],
[
[6, 7, 8],
[9, 10, 11]
],
[
[12, 13, 14],
[15, 16, 17]
]
]
>
iex> Nx.iota({3, 3}, axis: 1, names: [:batch, nil])
#Nx.Tensor<
s32[batch: 3][3]
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
>
iex> Nx.iota({3, 3}, axis: -1)
#Nx.Tensor<
s32[3][3]
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
>
iex> Nx.iota({3, 4, 3}, axis: 0, type: :f64)
#Nx.Tensor<
f64[3][4][3]
[
[
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0]
],
[
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0]
],
[
[2.0, 2.0, 2.0],
[2.0, 2.0, 2.0],
[2.0, 2.0, 2.0],
[2.0, 2.0, 2.0]
]
]
>
iex> Nx.iota({1, 3, 2}, axis: 2)
#Nx.Tensor<
s32[1][3][2]
[
[
[0, 1],
[0, 1],
[0, 1]
]
]
>
iex> Nx.iota({2, 3}, axis: 0, vectorized_axes: [x: 1, y: 2])
#Nx.Tensor<
vectorized[x: 1][y: 2]
s32[2][3]
[
[
[
[0, 0, 0],
[1, 1, 1]
],
[
[0, 0, 0],
[1, 1, 1]
]
]
]
>
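The axis semantics above can be sketched in plain Elixir for a two-dimensional {rows, cols} shape (an illustration, not the Nx implementation): without :axis each element gets its row-major flat index, while with axis: 1 each element equals its column index:

```elixir
# No axis: elements count up in row-major order.
iota_flat = fn rows, cols ->
  0..(rows * cols - 1)
  |> Enum.to_list()
  |> Enum.chunk_every(cols)
end

# axis: 1: every row repeats the column indices.
iota_axis1 = fn rows, cols ->
  for _row <- 1..rows, do: Enum.to_list(0..(cols - 1))
end
```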
Creates a tensor of shape {n}
with linearly spaced samples between start
and stop
.
Options
:n - the number of samples in the tensor
:name - optional name for the output axis
:type - optional type for the output. Defaults to {:f, 32}
:endpoint - boolean that indicates whether to include stop as the last point in the output. Defaults to true
Examples
iex> Nx.linspace(5, 8, n: 5)
#Nx.Tensor<
f32[5]
[5.0, 5.75, 6.5, 7.25, 8.0]
>
iex> Nx.linspace(0, 10, n: 5, endpoint: false, name: :x)
#Nx.Tensor<
f32[x: 5]
[0.0, 2.0, 4.0, 6.0, 8.0]
>
For integer types, the results might not be what's expected.
When endpoint: true (the default), the step is given by
step = (stop - start) / (n - 1), which means that instead of a
step of 3 in the example below, we get a step close to 3.42.
The results are calculated first and only cast at the end, so
that the :endpoint condition is respected.
iex> Nx.linspace(0, 24, n: 8, type: {:u, 8}, endpoint: true)
#Nx.Tensor<
u8[8]
[0, 3, 6, 10, 13, 17, 20, 24]
>
iex> Nx.linspace(0, 24, n: 8, type: {:s, 32}, endpoint: false)
#Nx.Tensor<
s32[8]
[0, 3, 6, 9, 12, 15, 18, 21]
>
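The stepping rule described above can be sketched in plain Elixir (an illustration of the formula, not the Nx implementation):

```elixir
# With endpoint: true the step is (stop - start) / (n - 1);
# with endpoint: false it is (stop - start) / n.
linspace = fn start, stop, n, endpoint? ->
  step = if endpoint?, do: (stop - start) / (n - 1), else: (stop - start) / n
  for i <- 0..(n - 1), do: start + i * step
end
```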
One can also pass two higher-order tensors with the same shape {j, k, ...}, in which case the output will be of shape {j, k, ..., n}.
iex> Nx.linspace(Nx.tensor([[[0, 10]]]), Nx.tensor([[[10, 100]]]), n: 10, name: :samples, type: {:u, 8})
#Nx.Tensor<
u8[1][1][2][samples: 10]
[
[
[
[0, 1, 2, 3, 4, 5, 6, 7, 8, 10],
[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
]
]
]
>
Vectorized tensors
iex> Nx.linspace(0, Nx.vectorize(Nx.tensor([10, 20]), :x), n: 5)
#Nx.Tensor<
vectorized[x: 2]
f32[5]
[
[0.0, 2.5, 5.0, 7.5, 10.0],
[0.0, 5.0, 10.0, 15.0, 20.0]
]
>
iex> start = Nx.vectorize(Nx.tensor([0, 1]), :x)
iex> stop = Nx.vectorize(Nx.tensor([10, 20]), :y)
iex> Nx.linspace(start, stop, n: 5)
#Nx.Tensor<
vectorized[x: 2][y: 2]
f32[5]
[
[
[0.0, 2.5, 5.0, 7.5, 10.0],
[0.0, 5.0, 10.0, 15.0, 20.0]
],
[
[1.0, 3.25, 5.5, 7.75, 10.0],
[1.0, 5.75, 10.5, 15.25, 20.0]
]
]
>
iex> start = Nx.vectorize(Nx.tensor([0, 1]), :x)
iex> stop = Nx.vectorize(Nx.tensor([10, 10]), :x)
iex> Nx.linspace(start, stop, n: 5)
#Nx.Tensor<
vectorized[x: 2]
f32[5]
[
[0.0, 2.5, 5.0, 7.5, 10.0],
[1.0, 3.25, 5.5, 7.75, 10.0]
]
>
Error cases
iex> Nx.linspace(0, 24, n: 1.0)
** (ArgumentError) expected n to be a non-negative integer, got: 1.0
iex> Nx.linspace(Nx.tensor([[0, 1]]), Nx.tensor([1, 2, 3]), n: 2)
** (ArgumentError) expected start and stop to have the same shape. Got shapes {1, 2} and {3}
Creates a diagonal tensor from a 1D tensor.
Converse of take_diagonal/2
.
The returned tensor will be a square matrix of dimensions equal to the size of the tensor. If an offset is given, the absolute value of the offset is added to the matrix dimension sizes.
Options
:offset - offset used for making the diagonal. Use offset > 0 for diagonals above the main diagonal, and offset < 0 for diagonals below the main diagonal. Defaults to 0.
Examples
Given a 1D tensor:
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3, 4]))
#Nx.Tensor<
s32[4][4]
[
[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]
]
>
Given a 1D tensor with an offset:
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: 1)
#Nx.Tensor<
s32[4][4]
[
[0, 1, 0, 0],
[0, 0, 2, 0],
[0, 0, 0, 3],
[0, 0, 0, 0]
]
>
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: -1)
#Nx.Tensor<
s32[4][4]
[
[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0]
]
>
You can also use offsets whose absolute value is greater than the tensor length:
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: -4)
#Nx.Tensor<
s32[7][7]
[
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0],
[0, 0, 3, 0, 0, 0, 0]
]
>
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: 4)
#Nx.Tensor<
s32[7][7]
[
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 2, 0],
[0, 0, 0, 0, 0, 0, 3],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]
]
>
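The shape rule above (output dimension = input length + abs(offset)) can be sketched in plain Elixir (an illustration, not the Nx implementation):

```elixir
# Place diag[i] at {i, i + offset} (offset >= 0) or {i - offset, i}
# (offset < 0) in a square matrix of dimension length(diag) + abs(offset).
make_diagonal = fn diag, offset ->
  n = length(diag) + abs(offset)

  for i <- 0..(n - 1) do
    for j <- 0..(n - 1) do
      cond do
        offset >= 0 and j - i == offset and i < length(diag) -> Enum.at(diag, i)
        offset < 0 and i - j == -offset and j < length(diag) -> Enum.at(diag, j)
        true -> 0
      end
    end
  end
end
```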
Vectorized tensors
iex> t = Nx.vectorize(Nx.tensor([[1, 2], [3, 4]]), :x)
iex> Nx.make_diagonal(t, offset: 1)
#Nx.Tensor<
vectorized[x: 2]
s32[3][3]
[
[
[0, 1, 0],
[0, 0, 2],
[0, 0, 0]
],
[
[0, 3, 0],
[0, 0, 4],
[0, 0, 0]
]
]
>
iex> Nx.make_diagonal(t, offset: -1)
#Nx.Tensor<
vectorized[x: 2]
s32[3][3]
[
[
[0, 0, 0],
[1, 0, 0],
[0, 2, 0]
],
[
[0, 0, 0],
[3, 0, 0],
[0, 4, 0]
]
]
>
Error cases
iex> Nx.make_diagonal(Nx.tensor([[0, 0], [0, 1]]))
** (ArgumentError) make_diagonal/2 expects tensor of rank 1, got tensor of rank: 2
Puts the individual values from a 1D diagonal into the diagonal indices of the given 2D tensor.
See also: take_diagonal/2
, make_diagonal/2
.
Examples
Given a 2D tensor and a 1D diagonal:
iex> t = Nx.broadcast(0, {4, 4})
#Nx.Tensor<
s32[4][4]
[
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]
]
>
iex> Nx.put_diagonal(t, Nx.tensor([1, 2, 3, 4]))
#Nx.Tensor<
s32[4][4]
[
[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]
]
>
iex> t = Nx.broadcast(0, {4, 3})
#Nx.Tensor<
s32[4][3]
[
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]
]
>
iex> Nx.put_diagonal(t, Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32[4][3]
[
[1, 0, 0],
[0, 2, 0],
[0, 0, 3],
[0, 0, 0]
]
>
Given a 2D tensor and a 1D diagonal with a positive offset:
iex> Nx.put_diagonal(Nx.broadcast(0, {4, 4}), Nx.tensor([1, 2, 3]), offset: 1)
#Nx.Tensor<
s32[4][4]
[
[0, 1, 0, 0],
[0, 0, 2, 0],
[0, 0, 0, 3],
[0, 0, 0, 0]
]
>
iex> Nx.put_diagonal(Nx.broadcast(0, {4, 3}), Nx.tensor([1, 2]), offset: 1)
#Nx.Tensor<
s32[4][3]
[
[0, 1, 0],
[0, 0, 2],
[0, 0, 0],
[0, 0, 0]
]
>
Given a 2D tensor and a 1D diagonal with a negative offset:
iex> Nx.put_diagonal(Nx.broadcast(0, {4, 4}), Nx.tensor([1, 2, 3]), offset: -1)
#Nx.Tensor<
s32[4][4]
[
[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0]
]
>
iex> Nx.put_diagonal(Nx.broadcast(0, {4, 3}), Nx.tensor([1, 2, 3]), offset: -1)
#Nx.Tensor<
s32[4][3]
[
[0, 0, 0],
[1, 0, 0],
[0, 2, 0],
[0, 0, 3]
]
>
Options
:offset - offset used for putting the diagonal. Use offset > 0 for diagonals above the main diagonal, and offset < 0 for diagonals below the main diagonal. Defaults to 0.
Error cases
Given an invalid tensor:
iex> Nx.put_diagonal(Nx.iota({3, 3, 3}), Nx.iota({3}))
** (ArgumentError) put_diagonal/3 expects tensor of rank 2, got tensor of rank: 3
Given invalid diagonals:
iex> Nx.put_diagonal(Nx.iota({3, 3}), Nx.iota({3, 3}))
** (ArgumentError) put_diagonal/3 expects diagonal of rank 1, got tensor of rank: 2
iex> Nx.put_diagonal(Nx.iota({3, 3}), Nx.iota({2}))
** (ArgumentError) expected diagonal tensor of length: 3, got diagonal tensor of length: 2
iex> Nx.put_diagonal(Nx.iota({3, 3}), Nx.iota({3}), offset: 1)
** (ArgumentError) expected diagonal tensor of length: 2, got diagonal tensor of length: 3
Given invalid offsets:
iex> Nx.put_diagonal(Nx.iota({3, 3}), Nx.iota({3}), offset: 4)
** (ArgumentError) offset must be less than length of axis 1 when positive, got: 4
iex> Nx.put_diagonal(Nx.iota({3, 3}), Nx.iota({3}), offset: -3)
** (ArgumentError) absolute value of offset must be less than length of axis 0 when negative, got: -3
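The expected diagonal length in the error cases above follows a simple rule, sketched here in plain Elixir (an inference from the examples, not part of the Nx API):

```elixir
# For a {rows, cols} tensor, a diagonal at the given offset has
# min(rows, cols - offset) elements when offset >= 0, and
# min(rows + offset, cols) elements when offset < 0.
diag_len = fn rows, cols, offset ->
  if offset >= 0 do
    min(rows, cols - offset)
  else
    min(rows + offset, cols)
  end
end
```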
Short-hand function for creating a tensor of type s2.
This is just an alias for Nx.tensor(tensor, type: :s2).
Short-hand function for creating a tensor of type s4.
This is just an alias for Nx.tensor(tensor, type: :s4).
Short-hand function for creating a tensor of type s8.
This is just an alias for Nx.tensor(tensor, type: :s8).
Short-hand function for creating a tensor of type s16.
This is just an alias for Nx.tensor(tensor, type: :s16).
Short-hand function for creating a tensor of type s32.
This is just an alias for Nx.tensor(tensor, type: :s32).
Short-hand function for creating a tensor of type s64.
This is just an alias for Nx.tensor(tensor, type: :s64).
A convenient ~MAT
sigil for building matrices (two-dimensional tensors).
Examples
Before using sigils, you must first import them:
import Nx, only: :sigils
Then you use the sigil to create matrices. The sigil:
~MAT<
-1 0 0 1
0 2 0 0
0 0 3 0
0 0 0 4
>
Is equivalent to:
Nx.tensor([
[-1, 0, 0, 1],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]
])
If the tensor has any complex type, it defaults to c64. If the tensor has any float type, it defaults to f32. Otherwise, it is s64. You can specify the tensor type as a sigil modifier:
iex> import Nx, only: :sigils
iex> ~MAT[0.1 0.2 0.3 0.4]f16
#Nx.Tensor<
f16[1][4]
[
[0.0999755859375, 0.199951171875, 0.300048828125, 0.39990234375]
]
>
iex> ~MAT[1+1i 2-2.0i -3]
#Nx.Tensor<
c64[1][3]
[
[1.0+1.0i, 2.0-2.0i, -3.0+0.0i]
]
>
iex> ~MAT[1 Inf NaN]
#Nx.Tensor<
f32[1][3]
[
[1.0, Inf, NaN]
]
>
iex> ~MAT[1i Inf NaN]
#Nx.Tensor<
c64[1][3]
[
[0.0+1.0i, Inf+0.0i, NaN+0.0i]
]
>
iex> ~MAT[1i Inf+2i NaN-Infi]
#Nx.Tensor<
c64[1][3]
[
[0.0+1.0i, Inf+2.0i, NaN-Infi]
]
>
A convenient ~VEC
sigil for building vectors (one-dimensional tensors).
Examples
Before using sigils, you must first import them:
import Nx, only: :sigils
Then you use the sigil to create vectors. The sigil:
~VEC[-1 0 0 1]
Is equivalent to:
Nx.tensor([-1, 0, 0, 1])
If the tensor has any complex type, it defaults to c64. If the tensor has any float type, it defaults to f32. Otherwise, it is s64. You can specify the tensor type as a sigil modifier:
iex> import Nx, only: :sigils
iex> ~VEC[0.1 0.2 0.3 0.4]f16
#Nx.Tensor<
f16[4]
[0.0999755859375, 0.199951171875, 0.300048828125, 0.39990234375]
>
iex> ~VEC[1+1i 2-2.0i -3]
#Nx.Tensor<
c64[3]
[1.0+1.0i, 2.0-2.0i, -3.0+0.0i]
>
iex> ~VEC[1 Inf NaN]
#Nx.Tensor<
f32[3]
[1.0, Inf, NaN]
>
iex> ~VEC[1i Inf NaN]
#Nx.Tensor<
c64[3]
[0.0+1.0i, Inf+0.0i, NaN+0.0i]
>
iex> ~VEC[1i Inf+2i NaN-Infi]
#Nx.Tensor<
c64[3]
[0.0+1.0i, Inf+2.0i, NaN-Infi]
>
Extracts the diagonal of batched matrices.
Converse of make_diagonal/2
.
Examples
Given a matrix without offset:
iex> Nx.take_diagonal(Nx.tensor([
...> [0, 1, 2],
...> [3, 4, 5],
...> [6, 7, 8]
...> ]))
#Nx.Tensor<
s32[3]
[0, 4, 8]
>
And if given a matrix along with an offset:
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: 1)
#Nx.Tensor<
s32[2]
[1, 5]
>
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: -1)
#Nx.Tensor<
s32[2]
[3, 7]
>
Given a batched matrix:
iex> Nx.take_diagonal(Nx.iota({3, 2, 2}))
#Nx.Tensor<
s32[3][2]
[
[0, 3],
[4, 7],
[8, 11]
]
>
iex> Nx.take_diagonal(Nx.iota({3, 2, 2}), offset: -1)
#Nx.Tensor<
s32[3][1]
[
[2],
[6],
[10]
]
>
Options
:offset - offset used for extracting the diagonal. Use offset > 0 for diagonals above the main diagonal, and offset < 0 for diagonals below the main diagonal. Defaults to 0.
Error cases
iex> Nx.take_diagonal(Nx.tensor([0, 1, 2]))
** (ArgumentError) take_diagonal/2 expects tensor of rank 2 or higher, got tensor of rank: 1
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: 3)
** (ArgumentError) offset must be less than length of axis 1 when positive, got: 3
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: -4)
** (ArgumentError) absolute value of offset must be less than length of axis 0 when negative, got: -4
Creates a tensor template.
You can't perform any operation on this tensor. It exists exclusively to define APIs that say a tensor with a certain type, shape, and names is expected in the future.
Examples
iex> Nx.template({2, 3}, :f32)
#Nx.Tensor<
f32[2][3]
Nx.TemplateBackend
>
iex> Nx.template({2, 3}, {:f, 32}, names: [:rows, :columns])
#Nx.Tensor<
f32[rows: 2][columns: 3]
Nx.TemplateBackend
>
Note, however, that it is impossible to perform any operation on a tensor template:
iex> t = Nx.template({2, 3}, {:f, 32}, names: [:rows, :columns])
iex> Nx.abs(t)
** (RuntimeError) cannot perform operations on a Nx.TemplateBackend tensor
To convert existing tensors to templates, use to_template/1
.
Builds a tensor.
The argument must be one of:
- a tensor
- a number (which means the tensor is scalar/zero-dimensional)
- a boolean (also scalar/zero-dimensional)
- an arbitrarily nested list of numbers and booleans
If a new tensor has to be allocated, it will be allocated in Nx.default_backend/0, unless the :backend option is given, which overrides the default one.
Examples
A number returns a tensor of zero dimensions:
iex> Nx.tensor(0)
#Nx.Tensor<
s32
0
>
iex> Nx.tensor(1.0)
#Nx.Tensor<
f32
1.0
>
Giving a list returns a vector (a one-dimensional tensor):
iex> Nx.tensor([1, 2, 3])
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
iex> Nx.tensor([1.2, 2.3, 3.4, 4.5])
#Nx.Tensor<
f32[4]
[1.2000000476837158, 2.299999952316284, 3.4000000953674316, 4.5]
>
The type can be explicitly given. Integers and floats bigger than the given size overflow:
iex> Nx.tensor([300, 301, 302], type: :s8)
#Nx.Tensor<
s8[3]
[44, 45, 46]
>
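The overflow above can be reproduced in plain Elixir by truncating to the low 8 bits and reinterpreting them as a signed byte (a sketch of two's-complement wrapping, not the Nx implementation):

```elixir
# Keep only the low 8 bits of the value, then read them back signed.
wrap_s8 = fn value ->
  <<wrapped::signed-8>> = <<value::8>>
  wrapped
end
```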
Mixed types give higher priority to floats:
iex> Nx.tensor([1, 2, 3.0])
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
Boolean values are also accepted, where true is converted to 1 and false to 0, with the type being inferred as {:u, 8}:
iex> Nx.tensor(true)
#Nx.Tensor<
u8
1
>
iex> Nx.tensor(false)
#Nx.Tensor<
u8
0
>
iex> Nx.tensor([true, false])
#Nx.Tensor<
u8[2]
[1, 0]
>
Multi-dimensional tensors are also possible:
iex> Nx.tensor([[1, 2, 3], [4, 5, 6]])
#Nx.Tensor<
s32[2][3]
[
[1, 2, 3],
[4, 5, 6]
]
>
iex> Nx.tensor([[1, 2], [3, 4], [5, 6]])
#Nx.Tensor<
s32[3][2]
[
[1, 2],
[3, 4],
[5, 6]
]
>
iex> Nx.tensor([[[1, 2], [3, 4], [5, 6]], [[-1, -2], [-3, -4], [-5, -6]]])
#Nx.Tensor<
s32[2][3][2]
[
[
[1, 2],
[3, 4],
[5, 6]
],
[
[-1, -2],
[-3, -4],
[-5, -6]
]
]
>
Floats and complex numbers
Besides single-precision (32 bits), floats can also have half-precision (16) or double-precision (64):
iex> Nx.tensor([1, 2, 3], type: :f16)
#Nx.Tensor<
f16[3]
[1.0, 2.0, 3.0]
>
iex> Nx.tensor([1, 2, 3], type: :f64)
#Nx.Tensor<
f64[3]
[1.0, 2.0, 3.0]
>
Brain-floating points are also supported:
iex> Nx.tensor([1, 2, 3], type: :bf16)
#Nx.Tensor<
bf16[3]
[1.0, 2.0, 3.0]
>
Certain backends and compilers support 8-bit floats. The implementation of 8-bit floats may change per backend, so you must be careful when transferring data across backends. The binary backend implements F8E5M2:
iex> Nx.tensor([1, 2, 3], type: :f8)
#Nx.Tensor<
f8[3]
[1.0, 2.0, 3.0]
>
In all cases, the non-finite values negative infinity (-Inf), infinity (Inf), and "not a number" (NaN) can be represented by the atoms :neg_infinity, :infinity, and :nan respectively:
iex> Nx.tensor([:neg_infinity, :nan, :infinity])
#Nx.Tensor<
f32[3]
[-Inf, NaN, Inf]
>
Finally, complex numbers are also supported in tensors:
iex> Nx.tensor(Complex.new(1, -1))
#Nx.Tensor<
c64
1.0-1.0i
>
Naming dimensions
You can provide names for tensor dimensions. Names are atoms:
iex> Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y])
#Nx.Tensor<
s32[x: 2][y: 3]
[
[1, 2, 3],
[4, 5, 6]
]
>
Names make your code more expressive:
iex> Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, :height, :width])
#Nx.Tensor<
s32[batch: 1][height: 3][width: 3]
[
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
]
>
You can also leave dimension names as nil
:
iex> Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, nil, nil])
#Nx.Tensor<
s32[batch: 1][3][3]
[
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
]
>
However, you must provide a name for every dimension in the tensor:
iex> Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch])
** (ArgumentError) invalid names for tensor of rank 3, when specifying names every dimension must have a name or be nil
Tensors
Tensors can also be given as inputs:
iex> Nx.tensor(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
If the :backend and :type options are given, the tensor will be compared against those values and raise in case of mismatch:
iex> Nx.tensor(Nx.tensor([1, 2, 3]), type: :f32)
** (ArgumentError) Nx.tensor/2 expects a tensor with type :f32 but it was given a tensor of type {:s, 32}
The :backend option will check only against the backend name and not specific backend configuration such as device and client. In case the backend differs, it will also raise.
The names in the given tensor are always discarded but Nx will raise in case the tensor already has names that conflict with the assigned ones:
iex> Nx.tensor(Nx.tensor([1, 2, 3]), names: [:row])
#Nx.Tensor<
s32[row: 3]
[1, 2, 3]
>
iex> Nx.tensor(Nx.tensor([1, 2, 3], names: [:column]))
#Nx.Tensor<
s32[3]
[1, 2, 3]
>
iex> Nx.tensor(Nx.tensor([1, 2, 3], names: [:column]), names: [:row])
** (ArgumentError) cannot merge name :column on axis 0 with name :row on axis 0
Options
:type - sets the type of the tensor. If one is not given, one is automatically inferred based on the input.
:names - dimension names. If you wish to specify dimension names you must specify a name for every dimension in the tensor. Only nil and atoms are supported as dimension names.
:backend - the backend to allocate the tensor on. It is either an atom or a tuple in the shape {backend, options}. It defaults to Nx.default_backend/0 for new tensors
Returns an Nx.Pointer
that represents either a local pointer or an IPC handle for the given tensor.
Can be used in conjunction with from_pointer/5
to share the same memory
for multiple tensors, as well as for interoperability with other programming
languages.
Options
:kind - one of :local or :ipc. :local means the returned value represents a pointer internal to the current process. :ipc means the returned value represents an IPC handle that can be shared between processes. Defaults to :local.
Other options are relayed to the backend.
Examples
t = Nx.u8([10, 20, 30])
Nx.to_pointer(t, kind: :local)
%Nx.Pointer{kind: :local, address: 1234, data_size: 3, handle: nil}
t = Nx.s32([1, 2, 3])
Nx.to_pointer(t, kind: :ipc)
%Nx.Pointer{kind: :ipc, address: nil, data_size: 32, handle: "some-ipc-handle"}
A tensor with ones at and below the given diagonal and zeros elsewhere.
Options
:k - the diagonal above which to zero elements. Defaults to 0.
Examples
iex> tensor = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
iex> {num_rows, num_cols} = Nx.shape(tensor)
iex> Nx.tri(num_rows, num_cols)
#Nx.Tensor<
u8[3][3]
[
[1, 0, 0],
[1, 1, 0],
[1, 1, 1]
]
>
iex> tensor = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
iex> {num_rows, num_cols} = Nx.shape(tensor)
iex> Nx.tri(num_rows, num_cols, k: 1)
#Nx.Tensor<
u8[3][3]
[
[1, 1, 0],
[1, 1, 1],
[1, 1, 1]
]
>
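The masking rule shared by tri/3, tril/2, and triu/2 can be sketched in plain Elixir (an illustration of the {i, j} condition, not the Nx implementation):

```elixir
# An element at {i, j} belongs to the lower triangle when i + k >= j.
tri = fn rows, cols, k ->
  for i <- 0..(rows - 1) do
    for j <- 0..(cols - 1), do: if(i + k >= j, do: 1, else: 0)
  end
end
```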
Lower triangle of a matrix.
Options
:k - the diagonal above which to zero elements. Defaults to 0.
Examples
iex> Nx.tril(Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
#Nx.Tensor<
s32[3][3]
[
[1, 0, 0],
[4, 5, 0],
[7, 8, 9]
]
>
iex> Nx.tril(Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), k: 1)
#Nx.Tensor<
s32[3][3]
[
[1, 2, 0],
[4, 5, 6],
[7, 8, 9]
]
>
iex> Nx.tril(Nx.iota({2, 3, 4}))
#Nx.Tensor<
s32[2][3][4]
[
[
[0, 0, 0, 0],
[4, 5, 0, 0],
[8, 9, 10, 0]
],
[
[12, 0, 0, 0],
[16, 17, 0, 0],
[20, 21, 22, 0]
]
]
>
iex> Nx.tril(Nx.iota({6}))
** (ArgumentError) tril/2 expects a tensor with at least 2 dimensions, got: #Nx.Tensor<
s32[6]
[0, 1, 2, 3, 4, 5]
>
Upper triangle of an array.
Options
:k - the diagonal below which to zero elements. Defaults to 0.
Examples
iex> Nx.triu(Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
#Nx.Tensor<
s32[3][3]
[
[1, 2, 3],
[0, 5, 6],
[0, 0, 9]
]
>
iex> Nx.triu(Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), k: 1)
#Nx.Tensor<
s32[3][3]
[
[0, 2, 3],
[0, 0, 6],
[0, 0, 0]
]
>
iex> Nx.triu(Nx.iota({2, 3, 4}))
#Nx.Tensor<
s32[2][3][4]
[
[
[0, 1, 2, 3],
[0, 5, 6, 7],
[0, 0, 10, 11]
],
[
[12, 13, 14, 15],
[0, 17, 18, 19],
[0, 0, 22, 23]
]
]
>
iex> Nx.triu(Nx.iota({6}))
** (ArgumentError) triu/2 expects a tensor with at least 2 dimensions, got: #Nx.Tensor<
s32[6]
[0, 1, 2, 3, 4, 5]
>
Short-hand function for creating a tensor of type u2.
This is just an alias for Nx.tensor(tensor, type: :u2).
Short-hand function for creating a tensor of type u4.
This is just an alias for Nx.tensor(tensor, type: :u4).
Short-hand function for creating a tensor of type u8.
This is just an alias for Nx.tensor(tensor, type: :u8).
Short-hand function for creating a tensor of type u16.
This is just an alias for Nx.tensor(tensor, type: :u16).