# Nx (Nx v0.2.0)

Numerical Elixir.

The `Nx` library is a collection of functions and data
types to work with Numerical Elixir. This module defines
the main entry point for building and working with these
data structures. For example, to create an n-dimensional
tensor, do:

```
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> Nx.shape(t)
{2, 2}
```

`Nx` also provides the so-called numerical definitions under
the `Nx.Defn` module. They are a subset of Elixir tailored for
numerical computations. For example, `defn` overrides Elixir's
default operators so they are tensor-aware:

```
defn softmax(t) do
  Nx.exp(t) / Nx.sum(Nx.exp(t))
end
```

Code inside `defn` functions can also be given to custom compilers,
which can compile said functions just-in-time (JIT) to run on the
CPU or on the GPU.
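As a minimal sketch of how such a compiler is invoked (EXLA here is an assumption: it is an optional compiler library, and the exact `Nx.Defn.jit` signature may vary between Nx versions):

```
defmodule MyOps do
  import Nx.Defn

  # Compiled by whichever Nx.Defn compiler is configured;
  # by default, a pure-Elixir evaluator runs it on the CPU.
  defn double(t), do: t * 2
end

# Run the function through a JIT compiler (assumes EXLA is installed):
Nx.Defn.jit(&MyOps.double/1, [Nx.tensor([1, 2, 3])], compiler: EXLA)
```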

## References

Here is a general outline of the main references in this library:

- For an introduction, see our Intro to Nx guide
- This module provides the main API for working with tensors
- `Nx.Defn` provides numerical definitions, CPU/GPU compilation, gradients, and more
- `Nx.LinAlg` provides functions related to linear algebra
- `Nx.Constants` declares many constants commonly used in numerical code

Continue reading this documentation for an overview of creating, broadcasting, and accessing/slicing Nx tensors.

## Creating tensors

The main APIs for creating tensors are `tensor/2`, `from_binary/2`,
`iota/2`, `eye/2`, `random_uniform/2`, `random_normal/2`, and
`broadcast/3`.

The tensor types can be one of:

- unsigned integers (`u8`, `u16`, `u32`, `u64`)
- signed integers (`s8`, `s16`, `s32`, `s64`)
- floats (`f16`, `f32`, `f64`)
- brain floats (`bf16`)
- complex numbers (`c64`, `c128`)

The types are tracked as tuples:

```
iex> Nx.tensor([1, 2, 3], type: {:f, 32})
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
```

But a shortcut atom notation is also available:

```
iex> Nx.tensor([1, 2, 3], type: :f32)
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
```

The tensor dimensions can also be named via the `:names` option,
available to all creation functions:

```
iex> Nx.iota({2, 3}, names: [:x, :y])
#Nx.Tensor<
s64[x: 2][y: 3]
[
[0, 1, 2],
[3, 4, 5]
]
>
```

Finally, for creating vectors and matrices, a sigil notation is available:

```
iex> import Nx, only: :sigils
iex> ~V[1 2 3]f32
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
iex> import Nx, only: :sigils
iex> ~M'''
...> 1 2 3
...> 4 5 6
...> '''s32
#Nx.Tensor<
s32[2][3]
[
[1, 2, 3],
[4, 5, 6]
]
>
```

All other APIs accept exclusively numbers or tensors, unless explicitly noted otherwise.

## Broadcasting

Broadcasting allows operations on two tensors of different shapes to match. Most often, though, operations happen between tensors of the same shape:

```
iex> a = Nx.tensor([1, 2, 3])
iex> b = Nx.tensor([10, 20, 30])
iex> Nx.add(a, b)
#Nx.Tensor<
s64[3]
[11, 22, 33]
>
```

Now let's imagine you want to multiply a large tensor of dimensions 1000x1000x1000 by 2. If you had to create a similarly large tensor only to perform this operation, it would be inefficient. Therefore, you can simply multiply this large tensor by the scalar 2, and Nx will propagate its dimensions at the time the operation happens, without allocating a large intermediate tensor:

```
iex> Nx.multiply(Nx.tensor([1, 2, 3]), 2)
#Nx.Tensor<
s64[3]
[2, 4, 6]
>
```

In practice, broadcasting is not restricted only to scalars; it
is a general algorithm that applies to all dimensions of a tensor.
When broadcasting, `Nx` compares the shapes of the two tensors,
starting with the trailing ones, such that:

- If the dimensions have equal size, then they are compatible
- If one of the dimensions has size 1, it is "broadcast" to match the dimension of the other

In case one tensor has more dimensions than the other, the missing dimensions are considered to be of size one. Here are some examples of how broadcast would work when multiplying two tensors with the following shapes:

```
s64[3] * s64
#=> s64[3]
s64[255][255][3] * s64[3]
#=> s64[255][255][3]
s64[2][1] * s64[1][2]
#=> s64[2][2]
s64[5][1][4][1] * s64[3][4][5]
#=> s64[5][3][4][5]
```
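To make one of these cases concrete, the `s64[2][1] * s64[1][2]` combination can be reproduced with `Nx.multiply/2`, where each size-1 dimension is stretched to match the other tensor:

```
iex> Nx.multiply(Nx.tensor([[1], [2]]), Nx.tensor([[10, 20]]))
#Nx.Tensor<
  s64[2][2]
  [
    [10, 20],
    [20, 40]
  ]
>
```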

If any of the dimensions do not match or are not 1, an error is raised.

## Access syntax (slicing)

Nx tensors implement Elixir's access syntax. This allows developers to slice tensors up and easily access sub-dimensions and values.

Access accepts integers:

```
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[0]
#Nx.Tensor<
s64[2]
[1, 2]
>
iex> t[1]
#Nx.Tensor<
s64[2]
[3, 4]
>
iex> t[1][1]
#Nx.Tensor<
s64
4
>
```

If a negative index is given, it accesses the element from the back:

```
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[-1][-1]
#Nx.Tensor<
s64
4
>
```

Out of bound access will raise:

```
iex> Nx.tensor([1, 2])[2]
** (ArgumentError) index 2 is out of bounds for axis 0 in shape {2}
iex> Nx.tensor([1, 2])[-3]
** (ArgumentError) index -3 is out of bounds for axis 0 in shape {2}
```

The index can also be another tensor but in such cases it must be a scalar between 0 and the dimension size. Out of bound dynamic indexes are always clamped to the tensor dimensions:

```
iex> two = Nx.tensor(2)
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[two][two]
#Nx.Tensor<
s64
4
>
```

For example, a `minus_one` dynamic index will be clamped to zero:

```
iex> minus_one = Nx.tensor(-1)
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t[minus_one][minus_one]
#Nx.Tensor<
s64
1
>
```

Access also accepts ranges. Ranges in Elixir are inclusive:

```
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[0..1]
#Nx.Tensor<
s64[2][2]
[
[1, 2],
[3, 4]
]
>
```

Ranges can receive negative positions, and they will read from the back. In such cases, the range step must be explicitly given, and the right side of the range must be greater than or equal to the left side:

```
iex> t = Nx.tensor([[1, 2], [3, 4], [5, 6], [7, 8]])
iex> t[1..-2//1]
#Nx.Tensor<
s64[2][2]
[
[3, 4],
[5, 6]
]
>
```

As you can see, accessing with a range does not eliminate the accessed axis. Therefore, when slicing across multiple axes with ranges, it is often desirable to use a list:

```
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
iex> t[[1..2, 1..2]]
#Nx.Tensor<
s64[2][2]
[
[5, 6],
[8, 9]
]
>
```

You can mix both ranges and integers in the list too:

```
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
iex> t[[1..2, 2]]
#Nx.Tensor<
s64[2]
[6, 9]
>
```

If the list has fewer elements than axes, the remaining dimensions are returned in full:

```
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
iex> t[[1..2]]
#Nx.Tensor<
s64[2][3]
[
[4, 5, 6],
[7, 8, 9]
]
>
```

The access syntax also pairs nicely with named tensors. By using named tensors, you can pass only the axis you want to slice, leaving the other axes intact:

```
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], names: [:x, :y])
iex> t[x: 1..2]
#Nx.Tensor<
s64[x: 2][y: 3]
[
[4, 5, 6],
[7, 8, 9]
]
>
iex> t[x: 1..2, y: 0..1]
#Nx.Tensor<
s64[x: 2][y: 2]
[
[4, 5],
[7, 8]
]
>
iex> t[x: 1, y: 0..1]
#Nx.Tensor<
s64[y: 2]
[4, 5]
>
```

For more complex slicing rules, including strides, you can
always fall back to `Nx.slice/4`.
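For reference, a plain `Nx.slice/4` call receives the start index and the length for each axis (this example is a sketch based on the `Nx.slice/4` signature; see its own docs for the full options):

```
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
iex> Nx.slice(t, [1, 0], [2, 2])
#Nx.Tensor<
  s64[2][2]
  [
    [4, 5],
    [7, 8]
  ]
>
```

`Nx.slice/4` also accepts a `:strides` option for stepped slicing.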

## Backends

The `Nx` library has built-in support for multiple backends.
A tensor is always handled by a backend, the default backend
being `Nx.BinaryBackend`, which means the tensor is allocated
as a binary within the Erlang VM.

Most often, backends are used to provide a completely different implementation of tensor operations, often accelerated on the GPU. In such cases, you want to guarantee all tensors are allocated in the new backend. This can be done by configuring your runtime:

```
# config/runtime.exs
import Config
config :nx, default_backend: Lib.CustomBackend
```

Or by calling `Nx.default_backend/1`:

`Nx.default_backend({Lib.CustomBackend, device: :cuda})`

To implement your own backend, check the `Nx.Tensor` behaviour.

# Summary

## Functions: Aggregates

Returns a scalar tensor of value 1 if all of the tensor values are not zero. Otherwise the value is 0.

Returns a scalar tensor of value 1 if all element-wise values are within tolerance of b. Otherwise returns value 0.

Returns a scalar tensor of value 1 if any of the tensor values are not zero. Otherwise the value is 0.

Returns the indices of the maximum values.

Returns the indices of the minimum values.

Returns the mean for the tensor.

Returns the product for the tensor.

Reduces over a tensor with the given accumulator.

Returns the maximum values of the tensor.

Returns the minimum values of the tensor.

Finds the standard deviation of a tensor.

Returns the sum for the tensor.

Finds the variance of a tensor.

## Functions: Backend

Copies data to the given backend.

Deallocates data in a device.

Transfers data to the given backend.

Gets the default backend for the current process.

Sets the current process default backend to `backend` with the given `opts`.

Sets the default backend globally.

## Functions: Conversion

Deserializes a serialized representation of a tensor or a container with the given options.

Loads a `.npy` file into a tensor.

Loads a `.npz` archive into a list of tensors.

Serializes the given tensor or container of tensors to a binary.

Converts the underlying tensor to a list of tensors.

Returns the underlying tensor as a binary.

Returns the underlying tensor as a flat list.

Returns a heatmap struct with the tensor data.

Returns the underlying tensor as a number.

Converts a tensor (or tuples and maps of tensors) to tensor templates.

Converts the given number (or tensor) to a tensor.

## Functions: Creation

Creates the identity matrix of size `n`.

Creates a one-dimensional tensor from a `binary` with the given `type`.

Creates a tensor with the given shape which increments along the provided axis. You may optionally provide dimension names.

Creates a diagonal tensor from a 1D tensor.

Shortcut for `random_normal(shape, 0.0, 1.0, opts)`.

Returns a normally-distributed random tensor with the given shape.

Shortcut for `random_uniform(shape, 0.0, 1.0, opts)`.

Returns a uniformly-distributed random tensor with the given shape.

Shuffles tensor elements.

A convenient `~M` sigil for building matrices (two-dimensional tensors).

A convenient `~V` sigil for building vectors (one-dimensional tensors).

Extracts the diagonal of a 2D tensor.

Creates a tensor template.

Builds a tensor.

## Functions: Cumulative

Returns the cumulative maximum of elements along an axis.

Returns the cumulative minimum of elements along an axis.

Returns the cumulative product of elements along an axis.

Returns the cumulative sum of elements along an axis.

## Functions: Element-wise

Computes the absolute value of each element in the tensor.

Calculates the inverse cosine of each element in the tensor.

Calculates the inverse hyperbolic cosine of each element in the tensor.

Element-wise addition of two tensors.

Calculates the inverse sine of each element in the tensor.

Calculates the inverse hyperbolic sine of each element in the tensor.

Element-wise arc tangent of two tensors.

Calculates the inverse tangent of each element in the tensor.

Calculates the inverse hyperbolic tangent of each element in the tensor.

Element-wise bitwise AND of two tensors.

Applies bitwise not to each element in the tensor.

Element-wise bitwise OR of two tensors.

Element-wise bitwise XOR of two tensors.

Calculates the cube root of each element in the tensor.

Calculates the ceil of each element in the tensor.

Clips the values of the tensor on the closed interval `[min, max]`.

Constructs a complex tensor from two equally-shaped tensors.

Calculates the complex conjugate of each element in the tensor.

Calculates the cosine of each element in the tensor.

Calculates the hyperbolic cosine of each element in the tensor.

Counts the number of leading zeros of each element in the tensor.

Element-wise division of two tensors.

Element-wise equality comparison of two tensors.

Calculates the error function of each element in the tensor.

Calculates the inverse error function of each element in the tensor.

Calculates the one minus error function of each element in the tensor.

Calculates the exponential of each element in the tensor.

Calculates the exponential minus one of each element in the tensor.

Calculates the floor of each element in the tensor.

Element-wise greater than comparison of two tensors.

Element-wise greater than or equal comparison of two tensors.

Returns the imaginary component of each entry in a complex tensor as a floating point tensor.

Element-wise left shift of two tensors.

Element-wise less than comparison of two tensors.

Element-wise less than or equal comparison of two tensors.

Calculates the natural log plus one of each element in the tensor.

Calculates the natural log of each element in the tensor.

Element-wise logical and of two tensors.

Element-wise logical not of a tensor.

Element-wise logical or of two tensors.

Element-wise logical xor of two tensors.

Calculates the standard logistic (a sigmoid) of each element in the tensor.

Maps the given scalar function over the entire tensor.

Element-wise maximum of two tensors.

Element-wise minimum of two tensors.

Element-wise multiplication of two tensors.

Negates each element in the tensor.

Element-wise not-equal comparison of two tensors.

Calculates the complex phase angle of each element in the tensor. $\mathrm{phase}(z) = \mathrm{atan2}(b, a)$ for $z = a + bi \in \mathbb{C}$

Computes the bitwise population count of each element in the tensor.

Element-wise power of two tensors.

Element-wise integer division of two tensors.

Returns the real component of each entry in a complex tensor as a floating point tensor.

Element-wise remainder of two tensors.

Element-wise right shift of two tensors.

Calculates the round (away from zero) of each element in the tensor.

Calculates the reverse square root of each element in the tensor.

Constructs a tensor from two tensors, based on a predicate.

Computes the sign of each element in the tensor.

Calculates the sine of each element in the tensor.

Calculates the hyperbolic sine of each element in the tensor.

Calculates the square root of each element in the tensor.

Element-wise subtraction of two tensors.

Calculates the tangent of each element in the tensor.

Calculates the hyperbolic tangent of each element in the tensor.

## Functions: Indexed

Builds a new tensor by taking individual values from the original tensor at the given indices.

Performs an indexed `add` operation on the `target` tensor,
adding the `updates` into the corresponding `indices` positions.

Puts the given slice into the given tensor at the given start indices.

Slices a tensor from `start_indices` with `lengths`.

Slices a tensor along the given axis.

Takes and concatenates slices along an axis.

Takes the values from a tensor given an `indices` tensor, along the specified axis.

## Functions: N-dim

Sorts the tensor along the given axis according to the given direction and returns the corresponding indices of the original tensor in the new sorted positions.

Concatenates tensors along the given axis.

Computes an n-D convolution (where `n >= 3`) as used in neural networks.

Returns the dot product of two tensors.

Computes the generalized dot product between two tensors, given the contracting axes.

Computes the generalized dot product between two tensors, given the contracting and batch axes.

Computes the outer product of two tensors.

Reverses the tensor in the given dimensions.

Sorts the tensor along the given axis according to the given direction.

Joins a list of tensors with the same shape along a new axis.

## Functions: Shape

Returns all of the axes in a tensor.

Returns the index of the given axis in the tensor.

Returns the size of a given axis of a tensor.

Broadcasts `tensor` to the given `broadcast_shape`.

Returns the byte size of the data in the tensor computed from its shape and type.

Checks if two tensors have the same shape, type, and compatible names.

Flattens an n-dimensional tensor to a 1-dimensional tensor.

Returns all of the names in a tensor.

Adds a new `axis` of size 1 with optional `name`.

Pads a tensor with a given value.

Returns the rank of a tensor.

Changes the shape of a tensor.

Returns the shape of the tensor as a tuple.

Returns the number of elements in the tensor.

Squeezes the given size `1` dimensions out of the tensor.

Creates a new tensor by repeating the input tensor along the given axes.

Transposes a tensor to the given `axes`.

## Functions: Type

Changes the type of a tensor.

Changes the type of a tensor, using a bitcast.

Returns the type of the tensor.

## Functions: Window

Returns the maximum over each window of size `window_dimensions`
in the given tensor, producing a tensor that contains the same
number of elements as valid positions of the window.

Averages over each window of size `window_dimensions` in the
given tensor, producing a tensor that contains the same
number of elements as valid positions of the window.

Returns the minimum over each window of size `window_dimensions`
in the given tensor, producing a tensor that contains the same
number of elements as valid positions of the window.

Returns the product over each window of size `window_dimensions`
in the given tensor, producing a tensor that contains the same
number of elements as valid positions of the window.

Reduces over each window of size `dimensions` in the given
tensor, producing a tensor that contains the same number of
elements as valid positions of the window.

Performs a `window_reduce` to select the maximum index in each
window of the input tensor, and scatters the source tensor to
the corresponding maximum indices in the output tensor.

Performs a `window_reduce` to select the minimum index in each
window of the input tensor, and scatters the source tensor to
the corresponding minimum indices in the output tensor.

Sums over each window of size `window_dimensions` in the
given tensor, producing a tensor that contains the same
number of elements as valid positions of the window.

# Types

```
@type axes() :: Nx.Tensor.axes()
@type axis() :: Nx.Tensor.axis()
@type shape() :: t() | Nx.Tensor.shape()
@type t() :: number() | Nx.Tensor.t()
```

# Functions: Aggregates

Returns a scalar tensor of value 1 if all of the tensor values are not zero. Otherwise the value is 0.

If the `:axes` option is given, it aggregates over
the given dimensions, effectively removing them.
`axes: [0]` implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
it counts the axis from the back. For example, `axes: [-1]`
will always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.

## Examples

```
iex> Nx.all(Nx.tensor([0, 1, 2]))
#Nx.Tensor<
u8
0
>
iex> Nx.all(Nx.tensor([[-1, 0, 1], [2, 3, 4]], names: [:x, :y]), axes: [:x])
#Nx.Tensor<
u8[y: 3]
[1, 0, 1]
>
iex> Nx.all(Nx.tensor([[-1, 0, 1], [2, 3, 4]], names: [:x, :y]), axes: [:y])
#Nx.Tensor<
u8[x: 2]
[0, 1]
>
```

Returns a scalar tensor of value 1 if all element-wise values are within tolerance of b. Otherwise returns value 0.

You may set the absolute tolerance, `:atol`

and relative tolerance
`:rtol`

. Given tolerances, this functions returns 1 if

`absolute(a - b) <= (atol + rtol * absolute(b))`

is true for all elements of a and b.

## Examples

```
iex> Nx.all_close(Nx.tensor([1.0e10, 1.0e-7]), Nx.tensor([1.00001e10, 1.0e-8]))
#Nx.Tensor<
u8
0
>
iex> Nx.all_close(Nx.tensor([1.0e-8, 1.0e-8]), Nx.tensor([1.0e-8, 1.0e-9]))
#Nx.Tensor<
u8
1
>
```

Returns a scalar tensor of value 1 if any of the tensor values are not zero. Otherwise the value is 0.

If the `:axes` option is given, it aggregates over
the given dimensions, effectively removing them.
`axes: [0]` implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
it counts the axis from the back. For example, `axes: [-1]`
will always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.

## Examples

```
iex> Nx.any(Nx.tensor([0, 1, 2]))
#Nx.Tensor<
u8
1
>
iex> Nx.any(Nx.tensor([[0, 1, 0], [0, 1, 2]], names: [:x, :y]), axes: [:x])
#Nx.Tensor<
u8[y: 3]
[0, 1, 1]
>
iex> Nx.any(Nx.tensor([[0, 1, 0], [0, 1, 2]], names: [:x, :y]), axes: [:y])
#Nx.Tensor<
u8[x: 2]
[1, 1]
>
```

Returns the indices of the maximum values.

## Options

- `:axis` - the axis to aggregate on. If no axis is given, returns the index of the absolute maximum value in the tensor.
- `:keep_axis` - whether or not to keep the reduced axis with a size of 1. Defaults to `false`.
- `:tie_break` - how to break ties. One of `:high` or `:low`. The default behavior is to always return the lower index.

## Examples

```
iex> Nx.argmax(4)
#Nx.Tensor<
s64
0
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]])
iex> Nx.argmax(t)
#Nx.Tensor<
s64
10
>
```

If a tensor of floats is given, it still returns integers:

```
iex> Nx.argmax(Nx.tensor([2.0, 4.0]))
#Nx.Tensor<
s64
1
>
```

### Aggregating over an axis

```
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :x)
#Nx.Tensor<
s64[y: 2][z: 3]
[
[1, 0, 0],
[1, 1, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :y)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[0, 0, 0],
[0, 1, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :z)
#Nx.Tensor<
s64[x: 2][y: 2]
[
[0, 2],
[0, 1]
]
>
```

### Tie breaks

```
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, tie_break: :low, axis: :y)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[0, 0, 0],
[0, 1, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, tie_break: :high, axis: :y)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[0, 0, 1],
[0, 1, 1]
]
>
```

### Keep axis

```
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmax(t, axis: :y, keep_axis: true)
#Nx.Tensor<
s64[x: 2][y: 1][z: 3]
[
[
[0, 0, 0]
],
[
[0, 1, 0]
]
]
>
```

Returns the indices of the minimum values.

## Options

- `:axis` - the axis to aggregate on. If no axis is given, returns the index of the absolute minimum value in the tensor.
- `:keep_axis` - whether or not to keep the reduced axis with a size of 1. Defaults to `false`.
- `:tie_break` - how to break ties. One of `:high` or `:low`. The default behavior is to always return the lower index.

## Examples

```
iex> Nx.argmin(4)
#Nx.Tensor<
s64
0
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]])
iex> Nx.argmin(t)
#Nx.Tensor<
s64
4
>
```

If a tensor of floats is given, it still returns integers:

```
iex> Nx.argmin(Nx.tensor([2.0, 4.0]))
#Nx.Tensor<
s64
0
>
```

### Aggregating over an axis

```
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: :x)
#Nx.Tensor<
s64[y: 2][z: 3]
[
[0, 0, 0],
[0, 0, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: 1)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[1, 1, 0],
[1, 0, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: :z)
#Nx.Tensor<
s64[x: 2][y: 2]
[
[1, 1],
[1, 2]
]
>
```

### Tie breaks

```
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, tie_break: :low, axis: :y)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[1, 1, 0],
[1, 0, 0]
]
>
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, tie_break: :high, axis: :y)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[1, 1, 1],
[1, 0, 1]
]
>
```

### Keep axis

```
iex> t = Nx.tensor([[[4, 2, 3], [1, -5, 3]], [[6, 2, 3], [4, 8, 3]]], names: [:x, :y, :z])
iex> Nx.argmin(t, axis: :y, keep_axis: true)
#Nx.Tensor<
s64[x: 2][y: 1][z: 3]
[
[
[1, 1, 0]
],
[
[1, 0, 0]
]
]
>
```

Returns the mean for the tensor.

If the `:axes` option is given, it aggregates over
that dimension, effectively removing it. `axes: [0]`
implies aggregating over the highest order dimension
and so forth. If the axis is negative, then it counts
the axis from the back. For example, `axes: [-1]` will
always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the averaged
axes to size 1.

## Examples

```
iex> Nx.mean(Nx.tensor(42))
#Nx.Tensor<
f32
42.0
>
iex> Nx.mean(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
f32
2.0
>
```

### Aggregating over an axis

```
iex> Nx.mean(Nx.tensor([1, 2, 3], names: [:x]), axes: [0])
#Nx.Tensor<
f32
2.0
>
iex> Nx.mean(Nx.tensor([1, 2, 3], type: {:u, 8}, names: [:x]), axes: [:x])
#Nx.Tensor<
f32
2.0
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.mean(t, axes: [:x])
#Nx.Tensor<
f32[y: 2][z: 3]
[
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.mean(t, axes: [:x, :z])
#Nx.Tensor<
f32[y: 2]
[5.0, 8.0]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.mean(t, axes: [-1])
#Nx.Tensor<
f32[x: 2][y: 2]
[
[2.0, 5.0],
[8.0, 11.0]
]
>
```

### Keeping axes

```
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.mean(t, axes: [-1], keep_axes: true)
#Nx.Tensor<
f32[x: 2][y: 2][z: 1]
[
[
[2.0],
[5.0]
],
[
[8.0],
[11.0]
]
]
>
```

Returns the product for the tensor.

If the `:axes` option is given, it aggregates over
the given dimensions, effectively removing them.
`axes: [0]` implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
it counts the axis from the back. For example, `axes: [-1]`
will always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the multiplied
axes to size 1.

## Examples

```
iex> Nx.product(Nx.tensor(42))
#Nx.Tensor<
s64
42
>
iex> Nx.product(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
s64
6
>
iex> Nx.product(Nx.tensor([[1.0, 2.0], [3.0, 4.0]], names: [:x, :y]))
#Nx.Tensor<
f32
24.0
>
```

Giving a tensor with low precision casts it to a higher precision to make sure the product does not overflow:

```
iex> Nx.product(Nx.tensor([[10, 20], [30, 40]], type: {:u, 8}, names: [:x, :y]))
#Nx.Tensor<
u64
240000
>
iex> Nx.product(Nx.tensor([[10, 20], [30, 40]], type: {:s, 8}, names: [:x, :y]))
#Nx.Tensor<
s64
240000
>
```

### Aggregating over an axis

```
iex> Nx.product(Nx.tensor([1, 2, 3], names: [:x]), axes: [0])
#Nx.Tensor<
s64
6
>
```

Same tensor over different axes combinations:

```
iex> t = Nx.tensor(
...> [
...> [
...> [1, 2, 3],
...> [4, 5, 6]
...> ],
...> [
...> [7, 8, 9],
...> [10, 11, 12]
...> ]
...> ],
...> names: [:x, :y, :z]
...> )
iex> Nx.product(t, axes: [:x])
#Nx.Tensor<
s64[y: 2][z: 3]
[
[7, 16, 27],
[40, 55, 72]
]
>
iex> Nx.product(t, axes: [:y])
#Nx.Tensor<
s64[x: 2][z: 3]
[
[4, 10, 18],
[70, 88, 108]
]
>
iex> Nx.product(t, axes: [:x, :z])
#Nx.Tensor<
s64[y: 2]
[3024, 158400]
>
iex> Nx.product(t, axes: [:z])
#Nx.Tensor<
s64[x: 2][y: 2]
[
[6, 120],
[504, 1320]
]
>
iex> Nx.product(t, axes: [-3])
#Nx.Tensor<
s64[y: 2][z: 3]
[
[7, 16, 27],
[40, 55, 72]
]
>
```

### Keeping axes

```
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.product(t, axes: [:z], keep_axes: true)
#Nx.Tensor<
s64[x: 2][y: 2][z: 1]
[
[
[6],
[120]
],
[
[504],
[1320]
]
]
>
```

### Errors

```
iex> Nx.product(Nx.tensor([[1, 2]]), axes: [2])
** (ArgumentError) given axis (2) invalid for shape with rank 2
```

Reduces over a tensor with the given accumulator.

The given `fun` will receive two tensors and it must
return the reduced value.

The tensor may be reduced in parallel and the reducer function can be called with arguments in any order, the initial accumulator may be given multiple times, and the result may be non-deterministic. Therefore, the reduction function should be associative (or as close to associative as possible, considering floats themselves are not strictly associative).

By default, it reduces all dimensions of the tensor and
returns a scalar. If the `:axes` option is given, it
aggregates over multiple dimensions, effectively removing
them. `axes: [0]` implies aggregating over the highest
order dimension and so forth. If the axis is negative,
then it counts the axis from the back. For example,
`axes: [-1]` will always aggregate all rows.

The type of the returned tensor will be computed based on
the given tensor and the initial value. For example,
a tensor of integers with a float accumulator will be
cast to float, as done by most binary operators. You can
also pass a `:type` option to change this behaviour.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.

## Limitations

Given this function relies on anonymous functions, it
may not be available or efficient on all Nx backends.
Therefore, you should avoid using `reduce/4` whenever
possible. Instead, use functions such as `sum/2`,
`reduce_max/2`, `all/1`, and so forth.

## Examples

```
iex> Nx.reduce(Nx.tensor(42), 0, fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64
42
>
iex> Nx.reduce(Nx.tensor([1, 2, 3]), 0, fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64
6
>
iex> Nx.reduce(Nx.tensor([[1.0, 2.0], [3.0, 4.0]]), 0, fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
f32
10.0
>
```

### Aggregating over axes

```
iex> t = Nx.tensor([1, 2, 3], names: [:x])
iex> Nx.reduce(t, 0, [axes: [:x]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64
6
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:x]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64[y: 2][z: 3]
[
[8, 10, 12],
[14, 16, 18]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:y]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64[x: 2][z: 3]
[
[5, 7, 9],
[17, 19, 21]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:x, 2]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64[y: 2]
[30, 48]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [-1]], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64[x: 2][y: 2]
[
[6, 15],
[24, 33]
]
>
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.reduce(t, 0, [axes: [:x], keep_axes: true], fn x, y -> Nx.add(x, y) end)
#Nx.Tensor<
s64[x: 1][y: 2][z: 3]
[
[
[8, 10, 12],
[14, 16, 18]
]
]
>
```

Returns the maximum values of the tensor.

If the `:axes` option is given, it aggregates over
the given dimensions, effectively removing them.
`axes: [0]` implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
it counts the axis from the back. For example, `axes: [-1]`
will always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.

## Examples

```
iex> Nx.reduce_max(Nx.tensor(42))
#Nx.Tensor<
s64
42
>
iex> Nx.reduce_max(Nx.tensor(42.0))
#Nx.Tensor<
f32
42.0
>
iex> Nx.reduce_max(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s64
3
>
```

### Aggregating over an axis

```
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_max(t, axes: [:x])
#Nx.Tensor<
s64[y: 3]
[3, 1, 4]
>
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_max(t, axes: [:y])
#Nx.Tensor<
s64[x: 2]
[4, 2]
>
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_max(t, axes: [:x, :z])
#Nx.Tensor<
s64[y: 2]
[4, 8]
>
```

### Keeping axes

```
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_max(t, axes: [:x, :z], keep_axes: true)
#Nx.Tensor<
s64[x: 1][y: 2][z: 1]
[
[
[4],
[8]
]
]
>
```

Returns the minimum values of the tensor.

If the `:axes` option is given, it aggregates over
the given dimensions, effectively removing them.
`axes: [0]` implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, `axes: [-1]`
will always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the reduced
axes to size 1.

## Examples

```
iex> Nx.reduce_min(Nx.tensor(42))
#Nx.Tensor<
s64
42
>
iex> Nx.reduce_min(Nx.tensor(42.0))
#Nx.Tensor<
f32
42.0
>
iex> Nx.reduce_min(Nx.tensor([1, 2, 3]))
#Nx.Tensor<
s64
1
>
```

### Aggregating over an axis

```
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_min(t, axes: [:x])
#Nx.Tensor<
s64[y: 3]
[2, 1, 1]
>
iex> t = Nx.tensor([[3, 1, 4], [2, 1, 1]], names: [:x, :y])
iex> Nx.reduce_min(t, axes: [:y])
#Nx.Tensor<
s64[x: 2]
[1, 1]
>
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_min(t, axes: [:x, :z])
#Nx.Tensor<
s64[y: 2]
[1, 3]
>
```

### Keeping axes

```
iex> t = Nx.tensor([[[1, 2], [4, 5]], [[2, 4], [3, 8]]], names: [:x, :y, :z])
iex> Nx.reduce_min(t, axes: [:x, :z], keep_axes: true)
#Nx.Tensor<
s64[x: 1][y: 2][z: 1]
[
[
[1],
[3]
]
]
>
```

@spec standard_deviation(tensor :: Nx.Tensor.t(), opts :: Keyword.t()) :: Nx.Tensor.t()

Finds the standard deviation of a tensor.

The standard deviation is taken as the square root of the variance.
If the `:ddof` (delta degrees of freedom) option is given, the divisor
`n - ddof` is used to calculate the variance. See `variance/2`.
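Since the standard deviation is defined as the square root of the variance, the relationship can be checked directly. A minimal sketch on the default binary backend, assuming the default `:ddof` of 0 on both calls:

```elixir
t = Nx.tensor([[1, 2], [3, 4]])

sd = Nx.standard_deviation(t)
via_variance = Nx.sqrt(Nx.variance(t))

# both paths yield the same f32 scalar
Nx.equal(sd, via_variance)
```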

## Examples

```
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
f32
1.1180340051651
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), ddof: 1)
#Nx.Tensor<
f32
1.29099440574646
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), axes: [0])
#Nx.Tensor<
f32[2]
[1.0, 1.0]
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), axes: [1])
#Nx.Tensor<
f32[2]
[0.5, 0.5]
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), axes: [0], ddof: 1)
#Nx.Tensor<
f32[2]
[1.4142135381698608, 1.4142135381698608]
>
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), axes: [1], ddof: 1)
#Nx.Tensor<
f32[2]
[0.7071067690849304, 0.7071067690849304]
>
```

### Keeping axes

```
iex> Nx.standard_deviation(Nx.tensor([[1, 2], [3, 4]]), keep_axes: true)
#Nx.Tensor<
f32[1][1]
[
[1.1180340051651]
]
>
```

Returns the sum for the tensor.

If the `:axes` option is given, it aggregates over
the given dimensions, effectively removing them.
`axes: [0]` implies aggregating over the highest order
dimension and so forth. If the axis is negative, then
counts the axis from the back. For example, `axes: [-1]`
will always aggregate all rows.

You may optionally set `:keep_axes` to true, which will
retain the rank of the input tensor by setting the summed
axes to size 1.

## Examples

```
iex> Nx.sum(Nx.tensor(42))
#Nx.Tensor<
s64
42
>
iex> Nx.sum(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
s64
6
>
iex> Nx.sum(Nx.tensor([[1.0, 2.0], [3.0, 4.0]], names: [:x, :y]))
#Nx.Tensor<
f32
10.0
>
```

Giving a tensor with low precision casts it to a higher precision to make sure the sum does not overflow:

```
iex> Nx.sum(Nx.tensor([[101, 102], [103, 104]], type: {:s, 8}, names: [:x, :y]))
#Nx.Tensor<
s64
410
>
iex> Nx.sum(Nx.tensor([[101, 102], [103, 104]], type: {:s, 16}, names: [:x, :y]))
#Nx.Tensor<
s64
410
>
```

### Aggregating over an axis

```
iex> Nx.sum(Nx.tensor([1, 2, 3], names: [:x]), axes: [0])
#Nx.Tensor<
s64
6
>
```

Same tensor over different axes combinations:

```
iex> t = Nx.tensor(
...> [
...> [
...> [1, 2, 3],
...> [4, 5, 6]
...> ],
...> [
...> [7, 8, 9],
...> [10, 11, 12]
...> ]
...> ],
...> names: [:x, :y, :z]
...> )
iex> Nx.sum(t, axes: [:x])
#Nx.Tensor<
s64[y: 2][z: 3]
[
[8, 10, 12],
[14, 16, 18]
]
>
iex> Nx.sum(t, axes: [:y])
#Nx.Tensor<
s64[x: 2][z: 3]
[
[5, 7, 9],
[17, 19, 21]
]
>
iex> Nx.sum(t, axes: [:z])
#Nx.Tensor<
s64[x: 2][y: 2]
[
[6, 15],
[24, 33]
]
>
iex> Nx.sum(t, axes: [:x, :z])
#Nx.Tensor<
s64[y: 2]
[30, 48]
>
iex> Nx.sum(t, axes: [-3])
#Nx.Tensor<
s64[y: 2][z: 3]
[
[8, 10, 12],
[14, 16, 18]
]
>
```

### Keeping axes

```
iex> t = Nx.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]], names: [:x, :y, :z])
iex> Nx.sum(t, axes: [:z], keep_axes: true)
#Nx.Tensor<
s64[x: 2][y: 2][z: 1]
[
[
[6],
[15]
],
[
[24],
[33]
]
]
>
```

### Errors

```
iex> Nx.sum(Nx.tensor([[1, 2]]), axes: [2])
** (ArgumentError) given axis (2) invalid for shape with rank 2
```

@spec variance(tensor :: Nx.Tensor.t(), opts :: Keyword.t()) :: Nx.Tensor.t()

Finds the variance of a tensor.

The variance is the average of the squared deviations from the mean.
The mean is typically calculated as `sum(tensor) / n`, where `n` is the
total number of elements. If, however, `:ddof` (delta degrees of freedom)
is specified, the divisor `n - ddof` is used instead.
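As a sketch of that definition, the default variance (with `:ddof` of 0) can be reproduced with `mean/2` and elementwise operations:

```elixir
t = Nx.tensor([[1, 2], [3, 4]])

mean = Nx.mean(t)                  # 2.5
deviations = Nx.subtract(t, mean)  # the scalar mean broadcasts over t
manual = deviations |> Nx.multiply(deviations) |> Nx.mean()
# matches Nx.variance(t), i.e. 1.25
```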

## Examples

```
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]))
#Nx.Tensor<
f32
1.25
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), ddof: 1)
#Nx.Tensor<
f32
1.6666666269302368
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [0])
#Nx.Tensor<
f32[2]
[1.0, 1.0]
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [1])
#Nx.Tensor<
f32[2]
[0.25, 0.25]
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [0], ddof: 1)
#Nx.Tensor<
f32[2]
[2.0, 2.0]
>
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [1], ddof: 1)
#Nx.Tensor<
f32[2]
[0.5, 0.5]
>
```

### Keeping axes

```
iex> Nx.variance(Nx.tensor([[1, 2], [3, 4]]), axes: [1], keep_axes: true)
#Nx.Tensor<
f32[2][1]
[
[0.25],
[0.25]
]
>
```

# Functions: Backend

Copies data to the given backend.

If a backend is not given, `Nx.Tensor` is used, which means
the given tensor backend will pick the most appropriate
backend to copy the data to.

Note this function keeps the data in the original backend.
Therefore, use this function with care, as it may duplicate
large amounts of data across backends. Generally speaking,
you may want to use `backend_transfer/2`, unless you explicitly
want to copy the data.

For convenience, this function accepts tensors and any container
(such as maps and tuples as defined by the `Nx.Container` protocol)
and recursively copies all tensors in them. This behaviour exists
as it is common to transfer data before and after `defn` functions.

*Note: `Nx.default_backend/1` does not affect the behaviour of
this function.*

### Examples

```
iex> Nx.backend_copy(Nx.tensor([[1, 2, 3], [4, 5, 6]]))
#Nx.Tensor<
s64[2][3]
[
[1, 2, 3],
[4, 5, 6]
]
>
```

Deallocates data in a device.

It returns either `:ok` or `:already_deallocated`.

For convenience, this function accepts tensors and any container
(such as maps and tuples as defined by the `Nx.Container` protocol)
and deallocates all devices in them. This behaviour exists as it is
common to deallocate data after `defn` functions.
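A minimal sketch of the copy-then-deallocate flow, using only the default binary backend (no device backend is assumed here):

```elixir
t = Nx.tensor([1, 2, 3])

copy = Nx.backend_copy(t)    # duplicates the data; t stays allocated
Nx.backend_deallocate(copy)  # returns :ok (or :already_deallocated)

# containers work too: every tensor inside them is deallocated
Nx.backend_deallocate({Nx.tensor([1]), %{b: Nx.tensor([2])}})
```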

Transfers data to the given backend.

This operation can be seen as an equivalent to `backend_copy/3`
followed by a `backend_deallocate/1` on the initial tensor:

```
new_tensor = Nx.backend_copy(old_tensor, new_backend)
Nx.backend_deallocate(old_tensor)
```

If a backend is not given, `Nx.Tensor` is used, which means
the given tensor backend will pick the most appropriate
backend to transfer to.

For Elixir's builtin tensor, transferring to another backend
will call `new_backend.from_binary(tensor, binary, opts)`.
Transferring from a mutable backend, such as GPU memory,
implies the data is copied from the GPU to the Erlang VM
and then deallocated from the device.

For convenience, this function accepts tensors and any container
(such as maps and tuples as defined by the `Nx.Container` protocol)
and transfers all tensors in them. This behaviour exists as it is
common to transfer data from tuples and maps before and after `defn`
functions.

*Note: `Nx.default_backend/1` does not affect the behaviour of
this function.*

## Examples

Transfer a tensor to an EXLA device backend, stored in the GPU:

`device_tensor = Nx.backend_transfer(tensor, {EXLA.Backend, client: :cuda})`

Transfer the device tensor back to an Elixir tensor:

`tensor = Nx.backend_transfer(device_tensor)`

Gets the default backend for the current process.

Sets the current process default backend to `backend` with the given `opts`.

The default backend is stored only in the process dictionary.
This means if you start a separate process, such as `Task`,
the default backend must be set on the new process too.

This function is mostly used for scripting and testing. In your applications, you typically set the backend in your config files:

`config :nx, :default_backend, {Lib.CustomBackend, device: :cuda}`

## Examples

```
iex> Nx.default_backend({Lib.CustomBackend, device: :cuda})
{Nx.BinaryBackend, []}
iex> Nx.default_backend()
{Lib.CustomBackend, device: :cuda}
```

Sets the default backend globally.

You must avoid calling this function at runtime. It is mostly
useful during scripts or code notebooks to set a default.
If you need to configure a global default backend in your
applications, you can do so in your `config/*.exs` files:

`config :nx, :default_backend, {Lib.CustomBackend, []}`

# Functions: Conversion

Deserializes a serialized representation of a tensor or a container with the given options.

It is the opposite of `Nx.serialize/2`.

## Examples

```
iex> a = Nx.tensor([1, 2, 3])
iex> serialized_a = Nx.serialize(a)
iex> Nx.deserialize(serialized_a)
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
iex> container = {Nx.tensor([1, 2, 3]), %{b: Nx.tensor([4, 5, 6])}}
iex> serialized_container = Nx.serialize(container)
iex> {a, %{b: b}} = Nx.deserialize(serialized_container)
iex> a
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
iex> b
#Nx.Tensor<
s64[3]
[4, 5, 6]
>
```

Loads a `.npy` file into a tensor.

An `.npy` file stores a single array created from Python's
NumPy library. This function can be useful for loading data
originally created or intended to be loaded from NumPy into
Elixir.

Loads a `.npz` archive into a list of tensors.

An `.npz` file is a zipped, possibly compressed archive containing
multiple `.npy` files.

Serializes the given tensor or container of tensors to a binary.

You may pass a tensor, tuple, or map to serialize.

`opts` controls the serialization options. For example, you can choose
to compress the given tensor or container of tensors by passing a
compression level:

`Nx.serialize(tensor, compressed: 9)`

Compression level corresponds to compression options in `:erlang.term_to_binary/2`.

## Examples

```
iex> a = Nx.tensor([1, 2, 3])
iex> serialized_a = Nx.serialize(a)
iex> Nx.deserialize(serialized_a)
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
iex> container = {Nx.tensor([1, 2, 3]), %{b: Nx.tensor([4, 5, 6])}}
iex> serialized_container = Nx.serialize(container)
iex> {a, %{b: b}} = Nx.deserialize(serialized_container)
iex> a
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
iex> b
#Nx.Tensor<
s64[3]
[4, 5, 6]
>
```

Converts the underlying tensor to a list of tensors.

The first dimension (axis 0) is divided by `batch_size`.
In case the dimension cannot be evenly divided by
`batch_size`, you may specify what to do with leftover
data using `:leftover`. `:leftover` must be one of `:repeat`
or `:discard`. `:repeat` repeats the first `n` values to
make the last batch match the desired batch size. `:discard`
discards excess elements.

## Examples

```
iex> [first, second] = Nx.to_batched_list(Nx.iota({2, 2, 2}), 1)
iex> first
#Nx.Tensor<
s64[1][2][2]
[
[
[0, 1],
[2, 3]
]
]
>
iex> second
#Nx.Tensor<
s64[1][2][2]
[
[
[4, 5],
[6, 7]
]
]
>
iex> [first, second, third] = Nx.to_batched_list(Nx.iota({6, 2}, names: [:x, :y]), 2)
iex> first
#Nx.Tensor<
s64[x: 2][y: 2]
[
[0, 1],
[2, 3]
]
>
iex> second
#Nx.Tensor<
s64[x: 2][y: 2]
[
[4, 5],
[6, 7]
]
>
iex> third
#Nx.Tensor<
s64[x: 2][y: 2]
[
[8, 9],
[10, 11]
]
>
```

If the batch size would result in uneven batches, you can repeat or discard excess data:

```
iex> [first, second, third] = Nx.to_batched_list(Nx.iota({5, 2}, names: [:x, :y]), 2)
iex> first
#Nx.Tensor<
s64[x: 2][y: 2]
[
[0, 1],
[2, 3]
]
>
iex> second
#Nx.Tensor<
s64[x: 2][y: 2]
[
[4, 5],
[6, 7]
]
>
iex> third
#Nx.Tensor<
s64[x: 2][y: 2]
[
[8, 9],
[0, 1]
]
>
iex> [first, second] = Nx.to_batched_list(Nx.iota({5, 2}, names: [:x, :y]), 2, leftover: :discard)
iex> first
#Nx.Tensor<
s64[x: 2][y: 2]
[
[0, 1],
[2, 3]
]
>
iex> second
#Nx.Tensor<
s64[x: 2][y: 2]
[
[4, 5],
[6, 7]
]
>
```

Returns the underlying tensor as a binary.

**Warning**: converting a tensor to a binary can
potentially be a very expensive operation, as it
may copy a GPU tensor fully to the machine memory.

It returns the in-memory binary representation of the tensor in a row-major fashion. The binary is in the system endianness, which has to be taken into account if the binary is meant to be serialized to other systems.

## Options

- `:limit` - limit the number of entries represented in the binary

## Examples

```
iex> Nx.to_binary(1)
<<1::64-native>>
iex> Nx.to_binary(Nx.tensor([1.0, 2.0, 3.0]))
<<1.0::float-32-native, 2.0::float-32-native, 3.0::float-32-native>>
iex> Nx.to_binary(Nx.tensor([1.0, 2.0, 3.0]), limit: 2)
<<1.0::float-32-native, 2.0::float-32-native>>
```
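Because the binary carries no shape or type information, a common pattern is to pair `to_binary/2` with `from_binary/3` and restore the dimensions with `reshape/2`. A sketch, relying on the row-major layout described above:

```elixir
t = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])

restored =
  t
  |> Nx.to_binary()
  |> Nx.from_binary(Nx.type(t))   # yields a flat one-dimensional tensor
  |> Nx.reshape(Nx.shape(t))      # restore the original {2, 2} shape

# restored now holds the same data and shape as t
```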

Returns the underlying tensor as a flat list.

Negative infinity, infinity, and NaN will be respectively returned
as the atoms `:neg_infinity`, `:infinity`, and `:nan`.

## Examples

```
iex> Nx.to_flat_list(1)
[1]
iex> Nx.to_flat_list(Nx.tensor([1.0, 2.0, 3.0]))
[1.0, 2.0, 3.0]
iex> Nx.to_flat_list(Nx.tensor([1.0, 2.0, 3.0]), limit: 2)
[1.0, 2.0]
```

Non-finite numbers are returned as atoms:

```
iex> t = Nx.tensor([:neg_infinity, :nan, :infinity])
iex> Nx.to_flat_list(t)
[:neg_infinity, :nan, :infinity]
```

Returns a heatmap struct with the tensor data.

On terminals, coloring is done via ANSI colors. If ANSI is not enabled, the tensor is normalized to show numbers between 0 and 9.

## Terminal coloring

Coloring is enabled by default on most Unix terminals. It is also available on Windows consoles from Windows 10, although it must be explicitly enabled for the current user in the registry by running the following command:

`reg add HKCU\Console /v VirtualTerminalLevel /t REG_DWORD /d 1`

After running the command above, you must restart your current console.

## Options

- `:ansi_enabled` - forces ANSI to be enabled or disabled. Defaults to `IO.ANSI.enabled?/0`
- `:ansi_whitespace` - which whitespace character to use when printing. By default it uses `"\u3000"`, a full-width whitespace that often prints more precise shapes
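For example (the rendered coloring depends on the terminal, so no output is shown here):

```elixir
t = Nx.iota({3, 3})

Nx.to_heatmap(t)
# or force the non-ANSI, 0-9 normalized rendering:
Nx.to_heatmap(t, ansi_enabled: false)
```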

Returns the underlying tensor as a number.

If the tensor has a dimension, it raises.

## Examples

```
iex> Nx.to_number(1)
1
iex> Nx.to_number(Nx.tensor([1.0, 2.0, 3.0]))
** (ArgumentError) cannot convert tensor of shape {3} to number
```

Converts a tensor (or tuples and maps of tensors) to tensor templates.

Templates are useful when you need to pass types and shapes to operations and the data is not yet available.

For convenience, this function accepts tensors and any container
(such as maps and tuples as defined by the `Nx.Container` protocol)
and recursively converts all tensors to templates.

## Examples

```
iex> Nx.iota({2, 3}) |> Nx.to_template()
#Nx.Tensor<
s64[2][3]
Nx.TemplateBackend
>
iex> {int, float} = Nx.to_template({1, 2.0})
iex> int
#Nx.Tensor<
s64
Nx.TemplateBackend
>
iex> float
#Nx.Tensor<
f32
Nx.TemplateBackend
>
```

Note, however, that it is impossible to perform any operation on a tensor template:

```
iex> t = Nx.iota({2, 3}) |> Nx.to_template()
iex> Nx.abs(t)
** (RuntimeError) cannot perform operations on a Nx.TemplateBackend tensor
```

To build a template from scratch, use `template/3`.

Converts the given number (or tensor) to a tensor.

This function exists for data normalization. If your
goal is to create tensors from lists, see `tensor/2`.
If you want to create a tensor from binary, see
`from_binary/3`.
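A quick sketch of the normalization behaviour:

```elixir
Nx.to_tensor(1.0)        # wraps the number into a scalar f32 tensor

t = Nx.tensor([1, 2, 3])
^t = Nx.to_tensor(t)     # tensors are returned as-is
```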

# Functions: Creation

Creates the identity matrix of size `n`.

## Examples

```
iex> Nx.eye(2)
#Nx.Tensor<
s64[2][2]
[
[1, 0],
[0, 1]
]
>
iex> Nx.eye(3, type: {:f, 32}, names: [:height, :width])
#Nx.Tensor<
f32[height: 3][width: 3]
[
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]
]
>
```

The first argument can also be a tensor or a shape of a square matrix:

```
iex> Nx.eye(Nx.iota({2, 2}))
#Nx.Tensor<
s64[2][2]
[
[1, 0],
[0, 1]
]
>
iex> Nx.eye({1, 1})
#Nx.Tensor<
s64[1][1]
[
[1]
]
>
```

## Options

- `:type` - the type of the tensor
- `:names` - the names of the tensor dimensions
- `:backend` - the backend to allocate the tensor on. It is either an atom or a tuple in the shape `{backend, options}`. This option is ignored inside `defn`

Creates a one-dimensional tensor from a `binary` with the given `type`.

If the binary size does not match its type, an error is raised.

## Examples

```
iex> Nx.from_binary(<<1, 2, 3, 4>>, {:s, 8})
#Nx.Tensor<
s8[4]
[1, 2, 3, 4]
>
```

The atom notation for types is also supported:

```
iex> Nx.from_binary(<<12.3::float-64-native>>, :f64)
#Nx.Tensor<
f64[1]
[12.3]
>
```

An error is raised for incompatible sizes:

```
iex> Nx.from_binary(<<1, 2, 3, 4>>, {:f, 64})
** (ArgumentError) binary does not match the given size
```

## Options

- `:backend` - the backend to allocate the tensor on. It is either an atom or a tuple in the shape `{backend, options}`. This option is ignored inside `defn`

Creates a tensor with the given shape which increments along the provided axis. You may optionally provide dimension names.

If no axis is provided, index counts up at each element.

If a tensor or a number is given, the shape and names are taken from the tensor.

## Examples

```
iex> Nx.iota({})
#Nx.Tensor<
s64
0
>
iex> Nx.iota({5})
#Nx.Tensor<
s64[5]
[0, 1, 2, 3, 4]
>
iex> Nx.iota({3, 2, 3}, names: [:batch, :height, :width])
#Nx.Tensor<
s64[batch: 3][height: 2][width: 3]
[
[
[0, 1, 2],
[3, 4, 5]
],
[
[6, 7, 8],
[9, 10, 11]
],
[
[12, 13, 14],
[15, 16, 17]
]
]
>
iex> Nx.iota({3, 3}, axis: 1, names: [:batch, nil])
#Nx.Tensor<
s64[batch: 3][3]
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
>
iex> Nx.iota({3, 3}, axis: -1)
#Nx.Tensor<
s64[3][3]
[
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]
]
>
iex> Nx.iota({3, 4, 3}, axis: 0, type: {:f, 64})
#Nx.Tensor<
f64[3][4][3]
[
[
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0]
],
[
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0]
],
[
[2.0, 2.0, 2.0],
[2.0, 2.0, 2.0],
[2.0, 2.0, 2.0],
[2.0, 2.0, 2.0]
]
]
>
iex> Nx.iota({1, 3, 2}, axis: 2)
#Nx.Tensor<
s64[1][3][2]
[
[
[0, 1],
[0, 1],
[0, 1]
]
]
>
```

## Options

- `:type` - the type of the tensor
- `:axis` - an axis to repeat the iota over
- `:names` - the names of the tensor dimensions
- `:backend` - the backend to allocate the tensor on. It is either an atom or a tuple in the shape `{backend, options}`. This option is ignored inside `defn`

Creates a diagonal tensor from a 1D tensor.

Converse of `take_diagonal/2`.

The returned tensor will be a square matrix of dimensions equal to the size of the tensor. If an offset is given, the absolute value of the offset is added to the matrix dimensions sizes.

## Examples

Given a 1D tensor:

```
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3, 4]))
#Nx.Tensor<
s64[4][4]
[
[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]
]
>
```

Given a 1D tensor with an offset:

```
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: 1)
#Nx.Tensor<
s64[4][4]
[
[0, 1, 0, 0],
[0, 0, 2, 0],
[0, 0, 0, 3],
[0, 0, 0, 0]
]
>
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: -1)
#Nx.Tensor<
s64[4][4]
[
[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0]
]
>
```

You can also use offsets whose absolute value is greater than the tensor length:

```
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: -4)
#Nx.Tensor<
s64[7][7]
[
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0],
[0, 0, 3, 0, 0, 0, 0]
]
>
iex> Nx.make_diagonal(Nx.tensor([1, 2, 3]), offset: 4)
#Nx.Tensor<
s64[7][7]
[
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 2, 0],
[0, 0, 0, 0, 0, 0, 3],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]
]
>
```

## Options

- `:offset` - offset used for making the diagonal. Use offset > 0 for diagonals above the main diagonal, and offset < 0 for diagonals below the main diagonal. Defaults to 0.

## Error cases

```
iex> Nx.make_diagonal(Nx.tensor([[0, 0], [0, 1]]))
** (ArgumentError) make_diagonal/2 expects tensor of rank 1, got tensor of rank: 2
```

Shortcut for `random_normal(shape, 0.0, 1.0, opts)`.

Returns a normally-distributed random tensor with the given shape.

The distribution has mean of `mu` and standard deviation of
`sigma`. Return type is one of `{:bf, 16}`, `{:f, 32}` or `{:f, 64}`.

If a tensor or a number is given, the shape is taken from the tensor.

## Examples

```
iex> t = Nx.random_normal({10})
iex> Nx.shape(t)
{10}
iex> Nx.type(t)
{:f, 32}
iex> t = Nx.random_normal({5, 5}, 2.0, 1.0, type: {:bf, 16})
iex> Nx.shape(t)
{5, 5}
iex> Nx.type(t)
{:bf, 16}
iex> t = Nx.random_normal({3, 3, 3}, -1.0, 1.0, type: {:f, 32})
iex> Nx.shape(t)
{3, 3, 3}
iex> Nx.type(t)
{:f, 32}
```

If given a tensor as a shape, it takes the shape, names, and default type from the tensor:

```
iex> t = Nx.tensor([[1.0, 2.0], [3.0, 4.0]], names: [:batch, :data])
iex> t = Nx.random_normal(t)
iex> Nx.shape(t)
{2, 2}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[:batch, :data]
iex> t = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])
iex> t = Nx.random_normal(t, type: {:f, 32})
iex> Nx.shape(t)
{2, 2}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[nil, nil]
```

The same applies to numbers:

```
iex> t = Nx.random_normal(10.0)
iex> Nx.shape(t)
{}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[]
```

If you pass the `:names` option, the resulting tensor will take on those names:

```
iex> t = Nx.tensor([[1, 2], [3, 4]], names: [:batch, :data])
iex> t = Nx.random_normal(t, names: [:batch, nil])
iex> Nx.shape(t)
{2, 2}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[:batch, nil]
```

## Options

- `:type` - the type of the tensor
- `:names` - the names of the tensor dimensions
- `:backend` - the backend to allocate the tensor on. It is either an atom or a tuple in the shape `{backend, options}`. This option is ignored inside `defn`

Shortcut for `random_uniform(shape, 0.0, 1.0, opts)`.

Returns a uniformly-distributed random tensor with the given shape.

The distribution is bounded on the semi-open interval `[min, max)`.
If `min` and `max` are integers, then the tensor has type `{:s, 64}`.
Otherwise, a `{:f, 64}` tensor is returned. You can also pass any
valid type via the `:type` option.

If a tensor or a number is given, the shape and default type are taken from it.

## Examples

### Generating Floats

```
iex> t = Nx.random_uniform({10})
iex> for <<x::float-32-native <- Nx.to_binary(t)>> do
...> true = x >= 0.0 and x < 1.0
...> end
iex> Nx.shape(t)
{10}
iex> Nx.type(t)
{:f, 32}
iex> t = Nx.random_uniform({5, 5}, type: {:bf, 16})
iex> byte_size(Nx.to_binary(t))
50
iex> Nx.shape(t)
{5, 5}
iex> Nx.type(t)
{:bf, 16}
iex> t = Nx.random_uniform({5, 5}, -1.0, 1.0, type: {:f, 64})
iex> for <<x::float-64-native <- Nx.to_binary(t)>> do
...> true = x >= -1.0 and x < 1.0
...> end
iex> Nx.shape(t)
{5, 5}
iex> Nx.type(t)
{:f, 64}
```

### Generating Integers

```
iex> t = Nx.random_uniform({10}, 5, 10, type: {:u, 8})
iex> for <<x::8-unsigned-native <- Nx.to_binary(t)>> do
...> true = x >= 5 and x < 10
...> end
iex> Nx.shape(t)
{10}
iex> Nx.type(t)
{:u, 8}
iex> t = Nx.random_uniform({5, 5}, -5, 5, type: {:s, 64})
iex> for <<x::64-signed-native <- Nx.to_binary(t)>> do
...> true = x >= -5 and x < 5
...> end
iex> Nx.shape(t)
{5, 5}
iex> Nx.type(t)
{:s, 64}
```

### Tensors as shapes

If given a tensor as a shape, it takes the shape and names from the tensor:

```
iex> t = Nx.tensor([[1, 2], [3, 4]], names: [:batch, :data])
iex> t = Nx.random_uniform(t)
iex> Nx.shape(t)
{2, 2}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[:batch, :data]
iex> t = Nx.tensor([[1, 2], [3, 4]])
iex> t = Nx.random_uniform(t, type: {:f, 32})
iex> Nx.shape(t)
{2, 2}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[nil, nil]
```

The same applies to numbers:

```
iex> t = Nx.random_uniform(10)
iex> Nx.shape(t)
{}
iex> Nx.type(t)
{:f, 32}
iex> t = Nx.random_uniform(10.0)
iex> Nx.shape(t)
{}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[]
```

If you pass `:names` as an option, the resulting tensor will take on those names:

```
iex> t = Nx.tensor([[1, 2], [3, 4]], names: [:batch, :data])
iex> t = Nx.random_uniform(t, names: [:batch, nil])
iex> Nx.shape(t)
{2, 2}
iex> Nx.type(t)
{:f, 32}
iex> Nx.names(t)
[:batch, nil]
```

## Options

- `:type` - the type of the tensor
- `:names` - the names of the tensor dimensions
- `:backend` - the backend to allocate the tensor on. It is either an atom or a tuple in the shape `{backend, options}`. This option is ignored inside `defn`

Shuffles tensor elements.

By default, shuffles elements within the whole tensor. When `:axis`
is given, shuffles the tensor along the specific axis instead.

## Options

- `:axis` - the axis to shuffle along

## Examples

Shuffling all elements:

```
t = Nx.tensor([[1, 2], [3, 4], [5, 6]])
Nx.shuffle(t)
#=>
#Nx.Tensor<
s64[3][2]
[
[5, 1],
[2, 3],
[6, 4]
]
>
```

Shuffling rows in a two-dimensional tensor:

```
t = Nx.tensor([[1, 2], [3, 4], [5, 6]])
Nx.shuffle(t, axis: 0)
#=>
#Nx.Tensor<
s64[3][2]
[
[5, 6],
[1, 2],
[3, 4]
]
>
```

A convenient `~M` sigil for building matrices (two-dimensional tensors).

## Examples

Before using sigils, you must first import them:

`import Nx, only: :sigils`

Then you use the sigil to create matrices. The sigil:

```
~M<
-1 0 0 1
0 2 0 0
0 0 3 0
0 0 0 4
>
```

Is equivalent to:

```
Nx.tensor([
[-1, 0, 0, 1],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]
])
```

If the tensor has any complex type, it defaults to c64. If the tensor has any float type, it defaults to f32. Otherwise, it is s64. You can specify the tensor type as a sigil modifier:

```
iex> import Nx, only: :sigils
iex> ~M[0.1 0.2 0.3 0.4]f16
#Nx.Tensor<
f16[1][4]
[
[0.0999755859375, 0.199951171875, 0.300048828125, 0.39990234375]
]
>
iex> ~M[1+1i 2-2.0i -3]
#Nx.Tensor<
c64[1][3]
[
[1.0+1.0i, 2.0-2.0i, -3.0+0.0i]
]
>
```

A convenient `~V` sigil for building vectors (one-dimensional tensors).

## Examples

Before using sigils, you must first import them:

`import Nx, only: :sigils`

Then you use the sigil to create vectors. The sigil:

`~V[-1 0 0 1]`

Is equivalent to:

`Nx.tensor([-1, 0, 0, 1])`

If the tensor has any complex type, it defaults to c64. If the tensor has any float type, it defaults to f32. Otherwise, it is s64. You can specify the tensor type as a sigil modifier:

```
iex> import Nx, only: :sigils
iex> ~V[0.1 0.2 0.3 0.4]f16
#Nx.Tensor<
f16[4]
[0.0999755859375, 0.199951171875, 0.300048828125, 0.39990234375]
>
iex> ~V[1+1i 2-2.0i -3]
#Nx.Tensor<
c64[3]
[1.0+1.0i, 2.0-2.0i, -3.0+0.0i]
>
```

Extracts the diagonal of a 2D tensor.

Converse of `make_diagonal/2`.

## Examples

Given a 2D tensor without offset:

```
iex> Nx.take_diagonal(Nx.tensor([
...> [0, 1, 2],
...> [3, 4, 5],
...> [6, 7, 8]
...> ]))
#Nx.Tensor<
s64[3]
[0, 4, 8]
>
```

And if given a 2D tensor along with an offset:

```
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: 1)
#Nx.Tensor<
s64[2]
[1, 5]
>
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: -1)
#Nx.Tensor<
s64[2]
[3, 7]
>
```

## Options

- `:offset` - offset used for extracting the diagonal. Use offset > 0 for diagonals above the main diagonal, and offset < 0 for diagonals below the main diagonal. Defaults to 0.

## Error cases

```
iex> Nx.take_diagonal(Nx.tensor([0, 1, 2]))
** (ArgumentError) take_diagonal/2 expects tensor of rank 2, got tensor of rank: 1
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: 3)
** (ArgumentError) offset must be less than length of axis 1 when positive, got: 3
iex> Nx.take_diagonal(Nx.iota({3, 3}), offset: -4)
** (ArgumentError) absolute value of offset must be less than length of axis 0 when negative, got: -4
```

Creates a tensor template.

You can't perform any operation on this tensor. It exists exclusively to define APIs that say a tensor with a certain type, shape, and names is expected in the future.

## Examples

```
iex> Nx.template({2, 3}, {:f, 32})
#Nx.Tensor<
f32[2][3]
Nx.TemplateBackend
>
iex> Nx.template({2, 3}, {:f, 32}, names: [:rows, :columns])
#Nx.Tensor<
f32[rows: 2][columns: 3]
Nx.TemplateBackend
>
```

Note, however, that it is impossible to perform any operation on a tensor template:

```
iex> t = Nx.template({2, 3}, {:f, 32}, names: [:rows, :columns])
iex> Nx.abs(t)
** (RuntimeError) cannot perform operations on a Nx.TemplateBackend tensor
```

To convert existing tensors to templates, use `to_template/1`.

Builds a tensor.

The argument is either a number, which means the tensor is a scalar
(zero-dimensions), a list of those (the tensor is a vector) or
a list of n-lists of those, leading to n-dimensional tensors.
The tensor will be allocated in `Nx.default_backend/1`, unless the
`:backend` option is given, which overrides the default one.

## Examples

A number returns a tensor of zero dimensions:

```
iex> Nx.tensor(0)
#Nx.Tensor<
s64
0
>
iex> Nx.tensor(1.0)
#Nx.Tensor<
f32
1.0
>
```

Giving a list returns a vector (a one-dimensional tensor):

```
iex> Nx.tensor([1, 2, 3])
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
iex> Nx.tensor([1.2, 2.3, 3.4, 4.5])
#Nx.Tensor<
f32[4]
[1.2000000476837158, 2.299999952316284, 3.4000000953674316, 4.5]
>
```

The type can be explicitly given. Integers and floats bigger than the given size overflow:

```
iex> Nx.tensor([300, 301, 302], type: {:s, 8})
#Nx.Tensor<
s8[3]
[44, 45, 46]
>
```

Mixed types give higher priority to floats:

```
iex> Nx.tensor([1, 2, 3.0])
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
```

Multi-dimensional tensors are also possible:

```
iex> Nx.tensor([[1, 2, 3], [4, 5, 6]])
#Nx.Tensor<
s64[2][3]
[
[1, 2, 3],
[4, 5, 6]
]
>
iex> Nx.tensor([[1, 2], [3, 4], [5, 6]])
#Nx.Tensor<
s64[3][2]
[
[1, 2],
[3, 4],
[5, 6]
]
>
iex> Nx.tensor([[[1, 2], [3, 4], [5, 6]], [[-1, -2], [-3, -4], [-5, -6]]])
#Nx.Tensor<
s64[2][3][2]
[
[
[1, 2],
[3, 4],
[5, 6]
],
[
[-1, -2],
[-3, -4],
[-5, -6]
]
]
>
```

Besides single-precision (32 bits), floats can also have half-precision (16) or double-precision (64):

```
iex> Nx.tensor([1, 2, 3], type: {:f, 16})
#Nx.Tensor<
f16[3]
[1.0, 2.0, 3.0]
>
iex> Nx.tensor([1, 2, 3], type: {:f, 64})
#Nx.Tensor<
f64[3]
[1.0, 2.0, 3.0]
>
```

Brain-floating points are also supported:

```
iex> Nx.tensor([1, 2, 3], type: {:bf, 16})
#Nx.Tensor<
bf16[3]
[1.0, 2.0, 3.0]
>
```

You can also provide names for tensor dimensions. Names are either atoms or `nil`:

```
iex> Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y])
#Nx.Tensor<
s64[x: 2][y: 3]
[
[1, 2, 3],
[4, 5, 6]
]
>
```

Names make your code more expressive:

```
iex> Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, :height, :width])
#Nx.Tensor<
s64[batch: 1][height: 3][width: 3]
[
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
]
>
```

You can also leave dimension names as `nil`:

```
iex> Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, nil, nil])
#Nx.Tensor<
s64[batch: 1][3][3]
[
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
]
>
```

However, you must provide a name for every dimension in the tensor:

```
iex> Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch])
** (ArgumentError) invalid names for tensor of rank 3, when specifying names every dimension must have a name or be nil
```

## Options

- `:type` - sets the type of the tensor. If one is not given, one is automatically inferred based on the input.
- `:names` - dimension names. If you wish to specify dimension names you must specify a name for every dimension in the tensor. Only `nil` and atoms are supported as dimension names.
- `:backend` - the backend to allocate the tensor on. It is either an atom or a tuple in the shape `{backend, options}`. This option is ignored inside `defn`.

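As a sketch of the `:backend` option, the default pure-Elixir backend, `Nx.BinaryBackend`, can be passed explicitly, here in the `{backend, options}` tuple form with an empty options list:

```
iex> Nx.tensor([1, 2, 3], backend: {Nx.BinaryBackend, []})
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
```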
# Functions: Cumulative

Returns the cumulative maximum of elements along an axis.

## Options

- `:axis` - the axis to compare elements along. Defaults to `0`.

## Examples

```
iex> Nx.cumulative_max(Nx.tensor([3, 4, 2, 1]))
#Nx.Tensor<
s64[4]
[3, 4, 4, 4]
>
iex> Nx.cumulative_max(Nx.tensor([[2, 3, 1], [1, 3, 2], [2, 1, 3]]), axis: 0)
#Nx.Tensor<
s64[3][3]
[
[2, 3, 1],
[2, 3, 2],
[2, 3, 3]
]
>
iex> Nx.cumulative_max(Nx.tensor([[2, 3, 1], [1, 3, 2], [2, 1, 3]]), axis: 1)
#Nx.Tensor<
s64[3][3]
[
[2, 3, 3],
[1, 3, 3],
[2, 2, 3]
]
>
```

Returns the cumulative minimum of elements along an axis.

## Options

- `:axis` - the axis to compare elements along. Defaults to `0`.

## Examples

```
iex> Nx.cumulative_min(Nx.tensor([3, 4, 2, 1]))
#Nx.Tensor<
s64[4]
[3, 3, 2, 1]
>
iex> Nx.cumulative_min(Nx.tensor([[2, 3, 1], [1, 3, 2], [2, 1, 3]]), axis: 0)
#Nx.Tensor<
s64[3][3]
[
[2, 3, 1],
[1, 3, 1],
[1, 1, 1]
]
>
iex> Nx.cumulative_min(Nx.tensor([[2, 3, 1], [1, 3, 2], [2, 1, 3]]), axis: 1)
#Nx.Tensor<
s64[3][3]
[
[2, 2, 1],
[1, 1, 1],
[2, 1, 1]
]
>
```

Returns the cumulative product of elements along an axis.

## Options

- `:axis` - the axis to multiply elements along. Defaults to `0`.

## Examples

```
iex> Nx.cumulative_product(Nx.tensor([1, 2, 3, 4]))
#Nx.Tensor<
s64[4]
[1, 2, 6, 24]
>
iex> Nx.cumulative_product(Nx.iota({3, 3}), axis: 0)
#Nx.Tensor<
s64[3][3]
[
[0, 1, 2],
[0, 4, 10],
[0, 28, 80]
]
>
iex> Nx.cumulative_product(Nx.iota({3, 3}), axis: 1)
#Nx.Tensor<
s64[3][3]
[
[0, 0, 0],
[3, 12, 60],
[6, 42, 336]
]
>
```

Returns the cumulative sum of elements along an axis.

## Options

- `:axis` - the axis to sum elements along. Defaults to `0`.

## Examples

```
iex> Nx.cumulative_sum(Nx.tensor([1, 2, 3, 4]))
#Nx.Tensor<
s64[4]
[1, 3, 6, 10]
>
iex> Nx.cumulative_sum(Nx.iota({3, 3}), axis: 0)
#Nx.Tensor<
s64[3][3]
[
[0, 1, 2],
[3, 5, 7],
[9, 12, 15]
]
>
iex> Nx.cumulative_sum(Nx.iota({3, 3}), axis: 1)
#Nx.Tensor<
s64[3][3]
[
[0, 1, 3],
[3, 7, 12],
[6, 13, 21]
]
>
```

# Functions: Element-wise

Computes the absolute value of each element in the tensor.

## Examples

```
iex> Nx.abs(Nx.tensor([-2, -1, 0, 1, 2], names: [:x]))
#Nx.Tensor<
s64[x: 5]
[2, 1, 0, 1, 2]
>
```

Calculates the inverse cosine of each element in the tensor.

It is equivalent to:

$$acos(cos(z)) = z$$

## Examples

```
iex> Nx.acos(0.10000000149011612)
#Nx.Tensor<
f32
1.4706288576126099
>
iex> Nx.acos(Nx.tensor([0.10000000149011612, 0.5, 0.8999999761581421], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[1.4706288576126099, 1.0471975803375244, 0.4510268568992615]
>
```

Calculates the inverse hyperbolic cosine of each element in the tensor.

It is equivalent to:

$$acosh(cosh(z)) = z$$

## Examples

```
iex> Nx.acosh(1)
#Nx.Tensor<
f32
0.0
>
iex> Nx.acosh(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.0, 1.316957950592041, 1.7627471685409546]
>
```

Element-wise addition of two tensors.

If a number is given, it is converted to a tensor.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `+` operator in place of this function: `left + right`.

## Examples

### Adding scalars

```
iex> Nx.add(1, 2)
#Nx.Tensor<
s64
3
>
iex> Nx.add(1, 2.2)
#Nx.Tensor<
f32
3.200000047683716
>
```

### Adding a scalar to a tensor

```
iex> Nx.add(Nx.tensor([1, 2, 3], names: [:data]), 1)
#Nx.Tensor<
s64[data: 3]
[2, 3, 4]
>
iex> Nx.add(1, Nx.tensor([1, 2, 3], names: [:data]))
#Nx.Tensor<
s64[data: 3]
[2, 3, 4]
>
```

Given a float scalar, the tensor is converted to a float:

```
iex> Nx.add(Nx.tensor([1, 2, 3], names: [:data]), 1.0)
#Nx.Tensor<
f32[data: 3]
[2.0, 3.0, 4.0]
>
iex> Nx.add(Nx.tensor([1.0, 2.0, 3.0], names: [:data]), 1)
#Nx.Tensor<
f32[data: 3]
[2.0, 3.0, 4.0]
>
iex> Nx.add(Nx.tensor([1.0, 2.0, 3.0], type: {:f, 32}, names: [:data]), 1)
#Nx.Tensor<
f32[data: 3]
[2.0, 3.0, 4.0]
>
```

Unsigned tensors become signed and double their size if a negative number is given:

```
iex> Nx.add(Nx.tensor([0, 1, 2], type: {:u, 8}, names: [:data]), -1)
#Nx.Tensor<
s16[data: 3]
[-1, 0, 1]
>
```

### Adding tensors of the same shape

```
iex> left = Nx.tensor([[1, 2], [3, 4]], names: [:x, :y])
iex> right = Nx.tensor([[10, 20], [30, 40]], names: [nil, :y])
iex> Nx.add(left, right)
#Nx.Tensor<
s64[x: 2][y: 2]
[
[11, 22],
[33, 44]
]
>
```

### Adding tensors with broadcasting

```
iex> left = Nx.tensor([[1], [2]], names: [nil, :y])
iex> right = Nx.tensor([[10, 20]], names: [:x, nil])
iex> Nx.add(left, right)
#Nx.Tensor<
s64[x: 2][y: 2]
[
[11, 21],
[12, 22]
]
>
iex> left = Nx.tensor([[10, 20]], names: [:x, nil])
iex> right = Nx.tensor([[1], [2]], names: [nil, :y])
iex> Nx.add(left, right)
#Nx.Tensor<
s64[x: 2][y: 2]
[
[11, 21],
[12, 22]
]
>
iex> left = Nx.tensor([[1], [2]], names: [:x, nil])
iex> right = Nx.tensor([[10, 20], [30, 40]])
iex> Nx.add(left, right)
#Nx.Tensor<
s64[x: 2][2]
[
[11, 21],
[32, 42]
]
>
iex> left = Nx.tensor([[1, 2]])
iex> right = Nx.tensor([[10, 20], [30, 40]])
iex> Nx.add(left, right)
#Nx.Tensor<
s64[2][2]
[
[11, 22],
[31, 42]
]
>
```

Calculates the inverse sine of each element in the tensor.

It is equivalent to:

$$asin(sin(z)) = z$$

## Examples

```
iex> Nx.asin(0.10000000149011612)
#Nx.Tensor<
f32
0.1001674234867096
>
iex> Nx.asin(Nx.tensor([0.10000000149011612, 0.5, 0.8999999761581421], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.1001674234867096, 0.5235987901687622, 1.1197694540023804]
>
```

Calculates the inverse hyperbolic sine of each element in the tensor.

It is equivalent to:

$$asinh(sinh(z)) = z$$

## Examples

```
iex> Nx.asinh(1)
#Nx.Tensor<
f32
0.8813735842704773
>
iex> Nx.asinh(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.8813735842704773, 1.4436354637145996, 1.8184465169906616]
>
```

Element-wise arc tangent of two tensors.

If a number is given, it is converted to a tensor.

It always returns a float tensor. If any of the input tensors are not float, they are converted to f32.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

## Examples

### Arc tangent between scalars

```
iex> Nx.atan2(1, 2)
#Nx.Tensor<
f32
0.46364760398864746
>
```

### Arc tangent between tensors and scalars

```
iex> Nx.atan2(Nx.tensor([1, 2, 3], names: [:data]), 1)
#Nx.Tensor<
f32[data: 3]
[0.7853981852531433, 1.1071487665176392, 1.249045729637146]
>
iex> Nx.atan2(1, Nx.tensor([1.0, 2.0, 3.0], names: [:data]))
#Nx.Tensor<
f32[data: 3]
[0.7853981852531433, 0.46364760398864746, 0.32175055146217346]
>
```

### Arc tangent between tensors

```
iex> neg_and_pos_zero_columns = Nx.tensor([[-0.0], [0.0]], type: {:f, 64})
iex> neg_and_pos_zero_rows = Nx.tensor([-0.0, 0.0], type: {:f, 64})
iex> Nx.atan2(neg_and_pos_zero_columns, neg_and_pos_zero_rows)
#Nx.Tensor<
f64[2][2]
[
[-3.141592653589793, -0.0],
[3.141592653589793, 0.0]
]
>
```

Calculates the inverse tangent of each element in the tensor.

It is equivalent to:

$$atan(tan(z)) = z$$

## Examples

```
iex> Nx.atan(0.10000000149011612)
#Nx.Tensor<
f32
0.09966865181922913
>
iex> Nx.atan(Nx.tensor([0.10000000149011612, 0.5, 0.8999999761581421], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.09966865181922913, 0.46364760398864746, 0.7328150868415833]
>
```

Calculates the inverse hyperbolic tangent of each element in the tensor.

It is equivalent to:

$$atanh(tanh(z)) = z$$

## Examples

```
iex> Nx.atanh(0.10000000149011612)
#Nx.Tensor<
f32
0.10033535212278366
>
iex> Nx.atanh(Nx.tensor([0.10000000149011612, 0.5, 0.8999999761581421], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.10033535212278366, 0.5493061542510986, 1.4722193479537964]
>
```

Element-wise bitwise AND of two tensors.

Only integer tensors are supported. If a float or complex tensor is given, an error is raised.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `&&&` operator in place of this function: `left &&& right`.

## Examples

### Bitwise AND between scalars

```
iex> Nx.bitwise_and(1, 0)
#Nx.Tensor<
s64
0
>
```

### Bitwise AND between tensors and scalars

```
iex> Nx.bitwise_and(Nx.tensor([0, 1, 2], names: [:data]), 1)
#Nx.Tensor<
s64[data: 3]
[0, 1, 0]
>
iex> Nx.bitwise_and(Nx.tensor([0, -1, -2], names: [:data]), -1)
#Nx.Tensor<
s64[data: 3]
[0, -1, -2]
>
```

### Bitwise AND between tensors

```
iex> Nx.bitwise_and(Nx.tensor([0, 0, 1, 1], names: [:data]), Nx.tensor([0, 1, 0, 1]))
#Nx.Tensor<
s64[data: 4]
[0, 0, 0, 1]
>
```

### Error cases

```
iex> Nx.bitwise_and(Nx.tensor([0, 0, 1, 1]), 1.0)
** (ArgumentError) bitwise operators expect integer tensors as inputs and outputs an integer tensor, got: {:f, 32}
```

Applies bitwise not to each element in the tensor.

If you're using `Nx.Defn.defn/2`, you can use the `~~~` operator in place of this function: `~~~tensor`.

## Examples

```
iex> Nx.bitwise_not(1)
#Nx.Tensor<
s64
-2
>
iex> Nx.bitwise_not(Nx.tensor([-1, 0, 1], type: {:s, 8}, names: [:x]))
#Nx.Tensor<
s8[x: 3]
[0, -1, -2]
>
iex> Nx.bitwise_not(Nx.tensor([0, 1, 254, 255], type: {:u, 8}, names: [:x]))
#Nx.Tensor<
u8[x: 4]
[255, 254, 1, 0]
>
```

### Error cases

```
iex> Nx.bitwise_not(Nx.tensor([0.0, 1.0]))
** (ArgumentError) bitwise operators expect integer tensors as inputs and outputs an integer tensor, got: {:f, 32}
```

Element-wise bitwise OR of two tensors.

Only integer tensors are supported. If a float or complex tensor is given, an error is raised.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `|||` operator in place of this function: `left ||| right`.

## Examples

### Bitwise OR between scalars

```
iex> Nx.bitwise_or(1, 0)
#Nx.Tensor<
s64
1
>
```

### Bitwise OR between tensors and scalars

```
iex> Nx.bitwise_or(Nx.tensor([0, 1, 2], names: [:data]), 1)
#Nx.Tensor<
s64[data: 3]
[1, 1, 3]
>
iex> Nx.bitwise_or(Nx.tensor([0, -1, -2], names: [:data]), -1)
#Nx.Tensor<
s64[data: 3]
[-1, -1, -1]
>
```

### Bitwise OR between tensors

```
iex> Nx.bitwise_or(Nx.tensor([0, 0, 1, 1], names: [:data]), Nx.tensor([0, 1, 0, 1], names: [:data]))
#Nx.Tensor<
s64[data: 4]
[0, 1, 1, 1]
>
```

### Error cases

```
iex> Nx.bitwise_or(Nx.tensor([0, 0, 1, 1]), 1.0)
** (ArgumentError) bitwise operators expect integer tensors as inputs and outputs an integer tensor, got: {:f, 32}
```

Element-wise bitwise XOR of two tensors.

Only integer tensors are supported. If a float or complex tensor is given, an error is raised.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

## Examples

### Bitwise XOR between scalars

```
iex> Nx.bitwise_xor(1, 0)
#Nx.Tensor<
s64
1
>
```

### Bitwise XOR between tensors and scalars

```
iex> Nx.bitwise_xor(Nx.tensor([1, 2, 3], names: [:data]), 2)
#Nx.Tensor<
s64[data: 3]
[3, 0, 1]
>
iex> Nx.bitwise_xor(Nx.tensor([-1, -2, -3], names: [:data]), 2)
#Nx.Tensor<
s64[data: 3]
[-3, -4, -1]
>
```

### Bitwise XOR between tensors

```
iex> Nx.bitwise_xor(Nx.tensor([0, 0, 1, 1]), Nx.tensor([0, 1, 0, 1], names: [:data]))
#Nx.Tensor<
s64[data: 4]
[0, 1, 1, 0]
>
```

### Error cases

```
iex> Nx.bitwise_xor(Nx.tensor([0, 0, 1, 1]), 1.0)
** (ArgumentError) bitwise operators expect integer tensors as inputs and outputs an integer tensor, got: {:f, 32}
```

Calculates the cube root of each element in the tensor.

It is equivalent to:

$$cbrt(z) = z^{\frac{1}{3}}$$

## Examples

```
iex> Nx.cbrt(1)
#Nx.Tensor<
f32
1.0
>
iex> Nx.cbrt(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[1.0, 1.2599210739135742, 1.4422495365142822]
>
```

Calculates the ceil of each element in the tensor.

If a non-floating tensor is given, it is returned as is. If a floating tensor is given, then we apply the operation, but keep its type.

## Examples

```
iex> Nx.ceil(Nx.tensor([-1, 0, 1], names: [:x]))
#Nx.Tensor<
s64[x: 3]
[-1, 0, 1]
>
iex> Nx.ceil(Nx.tensor([-1.5, -0.5, 0.5, 1.5], names: [:x]))
#Nx.Tensor<
f32[x: 4]
[-1.0, 0.0, 1.0, 2.0]
>
```

Clips the values of the tensor on the closed interval `[min, max]`.

You can pass a tensor to `min` or `max` as long as the tensor has a scalar shape.

## Examples

```
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y])
iex> Nx.clip(t, 2, 4)
#Nx.Tensor<
s64[x: 2][y: 3]
[
[2, 2, 3],
[4, 4, 4]
]
>
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y])
iex> Nx.clip(t, 2.0, 3)
#Nx.Tensor<
f32[x: 2][y: 3]
[
[2.0, 2.0, 3.0],
[3.0, 3.0, 3.0]
]
>
iex> t = Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y])
iex> Nx.clip(t, Nx.tensor(2.0), Nx.max(1.0, 3.0))
#Nx.Tensor<
f32[x: 2][y: 3]
[
[2.0, 2.0, 3.0],
[3.0, 3.0, 3.0]
]
>
iex> t = Nx.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], names: [:x, :y])
iex> Nx.clip(t, 2, 6.0)
#Nx.Tensor<
f32[x: 2][y: 3]
[
[2.0, 2.0, 3.0],
[4.0, 5.0, 6.0]
]
>
iex> t = Nx.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], type: {:f, 32}, names: [:x, :y])
iex> Nx.clip(t, 1, 4)
#Nx.Tensor<
f32[x: 2][y: 3]
[
[1.0, 2.0, 3.0],
[4.0, 4.0, 4.0]
]
>
```

Constructs a complex tensor from two equally-shaped tensors.

Does not accept complex tensors as inputs.

## Examples

```
iex> Nx.complex(Nx.tensor(1), Nx.tensor(2))
#Nx.Tensor<
c64
1.0+2.0i
>
iex> Nx.complex(Nx.tensor([1, 2]), Nx.tensor([3, 4]))
#Nx.Tensor<
c64[2]
[1.0+3.0i, 2.0+4.0i]
>
```

Calculates the complex conjugate of each element in the tensor.

If $z = a + bi = re^{i\theta}$, then $conjugate(z) = z^* = a - bi = re^{-i\theta}$

## Examples

```
iex> Nx.conjugate(Complex.new(1, 2))
#Nx.Tensor<
c64
1.0-2.0i
>
iex> Nx.conjugate(1)
#Nx.Tensor<
c64
1.0+0.0i
>
iex> Nx.conjugate(Nx.tensor([Complex.new(1, 2), Complex.new(2, -4)]))
#Nx.Tensor<
c64[2]
[1.0-2.0i, 2.0+4.0i]
>
```

Calculates the cosine of each element in the tensor.

It is equivalent to:

$$cos(z) = \frac{e^{iz} + e^{-iz}}{2}$$

## Examples

```
iex> Nx.cos(1)
#Nx.Tensor<
f32
0.5403022766113281
>
iex> Nx.cos(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.5403022766113281, -0.416146844625473, -0.9899924993515015]
>
```

Calculates the hyperbolic cosine of each element in the tensor.

It is equivalent to:

$$cosh(z) = \frac{e^z + e^{-z}}{2}$$

## Examples

```
iex> Nx.cosh(1)
#Nx.Tensor<
f32
1.5430806875228882
>
iex> Nx.cosh(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[1.5430806875228882, 3.762195587158203, 10.067662239074707]
>
```

Counts the number of leading zeros of each element in the tensor.

## Examples

```
iex> Nx.count_leading_zeros(1)
#Nx.Tensor<
s64
63
>
iex> Nx.count_leading_zeros(-1)
#Nx.Tensor<
s64
0
>
iex> Nx.count_leading_zeros(Nx.tensor([0, 0xF, 0xFF, 0xFFFF], names: [:x]))
#Nx.Tensor<
s64[x: 4]
[64, 60, 56, 48]
>
iex> Nx.count_leading_zeros(Nx.tensor([0xF000000000000000, 0x0F00000000000000], names: [:x]))
#Nx.Tensor<
s64[x: 2]
[0, 4]
>
iex> Nx.count_leading_zeros(Nx.tensor([0, 0xF, 0xFF, 0xFFFF], type: {:s, 32}, names: [:x]))
#Nx.Tensor<
s32[x: 4]
[32, 28, 24, 16]
>
iex> Nx.count_leading_zeros(Nx.tensor([0, 0xF, 0xFF, 0xFFFF], type: {:s, 16}, names: [:x]))
#Nx.Tensor<
s16[x: 4]
[16, 12, 8, 0]
>
iex> Nx.count_leading_zeros(Nx.tensor([0, 1, 2, 4, 8, 16, 32, 64, -1, -128], type: {:s, 8}, names: [:x]))
#Nx.Tensor<
s8[x: 10]
[8, 7, 6, 5, 4, 3, 2, 1, 0, 0]
>
iex> Nx.count_leading_zeros(Nx.tensor([0, 1, 2, 4, 8, 16, 32, 64, 128], type: {:u, 8}, names: [:x]))
#Nx.Tensor<
u8[x: 9]
[8, 7, 6, 5, 4, 3, 2, 1, 0]
>
```

### Error cases

```
iex> Nx.count_leading_zeros(Nx.tensor([0.0, 1.0]))
** (ArgumentError) bitwise operators expect integer tensors as inputs and outputs an integer tensor, got: {:f, 32}
```

Element-wise division of two tensors.

If a number is given, it is converted to a tensor.

It always returns a float tensor. If any of the input tensors are not float, they are converted to f32. Division by zero raises, but it may trigger undefined behaviour on some compilers.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `/` operator in place of this function: `left / right`.

## Examples

### Dividing scalars

```
iex> Nx.divide(1, 2)
#Nx.Tensor<
f32
0.5
>
```

### Dividing tensors and scalars

```
iex> Nx.divide(Nx.tensor([1, 2, 3], names: [:data]), 1)
#Nx.Tensor<
f32[data: 3]
[1.0, 2.0, 3.0]
>
iex> Nx.divide(1, Nx.tensor([1.0, 2.0, 3.0], names: [:data]))
#Nx.Tensor<
f32[data: 3]
[1.0, 0.5, 0.3333333432674408]
>
```

### Dividing tensors

```
iex> left = Nx.tensor([[1], [2]], names: [:x, nil])
iex> right = Nx.tensor([[10, 20]], names: [nil, :y])
iex> Nx.divide(left, right)
#Nx.Tensor<
f32[x: 2][y: 2]
[
[0.10000000149011612, 0.05000000074505806],
[0.20000000298023224, 0.10000000149011612]
]
>
iex> left = Nx.tensor([[1], [2]], type: {:s, 8})
iex> right = Nx.tensor([[10, 20]], type: {:s, 8}, names: [:x, :y])
iex> Nx.divide(left, right)
#Nx.Tensor<
f32[x: 2][y: 2]
[
[0.10000000149011612, 0.05000000074505806],
[0.20000000298023224, 0.10000000149011612]
]
>
iex> left = Nx.tensor([[1], [2]], type: {:f, 32}, names: [:x, nil])
iex> right = Nx.tensor([[10, 20]], type: {:f, 32}, names: [nil, :y])
iex> Nx.divide(left, right)
#Nx.Tensor<
f32[x: 2][y: 2]
[
[0.10000000149011612, 0.05000000074505806],
[0.20000000298023224, 0.10000000149011612]
]
>
```

Element-wise equality comparison of two tensors.

If a number is given, it is converted to a tensor.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `==` operator in place of this function: `left == right`.

## Examples

### Comparison of scalars

```
iex> Nx.equal(1, 2)
#Nx.Tensor<
u8
0
>
```

### Comparison of tensors and scalars

```
iex> Nx.equal(1, Nx.tensor([1, 2, 3], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[1, 0, 0]
>
```

### Comparison of tensors

```
iex> left = Nx.tensor([1, 2, 3], names: [:data])
iex> right = Nx.tensor([1, 2, 5])
iex> Nx.equal(left, right)
#Nx.Tensor<
u8[data: 3]
[1, 1, 0]
>
iex> left = Nx.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], names: [:x, nil])
iex> right = Nx.tensor([1, 2, 3])
iex> Nx.equal(left, right)
#Nx.Tensor<
u8[x: 2][3]
[
[1, 1, 1],
[0, 0, 0]
]
>
```

Calculates the error function of each element in the tensor.

It is equivalent to:

$$erf(z) = \frac{2}{\sqrt{\pi}} \int_{0}^{z} e^{-t^2}dt$$

## Examples

```
iex> Nx.erf(1)
#Nx.Tensor<
f32
0.8427007794380188
>
iex> Nx.erf(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.8427007794380188, 0.9953222870826721, 0.9999778866767883]
>
```

Calculates the inverse error function of each element in the tensor.

It is equivalent to:

$$\operatorname{erf\_inv}(erf(z)) = z$$

## Examples

```
iex> Nx.erf_inv(0.10000000149011612)
#Nx.Tensor<
f32
0.08885598927736282
>
iex> Nx.erf_inv(Nx.tensor([0.10000000149011612, 0.5, 0.8999999761581421], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.08885598927736282, 0.4769362807273865, 1.163087010383606]
>
```

Calculates the one minus error function of each element in the tensor.

It is equivalent to:

$$erfc(z) = 1 - erf(z)$$

## Examples

```
iex> Nx.erfc(1)
#Nx.Tensor<
f32
0.15729920566082
>
iex> Nx.erfc(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.15729920566082, 0.004677734803408384, 2.2090496713644825e-5]
>
```

Calculates the exponential of each element in the tensor.

It is equivalent to:

$$exp(z) = e^z$$

## Examples

```
iex> Nx.exp(1)
#Nx.Tensor<
f32
2.7182817459106445
>
iex> Nx.exp(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[2.7182817459106445, 7.389056205749512, 20.08553695678711]
>
```

Calculates the exponential minus one of each element in the tensor.

It is equivalent to:

$$expm1(z) = e^z - 1$$

## Examples

```
iex> Nx.expm1(1)
#Nx.Tensor<
f32
1.718281865119934
>
iex> Nx.expm1(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[1.718281865119934, 6.389056205749512, 19.08553695678711]
>
```

Calculates the floor of each element in the tensor.

If a non-floating tensor is given, it is returned as is. If a floating tensor is given, then we apply the operation, but keep its type.

## Examples

```
iex> Nx.floor(Nx.tensor([-1, 0, 1], names: [:x]))
#Nx.Tensor<
s64[x: 3]
[-1, 0, 1]
>
iex> Nx.floor(Nx.tensor([-1.5, -0.5, 0.5, 1.5], names: [:x]))
#Nx.Tensor<
f32[x: 4]
[-2.0, -1.0, 0.0, 1.0]
>
```

Element-wise greater than comparison of two tensors.

If a number is given, it is converted to a tensor.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `>` operator in place of this function: `left > right`.

## Examples

### Comparison of scalars

```
iex> Nx.greater(1, 2)
#Nx.Tensor<
u8
0
>
```

### Comparison of tensors and scalars

```
iex> Nx.greater(1, Nx.tensor([1, 2, 3], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[0, 0, 0]
>
```

### Comparison of tensors

```
iex> left = Nx.tensor([1, 2, 3], names: [:data])
iex> right = Nx.tensor([1, 2, 2])
iex> Nx.greater(left, right)
#Nx.Tensor<
u8[data: 3]
[0, 0, 1]
>
iex> left = Nx.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], names: [:x, :y])
iex> right = Nx.tensor([1, 2, 3])
iex> Nx.greater(left, right)
#Nx.Tensor<
u8[x: 2][y: 3]
[
[0, 0, 0],
[1, 1, 1]
]
>
```

Element-wise greater than or equal comparison of two tensors.

If a number is given, it is converted to a tensor.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `>=` operator in place of this function: `left >= right`.

## Examples

### Comparison of scalars

```
iex> Nx.greater_equal(1, 2)
#Nx.Tensor<
u8
0
>
```

### Comparison of tensors and scalars

```
iex> Nx.greater_equal(1, Nx.tensor([1, 2, 3], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[1, 0, 0]
>
```

### Comparison of tensors

```
iex> left = Nx.tensor([1, 2, 3], names: [:data])
iex> right = Nx.tensor([1, 2, 2])
iex> Nx.greater_equal(left, right)
#Nx.Tensor<
u8[data: 3]
[1, 1, 1]
>
iex> left = Nx.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], names: [:x, :y])
iex> right = Nx.tensor([1, 2, 3])
iex> Nx.greater_equal(left, right)
#Nx.Tensor<
u8[x: 2][y: 3]
[
[1, 1, 1],
[1, 1, 1]
]
>
```

Returns the imaginary component of each entry in a complex tensor as a floating point tensor.

## Examples

```
iex> Nx.imag(Complex.new(1, 2))
#Nx.Tensor<
f32
2.0
>
iex> Nx.imag(Nx.tensor(1))
#Nx.Tensor<
f32
0.0
>
iex> Nx.imag(Nx.tensor(1, type: {:bf, 16}))
#Nx.Tensor<
bf16
0.0
>
iex> Nx.imag(Nx.tensor([Complex.new(1, 2), Complex.new(2, -4)]))
#Nx.Tensor<
f32[2]
[2.0, -4.0]
>
```

Element-wise left shift of two tensors.

Only integer tensors are supported. If a float or complex tensor is given, an error is raised.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible. If the number of shifts is negative, Nx's default backend will raise, but it may trigger undefined behaviour in other backends.

If you're using `Nx.Defn.defn/2`, you can use the `<<<` operator in place of this function: `left <<< right`.

## Examples

### Left shift between scalars

```
iex> Nx.left_shift(1, 0)
#Nx.Tensor<
s64
1
>
```

### Left shift between tensors and scalars

```
iex> Nx.left_shift(Nx.tensor([1, 2, 3], names: [:data]), 2)
#Nx.Tensor<
s64[data: 3]
[4, 8, 12]
>
```

### Left shift between tensors

```
iex> left = Nx.tensor([1, 1, -1, -1], names: [:data])
iex> right = Nx.tensor([1, 2, 3, 4], names: [:data])
iex> Nx.left_shift(left, right)
#Nx.Tensor<
s64[data: 4]
[2, 4, -8, -16]
>
```

### Error cases

```
iex> Nx.left_shift(Nx.tensor([0, 0, 1, 1]), 1.0)
** (ArgumentError) bitwise operators expect integer tensors as inputs and outputs an integer tensor, got: {:f, 32}
```

Element-wise less than comparison of two tensors.

If a number is given, it is converted to a tensor.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `<` operator in place of this function: `left < right`.

## Examples

### Comparison of scalars

```
iex> Nx.less(1, 2)
#Nx.Tensor<
u8
1
>
```

### Comparison of tensors and scalars

```
iex> Nx.less(1, Nx.tensor([1, 2, 3], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[0, 1, 1]
>
```

### Comparison of tensors

```
iex> Nx.less(Nx.tensor([1, 2, 1]), Nx.tensor([1, 2, 2], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[0, 0, 1]
>
iex> Nx.less(Nx.tensor([[1.0, 2.0, 3.0], [4.0, 2.0, 1.0]], names: [:x, :y]), Nx.tensor([1, 2, 3]))
#Nx.Tensor<
u8[x: 2][y: 3]
[
[0, 0, 0],
[0, 0, 1]
]
>
```

Element-wise less than or equal comparison of two tensors.

If a number is given, it is converted to a tensor.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `<=` operator in place of this function: `left <= right`.

## Examples

### Comparison of scalars

```
iex> Nx.less_equal(1, 2)
#Nx.Tensor<
u8
1
>
```

### Comparison of tensors and scalars

```
iex> Nx.less_equal(1, Nx.tensor([1, 2, 3], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[1, 1, 1]
>
```

### Comparison of tensors

```
iex> left = Nx.tensor([1, 2, 3], names: [:data])
iex> right = Nx.tensor([1, 2, 2])
iex> Nx.less_equal(left, right)
#Nx.Tensor<
u8[data: 3]
[1, 1, 0]
>
iex> left = Nx.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
iex> right = Nx.tensor([1, 2, 3], names: [:y])
iex> Nx.less_equal(left, right)
#Nx.Tensor<
u8[2][y: 3]
[
[1, 1, 1],
[0, 0, 0]
]
>
```

Calculates the natural log plus one of each element in the tensor.

It is equivalent to:

$$log1p(z) = log(z + 1)$$

## Examples

```
iex> Nx.log1p(1)
#Nx.Tensor<
f32
0.6931471824645996
>
iex> Nx.log1p(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.6931471824645996, 1.0986123085021973, 1.3862943649291992]
>
```

Calculates the natural log of each element in the tensor.

It is equivalent to:

$$log(z) = ln(z),\quad \text{if } z \in \Reals$$

$$log(z) = ln(r) + i\theta,\quad \text{if } z = re^{i\theta} \in \Complex$$

## Examples

```
iex> Nx.log(1)
#Nx.Tensor<
f32
0.0
>
iex> Nx.log(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.0, 0.6931471824645996, 1.0986123085021973]
>
```

Element-wise logical and of two tensors.

Zero is considered false, any other number is considered true.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `and` operator in place of this function: `left and right`.

## Examples

```
iex> Nx.logical_and(1, Nx.tensor([-1, 0, 1], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[1, 0, 1]
>
iex> left = Nx.tensor([-1, 0, 1], names: [:data])
iex> right = Nx.tensor([[-1], [0], [1]])
iex> Nx.logical_and(left, right)
#Nx.Tensor<
u8[3][data: 3]
[
[1, 0, 1],
[0, 0, 0],
[1, 0, 1]
]
>
iex> left = Nx.tensor([-1.0, 0.0, 1.0], names: [:data])
iex> right = Nx.tensor([[-1], [0], [1]])
iex> Nx.logical_and(left, right)
#Nx.Tensor<
u8[3][data: 3]
[
[1, 0, 1],
[0, 0, 0],
[1, 0, 1]
]
>
```

Element-wise logical NOT of a tensor.

Zero is considered false, any other number is considered true.

If you're using `Nx.Defn.defn/2`, you can use the `not` operator in place of this function: `not tensor`.

## Examples

```
iex> Nx.logical_not(Nx.tensor([-1, 0, 1], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[0, 1, 0]
>
iex> Nx.logical_not(Nx.tensor([-1.0, 0.0, 1.0], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[0, 1, 0]
>
```

Element-wise logical or of two tensors.

Zero is considered false, any other number is considered true.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

If you're using `Nx.Defn.defn/2`, you can use the `or` operator in place of this function: `left or right`.

## Examples

```
iex> Nx.logical_or(0, Nx.tensor([-1, 0, 1], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[1, 0, 1]
>
iex> left = Nx.tensor([-1, 0, 1], names: [:data])
iex> right = Nx.tensor([[-1], [0], [1]])
iex> Nx.logical_or(left, right)
#Nx.Tensor<
u8[3][data: 3]
[
[1, 1, 1],
[1, 0, 1],
[1, 1, 1]
]
>
iex> left = Nx.tensor([-1.0, 0.0, 1.0], names: [:data])
iex> right = Nx.tensor([[-1], [0], [1]])
iex> Nx.logical_or(left, right)
#Nx.Tensor<
u8[3][data: 3]
[
[1, 1, 1],
[1, 0, 1],
[1, 1, 1]
]
>
```

Element-wise logical xor of two tensors.

Zero is considered false, any other number is considered true.

It will broadcast tensors whenever the dimensions do not match and broadcasting is possible.

## Examples

```
iex> Nx.logical_xor(0, Nx.tensor([-1, 0, 1], names: [:data]))
#Nx.Tensor<
u8[data: 3]
[1, 0, 1]
>
iex> left = Nx.tensor([-1, 0, 1], names: [:data])
iex> right = Nx.tensor([[-1], [0], [1]])
iex> Nx.logical_xor(left, right)
#Nx.Tensor<
u8[3][data: 3]
[
[0, 1, 0],
[1, 0, 1],
[0, 1, 0]
]
>
iex> left = Nx.tensor([-1.0, 0.0, 1.0], names: [:data])
iex> right = Nx.tensor([[-1], [0], [1]])
iex> Nx.logical_xor(left, right)
#Nx.Tensor<
u8[3][data: 3]
[
[0, 1, 0],
[1, 0, 1],
[0, 1, 0]
]
>
```

Calculates the standard logistic (a sigmoid) of each element in the tensor.

It is equivalent to:

$$logistic(z) = \frac{1}{1 + e^{-z}}$$

## Examples

```
iex> Nx.logistic(1)
#Nx.Tensor<
f32
0.7310585975646973
>
iex> Nx.logistic(Nx.tensor([1, 2, 3], names: [:x]))
#Nx.Tensor<
f32[x: 3]
[0.7310585975646973, 0.8807970881462097, 0.9525741338729858]
>
```

Maps the given scalar function over the entire tensor.

The type of the returned tensor will be