EXLA.Op (EXLA v0.4.1)

Wrapper around XLA's ops.

Summary

Functions

Unary abs.

Unary acos.

Unary acosh.

Element-wise add with broadcasting.

Unary asin.

Unary asinh.

Element-wise atan2 with broadcasting.

Unary atan.

Unary atanh.

Element-wise bitwise_and with broadcasting.

Unary bitwise_not.

Element-wise bitwise_or with broadcasting.

Element-wise bitwise_xor with broadcasting.

Broadcasts the tensor to shape.

Unary cbrt.

Unary ceil.

Creates an n-dimensional constant from binary data with shape.

Creates a numeric constant.

Unary cos.

Unary cosh.

Unary count_leading_zeros.

Element-wise divide with broadcasting.

Element-wise equal with broadcasting.

Unary erf.

Unary erf_inv.

Unary erfc.

Unary exp.

Unary expm1.

Unary floor.

The XLA gather operation stitches together several slices of an input array.

Gets the shape of an operator.

Element-wise greater with broadcasting.

Element-wise greater_equal with broadcasting.

Creates an iota tensor.

Element-wise left_shift with broadcasting.

Element-wise less with broadcasting.

Element-wise less_equal with broadcasting.

Unary log1p.

Unary log.

Element-wise max with broadcasting.

Element-wise min with broadcasting.

Element-wise multiply with broadcasting.

Unary negate.

Element-wise not_equal with broadcasting.

Pads the tensor with value and padding config.

Specifies a parameter at position i with shape and name.

Unary population_count.

Element-wise power with broadcasting.

Element-wise remainder with broadcasting.

Reshapes the tensor to shape.

Element-wise right_shift_arithmetic with broadcasting.

Element-wise right_shift_logical with broadcasting.

Creates a tensor with a normal distribution.

Creates a tensor with a uniform distribution.

Unary round.

Unary rsqrt.

Unary sigmoid.

Unary sign.

Unary sin.

Unary sinh.

Unary sqrt.

Element-wise subtract with broadcasting.

Unary tanh.

Builds a tuple with the given elements.

Functions

abs(op)

Unary abs.

acos(op)

Unary acos.

acosh(op)

Unary acosh.

add(op1, op2, broadcast_dims \\ {})

Element-wise add with broadcasting.

asin(op)

Unary asin.

asinh(op)

Unary asinh.

atan2(op1, op2, broadcast_dims \\ {})

Element-wise atan2 with broadcasting.

atan(op)

Unary atan.

atanh(op)

Unary atanh.

bitcast_convert_type(op, dtype)

bitwise_and(op1, op2, broadcast_dims \\ {})

Element-wise bitwise_and with broadcasting.

bitwise_not(op)

Unary bitwise_not.

bitwise_or(op1, op2, broadcast_dims \\ {})

Element-wise bitwise_or with broadcasting.

bitwise_xor(op1, op2, broadcast_dims \\ {})

Element-wise bitwise_xor with broadcasting.

broadcast_in_dim(op, shape, broadcast_dims)

Broadcasts the tensor to shape.

call(builder, args, computation)

cbrt(op)

Unary cbrt.

ceil(op)

Unary ceil.

concatenate(operands, dimension)

conditional(op, branches, operands)

conditional(op1, op2, computation1, op3, computation2)

constant_from_binary(builder, data, shape)

Creates an n-dimensional constant from binary data with shape.

constant_r0(builder, non_finite, dtype)

Creates a numeric constant.

conv_general_dilated(op1, op2, strides, padding, lhs_dilation, rhs_dilation, dim_nums, feature_group_count, batch_group_count, precision_config)

convert_element_type(op, dtype)

cos(op)

Unary cos.

cosh(op)

Unary cosh.

count_leading_zeros(op)

Unary count_leading_zeros.

divide(op1, op2, broadcast_dims \\ {})

Element-wise divide with broadcasting.

dot(op1, op2, precision_config)

dot_general(op1, op2, dimnos, precision_config)

dynamic_slice(op, indices, slice_sizes)

dynamic_update_slice(op1, op2, indices)

equal(op1, op2, broadcast_dims \\ {})

Element-wise equal with broadcasting.

erf(op)

Unary erf.

erf_inv(op)

Unary erf_inv.

erfc(op)

Unary erfc.

exp(op)

Unary exp.

expm1(op)

Unary expm1.

floor(op)

Unary floor.

gather(op1, op2, index_vector_dim, slice_sizes, offset_dims, collapsed_slice_dims, start_index_map)

The XLA gather operation stitches together several slices of an input array.

Note that this operation is extremely generic and far from intuitive for regular usage. However, it can be used to implement many specific operations that have to do with combining multiple tensor slices.

Parameters

The XLA docs are rather cryptic unless already understood, so here's an attempt at a more intuitive description.

index_vector_dim

Determines which dimension contains index vectors. In most cases we want to set this to the last dimension.

given
  start_indices = [[0, 1], [1, 1]]
and given
  index_vector_dim = 1
then
  index vectors are [0, 1] and [1, 1]

Note that we can set this to last_dimension + 1, in which case start_indices are implicitly reshaped to have a trailing dimension of 1.

given
  start_indices = [[0, 1], [1, 1]]
and given
  index_vector_dim = 2
then
  start_indices <- [[[0], [1]], [[1], [1]]]
  index vectors are [0], [1], [1], [1]
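The two cases above can be sketched in plain Python (an illustration of the indexing rules only, not the EXLA API):

```python
# start_indices as a nested list of rank 2.
start_indices = [[0, 1], [1, 1]]

# index_vector_dim = 1: the last dimension holds the index vectors.
vectors = start_indices  # [[0, 1], [1, 1]]

# index_vector_dim = 2 (rank + 1): a trailing dimension of size 1 is
# implicitly added, so every scalar becomes a one-element index vector.
reshaped = [[[x] for x in row] for row in start_indices]
scalar_vectors = [v for row in reshaped for v in row]  # [[0], [1], [1], [1]]
```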

start_index_map

Note: though given as a list, it can be treated as a map of list_idx -> value.

An index vector may have fewer elements than the operand has dimensions. For example:

given
  operand = [[1, 2], [3, 4]]
  start_indices = [[1], [0]]
  index_vector_dim = 1

As described above, in this case index vectors are [1], [0] and they have length 1. However, the operand has rank 2, so we need vectors of the form [_, _] to point to a specific element in the operand. The start_index_map determines where indices go into this template:

and given
  start_index_map = [0] # effectively %{0 => 0}
then
  actual index vectors are [1, _] and [0, _]

and given
  start_index_map = [1] # effectively %{0 => 1}
then
  actual index vectors are [_, 1] and [_, 0]

Finally, the missing elements (_) are assumed to be 0.

Complete examples:

given
  operand = [[1, 2], [3, 4]]
  start_indices = [[0], [1]]
  index_vector_dim = 1
and given
  start_index_map = [1] # effectively %{0 => 1}
then
  actual index vectors are [0, 0], [0, 1] (leading 0 is inserted)

given
  operand = [[1, 2], [3, 4]]
  start_indices = [[0, 1], [1, 1]]
  index_vector_dim = 1
and given
  start_index_map = [0, 1] # effectively %{0 => 0, 1 => 1}
then
  actual index vectors are [0, 1], [1, 1] (as expected)

given
  operand = [[1, 2], [3, 4]]
  start_indices = [[0, 1], [1, 1]]
  index_vector_dim = 1
and given
  start_index_map = [1, 0] # effectively %{0 => 1, 1 => 0}
then
  actual index vectors are [1, 0], [1, 1] (see how the first vector is reversed)
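The three examples above all follow the same rule, which can be sketched as a small helper (a hypothetical function for illustration, not part of EXLA):

```python
def expand_index_vector(index_vector, start_index_map, operand_rank):
    # start_index_map[i] names the operand dimension that index_vector[i]
    # indexes into; unmapped dimensions default to 0.
    full = [0] * operand_rank
    for i, dim in enumerate(start_index_map):
        full[dim] = index_vector[i]
    return full
```

For instance, expand_index_vector([0, 1], [1, 0], 2) reverses the vector into [1, 0].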

slice_sizes

For every starting point (as described above) we take a slice given by slice_sizes. Naturally, slice_sizes must have length equal to the operand rank, so that we have one size per dimension.

given
  operand = [[1, 2], [3, 4]]
  actual index vector [1, 0]
and given
  slice_sizes = [1, 2]
then
  slice for actual index vector is [[3, 4]]
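The slicing step above can be sketched in plain Python for the rank-2 case (ignoring XLA's clamping of out-of-range start indices):

```python
operand = [[1, 2], [3, 4]]  # shape: [2][2]
start = [1, 0]              # actual index vector
slice_sizes = [1, 2]        # one size per operand dimension

# Take slice_sizes[d] elements along each dimension d, starting at start[d]:
rows = operand[start[0]:start[0] + slice_sizes[0]]
slice_ = [row[start[1]:start[1] + slice_sizes[1]] for row in rows]
# slice_ == [[3, 4]]
```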

collapsed_slice_dims

A list of dimensions that are collapsed (effectively removed) in the slice shape. Only dimensions of size 1 can be collapsed.

given
  slice is [[3, 4]] # shape: [1][2]
and given
  collapsed_slice_dims = [0]
then
  actual slice is [3, 4] # shape [2]

offset_dims

A list of dimensions in the output tensor corresponding to the non-collapsed dimensions in slice tensors. In other words, these dimensions are used for indexing elements of the slice tensors.

given
  operand = [[1, 2], [3, 4]]
  start_indices = [[1, 0], [0, 0], [1, 0]]
  index_vector_dim = 1
  start_index_map = [0, 1] # effectively %{0 => 0, 1 => 1}
  slice_sizes = [1, 2]
  collapsed_slice_dims = [0]
and given
  offset_dims = [1]
then
  result is [[3, 4], [1, 2], [3, 4]]

In the above example the collapsed slices are [3, 4], [1, 2], [3, 4] and have rank 1. Using offset_dims we specify that the first dimension in each slice corresponds to the second dimension in the output tensor.

If we use the first output dimension instead, we get:

and given
  offset_dims = [0]
then
  result is [[3, 1, 3], [4, 2, 4]]
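Putting the pieces together, the offset_dims examples can be reproduced with a simplified pure-Python emulation (a sketch restricted to rank-2 operands with index_vector_dim = 1; it assumes start_index_map = [0, 1] and slice_sizes = [1, 2], and is not the EXLA API):

```python
def gather_2d(operand, start_indices, slice_sizes,
              offset_dims, collapsed_slice_dims, start_index_map):
    # Emulates XLA gather for rank-2 operands with index_vector_dim = 1.
    results = []
    for vec in start_indices:
        # Expand the index vector via start_index_map (missing slots are 0).
        full = [0, 0]
        for i, dim in enumerate(start_index_map):
            full[dim] = vec[i]
        # Take the slice of the given sizes starting at `full`.
        rows = operand[full[0]:full[0] + slice_sizes[0]]
        slice_ = [row[full[1]:full[1] + slice_sizes[1]] for row in rows]
        if collapsed_slice_dims == [0]:
            slice_ = slice_[0]  # drop the leading size-1 dimension
        results.append(slice_)
    if offset_dims == [0]:
        # The slice dimension becomes the first output dimension, so
        # transpose the [batch][slice] layout produced above.
        results = [list(col) for col in zip(*results)]
    return results
```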

Docs

A more formal specification can be found in the XLA Gather docs.

get_shape(op)

Gets the shape of an operator.

get_tuple_element(op, index)

greater(op1, op2, broadcast_dims \\ {})

Element-wise greater with broadcasting.

greater_equal(op1, op2, broadcast_dims \\ {})

Element-wise greater_equal with broadcasting.

iota(builder, shape, dim)

Creates an iota tensor.

is_infinity(op, type, shape, axes, state)

is_nan(op, type, shape, axes, state)

is_non_finite(nif_function, op, arg3, shape, axes, arg6)

left_shift(op1, op2, broadcast_dims \\ {})

Element-wise left_shift with broadcasting.

less(op1, op2, broadcast_dims \\ {})

Element-wise less with broadcasting.

less_equal(op1, op2, broadcast_dims \\ {})

Element-wise less_equal with broadcasting.

log1p(op)

Unary log1p.

log(op)

Unary log.

map(op, computation, dimensions)

max(op1, op2, broadcast_dims \\ {})

Element-wise max with broadcasting.

min(op1, op2, broadcast_dims \\ {})

Element-wise min with broadcasting.

multiply(op1, op2, broadcast_dims \\ {})

Element-wise multiply with broadcasting.

negate(op)

Unary negate.

not_equal(op1, op2, broadcast_dims \\ {})

Element-wise not_equal with broadcasting.

pad(op1, op2, padding_config)

Pads the tensor with value and padding config.

parameter(builder, i, shape, name)

Specifies a parameter at position i with shape and name.

population_count(op)

Unary population_count.

power(op1, op2, broadcast_dims \\ {})

Element-wise power with broadcasting.

reduce(op1, op2, computation, reduction_dimensions)

remainder(op1, op2, broadcast_dims \\ {})

Element-wise remainder with broadcasting.

reshape(op, shape)

Reshapes the tensor to shape.

right_shift_arithmetic(op1, op2, broadcast_dims \\ {})

Element-wise right_shift_arithmetic with broadcasting.

right_shift_logical(op1, op2, broadcast_dims \\ {})

Element-wise right_shift_logical with broadcasting.

rng_normal(op1, op2, shape)

Creates a tensor with a normal distribution.

rng_uniform(op1, op2, shape)

Creates a tensor with a uniform distribution.

round(op)

Unary round.

rsqrt(op)

Unary rsqrt.

scatter(op1, op2, op3, computation, indices_rank, update_window_dims, inserted_window_dims, index_dims_to_window_dims)

select_and_scatter(op1, computation1, window_dimensions, window_strides, padding_config, op2, op3, computation2)

sigmoid(op)

Unary sigmoid.

sign(op)

Unary sign.

sin(op)

Unary sin.

sinh(op)

Unary sinh.

slice(op, start_indices, limit_indices, strides)

sort(op, computation, dimension)

sqrt(op)

Unary sqrt.

subtract(op1, op2, broadcast_dims \\ {})

Element-wise subtract with broadcasting.

tanh(op)

Unary tanh.

transpose(op, permutation)

triangular_solve(op1, op2, left_side, lower, unit_diagonal, transpose_a)

tuple(builder, elements)

Builds a tuple with the given elements.

variadic_reduce(builder, operands, init_values, computation, reduction_dimensions)

variadic_sort(builder, operands, computation, dimension)

while(computation1, computation2, op)

window_reduce(op1, op2, computation, window_dimensions, window_strides, window_dilations, padding_config)