# `Torchx`
[🔗](https://github.com/elixir-nx/nx/blob/v0.12.0/torchx/lib/torchx.ex#L92)

Bindings and Nx integration for [PyTorch](https://pytorch.org/).

Torchx provides an Nx backend through `Torchx.Backend`, which
allows for integration with both the CPU and GPU functionality
that PyTorch provides. To enable Torchx as the default backend
you can add the following line to your desired config environment (`config/config.exs`,
`config/test.exs`, etc):

    import Config
    config :nx, :default_backend, Torchx.Backend

This ensures that, by default, all tensors are created as PyTorch tensors.
Keep in mind that the default device is the CPU. If you wish to allocate
tensors on the GPU by default, pass the `:device` option in the config
line, as follows:

    import Config
    config :nx, :default_backend, {Torchx.Backend, device: :cuda}

The `device_available?/1` function can be used to determine whether
`:cuda` is available. If you have CUDA installed but it doesn't show
as available, check out the _Installation_ README section.
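If you want the GPU only when it is actually usable, you can branch on `device_available?/1` when building the backend tuple. A minimal sketch (the fallback to `:cpu` is our own choice, and whether `Torchx` is loadable at config-evaluation time depends on your setup, so a runtime config file may be a better home for this check):

    import Config

    # Prefer the GPU when PyTorch reports CUDA support; otherwise use the CPU.
    device = if Torchx.device_available?(:cuda), do: :cuda, else: :cpu

    config :nx, :default_backend, {Torchx.Backend, device: device}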

## Types

Torchx uses PyTorch's own type names, which map to Nx types as in the
following table:

  Nx Type           |  Torchx Type      | Description
 ------------------ | ----------------- | --------------------------------------------------------
 `{:u, 8}`          | `:byte`           | Unsigned 8-bit integer
 `{:s, 8}`          | `:char`           | Signed 8-bit integer
 `{:s, 16}`         | `:short`          | Signed 16-bit integer
 `{:s, 32}`         | `:int`            | Signed 32-bit integer
 `{:s, 64}`         | `:long`           | Signed 64-bit integer
 `{:bf, 16}`        | `:brain`          | 16-bit brain floating-point number
 `{:f, 8}`          | `:float8_e5m2`    | 8-bit floating-point number (E5M2)
 `{:f8_e4m3fn, 8}`  | `:float8_e4m3fn`  | 8-bit floating-point number (E4M3FN)
 `{:f, 16}`         | `:half`           | 16-bit floating-point number
 `{:f, 32}`         | `:float`          | 32-bit floating-point number
 `{:f, 64}`         | `:double`         | 64-bit floating-point number
 `{:c, 64}`         | `:complex`        | 64-bit complex number, with two 32-bit float components
 `{:c, 128}`        | `:complex_double` | 128-bit complex number, with two 64-bit float components
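As a quick illustration of the mapping, the Nx type you request is the type the backing PyTorch tensor is created with (the values below are arbitrary):

    t = Nx.tensor([1, 2, 3], type: {:s, 64}, backend: Torchx.Backend)
    Nx.type(t)
    #=> {:s, 64}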

## Devices

PyTorch supports a variety of device types, listed below.

  * `:cpu`
  * `:cuda`
  * `:mkldnn`
  * `:opengl`
  * `:opencl`
  * `:ideep`
  * `:hip`
  * `:fpga`
  * `:msnpu`
  * `:xla`
  * `:vulkan`
  * `:metal`
  * `:xpu`
  * `:mps`

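Besides the global config shown earlier, the device can also be chosen per tensor by passing a backend tuple. A sketch using `:cpu`, which is always available:

    t = Nx.tensor([1.0, 2.0], backend: {Torchx.Backend, device: :cpu})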
# `abs`

# `acos`

# `acosh`

# `add`

# `all`

# `all`

# `all_close`

# `amax`

# `amin`

# `any`

# `any`

# `arange`

# `arange`

# `argmax`

# `argmin`

# `argsort`

# `asin`

# `asinh`

# `atan2`

# `atan`

# `atanh`

# `bitwise_and`

# `bitwise_not`

# `bitwise_or`

# `bitwise_xor`

# `broadcast_to`

# `cbrt`

# `ceil`

# `cholesky`

# `cholesky`

# `clip`

# `concatenate`

# `conjugate`

# `conv`

# `cos`

# `cosh`

# `cumulative_max`

# `cumulative_min`

# `cumulative_product`

# `cumulative_sum`

# `default_device`

Returns the default device.

Devices are picked in the following order of priority, depending on availability:

* `:cuda`
* `:cpu`

The default can also be set (albeit not recommended)
via the application environment by setting the
`:default_device` option under the `:torchx` application.
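For example, to force the CPU even when CUDA is available (again, not recommended), the application environment entry would look like:

    import Config
    config :torchx, :default_device, :cpu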

# `delete_tensor`

# `determinant`

# `device_available?`

Checks whether a device of the given type is available for Torchx.

You can currently check the availability of:

* `:cuda`
* `:mps`
* `:cpu`
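Each supported device type can be probed at runtime; the function returns a boolean:

    Torchx.device_available?(:cuda)
    Torchx.device_available?(:mps)
    Torchx.device_available?(:cpu)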

# `device_count`

Returns the number of devices of the given type.

Currently, only the `:cuda` device count can be queried.
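For example, on a machine with GPUs:

    Torchx.device_count(:cuda)
    # returns the number of visible CUDA devices (0 when none)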

# `divide`

# `eigh`

# `equal`

# `erf`

# `erf_inv`

# `erfc`

# `exp`

# `expm1`

# `eye`

# `eye`

# `fft2`

# `fft`

# `flip`

# `floor`

# `fmod`

# `from_blob`

# `from_nx`

Gets a Torchx tensor from an Nx tensor.
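A sketch of the conversion; the exact shape of the returned reference is an implementation detail:

    t = Nx.tensor([1, 2, 3], backend: Torchx.Backend)
    torchx_ref = Torchx.from_nx(t)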

# `full`

# `gather`

# `greater`

# `greater_equal`

# `ifft2`

# `ifft`

# `index`

# `index_put`

# `irfft`

# `is_infinity`

# `is_nan`

# `is_tensor`
*macro* 

# `item`

# `left_shift`

# `less`

# `less_equal`

# `log1p`

# `log`

# `logical_and`

# `logical_not`

# `logical_or`

# `logical_xor`

# `lu`

# `matmul`

# `max`

# `max_pool_3d`

# `min`

# `multiply`

# `nbytes`

# `negate`

# `normal`

# `not_equal`

# `ones`

# `pad`

# `permute`

# `pow`

# `product`

# `product`

# `put`

# `qr`

# `qr`

# `quotient`

# `rand`

# `randint`

# `remainder`

# `reshape`

# `rfft`

# `right_shift`

# `round`

# `rsqrt`

# `scalar_tensor`

# `scalar_type`

# `shape`

# `sigmoid`

# `sign`

# `sin`

# `sinh`

# `slice`

# `solve`

# `sort`

# `split`

# `sqrt`

# `squeeze`

# `squeeze`

# `subtract`

# `sum`

# `svd`

# `svd`

# `tan`

# `tanh`

# `tensordot`

# `tensordot`

# `to_blob`

# `to_blob`

# `to_device`

# `to_nx`

Converts a Torchx tensor to an Nx tensor.
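Together with `from_nx/1`, this allows round-tripping between the two representations:

    t = Nx.tensor([1, 2, 3], backend: Torchx.Backend)
    t |> Torchx.from_nx() |> Torchx.to_nx()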

# `to_type`

# `top_k`

# `transpose`

# `triangular_solve`

# `unfold`

# `view_as_real`

# `where`

---

*Consult [api-reference.md](api-reference.md) for complete listing*
