viva_tensor/autograd

Types

Function that calculates parent gradients based on this node’s gradient

pub type BackwardFn =
  fn(tensor.Tensor) -> List(#(Int, tensor.Tensor))

Unique identifier for each node in the computational graph

pub type NodeId =
  Int

The Tape maintains the history of operations in the computational graph. Unlike PyTorch (global/mutable), here the Tape is an explicit immutable value.

pub type Tape {
  Tape(
    next_id: Int,
    operations: dict.Dict(
      Int,
      fn(tensor.Tensor) -> List(#(Int, tensor.Tensor)),
    ),
  )
}

Constructors

  • Tape(next_id: Int, operations: dict.Dict(Int, fn(tensor.Tensor) -> List(#(Int, tensor.Tensor))))

The result of a traced operation. Encapsulates the produced value (e.g., Variable) and the new tape state. Analogous to a State Monad.

pub type Traced(a) {
  Traced(value: a, tape: Tape)
}

Constructors

  • Traced(value: a, tape: Tape)
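Because each traced operation returns both the produced value and the successor tape, a forward pass threads the tape explicitly through every call. A minimal sketch (the `tensor.from_list` constructor is an assumption for illustration and may differ from the actual tensor API):

```gleam
import viva_tensor/autograd
import viva_tensor/tensor

pub fn forward_example() {
  // Start with an empty tape and register two leaf variables.
  // Each step shadows `tape` with the updated tape from the Traced result.
  let tape = autograd.new_tape()
  let autograd.Traced(x, tape) =
    autograd.new_variable(tape, tensor.from_list([1.0, 2.0]))
  let autograd.Traced(y, tape) =
    autograd.new_variable(tape, tensor.from_list([3.0, 4.0]))
  // add returns a Result; on success the Traced carries the sum
  // and the tape that now records the addition.
  let assert Ok(autograd.Traced(sum, tape)) = autograd.add(tape, x, y)
  #(sum, tape)
}
```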

A variable tracked in the autograd system

pub type Variable {
  Variable(id: Int, data: tensor.Tensor)
}

Constructors

  • Variable(id: Int, data: tensor.Tensor)

Values

pub fn add(
  tape: Tape,
  a: Variable,
  b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)

Traced addition: c = a + b (supports broadcasting)

pub fn backward(
  tape: Tape,
  loss: Variable,
) -> Result(dict.Dict(Int, tensor.Tensor), String)

Runs backpropagation starting from a scalar variable (the loss). Returns a Dict mapping NodeId -> gradient Tensor.
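The gradient of a leaf can then be read back from the returned Dict by its id. A sketch (again assuming a hypothetical `tensor.from_list` constructor):

```gleam
import gleam/dict
import viva_tensor/autograd
import viva_tensor/tensor

pub fn gradient_example() {
  let tape = autograd.new_tape()
  let autograd.Traced(x, tape) =
    autograd.new_variable(tape, tensor.from_list([1.0, 2.0, 3.0]))
  // backward requires a scalar starting point, so reduce with mean first.
  let autograd.Traced(loss, tape) = autograd.mean(tape, x)
  case autograd.backward(tape, loss) {
    // grads maps NodeId -> gradient Tensor; look up the leaf by its id.
    Ok(grads) -> dict.get(grads, x.id)
    Error(_) -> Error(Nil)
  }
}
```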

pub fn matmul(
  tape: Tape,
  a: Variable,
  b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)

Traced matrix multiplication: c = a @ b

pub fn mean(tape: Tape, a: Variable) -> Traced(Variable)

Traced mean (reduce mean): y = mean(x). Returns a scalar Tensor; the base implementation could yield rank 0 or 1, but here rank 1 is forced.

pub fn mul(
  tape: Tape,
  a: Variable,
  b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)

Traced element-wise multiplication: c = a * b

pub fn new_tape() -> Tape

Creates a new empty tape

pub fn new_variable(
  tape: Tape,
  data: tensor.Tensor,
) -> Traced(Variable)

Registers a new variable (leaf node) in the graph

pub fn relu(tape: Tape, a: Variable) -> Traced(Variable)

Traced ReLU

pub fn sequence(
  input: Result(Traced(Variable), e),
  layer_fn: fn(Tape, Variable) -> Result(Traced(Variable), e),
) -> Result(Traced(Variable), e)

Operation sequencing (monadic pipe). Allows chaining layers: x |> sequence(layer1) |> sequence(layer2)
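A two-layer chain might look like the following sketch, where `dense` is a hypothetical layer built from this module's matmul and relu:

```gleam
import viva_tensor/autograd.{type Tape, type Variable}
import viva_tensor/tensor

// Hypothetical dense layer: multiply by a weight matrix, then apply ReLU.
// Returns a function with the shape sequence expects.
fn dense(w: Variable) {
  fn(tape: Tape, x: Variable) {
    case autograd.matmul(tape, x, w) {
      // relu never fails, so wrap its Traced result back into Ok.
      Ok(autograd.Traced(h, tape)) -> Ok(autograd.relu(tape, h))
      Error(e) -> Error(e)
    }
  }
}

pub fn model(tape: Tape, x: Variable, w1: Variable, w2: Variable) {
  // Each sequence call unwraps the Result, threads the tape, and re-wraps.
  Ok(autograd.Traced(x, tape))
  |> autograd.sequence(dense(w1))
  |> autograd.sequence(dense(w2))
}
```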

pub fn sub(
  tape: Tape,
  a: Variable,
  b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)

Traced subtraction: c = a - b

pub fn transpose(
  tape: Tape,
  a: Variable,
) -> Result(Traced(Variable), tensor.TensorError)

Traced transpose: c = a.T
