viva_tensor/autograd
Types
Function that calculates parent gradients based on this node’s gradient
pub type BackwardFn =
fn(tensor.Tensor) -> List(#(Int, tensor.Tensor))
The Tape maintains the history of operations in the computational graph. Unlike PyTorch's global, mutable tape, here the Tape is an explicit, immutable value.
pub type Tape {
Tape(
next_id: Int,
operations: dict.Dict(
Int,
fn(tensor.Tensor) -> List(#(Int, tensor.Tensor)),
),
)
}
Constructors
- Tape(
    next_id: Int,
    operations: dict.Dict(
      Int,
      fn(tensor.Tensor) -> List(#(Int, tensor.Tensor)),
    ),
  )
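Because the tape is an ordinary immutable value, a fresh, empty graph can be built directly from the constructor above. A minimal sketch (the module may also ship a dedicated helper for this):

```gleam
import gleam/dict
import viva_tensor/autograd

pub fn fresh_tape() -> autograd.Tape {
  // An empty graph: no node ids allocated yet, no operations recorded.
  autograd.Tape(next_id: 0, operations: dict.new())
}
```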
A variable tracked in the autograd system
pub type Variable {
Variable(id: Int, data: tensor.Tensor)
}
Constructors
- Variable(id: Int, data: tensor.Tensor)
Values
pub fn add(
tape: Tape,
a: Variable,
b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)
Traced addition: c = a + b (supports broadcasting)
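A sketch of a traced, broadcast addition. It assumes Traced(a) is a single-constructor record Traced(tape: Tape, value: a) and uses a hypothetical tensor.from_list(values, shape) constructor; check the tensor module for the real names:

```gleam
import gleam/dict
import viva_tensor/autograd
import viva_tensor/tensor

pub fn add_demo() {
  let tape = autograd.Tape(next_id: 0, operations: dict.new())
  // tensor.from_list is a stand-in for whatever constructor tensor exposes.
  let assert Ok(xs) = tensor.from_list([1.0, 2.0, 3.0], [3])
  let assert Ok(b) = tensor.from_list([10.0], [1])
  let autograd.Traced(tape: tape, value: x) = autograd.new_variable(tape, xs)
  let autograd.Traced(tape: tape, value: bias) = autograd.new_variable(tape, b)
  // Broadcasting: the shape-[1] bias is stretched across the shape-[3] vector.
  autograd.add(tape, x, bias)
}
```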
pub fn backward(
tape: Tape,
loss: Variable,
) -> Result(dict.Dict(Int, tensor.Tensor), String)
Executes backpropagation starting from a scalar variable (the loss). Returns a Dict of node id -> gradient tensor.
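An end-to-end sketch under the same assumptions as above (a Traced(tape:, value:) record and a hypothetical tensor.from_list): register a leaf, square it, reduce to a scalar with mean, then ask backward for the gradients:

```gleam
import gleam/dict
import viva_tensor/autograd
import viva_tensor/tensor

pub fn backward_demo() {
  let tape = autograd.Tape(next_id: 0, operations: dict.new())
  let assert Ok(t) = tensor.from_list([1.0, 2.0, 3.0], [3])
  let autograd.Traced(tape: tape, value: x) = autograd.new_variable(tape, t)
  // loss = mean(x * x), so d loss / d x_i = 2 * x_i / n.
  let assert Ok(autograd.Traced(tape: tape, value: sq)) =
    autograd.mul(tape, x, x)
  let autograd.Traced(tape: tape, value: loss) = autograd.mean(tape, sq)
  let assert Ok(grads) = autograd.backward(tape, loss)
  // Gradients are keyed by node id, so the leaf is looked up by its id.
  dict.get(grads, x.id)
}
```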
pub fn matmul(
tape: Tape,
a: Variable,
b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)
Traced matrix multiplication: c = a @ b
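A sketch of the building block for a linear layer, under the same assumed Traced shape. The comment states the standard matmul backward rule, not necessarily this module's exact internals:

```gleam
import viva_tensor/autograd
import viva_tensor/tensor

// x: [batch, in_features], w: [in_features, out_features].
pub fn linear(
  tape: autograd.Tape,
  x: autograd.Variable,
  w: autograd.Variable,
) -> Result(autograd.Traced(autograd.Variable), tensor.TensorError) {
  // Standard rule: backward routes grad_c @ transpose(w) to x
  // and transpose(x) @ grad_c to w; mismatched shapes yield a TensorError.
  autograd.matmul(tape, x, w)
}
```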
pub fn mean(tape: Tape, a: Variable) -> Traced(Variable)
Traced mean (reduce mean): y = mean(x). Returns a scalar tensor (rank 0 or 1 depending on the base implementation; here rank 1 is forced).
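Note that mean returns Traced(Variable) directly rather than a Result, since reducing to the mean cannot shape-fail. A sketch:

```gleam
import viva_tensor/autograd

pub fn to_scalar_loss(
  tape: autograd.Tape,
  y: autograd.Variable,
) -> autograd.Traced(autograd.Variable) {
  // The result is a rank-1, single-element tensor, suitable as a
  // loss for backward; its backward pass spreads grad / n to each
  // element of y (the standard reduce-mean rule).
  autograd.mean(tape, y)
}
```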
pub fn mul(
tape: Tape,
a: Variable,
b: Variable,
) -> Result(Traced(Variable), tensor.TensorError)
Traced element-wise multiplication: c = a * b
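A sketch showing the product rule, including the case where both parents are the same variable (assuming backward sums the per-id contributions returned by each BackwardFn):

```gleam
import viva_tensor/autograd
import viva_tensor/tensor

pub fn square(
  tape: autograd.Tape,
  x: autograd.Variable,
) -> Result(autograd.Traced(autograd.Variable), tensor.TensorError) {
  // c = x * x element-wise. Each parent receives grad * other_parent,
  // and since both parents are x, the contributions accumulate to
  // 2 * x * grad.
  autograd.mul(tape, x, x)
}
```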
pub fn new_variable(
tape: Tape,
data: tensor.Tensor,
) -> Traced(Variable)
Registers a new variable (leaf node) in the graph
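Because the tape is immutable, each registration returns an updated tape that must be threaded into the next call. A sketch under the assumed Traced(tape:, value:) shape:

```gleam
import gleam/dict
import viva_tensor/autograd
import viva_tensor/tensor

pub fn two_leaves(t1: tensor.Tensor, t2: tensor.Tensor) {
  let tape = autograd.Tape(next_id: 0, operations: dict.new())
  // Each registration bumps next_id; always thread the returned tape
  // into the next call, or the second leaf would reuse id 0.
  let autograd.Traced(tape: tape, value: a) = autograd.new_variable(tape, t1)
  let autograd.Traced(tape: tape, value: b) = autograd.new_variable(tape, t2)
  #(tape, a, b)
}
```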
pub fn sequence(
input: Result(Traced(Variable), e),
layer_fn: fn(Tape, Variable) -> Result(Traced(Variable), e),
) -> Result(Traced(Variable), e)
Operation sequencing (monadic pipe). Allows chaining layers: x |> sequence(layer1) |> sequence(layer2)
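sequence unwraps the previous Traced result, threads its tape into the next layer, and short-circuits on the first error, so a forward pass reads as a plain pipeline. A sketch with a hypothetical toy layer:

```gleam
import viva_tensor/autograd

// A layer is any fn(Tape, Variable) -> Result(Traced(Variable), e);
// here, a toy layer that squares its input element-wise.
fn square_layer(tape: autograd.Tape, x: autograd.Variable) {
  autograd.mul(tape, x, x)
}

pub fn forward(tape: autograd.Tape, x: autograd.Variable) {
  // Each step feeds the threaded tape and value into the next layer,
  // stopping at the first TensorError.
  autograd.mul(tape, x, x)
  |> autograd.sequence(square_layer)
  |> autograd.sequence(square_layer)
}
```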