Edifice.Liquid (Edifice v0.2.0)


Liquid Neural Networks (LNNs) - Continuous-time adaptive neural networks.

LNNs use differential equations to model temporal dynamics, enabling continuous adaptation during inference. The module uses the ODE solvers from Edifice.Utils.ODESolver, including the adaptive Dormand-Prince 4/5 method.

Key Innovation

Unlike traditional RNNs with discrete state updates, LNNs model the hidden state as evolving according to an ODE:

dx/dt = -x/tau + f(x, I, theta)/tau

Where:

  • tau is a learnable time constant (controls decay rate)
  • x is the hidden state
  • I is the input
  • f is a neural network with parameters theta
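As an illustrative numerical sketch of these dynamics (plain Python with a scalar state, and f frozen to a constant in place of the learned network f(x, I, theta)), a forward-Euler iteration shows the state relaxing toward the fixed point x = f:

```python
# Illustrative sketch of the LTC update dx/dt = -x/tau + f/tau.
# Here f is frozen to a constant; in the real model it is
# recomputed from (x, input) at every step.

def euler_step(x, f, tau, dt):
    """One forward-Euler step of dx/dt = (-x + f) / tau."""
    return x + dt * (-x + f) / tau

x, f, tau, dt = 0.0, 1.0, 2.0, 0.1
for _ in range(500):
    x = euler_step(x, f, tau, dt)

# Setting dx/dt = 0 gives the fixed point x = f, so x -> 1.0 here.
print(round(x, 6))
```

Because the decay term -x/tau always pulls the state toward f, tau directly controls how fast the hidden state forgets old inputs.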

Available Solvers

Solver     Order  Adaptive  Speed    Stability
:exact     Exact  No        Fastest  Unconditional (default)
:euler     1      No        Fast     Requires dt/tau < 2
:midpoint  2      No        Fast     Requires dt/tau < 2.8
:rk4       4      No        Medium   Requires dt/tau < 2.8
:dopri5    4/5    Yes       Slower   Adaptive

See Edifice.Utils.ODESolver for implementation details.
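The dt/tau < 2 bound for :euler can be checked on the pure-decay part of the dynamics (a simplified sketch that drops the f term; values are illustrative):

```python
# Euler on dx/dt = -x/tau multiplies x by (1 - dt/tau) each step,
# so the iteration is stable only while |1 - dt/tau| < 1,
# i.e. dt/tau < 2.

def euler_decay(x0, tau, dt, steps):
    x = x0
    for _ in range(steps):
        x = x + dt * (-x / tau)   # x *= (1 - dt/tau)
    return x

stable   = euler_decay(1.0, 1.0, 1.5, 100)  # dt/tau = 1.5 < 2
unstable = euler_decay(1.0, 1.0, 2.5, 100)  # dt/tau = 2.5 > 2

print(abs(stable) < 1e-6)    # decays toward 0
print(abs(unstable) > 1e6)   # |x| grows as 1.5**100
```

This is why small learned tau values (fast dynamics) can force either a smaller effective step or a higher-order solver.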

Architecture

Input [batch, seq_len, embed_dim]
      |
      v
+---------------------------------------+
|  LTC Block                            |
|                                       |
|  For each timestep:                   |
|  1. Compute time constant tau(input)  |
|  2. Compute activation f(x, input)    |
|  3. Integrate: dx/dt = -x/tau + f/tau |
|                                       |
|  (Optional: multiple sub-steps)       |
|                                       |
+---------------------------------------+
      | (repeat for num_layers)
      v
[batch, hidden_size]
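The per-timestep loop in the diagram can be sketched as follows (a scalar toy model: the fixed tau, the weights w_in/w_rec, and the tanh activation are illustrative stand-ins for the learned tau(input) and f(x, input) networks):

```python
import math

# Toy scalar version of one LTC layer. Step 1 of the block would
# compute tau(input); here tau is held fixed for simplicity.

def ltc_forward(inputs, tau=1.0, dt=0.1, w_in=0.5, w_rec=0.3):
    x = 0.0                                   # initial hidden state
    for i in inputs:
        f = math.tanh(w_rec * x + w_in * i)   # 2. activation f(x, input)
        x = x + dt * (-x + f) / tau           # 3. Euler step of the ODE
    return x                                  # hidden state at last timestep

h = ltc_forward([1.0] * 60)   # one 60-frame sequence
print(-1.0 < h < 1.0)
```

In the real model x is a [batch, hidden_size] tensor and the loop runs once per layer, but the control flow is the same: the final hidden state is what the model emits.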

Use Case

LNNs are particularly suited for real-time sequence processing because:

  • They can adapt to changing input patterns during inference
  • They are robust to distributional drift (shifts in the data distribution)
  • Their continuous dynamics model smooth transitions

Reference

  • Paper: "Liquid Time-constant Networks" (Hasani et al., AAAI 2021)
  • Company: Liquid AI (MIT spin-off, $250M from AMD)

Summary

Types

Options for build/1.

Functions

Build a Liquid Neural Network model.

Build a single LTC (Liquid Time-Constant) layer.

Build a Liquid model with interleaved FFN layers.

Get high-accuracy configuration using Dormand-Prince 4/5.

Initialize hidden state cache for O(1) incremental inference.

Get the output size of a Liquid model.

Calculate approximate parameter count for a Liquid model.

Get recommended defaults for sequence processing.

Types

build_opt()

@type build_opt() ::
  {:dropout, float()}
  | {:embed_dim, pos_integer()}
  | {:hidden_size, pos_integer()}
  | {:integration_steps, pos_integer()}
  | {:num_layers, pos_integer()}
  | {:seq_len, pos_integer()}
  | {:solver, atom()}
  | {:window_size, pos_integer()}

Options for build/1.

Functions

build(opts \\ [])

@spec build([build_opt()]) :: Axon.t()

Build a Liquid Neural Network model.

Options

  • :embed_dim - Size of input embedding per frame (required)
  • :hidden_size - Internal hidden dimension (default: 256)
  • :num_layers - Number of LTC layers (default: 4)
  • :dropout - Dropout rate (default: 0.1)
  • :window_size - Expected sequence length (default: 60)
  • :integration_steps - ODE sub-steps per frame (default: 1)
  • :solver - ODE solver: :euler, :midpoint, :rk4, :dopri5 (default: :rk4)

Returns

An Axon model that outputs a [batch, hidden_size] tensor taken from the last timestep.
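The :integration_steps option trades compute for accuracy. A sketch on the pure-decay dynamics (illustrative, not the library's implementation) shows the Euler error against the exact solution x0 * exp(-dt/tau) shrinking as each frame interval is split into sub-steps:

```python
import math

# Sub-stepping: split one frame interval dt into k Euler sub-steps
# of size dt/k. More sub-steps track the exact exponential decay
# more closely (and relax the dt/tau stability constraint).

def euler_substeps(x0, tau, dt, k):
    x, h = x0, dt / k
    for _ in range(k):
        x = x + h * (-x / tau)
    return x

x0, tau, dt = 1.0, 1.0, 1.0
exact = x0 * math.exp(-dt / tau)

err1 = abs(euler_substeps(x0, tau, dt, 1) - exact)
err4 = abs(euler_substeps(x0, tau, dt, 4) - exact)
print(err4 < err1)   # finer sub-steps reduce the integration error
```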

build_ltc_layer(input, opts)

@spec build_ltc_layer(Axon.t(), keyword()) :: Axon.t()

Build a single LTC (Liquid Time-Constant) layer.

Each layer processes the sequence through a continuous-time cell.

build_with_ffn(opts \\ [])

@spec build_with_ffn(keyword()) :: Axon.t()

Build a Liquid model with interleaved FFN layers.

This variant adds feed-forward networks between LTC layers for more expressive power, similar to Transformer blocks.

high_accuracy_defaults()

@spec high_accuracy_defaults() :: keyword()

Get high-accuracy configuration using Dormand-Prince 4/5.

Uses adaptive stepsize ODE solver for best accuracy. Slower but more precise continuous-time dynamics.

init_cache(opts \\ [])

@spec init_cache(keyword()) :: map()

Initialize hidden state cache for O(1) incremental inference.
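A hypothetical sketch of what such a cache enables (the helper names and toy scalar update below are assumptions, not the library's API): the cache holds only the current hidden state, so each new frame costs one update rather than a re-run of the whole sequence:

```python
import math

# O(1) incremental inference: keep the hidden state between calls
# and advance it by one ODE step per arriving frame.

def init_cache():
    return {"x": 0.0}                         # hypothetical cache layout

def step(cache, frame, tau=1.0, dt=0.1, w_in=0.5, w_rec=0.3):
    x = cache["x"]
    f = math.tanh(w_rec * x + w_in * frame)   # toy stand-in for f(x, input)
    cache["x"] = x + dt * (-x + f) / tau      # one Euler update per frame
    return cache["x"]

cache = init_cache()
streamed = [step(cache, v) for v in [0.2, 0.4, 0.6]]   # one update per frame
print(len(streamed))
```

This is the streaming counterpart of the full forward pass: feeding the frames one at a time through the cached state visits the same sequence of hidden states.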

output_size(opts \\ [])

@spec output_size(keyword()) :: non_neg_integer()

Get the output size of a Liquid model.

param_count(opts)

@spec param_count(keyword()) :: non_neg_integer()

Calculate approximate parameter count for a Liquid model.