Liquid Neural Networks (LNN) - Continuous-time adaptive neural networks.
LNNs use differential equations to model temporal dynamics, enabling
continuous adaptation during inference. It uses the ODE solvers from
`Edifice.Utils.ODESolver`, including the adaptive Dormand-Prince 4/5 method.
## Key Innovation
Unlike traditional RNNs with discrete state updates, LNNs model the hidden state as evolving according to an ODE:

    dx/dt = -x/tau + f(x, I, theta)/tau

where:

- `tau` is a learnable time constant (controls decay rate)
- `x` is the hidden state
- `I` is the input
- `theta` are the parameters of the neural network `f`
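The update rule can be sketched for a scalar hidden state in plain Elixir (a toy illustration, not the library's API; `LTCSketch`, `euler_step/4`, and `exact_step/4` are hypothetical names, and `f_val` stands in for the precomputed output of `f`):

```elixir
defmodule LTCSketch do
  # One forward-Euler step of dx/dt = -x/tau + f/tau, where f_val is the
  # (precomputed) output of the neural network f(x, I, theta) at this step.
  def euler_step(x, f_val, tau, dt) do
    x + dt * (-x / tau + f_val / tau)
  end

  # Closed-form step: holding f constant over dt, dx/dt = (f - x)/tau
  # integrates exactly to x(t + dt) = f + (x - f) * exp(-dt / tau).
  def exact_step(x, f_val, tau, dt) do
    f_val + (x - f_val) * :math.exp(-dt / tau)
  end
end
```

The closed-form step is what an exact solver can exploit when `f` is treated as constant within a frame, which is why it needs no step-size restriction.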
## Available Solvers
| Solver | Order | Adaptive | Speed | Stability |
|---|---|---|---|---|
| :exact | Exact | No | Fastest | Unconditional (default) |
:euler | 1 | No | Fast | Requires dt/tau < 2 |
:midpoint | 2 | No | Fast | Requires dt/tau < 2.8 |
:rk4 | 4 | No | Medium | Requires dt/tau < 2.8 |
:dopri5 | 4/5 | Yes | Slower | Adaptive |
See Edifice.Utils.ODESolver for implementation details.
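The stability bounds in the table can be checked directly on the linear part of the dynamics: for dx/dt = -x/tau, a forward-Euler step multiplies the state by (1 - dt/tau), so the iteration diverges once dt/tau exceeds 2 (a toy check in plain Elixir, not library code):

```elixir
# Each Euler step scales x by (1 - dt/tau); |1 - dt/tau| > 1 means blow-up.
euler = fn x, tau, dt -> x * (1.0 - dt / tau) end

# dt/tau = 0.5: well inside the bound, the state decays as expected.
stable = Enum.reduce(1..50, 1.0, fn _, x -> euler.(x, 1.0, 0.5) end)

# dt/tau = 2.5: violates the bound, the state oscillates and explodes.
unstable = Enum.reduce(1..50, 1.0, fn _, x -> euler.(x, 1.0, 2.5) end)
```

After 50 steps the stable run has decayed to roughly machine-zero while the unstable run has grown past a billion, matching the dt/tau < 2 condition for `:euler`.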
## Architecture
Input [batch, seq_len, embed_dim]
|
v
+-------------------------------------+
| LTC Block |
| |
| For each timestep: |
| 1. Compute time constant tau(input) |
| 2. Compute activation f(x, input) |
| 3. Integrate: dx/dt = -x/tau + f/tau|
| |
| (Optional: multiple sub-steps) |
| |
+-------------------------------------+
| (repeat for num_layers)
v
[batch, hidden_size]

## Use Case
LNNs are particularly suited for real-time sequence processing because:
- They can adapt to changing input patterns during inference
- They are robust to distributional drift between training and deployment
- Their continuous dynamics naturally model smooth temporal transitions
## Reference
- Paper: "Liquid Time-constant Networks" (AAAI 2021)
- Company: Liquid AI (MIT spin-off, $250M from AMD)
## Summary

### Functions
- Build a Liquid Neural Network model.
- Build a single LTC (Liquid Time-Constant) layer.
- Build a Liquid model with interleaved FFN layers.
- Get high-accuracy configuration using Dormand-Prince 4/5.
- Initialize hidden state cache for O(1) incremental inference.
- Get the output size of a Liquid model.
- Calculate approximate parameter count for a Liquid model.
- Get recommended defaults for sequence processing.
### Types

```elixir
@type build_opt() ::
        {:dropout, float()}
        | {:embed_dim, pos_integer()}
        | {:hidden_size, pos_integer()}
        | {:integration_steps, float()}
        | {:num_layers, pos_integer()}
        | {:seq_len, pos_integer()}
        | {:solver, atom()}
        | {:window_size, pos_integer()}
```
Options for build/1.
## Functions
Build a Liquid Neural Network model.
### Options

- `:embed_dim` - Size of input embedding per frame (required)
- `:hidden_size` - Internal hidden dimension (default: 256)
- `:num_layers` - Number of LTC layers (default: 4)
- `:dropout` - Dropout rate (default: 0.1)
- `:window_size` - Expected sequence length (default: 60)
- `:integration_steps` - ODE sub-steps per frame (default: 1)
- `:solver` - ODE solver: `:euler`, `:midpoint`, `:rk4`, or `:dopri5` (default: `:rk4`)
### Returns
An Axon model that outputs [batch, hidden_size] from the last position.
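The documented defaults can be illustrated with plain `Keyword` operations (the option names come from the docs above; the merge logic is ordinary Elixir, not the library's internals):

```elixir
# Defaults as documented for build/1; only :embed_dim has no default.
defaults = [
  hidden_size: 256,
  num_layers: 4,
  dropout: 0.1,
  window_size: 60,
  integration_steps: 1,
  solver: :rk4
]

# The caller supplies the required :embed_dim plus any overrides;
# later entries in Keyword.merge/2 win over the defaults.
opts = Keyword.merge(defaults, embed_dim: 128, solver: :dopri5)
```

The resulting keyword list matches the `build_opt()` type above and would be passed to `build/1`.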
Build a single LTC (Liquid Time-Constant) layer.
Each layer processes the sequence through a continuous-time cell.
Build a Liquid model with interleaved FFN layers.
This variant adds feed-forward networks between LTC layers for more expressive power, similar to Transformer blocks.
`@spec high_accuracy_defaults() :: keyword()`
Get high-accuracy configuration using Dormand-Prince 4/5.
Uses adaptive stepsize ODE solver for best accuracy. Slower but more precise continuous-time dynamics.
Initialize hidden state cache for O(1) incremental inference.
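Why caching gives O(1) work per new frame can be shown with a scalar toy step (hypothetical code; `:math.tanh/1` stands in for the network `f`): folding over the whole sequence and stepping frame-by-frame from a cached state produce the same hidden state.

```elixir
# Scalar toy LTC step with fixed tau and dt.
step = fn x, input ->
  tau = 1.0
  dt = 0.1
  f_val = :math.tanh(input)
  x + dt * (-x / tau + f_val / tau)
end

frames = [0.5, -0.2, 0.8]

# Re-running the full sequence from scratch: O(seq_len) per new frame.
full = Enum.reduce(frames, 0.0, fn i, x -> step.(x, i) end)

# Incremental: keep the hidden state from the previous call and apply
# exactly one step per new frame - O(1) per frame.
cache = 0.0
cache = step.(cache, 0.5)
cache = step.(cache, -0.2)
incremental = step.(cache, 0.8)
```

Both paths perform the identical sequence of floating-point operations, so the cached result matches the full re-run exactly.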
`@spec output_size(keyword()) :: non_neg_integer()`
Get the output size of a Liquid model.
`@spec param_count(keyword()) :: non_neg_integer()`
Calculate approximate parameter count for a Liquid model.
`@spec recommended_defaults() :: keyword()`
Get recommended defaults for sequence processing.