# `Edifice.Liquid`
[🔗](https://github.com/blasphemetheus/edifice/blob/main/lib/edifice/liquid/liquid.ex#L1)

Liquid Neural Networks (LNNs): continuous-time, adaptive neural networks.

LNNs use ordinary differential equations to model temporal dynamics, enabling
continuous adaptation during inference. This module uses the ODE solvers from
`Edifice.Utils.ODESolver`, including adaptive Dormand-Prince 4/5.

## Key Innovation

Unlike traditional RNNs with discrete state updates, LNNs model the
hidden state as evolving according to an ODE:

    dx/dt = -x/tau + f(x, I, theta)/tau

Where:
- tau is a learnable time constant (controls decay rate)
- x is the hidden state
- I is the input
- f is a neural network
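
When the activation `f` is held constant over a single frame, this ODE is linear in `x` and has a closed-form solution, which is presumably what the unconditionally stable `:exact` solver exploits. A minimal sketch of the two simplest update rules (plain Python, not Edifice's actual implementation):

```python
import math

def exact_step(x, f, tau, dt):
    """Closed-form update for dx/dt = (f - x)/tau with f held
    constant over the step: x decays exponentially toward f."""
    decay = math.exp(-dt / tau)
    return f + (x - f) * decay

def euler_step(x, f, tau, dt):
    """Forward-Euler update of the same ODE (first-order accurate)."""
    return x + dt * (f - x) / tau

# Both updates move x toward f; for small dt/tau they agree closely.
x = exact_step(1.0, 0.0, tau=1.0, dt=0.01)
y = euler_step(1.0, 0.0, tau=1.0, dt=0.01)
```

Note that `exact_step` relaxes `x` all the way to `f` as `dt/tau` grows, no matter how large the step.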

## Available Solvers

| Solver | Order | Adaptive | Speed | Stability |
|--------|-------|----------|-------|-----------|
| `:exact` | Exact | No | Fastest | Unconditional (default) |
| `:euler` | 1 | No | Fast | Requires dt/tau < 2 |
| `:midpoint` | 2 | No | Fast | Requires dt/tau < 2.8 |
| `:rk4` | 4 | No | Medium | Requires dt/tau < 2.8 |
| `:dopri5` | 4/5 | Yes | Slower | Adaptive |

See `Edifice.Utils.ODESolver` for implementation details.
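
The stability bounds in the table follow from the update's amplification factor: forward Euler multiplies the homogeneous part of the state by `1 - dt/tau` each step, so it diverges when `dt/tau >= 2`, while the exact exponential update contracts for any positive step. A small demonstration (plain Python, illustrative only):

```python
import math

def euler_step(x, f, tau, dt):
    # x_{n+1} = x_n + dt*(f - x_n)/tau  ->  amplification |1 - dt/tau|
    return x + dt * (f - x) / tau

def exact_step(x, f, tau, dt):
    # Unconditionally stable: |exp(-dt/tau)| < 1 for any dt > 0
    return f + (x - f) * math.exp(-dt / tau)

tau, f = 1.0, 0.0
x_euler = x_exact = 1.0
for _ in range(20):
    x_euler = euler_step(x_euler, f, tau, dt=2.5)  # dt/tau = 2.5 > 2
    x_exact = exact_step(x_exact, f, tau, dt=2.5)

# Euler oscillates and grows (factor -1.5 per step); exact decays to ~0.
```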

## Architecture

```
Input [batch, seq_len, embed_dim]
      |
      v
+---------------------------------------+
|  LTC Block                            |
|                                       |
|  For each timestep:                   |
|  1. Compute time constant tau(input)  |
|  2. Compute activation f(x, input)    |
|  3. Integrate: dx/dt = -x/tau + f/tau |
|                                       |
|  (Optional: multiple sub-steps)       |
+---------------------------------------+
      | (repeat for num_layers)
      v
Output [batch, hidden_size]
```
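
The three per-timestep operations above can be sketched numerically. This is a minimal NumPy sketch with made-up parameter shapes and names (`W_tau`, `W_f` are hypothetical), not Edifice's actual layer:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, embed = 4, 3
# Hypothetical per-layer parameters (the real layer learns these).
W_tau = rng.normal(size=(embed, hidden)) * 0.1
W_f = rng.normal(size=(hidden + embed, hidden)) * 0.1

def softplus(z):
    return np.log1p(np.exp(z))

def ltc_layer(inputs, dt=1.0, sub_steps=1):
    """One LTC layer over a [seq_len, embed] sequence (batch omitted).
    Per timestep: input-dependent tau, activation f, ODE sub-steps."""
    x = np.zeros(hidden)
    for u in inputs:
        tau = softplus(u @ W_tau) + 0.1             # 1. positive time constant
        f = np.tanh(np.concatenate([x, u]) @ W_f)   # 2. activation f(x, input)
        for _ in range(sub_steps):                  # 3. integrate dx/dt = (f - x)/tau
            x = x + (dt / sub_steps) * (f - x) / tau
    return x  # state after the last timestep -> [hidden]

out = ltc_layer(rng.normal(size=(6, embed)))
```

Because `tau` is computed from the input at every step, the cell's effective memory horizon changes as the input changes, which is the "liquid" part of the name.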

## Use Case

LNNs are particularly suited to real-time sequence processing because:
- they can adapt to changing input patterns during inference
- they are robust to distributional drift (data whose distribution shifts over time)
- their continuous dynamics model smooth transitions naturally

## Reference

- Paper: "Liquid Time-constant Networks" (Hasani et al., AAAI 2021)
- Company: Liquid AI (MIT spin-off, $250M from AMD)

# `build_opt`

```elixir
@type build_opt() ::
  {:dropout, float()}
  | {:embed_dim, pos_integer()}
  | {:hidden_size, pos_integer()}
  | {:integration_steps, float()}
  | {:num_layers, pos_integer()}
  | {:seq_len, pos_integer()}
  | {:solver, atom()}
  | {:window_size, pos_integer()}
```

Options for `build/1`.

# `build`

```elixir
@spec build([build_opt()]) :: Axon.t()
```

Build a Liquid Neural Network model.

## Options

  - `:embed_dim` - Size of input embedding per frame (required)
  - `:hidden_size` - Internal hidden dimension (default: 256)
  - `:num_layers` - Number of LTC layers (default: 4)
  - `:dropout` - Dropout rate (default: 0.1)
  - `:window_size` - Expected sequence length (default: 60)
  - `:integration_steps` - ODE sub-steps per frame (default: 1)
  - `:solver` - ODE solver: `:exact`, `:euler`, `:midpoint`, `:rk4`, or `:dopri5` (default: `:rk4`)

## Returns

  An Axon model that outputs `[batch, hidden_size]` taken from the last sequence position.

# `build_ltc_layer`

```elixir
@spec build_ltc_layer(
  Axon.t(),
  keyword()
) :: Axon.t()
```

Build a single LTC (Liquid Time-Constant) layer.

Each layer processes the sequence through a continuous-time cell.

# `build_with_ffn`

```elixir
@spec build_with_ffn(keyword()) :: Axon.t()
```

Build a Liquid model with interleaved FFN layers.

This variant adds feed-forward networks between LTC layers for more
expressive power, similar to Transformer blocks.

# `high_accuracy_defaults`

```elixir
@spec high_accuracy_defaults() :: keyword()
```

Get high-accuracy configuration using Dormand-Prince 4/5.

Uses adaptive stepsize ODE solver for best accuracy.
Slower but more precise continuous-time dynamics.

# `init_cache`

```elixir
@spec init_cache(keyword()) :: map()
```

Initialize hidden state cache for O(1) incremental inference.
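
The idea behind the cache: because the hidden state is a sufficient summary of the past, each new frame needs only one ODE update rather than re-processing the whole sequence. An illustrative sketch in plain Python (the real cache is a map of per-layer tensors; `init_cache`/`step` here are simplified stand-ins, and the exact-solver update is assumed):

```python
import math

def init_cache(hidden_size):
    # Illustrative: one zeroed hidden-state vector.
    return {"state": [0.0] * hidden_size}

def step(cache, f_value, tau=1.0, dt=1.0):
    """Advance the hidden state by one frame in O(hidden_size):
    the cached state replaces re-running the whole sequence."""
    decay = math.exp(-dt / tau)
    cache["state"] = [f_value + (x - f_value) * decay for x in cache["state"]]
    return cache["state"]

cache = init_cache(4)
for frame in [1.0, 0.5, 0.0]:   # pretend each frame yields activation f
    state = step(cache, frame)
```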

# `output_size`

```elixir
@spec output_size(keyword()) :: non_neg_integer()
```

Get the output size of a Liquid model.

# `param_count`

```elixir
@spec param_count(keyword()) :: non_neg_integer()
```

Calculate approximate parameter count for a Liquid model.

# `recommended_defaults`

```elixir
@spec recommended_defaults() :: keyword()
```

Get recommended defaults for sequence processing.

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
