Echo State Networks / Reservoir Computing.
Reservoir computing uses a fixed, randomly initialized recurrent network (the "reservoir") and only trains the output (readout) layer. This makes training extremely fast since only a linear layer is optimized.
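Because only the readout is trained, the whole "training" step can be a single closed-form ridge regression over collected reservoir states. A minimal NumPy sketch (illustrative only; the state matrix H and targets Y here are random placeholders, not produced by this library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend H holds T reservoir states of size N, and Y holds T targets of size M.
T, N, M = 200, 50, 3
H = rng.standard_normal((T, N))
Y = rng.standard_normal((T, M))

# Ridge-regression readout: W_out = (H^T H + lambda * I)^-1 H^T Y.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(N), H.T @ Y)

# The readout is then just one linear map applied to each state.
Y_hat = H @ W_out
print(W_out.shape)  # (50, 3)
```

No backpropagation through time is needed, which is why training is fast even for large reservoirs.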
Architecture

    Input x[t]
         |
         v
    +------------------+
    | Fixed Reservoir  |   h[t] = tanh(W_in * x[t] + W_res * h[t-1])
    | (random weights) |   (NOT trained)
    +------------------+
         |
         v
    +------------------+
    | Readout Layer    |   y[t] = W_out * h[t]
    | (trained)        |   (ridge regression or gradient descent)
    +------------------+
         |
         v
    Output y[t]

Key Properties
- Echo State Property: the reservoir state asymptotically depends only on the input history, not on initial conditions. In practice this is encouraged by rescaling W_res so its spectral radius is below 1 (strictly, that is a necessary condition; a sufficient condition is that the largest singular value of W_res is below 1).
- Separation Property: different input sequences produce different reservoir states.
- Training: Only W_out is trained (via ridge regression or gradient descent).
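The Echo State Property can be observed directly: drive the same reservoir from two very different initial states and the state trajectories converge. A NumPy sketch under assumed sizes (100 neurons, 10% density, spectral radius 0.9), not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100

# Sparse random reservoir, rescaled to spectral radius 0.9.
W_res = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))
W_in = rng.standard_normal((n, 1))

def run(h, xs):
    # Plain (non-leaky) reservoir update.
    for x in xs:
        h = np.tanh(W_in @ x + W_res @ h)
    return h

xs = [rng.standard_normal((1,)) for _ in range(200)]
h_a = run(np.ones(n), xs)    # start from all +1
h_b = run(-np.ones(n), xs)   # start from all -1
print(np.linalg.norm(h_a - h_b))  # near zero: initial conditions are forgotten
```

The final states are essentially identical, so the readout sees a state that encodes the input history rather than the (arbitrary) initialization.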
Usage
model = Reservoir.build(
  input_size: 64,
  reservoir_size: 500,
  output_size: 10,
  spectral_radius: 0.9,
  sparsity: 0.1
)
Summary
Types
@type build_opt() ::
        {:input_scaling, float()}
        | {:input_size, pos_integer()}
        | {:leak_rate, float()}
        | {:output_size, pos_integer()}
        | {:reservoir_size, pos_integer()}
        | {:seq_len, pos_integer()}
        | {:sparsity, float()}
        | {:spectral_radius, float()}
Options for build/1.
Functions
Build an Echo State Network.
Options
- :input_size - Input feature dimension (required)
- :reservoir_size - Number of reservoir neurons (default: 500)
- :output_size - Output dimension (default: reservoir_size)
- :spectral_radius - Spectral radius of the reservoir matrix (default: 0.9)
- :sparsity - Fraction of zero connections in the reservoir (default: 0.9)
- :input_scaling - Scale of the input weights (default: 1.0)
- :leak_rate - Leaky integration rate (default: 1.0, no leaking)
- :seq_len - Sequence length (default: nil for dynamic)
Returns
An Axon model that processes sequences through a fixed reservoir and trainable readout layer.
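The :leak_rate option presumably follows the conventional leaky-integrator reservoir update, h[t] = (1 - a) * h[t-1] + a * tanh(W_in * x[t] + W_res * h[t-1]), which reduces to the plain update from the diagram when a = 1.0. A NumPy sketch of that formula (illustrative, with arbitrary small dimensions):

```python
import numpy as np

def leaky_step(h, x, W_in, W_res, leak_rate=1.0):
    # Leaky-integrator update; leak_rate = 1.0 gives the plain (non-leaky) update.
    return (1.0 - leak_rate) * h + leak_rate * np.tanh(W_in @ x + W_res @ h)

rng = np.random.default_rng(7)
W_in = rng.standard_normal((4, 2))
W_res = rng.standard_normal((4, 4)) * 0.2
h = np.zeros(4)
x = rng.standard_normal(2)

# With leak_rate = 1.0 the leaky update matches the plain update exactly.
full = leaky_step(h, x, W_in, W_res, leak_rate=1.0)
plain = np.tanh(W_in @ x + W_res @ h)
print(np.allclose(full, plain))  # True
```

Smaller leak rates make the state evolve more slowly, which helps on tasks with long timescales.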
@spec output_size(keyword()) :: non_neg_integer()
Get the output size of the reservoir.