ANN-to-SNN Conversion via Rate Coding.
Provides a bridge between conventional artificial neural networks (ANNs) and spiking neural networks (SNNs). The ANN is trained with standard backpropagation, then converted to an SNN by replacing ReLU activations with integrate-and-fire neurons that encode activation magnitudes as spike rates.
Conversion Principle
A ReLU neuron with activation a can be approximated by a spiking
integrate-and-fire (IF) neuron that fires at rate a / threshold
(spikes per timestep) when simulated over num_timesteps. The key
insight is that the time-averaged spike rate of an IF neuron
converges to the ReLU activation as the number of timesteps increases.
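This convergence is easy to check numerically. A minimal Python sketch (illustration only; the module itself implements this in Elixir with Nx/Axon, and the subtract-threshold reset shown here is an assumption):

```python
def if_rate(a, num_timesteps, threshold=1.0):
    """Average spike rate of an IF neuron driven by a constant current `a`."""
    membrane, spikes = 0.0, 0
    for _ in range(num_timesteps):
        membrane += a                 # integrate the input current
        if membrane >= threshold:
            spikes += 1               # emit a spike
            membrane -= threshold     # soft reset: subtract the threshold
    return spikes / num_timesteps     # rate decode

# For 0 <= a <= threshold the rate converges to a / threshold,
# i.e. the ReLU activation (negative currents never spike):
for t in (10, 100, 1000):
    print(t, if_rate(0.37, t))        # approaches 0.37 as t grows
```

Note that the rate saturates at 1 spike per timestep, so activations must be scaled into [0, threshold] for the approximation to hold; this is the weight/threshold balancing discussed in the references below.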
Architecture

ANN Mode:                          SNN Mode:

Input [batch, input_size]          Input [batch, input_size]
        |                                  |
        v                                  v
  Dense + ReLU                     Dense (same weights)
        |                                  |
        v                                  v
  Dense + ReLU                     IF Neuron (rate-coded)
        |                                  | (simulate num_timesteps)
        v                                  v
Output [batch, output_size]        Rate Decode -> Output

Usage
# Build the ANN (for training)
ann = ANN2SNN.build(
  input_size: 256,
  hidden_sizes: [128, 64],
  num_timesteps: 10,
  threshold: 1.0
)

# Build the SNN (for inference on neuromorphic hardware)
snn = ANN2SNN.build_snn(
  input_size: 256,
  hidden_sizes: [128, 64],
  num_timesteps: 10,
  threshold: 1.0
)

References
- Diehl et al., "Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing" (IJCNN 2015)
- Rueckauer et al., "Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification" (Frontiers in Neuroscience 2017)
Summary
Functions
Build the ANN version (for training with backpropagation).
Build the SNN version (for spiking inference).
Integrate-and-Fire neuron simulation for rate-coded SNN inference.
Get the output size of an ANN2SNN model.
Types
@type build_opt() :: {:hidden_sizes, [pos_integer()]} | {:input_size, pos_integer()} | {:output_size, pos_integer()}
Options for build/1.
Functions
Build the ANN version (for training with backpropagation).
This is a standard feedforward network with ReLU activations. After
training, the same weights can be used with build_snn/1 for
spiking inference.
Options
- :input_size - Input feature dimension (required)
- :hidden_sizes - List of hidden layer sizes (default: [256, 128])
- :output_size - Output dimension (default: last hidden size)
- :num_timesteps - Stored for SNN conversion reference (default: 10)
- :threshold - Spiking threshold for conversion (default: 1.0)
Returns
An Axon model (standard ANN): [batch, input_size] -> [batch, output_size]
Build the SNN version (for spiking inference).
Uses the same dense layer structure but replaces ReLU with integrate-and-fire neuron simulation. Weights from a trained ANN can be directly transferred.
Options
Same as build/1.
Returns
An Axon model (SNN via rate coding): [batch, input_size] -> [batch, output_size]
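Why trained weights transfer directly can be seen in a small numerical sketch (Python/numpy for illustration; the real models are Axon graphs, and activations are assumed to already be scaled into [0, threshold]):

```python
import numpy as np

# Shared weights for one dense layer, as if trained in the ANN.
W = np.array([[ 0.2, -0.1],
              [ 0.1,  0.3],
              [-0.3,  0.2]])
x = np.array([0.5, 0.8, 0.2])
current = x @ W                       # the same dense layer in both modes

# ANN mode: dense + ReLU
ann_out = np.maximum(current, 0.0)

# SNN mode: dense + IF neurons, decoded back to a rate
def if_layer(current, num_timesteps=1000, threshold=1.0):
    membrane = np.zeros_like(current)
    spikes = np.zeros_like(current)
    for _ in range(num_timesteps):
        membrane += current                    # integrate
        fired = membrane >= threshold          # spike where threshold crossed
        spikes += fired
        membrane -= fired * threshold          # subtract-threshold reset
    return spikes / num_timesteps              # rate decode

snn_out = if_layer(current)
# snn_out closely matches ann_out, even though nothing was retrained.
```

The only change between the two forward passes is the nonlinearity; the dense weights are untouched, which is exactly what makes post-training conversion possible.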
@spec if_neuron(Nx.Tensor.t(), Nx.Tensor.t(), pos_integer(), float()) :: Nx.Tensor.t()
Integrate-and-Fire neuron simulation for rate-coded SNN inference.
Simulates an IF neuron over multiple timesteps. The input current is presented at each timestep, the membrane integrates, and spikes are emitted when threshold is exceeded. The output is the average spike rate, which approximates the ReLU activation.
Parameters
- membrane - Initial membrane potential (zero)
- input_current - Weighted input current
- num_timesteps - Number of simulation steps
- threshold - Firing threshold
Returns
Average spike rate (approximates ReLU output).
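The parameter contract above maps onto a simple loop. A Python/numpy mirror of the documented update rule (a sketch, not the Nx implementation; the subtract-threshold reset is an assumption):

```python
import numpy as np

def if_neuron(membrane, input_current, num_timesteps, threshold):
    """Integrate the input each step, spike at threshold, return average rate."""
    spike_count = np.zeros_like(input_current)
    for _ in range(num_timesteps):
        membrane = membrane + input_current          # membrane integrates
        spiked = membrane >= threshold               # spikes where exceeded
        spike_count += spiked
        membrane = membrane - spiked * threshold     # assumed subtract reset
    return spike_count / num_timesteps               # average spike rate

# Rates track the ReLU of the input current:
rates = if_neuron(np.zeros(3), np.array([-0.4, 0.25, 0.9]), 100, 1.0)
# roughly [0.0, 0.25, 0.9]
```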
@spec output_size(keyword()) :: pos_integer()
Get the output size of an ANN2SNN model.