pith. machine review for the scientific record.

arXiv: 2409.08290 · v4 · submitted 2024-08-29 · 💻 cs.NE · cs.AI · cs.LG

Recognition: unknown

Reconsidering the energy efficiency of spiking neural networks

Authors on Pith: no claims yet
classification 💻 cs.NE cs.AI cs.LG
keywords energy, snns, neural, efficiency, hardware, model, networks, qnns
read the original abstract

Spiking Neural Networks (SNNs) promise higher energy efficiency over conventional Quantized Artificial Neural Networks (QNNs) due to their event-driven, spike-based computation. However, prevailing energy evaluations often oversimplify, focusing on computational aspects while neglecting critical overheads like comprehensive data movement and memory access. Such simplifications can lead to misleading conclusions regarding the true energy benefits of SNNs. This paper presents a rigorous re-evaluation. We establish a fair baseline by mapping rate-encoded SNNs with $T$ timesteps to functionally equivalent QNNs with $\lceil \log_2(T+1) \rceil$ bits. This ensures both models have comparable representational capacities, as well as similar hardware requirements, enabling meaningful energy comparisons. We introduce a detailed analytical energy model encompassing core computation and data movement. Using this model, we systematically explore a wide parameter space, including intrinsic network characteristics ($T$, spike rate $s_r$, QNN sparsity $\gamma$, model size $N$, weight bit-level) and hardware characteristics (memory system and network-on-chip). Our analysis identifies specific operational regimes where SNNs genuinely offer superior energy efficiency. For example, under typical neuromorphic hardware conditions, SNNs with moderate time windows ($T \in [5,10]$) require an average spike rate ($s_r$) below 6.4\% to outperform equivalent QNNs. Furthermore, to illustrate the real-world implications of our findings, we analyze the operational lifetime of a typical smartwatch, showing that an optimized SNN can nearly double its battery life compared to a QNN. These insights guide the design of truly energy-efficient neural network solutions.
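The abstract's SNN-to-QNN mapping and break-even comparison can be sketched in a few lines. The bit-width formula is taken directly from the abstract; the per-operation and per-access energy costs below are illustrative placeholder values, not figures from the paper, and the energy functions are a simplified caricature of the paper's full analytical model.

```python
import math

def equivalent_qnn_bits(T: int) -> int:
    """Bit-width of a QNN matching a rate-encoded SNN with T timesteps.

    A rate code over T steps represents T + 1 distinct values (0..T spikes),
    so ceil(log2(T + 1)) bits give comparable representational capacity.
    """
    return math.ceil(math.log2(T + 1))

# Toy per-event energy costs in arbitrary units (assumed, not from the paper):
E_AC = 0.9    # one accumulate, triggered only by an incoming spike
E_MAC = 4.6   # one multiply-accumulate on a nonzero activation
E_MEM = 10.0  # one weight fetch from memory

def snn_energy(T: int, s_r: float, N: int) -> float:
    """Event-driven cost: only spikes trigger accumulates and weight fetches."""
    spike_events = T * s_r * N
    return spike_events * (E_AC + E_MEM)

def qnn_energy(gamma: float, N: int) -> float:
    """Single dense pass: MACs and fetches skipped on zero activations."""
    active = (1.0 - gamma) * N
    return active * (E_MAC + E_MEM)
```

Usage: for $T = 7$, `equivalent_qnn_bits(7)` returns 3, so the fair baseline is a 3-bit QNN; sweeping `s_r` until `snn_energy` crosses `qnn_energy` reproduces the kind of break-even analysis behind the abstract's 6.4% spike-rate threshold, though the exact threshold depends on the real hardware cost parameters.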

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Energy-Efficient Implementation of Spiking Recurrent Cells on FPGA

    cs.NE 2026-05 conditional novelty 5.0

    An FPGA implementation of SRC-based SNNs reaches 96.31% MNIST accuracy at 1.74 ms per digit and drops to 0.45 mJ per digit with 4-bit weights and shorter traces while retaining richer dynamics than LIF models.

  2. ShiftLIF: Efficient Multi-Level Spiking Neurons with Power-of-Two Quantization

    cs.NE 2026-05 unverdicted novelty 5.0

    ShiftLIF maps membrane potentials to logarithmically spaced power-of-two spike levels, improving representational capacity in SNNs while keeping synaptic operations multiplier-free.