Recognition: 2 Lean theorem links
NeuroTrain: Surveying Local Learning Rules for Spiking Neural Networks with an Open Benchmarking Framework
Pith reviewed 2026-05-15 03:05 UTC · model grok-4.3
The pith
A single taxonomy sorts spiking neural network training methods by their learning signals and degree of locality, while a shared code base lets researchers test them side by side.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
SNN training algorithms can be organized into a fine-grained taxonomy by their computational principles, learning signals, and degree of locality, covering surrogate-gradient backpropagation, local and three-factor rules, biologically inspired plasticity, ANN-to-SNN conversion, and non-standard optimization. The NeuroTrain framework implements a representative subset of each class in one modular code base, so that performance, hardware fit, and scaling behavior can be measured on equal terms.
What carries the argument
The taxonomy, which groups algorithms by how they generate and propagate learning signals, together with the modular NeuroTrain framework, which places representative implementations of each group inside the same code structure for direct comparison.
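To make the shared code structure concrete, here is a minimal sketch of what such a modular learning-rule interface could look like. The class names and method signatures below are illustrative assumptions, not NeuroTrain's actual API.

```python
from abc import ABC, abstractmethod
import torch

# Hypothetical plugin interface: every training rule, whether a global
# surrogate-gradient method or a local three-factor rule, exposes the same
# step() hook, so a single evaluation loop can benchmark all of them.
class LearningRule(ABC):
    @abstractmethod
    def step(self, model: torch.nn.Module, batch, labels) -> float:
        """Apply one update to `model` and return a scalar loss/error."""

class SurrogateGradientRule(LearningRule):
    """Global rule: backpropagates through surrogate spike derivatives."""
    def __init__(self, optimizer, loss_fn):
        self.optimizer, self.loss_fn = optimizer, loss_fn

    def step(self, model, batch, labels):
        loss = self.loss_fn(model(batch), labels)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()

def benchmark(rule: LearningRule, model, loader, epochs=1):
    # The evaluation pipeline never needs to know which rule it is running,
    # so new algorithms drop in without rewriting this loop.
    for _ in range(epochs):
        for batch, labels in loader:
            rule.step(model, batch, labels)
```

A local rule would subclass `LearningRule` as well, computing its weight updates from per-synapse quantities inside `step()` instead of calling `backward()`.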
If this is right
- Researchers can run head-to-head tests of local-rule methods against surrogate-gradient ones on the same architectures and data sets (a minimal baseline for such runs is sketched after this list).
- Patterns that cut across multiple algorithm classes become visible, showing which limits are shared rather than method-specific.
- New algorithms can be dropped into the framework and measured against the existing set without rewriting the evaluation pipeline.
- Hardware choices can be guided by which classes of rules actually run efficiently once placed on the same footing.
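As a concrete baseline for such head-to-head runs, the sketch below performs one surrogate-gradient training step on a tiny SNN in snnTorch, the library NeuroTrain builds on. The architecture, hyperparameters, and dummy data are illustrative, not taken from the paper; a local-rule method would replace the backward pass while reusing the same time-stepped loop.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

class TinySNN(nn.Module):
    def __init__(self, n_in=784, n_hidden=128, n_out=10, beta=0.9):
        super().__init__()
        grad = surrogate.fast_sigmoid()  # differentiable stand-in for the spike
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.lif1 = snn.Leaky(beta=beta, spike_grad=grad)
        self.fc2 = nn.Linear(n_hidden, n_out)
        self.lif2 = snn.Leaky(beta=beta, spike_grad=grad)

    def forward(self, x, num_steps=25):
        mem1, mem2 = self.lif1.init_leaky(), self.lif2.init_leaky()
        out_spikes = []
        for _ in range(num_steps):  # unroll over time (BPTT)
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
            out_spikes.append(spk2)
        return torch.stack(out_spikes).sum(dim=0)  # spike counts per class

net = TinySNN()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(32, 784)            # dummy batch of 32 flattened inputs
y = torch.randint(0, 10, (32,))    # dummy labels
loss = loss_fn(net(x), y)
optimizer.zero_grad()
loss.backward()                    # gradients flow through the surrogates
optimizer.step()
```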
Where Pith is reading between the lines
- If the taxonomy proves stable, hybrid training schemes that borrow locality from one class and scaling from another could be designed more deliberately.
- The benchmark may show that current local rules still trail surrogate methods on large tasks, directing attention toward specific fixes in signal generation or weight update rules.
- Consistent numbers across methods could make it clearer which open challenges, such as online learning on edge devices, are truly unsolved rather than just under-tested.
Load-bearing premise
The algorithms picked for NeuroTrain capture the main ideas and performance traits of the wider literature without omitting key variants or introducing hidden implementation differences.
What would settle it
A new training method that reaches strong accuracy on standard SNN benchmarks yet cannot be placed in any existing taxonomy category or produces markedly different results when re-coded inside the NeuroTrain framework.
Original abstract
The rapid expansion of spiking neural networks (SNNs) has led to a proliferation of training algorithms that differ widely in biological inspiration, computational structure, and hardware suitability. Despite this progress, the field lacks a unified, fine-grained taxonomy that systematically organizes these approaches and clarifies their conceptual relationships. This survey provides a comprehensive taxonomy of SNN training algorithms, spanning surrogate-gradient backpropagation, local and three-factor learning rules, biologically inspired plasticity mechanisms, ANN-to-SNN conversion pipelines, and non-standard optimization strategies. We analyze each class in terms of its computational principles, learning signals, and locality properties. To support reproducible research, we release NeuroTrain, an open-source snnTorch-based framework that implements a representative set of these algorithms within a unified, modular, and extendable framework, enabling consistent benchmarking across datasets, architectures, and training regimes. By consolidating fragmented literature and providing a reusable benchmarking framework, this survey identifies common patterns, highlights open challenges, and outlines promising directions for future work on scalable, efficient SNN training.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript surveys training algorithms for spiking neural networks (SNNs), offering a taxonomy that spans surrogate-gradient backpropagation, local and three-factor learning rules, biologically inspired plasticity mechanisms, ANN-to-SNN conversion pipelines, and non-standard optimization strategies. Each class is analyzed with respect to computational principles, learning signals, and locality properties. The work releases NeuroTrain, an open-source snnTorch-based framework that implements a representative subset of these algorithms to enable consistent, reproducible benchmarking across datasets, architectures, and training regimes.
Significance. If the taxonomy is complete and the NeuroTrain implementations faithfully reproduce the core properties of each class without unstated approximations, the paper would consolidate a fragmented literature and supply a reusable tool for fair comparisons, directly addressing the lack of standardized evaluation in SNN training research.
Major comments (1)
- [Abstract] The claim that NeuroTrain implements 'a representative set' of algorithms is load-bearing for both the taxonomy analysis and the benchmarking contribution, yet no explicit inclusion criteria, coverage audit, or justification for the chosen representatives (particularly within the three-factor and biologically inspired classes) is provided. Without this, it is impossible to assess whether major variants with distinct eligibility traces or convergence behaviors have been omitted, or distorted by the snnTorch re-implementations.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our survey and the NeuroTrain framework. We address the single major comment below and have prepared revisions to strengthen the manuscript's clarity on algorithm selection and coverage.
Point-by-point responses
Referee: [Abstract] The claim that NeuroTrain implements 'a representative set' of algorithms is load-bearing for both the taxonomy analysis and the benchmarking contribution, yet no explicit inclusion criteria, coverage audit, or justification for the chosen representatives (particularly within the three-factor and biologically inspired classes) is provided. Without this, it is impossible to assess whether major variants with distinct eligibility traces or convergence behaviors have been omitted, or distorted by the snnTorch re-implementations.
Authors: We agree that the abstract claim requires explicit support. In the revised manuscript we will add a new subsection (Section 4.1) that states the inclusion criteria: algorithms were selected to (i) cover every major class in the taxonomy, (ii) include at least one canonical implementation per class with documented differences in eligibility traces or convergence properties, and (iii) be re-implementable within the snnTorch modular interface without altering core mathematical formulations. We will also insert a coverage-audit table that lists each taxonomy category, the representative algorithm(s) chosen, the key variants deliberately omitted (with citations), and any snnTorch-specific approximations (e.g., fixed time-step discretization). This addition will allow readers to evaluate scope and fidelity directly.
Revision: yes.
Circularity Check
No significant circularity in survey and framework release
Full rationale
This paper is a literature survey that organizes existing SNN training methods into a taxonomy (surrogate-gradient, local/three-factor rules, biologically inspired plasticity, ANN-to-SNN conversion, non-standard optimization) and releases the NeuroTrain snnTorch-based benchmarking framework. The provided text contains no equations, derivations, fitted parameters, predictions of new quantities, or self-citations used to justify uniqueness theorems or ansatzes. The central claims rest on consolidating prior work and releasing a tool, not on results that reduce to their own inputs by construction. The selection of representative algorithms is presented as a practical choice for the framework, not as a derived or predicted result.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "taxonomy... spanning surrogate-gradient backpropagation, local and three-factor learning rules, biologically inspired plasticity mechanisms... locality properties"
- IndisputableMonolith/Foundation/AlexanderDuality.lean · alexander_duality_circle_linking · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "eligibility traces... three-factor rule Δw ∝ e·M"
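For readers unfamiliar with the shorthand quoted above, a standard continuous-time form of a three-factor rule with an eligibility trace (in the spirit of Frémaux and Gerstner's framework) is sketched below. The symbols τ_e, f, g, and M are generic placeholders, not quantities defined in this paper.

```latex
% Local pre/post activity builds an eligibility trace e_ij; a global
% modulatory factor M(t) (e.g., reward or error) gates the weight change.
\begin{aligned}
\tau_e \frac{de_{ij}}{dt} &= -e_{ij} + f\big(x_j(t)\big)\,g\big(y_i(t)\big), \\
\frac{dw_{ij}}{dt} &= \eta\, M(t)\, e_{ij}(t),
\quad\text{i.e.,}\quad \Delta w \propto e \cdot M .
\end{aligned}
```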
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.