pith. machine review for the scientific record. sign in

arxiv: 2605.10994 · v1 · submitted 2026-05-09 · 🧬 q-bio.NC · q-bio.OT

Recognition: no theorem link

Internally triggered retrospective learning in neural networks

Arturo Tozzi

Pith reviewed 2026-05-13 01:47 UTC · model grok-4.3

classification 🧬 q-bio.NC q-bio.OT
keywords neural networksepisodic learningprediction discrepancyretrospective updatinglatent tracesinternal triggeringsparse adaptationsequential inputs
0
0 comments X

The pith

Neural networks learn via internally triggered retrospective updates based on prediction discrepancies.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a learning method for neural networks where parameter updates occur only at internally generated events instead of continuously with each input. Synaptic interactions accumulate as latent traces without immediate modification while an internal process predicts the current latent state and tracks a discrepancy measure. When this discrepancy exceeds a threshold computed from recent error statistics, a retrospective update integrates selected past activity. Simulations on a minimal network with structured sequential inputs and transient perturbations show that these events are sparse, tied to error spikes, and produce stepwise synaptic changes plus discrete latent state shifts. The approach is presented as a way to limit unnecessary parameter adjustments from routine data while retaining informative patterns.

Core claim

Parameter updates in neural networks can be governed by internally generated events arising from discrepancies between predicted and observed latent states, enabling retrospective integration of past coactivation patterns into the current configuration only when an adaptive error-derived threshold is crossed rather than at every step.

What carries the argument

An internal predictive process that estimates the evolving latent state from ongoing network activity, computes a scalar discrepancy between predicted and observed states, and triggers selective retrospective parameter updates when the discrepancy exceeds a threshold derived from recent error statistics.

If this is right

  • Learning events become sparse and temporally localized around increases in prediction error.
  • Synaptic efficacy undergoes stepwise rather than incremental changes with each input.
  • Latent state organization exhibits discrete transitions instead of continuous evolution.
  • Unnecessary parameter drift is reduced while informative patterns from sequential inputs are preserved.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The internal discrepancy trigger could extend naturally to recurrent or deeper architectures handling longer sequences.
  • Such selective updating might improve stability when inputs contain mostly routine or noisy data streams.
  • The method offers a potential route to lower energy costs in sequential processing tasks by avoiding constant parameter changes.

Load-bearing premise

An internal predictive process can accurately estimate the evolving latent state from ongoing activity and the discrepancy threshold derived from recent error statistics reliably distinguishes informative inputs from noise without post-hoc tuning.

What would settle it

Re-running the minimal-network simulations with the same sequential inputs and perturbations and finding that learning events are not sparse, do not align with prediction-error increases, or produce equivalent or greater parameter drift than continuous updates.

read the original abstract

Learning in artificial neural networks usually relies on continuous, externally driven weight updates, in which parameters are modified at every step in response to incoming data, error signals or reward feedback. In this setting, routine and informative inputs contribute similarly to parameter adjustment. We introduce a learning approach in which parameter updates are governed by internally generated events arising from the network own representational dynamics. During ongoing activity, synaptic interactions are accumulated as latent traces encoding recent coactivation patterns, without immediately modifying the underlying parameters. In parallel, an internal predictive process estimates the evolving latent state, while a scalar measure of discrepancy between predicted and observed states is continuously computed. When discrepancy exceeds an adaptive threshold derived from recent error statistics, a learning event is triggered, inducing a retrospective update selectively integrating past activity into the current configuration. We performed simulations using a minimal neural network exposed to structured sequential inputs with transient perturbations. We found that learning occurs through sparse, temporally localized events associated with increases in prediction error, leading to stepwise changes in synaptic efficacy and discrete transitions in latent state organization. By selectively reorganizing parameters in response to internally detected discrepancies, our episodic updating may reduce unnecessary parameter drift while preserving informative patterns. Potential applications include systems requiring selective adaptation to rare or informative inputs such as physiological, industrial or environmental monitoring, edge computing under limited energy budgets, autonomous systems operating in dynamic conditions and sequential computational data processing.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces an internally triggered retrospective learning mechanism for neural networks. Synaptic coactivations are accumulated as latent traces without immediate parameter changes. An internal predictive process estimates the evolving latent state and computes a scalar discrepancy; when this exceeds an adaptive threshold derived from recent error statistics, a retrospective update integrates past activity into the current configuration. Simulations on a minimal network with structured sequential inputs plus transient perturbations show sparse, temporally localized learning events associated with prediction-error increases, producing stepwise synaptic changes and discrete latent-state transitions. The authors claim that selective reorganization in response to internally detected discrepancies may reduce unnecessary parameter drift while preserving informative patterns, with suggested applications in energy-constrained or selective-adaptation settings.

Significance. If substantiated, the approach could offer a useful alternative to continuous-update regimes in resource-limited or event-driven domains such as edge computing and monitoring systems. The separation of trace accumulation from parameter modification and the use of an internal discrepancy trigger represent a coherent departure from standard gradient-based or reward-driven learning. The simulations illustrate the qualitative behavior of sparse events and stepwise reorganization, but the absence of quantitative controls limits the strength of the efficiency claim at present.

major comments (3)
  1. Results (simulations): the experiments document sparse events, stepwise synaptic changes, and latent-state transitions but omit any continuous-update baseline. No metrics such as cumulative weight-change norm, variance under noise, or pattern-retention curves are supplied for an equivalent continuous-modification control, so the central claim that episodic updating reduces unnecessary drift remains untested.
  2. Methods (network and analysis): the description of the minimal network supplies no architecture details, quantitative performance metrics, error bars, or data-exclusion criteria. Without these, it is impossible to determine whether the reported stepwise changes and reduced-drift behavior are supported by the results or reproducible.
  3. Discrepancy-threshold mechanism: the adaptive threshold is derived from recent error statistics while the discrepancy itself drives the trigger. This creates a potential circular dependence in which the learning signal relies on quantities computed from the same ongoing activity; the manuscript should explicitly show how the latent-trace representation maintains sufficient independence to avoid self-referential bias.
minor comments (2)
  1. Abstract: the phrase 'network own representational dynamics' should be corrected to 'network's own representational dynamics'.
  2. Related-work section: additional citations to prior event-driven or sparse-update literature would help situate the novelty of the internally triggered retrospective mechanism.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their thoughtful and constructive comments on our manuscript. We address each of the major comments below and have made revisions to the manuscript to incorporate the suggested improvements.

read point-by-point responses
  1. Referee: Results (simulations): the experiments document sparse events, stepwise synaptic changes, and latent-state transitions but omit any continuous-update baseline. No metrics such as cumulative weight-change norm, variance under noise, or pattern-retention curves are supplied for an equivalent continuous-modification control, so the central claim that episodic updating reduces unnecessary drift remains untested.

    Authors: We agree that including a continuous-update baseline would strengthen the manuscript's claims regarding reduced parameter drift. In the revised manuscript, we will add simulations comparing our internally triggered retrospective learning to a standard continuous update regime under identical input sequences. We will compute and report the suggested metrics, including the cumulative weight-change norm, variance under noise, and pattern-retention curves, to quantitatively demonstrate the differences in drift and efficiency. revision: yes

  2. Referee: Methods (network and analysis): the description of the minimal network supplies no architecture details, quantitative performance metrics, error bars, or data-exclusion criteria. Without these, it is impossible to determine whether the reported stepwise changes and reduced-drift behavior are supported by the results or reproducible.

    Authors: We acknowledge the need for greater methodological transparency. The revised manuscript will include a detailed description of the network architecture, including the number of neurons, synaptic connectivity rules, and activation functions. We will also provide quantitative performance metrics with error bars from repeated simulations, specify any data exclusion criteria, and include the full set of parameters used for the discrepancy threshold adaptation to ensure reproducibility. revision: yes

  3. Referee: Discrepancy-threshold mechanism: the adaptive threshold is derived from recent error statistics while the discrepancy itself drives the trigger. This creates a potential circular dependence in which the learning signal relies on quantities computed from the same ongoing activity; the manuscript should explicitly show how the latent-trace representation maintains sufficient independence to avoid self-referential bias.

    Authors: This is an important point regarding the independence of the components. The latent traces represent accumulated coactivations over a sliding window of past activity, independent of the current prediction. The predictive process uses a separate model to forecast the next latent state based on recent history, and the discrepancy is the difference between this prediction and the current observed trace. The threshold is an exponential moving average of prior discrepancies, updated only after each trigger event. This structure ensures that the trigger decision depends on historical error statistics rather than the instantaneous state, preventing circularity. We will add an explicit mathematical formulation and a diagram in the revised Methods section to clarify this separation. revision: yes

Circularity Check

0 steps flagged

No significant circularity; method description is self-contained

full rationale

The paper proposes an episodic learning rule driven by internal discrepancy between a predictive estimate of latent state and observed coactivation traces. This construction is presented directly as a novel mechanism without any equations, fitted parameters, or self-citations that reduce the claimed selective-update advantage to the input data or prior results by definition. The adaptive threshold is a conventional statistical device applied to the discrepancy signal itself, not a renaming or self-referential fit that forces the outcome. Simulations illustrate sparse events on a minimal network but do not claim first-principles derivations or uniqueness theorems that loop back to the method's own quantities. The derivation chain therefore remains independent of its inputs.

Axiom & Free-Parameter Ledger

1 free parameters · 2 axioms · 1 invented entities

Only the abstract is available, so the ledger is necessarily incomplete; the method rests on several unelaborated assumptions about internal prediction accuracy and error-based triggering whose details would appear in the full methods section.

free parameters (1)
  • adaptive discrepancy threshold
    Derived from recent error statistics; exact computation rule and any scaling constants are not specified in the abstract.
axioms (2)
  • domain assumption Synaptic interactions accumulate as latent traces encoding recent coactivation patterns without immediate parameter change
    Central to the retrospective update mechanism described in the abstract.
  • domain assumption An internal predictive process can continuously estimate the evolving latent state
    Required to compute the scalar discrepancy measure that triggers learning.
invented entities (1)
  • latent traces no independent evidence
    purpose: Store recent coactivation patterns for later retrospective integration
    New representational construct introduced to enable episodic rather than continuous updates; no independent evidence outside the proposed method is provided.

pith-pipeline@v0.9.0 · 5535 in / 1512 out tokens · 87940 ms · 2026-05-13T01:47:19.243644+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

43 extracted references · 43 canonical work pages

  1. [1]

    Fractional-Order Deep Backpropagation Neural Network

    Bao, C., Y . Pu, and Y . Zhang. 2018. “Fractional-Order Deep Backpropagation Neural Network.” Computational Intelligence and Neuroscience 2018: 7361628. https://doi.org/10.1155/2018/7361628

  2. [2]

    A Bayesian Generative Neural Network Framework for Epidemic Inference Problems

    Biazzo, I., A. Braunstein, L. Dall’Asta, and F. Mazza. 2022. “A Bayesian Generative Neural Network Framework for Epidemic Inference Problems.” Scientific Reports 12 (1): 19673. https://doi.org/10.1038/s41598-022-20898- x

  3. [3]

    Event-Driven Adaptive Optical Neural Network

    Brückerhoff-Plückelmann, F., I. Bente, M. Becker, N. V ollmar, N. Farmakidis, E. Lomonte, F. Lenzini, C. D. Wright, H. Bhaskaran, M. Salinga, B. Risse, and W. H. P. Pernice. 2023. “Event-Driven Adaptive Optical Neural Network.” Science Advances 9 (42): eadi9127. https://doi.org/10.1126/sciadv.adi9127

  4. [4]

    One -Shot Screening: Utilization of a Two -Dimensional Convolutional Neural Network for Automatic Detection of Left Ventricular Hypertrophy Using Electrocardiograms

    Cai, C., T. Imai, E. Hasumi, and K. Fujiu. 2024. “One -Shot Screening: Utilization of a Two -Dimensional Convolutional Neural Network for Automatic Detection of Left Ventricular Hypertrophy Using Electrocardiograms.” Computer Methods and Programs in Biomedicine 247: 108097. https://doi.org/10.1016/j.cmpb.2024.108097

  5. [5]

    A Lagrange Programming Neural Network Approach for Nuclear Norm Optimization

    Dai, X., J. Qiu, C. Wan, and F. Dai. 2024. “A Lagrange Programming Neural Network Approach for Nuclear Norm Optimization.” PLoS ONE 19 (2): e0292380. https://doi.org/10.1371/journal.pone.0292380

  6. [6]

    An Unsupervised STDP-Based Spiking Neural Network Inspired by Biologically Plausible Learning Rules and Connections

    Dong, Y ., D. Zhao, Y . Li, and Y . Zeng. 2023. “An Unsupervised STDP-Based Spiking Neural Network Inspired by Biologically Plausible Learning Rules and Connections.” Neural Networks 165: 799 –808. https://doi.org/10.1016/j.neunet.2023.06.019

  7. [7]

    Sigmoid-weighted linear units for neural network function approximation in reinforcement learning

    Elfwing, S., E. Uchibe, and K. Doya. 2018. “Sigmoid -Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning.” Neural Networks 107: 3 –11. https://doi.org/10.1016/j.neunet.2017.12.012

  8. [8]

    Introduction to Backpropagation Neural Network Computation

    Erb, R. J. 1993. “Introduction to Backpropagation Neural Network Computation.” Pharmaceutical Research 10 (2): 165–170. https://doi.org/10.1023/A:1018966222807

  9. [9]

    Bayesian Convolutional Neural Network Estimation for Pediatric Pneumonia Detection and Diagnosis

    Fernandes, V ., G. B. Junior, A. C. de Paiva, A. C. Silva, and M. Gattass. 2021. “Bayesian Convolutional Neural Network Estimation for Pediatric Pneumonia Detection and Diagnosis.” Computer Methods and Programs in Biomedicine 208: 106259. https://doi.org/10.1016/j.cmpb.2021.106259

  10. [10]

    Neural and Computational Mechanisms Underlying One -Shot Perceptual Learning in Humans

    Hachisuka, A., J. D. Shor, X. C. Liu, D. Friedman, P. Dugan, I. Saez, F. E. Panov, Y . Wang, W. Doyle, O. Devinsky, E. K. Oermann, and H. B. He. 2026. “Neural and Computational Mechanisms Underlying One -Shot Perceptual Learning in Humans.” Nature Communications 17 (1): 1204. https://doi.org/10.1038/s41467-026- 68711-x. 10

  11. [11]

    Efficient Inverse Design of Optical Multilayer Nano -Thin Films Using Neural Network Principles: Backpropagation and Gradient Descent

    Han, J. H. 2024. “Efficient Inverse Design of Optical Multilayer Nano -Thin Films Using Neural Network Principles: Backpropagation and Gradient Descent.” Nanoscale 16 (36): 17165 –17175. https://doi.org/10.1039/d4nr01667j

  12. [12]

    A Reinforcement Learning Neural Network for Robotic Manipulator Control

    Hu, Y ., and B. Si. 2018. “A Reinforcement Learning Neural Network for Robotic Manipulator Control.” Neural Computation 30 (7): 1983–2004. https://doi.org/10.1162/neco_a_01079

  13. [13]

    A Bayesian Convolutional Neural Network -Based Generalized Linear Model

    Jeon, Y ., W. Chang, S. Jeong, S. Han, and J. Park. 2024. “A Bayesian Convolutional Neural Network -Based Generalized Linear Model.” Biometrics 80 (2): ujae057. https://doi.org/10.1093/biomtc/ujae057

  14. [14]

    Robust High-Dimensional Memory -Augmented Neural Networks

    Karunaratne, G., M. Schmuck, M. Le Gallo, G. Cherubini, L. Benini, A. Sebastian, and A. Rahimi. 2021. “Robust High-Dimensional Memory -Augmented Neural Networks.” Nature Communications 12 (1): 2468. https://doi.org/10.1038/s41467-021-22364-0

  15. [15]

    Neuroevolution of a Modular Memory -Augmented Neural Network for Deep Memory Problems

    Khadka, S., J. J. Chung, and K. Tumer. 2019. “Neuroevolution of a Modular Memory -Augmented Neural Network for Deep Memory Problems.” Evolutionary Computation 27 (4): 639 –664. https://doi.org/10.1162/evco_a_00239

  16. [16]

    Design and Simulation of a Multilayer Chemical Neural Network That Learns via Backpropagation

    Lakin, M. R. 2023. “Design and Simulation of a Multilayer Chemical Neural Network That Learns via Backpropagation.” Artificial Life 29 (3): 308–335. https://doi.org/10.1162/artl_a_00405

  17. [17]

    Two -Branch Attention Network via Efficient Semantic Coupling for One -Shot Learning

    Li, J., D. Wang, X. Liu, Z. Shi, and M. Wang. 2022. “Two -Branch Attention Network via Efficient Semantic Coupling for One -Shot Learning.” IEEE Transactions on Image Processing 31: 341 –351. https://doi.org/10.1109/TIP.2021.3124668

  18. [18]

    Electromagnetic Source Imaging With a Combination of Sparse Bayesian Learning and Deep Neural Network

    Liang, J., Z. L. Yu, Z. Gu, and Y . Li. 2023. “Electromagnetic Source Imaging With a Combination of Sparse Bayesian Learning and Deep Neural Network.” IEEE Transactions on Neural Systems and Rehabilitation Engineering 31: 2338–2348. https://doi.org/10.1109/TNSRE.2023.3251420

  19. [19]

    Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation

    Luo, X., H. Qu, Y . Wang, Z. Yi, J. Zhang, and M. Zhang. 2023. “Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation.” IEEE Transactions on Neural Networks and Learning Systems 34 (12): 10141–10153. https://doi.org/10.1109/TNNLS.2022.3164930

  20. [20]

    Echo Memory -Augmented Network for Time Series Classification

    Ma, Q., Z. Zheng, W. Zhuang, E. Chen, J. Wei, and J. Wang. 2021. “Echo Memory -Augmented Network for Time Series Classification.” Neural Networks 133: 177–192. https://doi.org/10.1016/j.neunet.2020.10.015

  21. [21]

    Behavioral Timescale Synaptic Plasticity: Properties, Elements and Functions

    Magee, Jeffrey C. 2026. “Behavioral Timescale Synaptic Plasticity: Properties, Elements and Functions.” Nature Neuroscience 29: 520–534

  22. [22]

    Neural Network and Bayesian-Based Prediction of Breeding Values in Beetal Goat

    Magotra, A., Y . C. Bangar, and A. S. Yadav. 2022. “Neural Network and Bayesian-Based Prediction of Breeding Values in Beetal Goat.” Tropical Animal Health and Production 54 (5): 282. https://doi.org/10.1007/s11250-022- 03294-5

  23. [23]

    Experimentally Validated Memristive Memory Augmented Neural Network with Efficient Hashing and Similarity Se arch

    Mao, R., B. Wen, A. Kazemi, Y . Zhao, A. F. Laguna, R. Lin, N. Wong, M. Niemier, X. S. Hu, X. Sheng, C. E. Graves, J. P. Strachan, and C. Li. 2022. “Experimentally Validated Memristive Memory Augmented Neural Network with Efficient Hashing and Similarity Se arch.” Nature Communications 13 (1): 6284. https://doi.org/10.1038/s41467-022-33629-7

  24. [24]

    A Neural Network Model for Online One-Shot Storage of Pattern Sequences

    Melchior, J., A. Altamimi, M. Bayati, S. Cheng, and L. Wiskott. 2024. “A Neural Network Model for Online One-Shot Storage of Pattern Sequences.” PLoS ONE 19 (6): e0304076. https://doi.org/10.1371/journal.pone.0304076

  25. [25]

    Backpropagation Neural Tree

    Ojha, V ., and G. Nicosia. 2022. “Backpropagation Neural Tree.” Neural Networks 149: 66 –83. https://doi.org/10.1016/j.neunet.2022.02.003

  26. [26]

    Spatially Informed Bayesian Neural Network for Neurodegenerative Diseases Classification

    Payares-Garcia, D., J. Mateu, and W. Schick. 2023. “Spatially Informed Bayesian Neural Network for Neurodegenerative Diseases Classification.” Statistics in Medicine 42 (2): 105 –121. https://doi.org/10.1002/sim.9604

  27. [27]

    Self -Organizing Neural Network for Reproducing Human Postural Mode Alternation through Deep Reinforcement Learning

    Shen, K., G. Li, A. Chemori, and M. Hayashibe. 2023. “Self -Organizing Neural Network for Reproducing Human Postural Mode Alternation through Deep Reinforcement Learning.” Scientific Reports 13 (1): 8966. https://doi.org/10.1038/s41598-023-35886-y

  28. [28]

    Multi-Compartment Neuron and Population Encoding Powered Spiking Neural Network for Deep Distributional Reinforcement Learning

    Sun, Y ., F. Zhao, Z. Zhao, and Y . Zeng. 2025. “Multi-Compartment Neuron and Population Encoding Powered Spiking Neural Network for Deep Distributional Reinforcement Learning.” Neural Networks 182: 106898. https://doi.org/10.1016/j.neunet.2024.106898

  29. [29]

    Memory Augmented Recurrent Neural Networks for De -Novo Drug Design

    Suresh, N., N. Chinnakonda Ashok Kumar, S. Subramanian, and G. Srinivasa. 2022. “Memory Augmented Recurrent Neural Networks for De -Novo Drug Design.” PLoS ONE 17 (6): e0269461. https://doi.org/10.1371/journal.pone.0269461

  30. [30]

    Learning of Sequential Movements by Neural Network Model with Dopamine-Like Reinforcement Signal

    Suri, R. E., and W. Schultz. 1998. “Learning of Sequential Movements by Neural Network Model with Dopamine-Like Reinforcement Signal.” Experimental Brain Research 121 (3): 350 –354. https://doi.org/10.1007/s002210050467

  31. [31]

    Integrating a Tailored Recurrent Neural Network With Bayesian Experimental Design to Optimize Microbial Community Functions

    Thompson, J. C., V . M. Zavala, and O. S. Venturelli. 2023. “Integrating a Tailored Recurrent Neural Network With Bayesian Experimental Design to Optimize Microbial Community Functions.” PLoS Computational Biology 19 (9): e1011436. https://doi.org/10.1371/journal.pcbi.1011436

  32. [32]

    NAS -Navigator: Visual Steering for Explainable One -Shot Deep Neural Network Synthesis

    Tyagi, A., C. Xie, and K. Mueller. 2023. “NAS -Navigator: Visual Steering for Explainable One -Shot Deep Neural Network Synthesis.” IEEE Transactions on Visualization and Computer Graphics 29 (1): 299 –309. https://doi.org/10.1109/TVCG.2022.3209361. 11

  33. [33]

    Brain-Inspired Synaptic Transistors for In-Situ Spiking Reinforcement Learning with Eligibility Trace

    Wang, Y ., W. Xiong, J. Yan, Y . Zhou, C. Zhu, X. Miao, Y . He, and Y . Chai. 2026. “Brain-Inspired Synaptic Transistors for In-Situ Spiking Reinforcement Learning with Eligibility Trace.” Nature Communications 17 (1):

  34. [34]

    https://doi.org/10.1038/s41467-026-69898-9

  35. [35]

    Neural Network for a Class of Sparse Optimization with L(0) - Regularization

    Wei, Z., Q. Li, J. Wei, and W. Bian. 2022. “Neural Network for a Class of Sparse Optimization with L(0) - Regularization.” Neural Networks 151: 211–221. https://doi.org/10.1016/j.neunet.2022.03.033

  36. [36]

    Optimizing Event -Driven Spiking Neural Network with Regularization and Cutoff

    Wu, D., G. Jin, H. Yu, X. Yi, and X. Huang. 2025. “Optimizing Event -Driven Spiking Neural Network with Regularization and Cutoff.” Frontiers in Neuroscience 19: 1522788. https://doi.org/10.3389/fnins.2025.1522788

  37. [37]

    A., and Elmar W., L

    Xiao, D., K. F. Liang, K. Ji, and J. C. Kao. 2025. “Using Reinforcement Learning to Investigate Neural Dynamics During Motor Learning.” Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2025: 1–7. https://doi.org/10.1109/EMBC58623.2025.11252999

  38. [38]

    Fault Diagnosis for Wind Turbines with Graph Neural Network Model Based on One -Shot Learning

    Yang, S., Y . Zhou, X. Chen, C. Li, and H. Song. 2023. “Fault Diagnosis for Wind Turbines with Graph Neural Network Model Based on One -Shot Learning.” Royal Society Open Science 10 (7): 230706. https://doi.org/10.1098/rsos.230706

  39. [39]

    Memory Augmented Deep Recurrent Neural Network for Video Question Answering

    Yin, C., J. Tang, Z. Xu, and Y . Wang. 2020. “Memory Augmented Deep Recurrent Neural Network for Video Question Answering.” IEEE Transactions on Neural Networks and Learning Systems 31 (9): 3159 –3167. https://doi.org/10.1109/TNNLS.2019.2938015

  40. [40]

    Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks

    Zhang, A., X. Li, Y . Gao, and Y . Niu. 2022. “Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks.” IEEE Transactions on Neural Networks and Learning Systems 33 (5): 1986 –1995. https://doi.org/10.1109/TNNLS.2021.3084955

  41. [41]

    A 0.99 -to-4.38 μJ/Class Event- Driven Hybrid Neural Network Processor for Full -Spectrum Neural Signal Analyses

    Zhao, S., J. Yang, J. Wang, C. Fang, T. Liu, S. Zhang, and M. Sawan. 2023. “A 0.99 -to-4.38 μJ/Class Event- Driven Hybrid Neural Network Processor for Full -Spectrum Neural Signal Analyses.” IEEE Transactions on Biomedical Circuits and Systems 17 (3): 598–609. https://doi.org/10.1109/TBCAS.2023.3268502

  42. [42]

    Instantaneous Sap Flow Velocity Simulation of Euonymus bungeanus Based on Neural Network Optimization Model

    Zhou, P., L. Han, L. Peng, L. L. Liu, N. N. Wang, J. Ma, and Y . L. Ma. 2023. “Instantaneous Sap Flow Velocity Simulation of Euonymus bungeanus Based on Neural Network Optimization Model.” Ying Yong Sheng Tai Xue Bao 34 (8): 2123–2132. https://doi.org/10.13287/j.1001-9332.202308.019

  43. [43]

    Collaborative Neural Network Algorithm for Event-Driven Deployment in Wireless Sensor and Robot Networks

    Zhuang, Y ., C. Wu, H. Wu, Z. Zhang, Y . Gao, and L. Li. 2020. “Collaborative Neural Network Algorithm for Event-Driven Deployment in Wireless Sensor and Robot Networks.” Sensors 20 (10): 2779. https://doi.org/10.3390/s20102779. SUPPLEMENTARY MATERIAL Deterministic implementation of internally triggered retrospective learning . The following algorithm spe...