Gradient-Free Continual Learning in Spiking Neural Networks via Inter-Spike Interval Regularization

ISI-CV derives a synaptic importance score from the regularity of each neuron's inter-spike intervals, enabling continual learning in spiking neural networks (SNNs) without gradients and without catastrophic forgetting (a hedged sketch of the importance computation appears below).
Topic: Continual learning and catastrophic forgetting
7 Pith papers cite this work. Polarity classification is still indexing.
Verdict timeline: 7 verdicts recorded in 2026, all currently unverdicted.
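To make the headline mechanism concrete, here is a minimal, hypothetical sketch of deriving a synaptic importance score from inter-spike-interval (ISI) regularity via the coefficient of variation (CV). The function names, the CV-to-importance mapping, and the plasticity-gating rule in the final comment are illustrative assumptions, not ISI-CV's actual implementation.

```python
# Assumed sketch: synaptic importance from inter-spike-interval regularity.
import numpy as np

def isi_cv(spike_times: np.ndarray) -> float:
    """Coefficient of variation of a neuron's inter-spike intervals:
    CV = std(ISI) / mean(ISI). Low CV = regular firing."""
    if len(spike_times) < 3:
        return np.inf  # too few spikes to estimate regularity
    isis = np.diff(np.sort(spike_times))
    return float(isis.std() / (isis.mean() + 1e-12))

def synaptic_importance(pre_spikes, post_spikes) -> float:
    """Assumed rule: a synapse matters more when both endpoint neurons fire
    regularly, so importance decays with the larger of the two CVs."""
    cv = max(isi_cv(pre_spikes), isi_cv(post_spikes))
    return float(np.exp(-cv))  # in (0, 1]; 1 = perfectly regular firing

# Gradient-free consolidation could then scale any local update by
# (1 - importance), protecting synapses between regular-firing neurons:
# w += (1.0 - synaptic_importance(pre, post)) * local_update
```

The exponential is one convenient monotone map from an unbounded CV to a bounded importance; any decreasing map would serve the same gating role.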
Citing papers
-
McNdroid: A Longitudinal Multimodal Benchmark for Robust Drift Detection in Android Malware
McNdroid is a new longitudinal multimodal benchmark showing that Android malware detectors degrade over time, while multimodal approaches maintain better performance across long temporal gaps.
-
Geometry Conflict: Explaining and Controlling Forgetting in LLM Continual Post-Training
Forgetting in LLM continual post-training is explained as a geometry conflict between task-induced covariance structures and the evolving model state, and is controlled by gating Wasserstein-barycenter merging on the measured conflict (a hedged sketch of the gating step appears after this list).
-
HEDP: A Hybrid Energy-Distance Prompt-based Framework for Domain Incremental Learning
HEDP combines Helmholtz-free-energy-inspired energy regularization with hybrid energy-distance prompt weighting to improve domain selection, reporting a 2.57% accuracy gain on benchmarks such as CORe50 while mitigating catastrophic forgetting (see the prompt-selection sketch after this list).
-
An empirical evaluation of the risks of AI model updates using clinical data: stability, arbitrariness, and fairness
Updating clinical AI models can cause prediction flips, arbitrary outcome changes, and unfair error rates across patient groups, motivating dedicated monitoring dimensions for model updates (a flip-rate monitoring sketch appears after this list).
-
ALAS: Adaptive Long-Horizon Action Synthesis via Async-pathway Stream Disentanglement
ALAS disentangles environment and self-state streams via bio-inspired modules to deliver 23% higher subtask success and 29% better execution efficiency on long-horizon HSI tasks.
-
Task Switching Without Forgetting via Proximal Decoupling
Operator splitting separates task optimization from proximal stability enforcement, achieving forgetting-free continual learning with state-of-the-art benchmark results (see the operator-splitting sketch after this list).
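For the geometry-conflict entry above, here is a hedged sketch, assuming conflict is measured as the Bures-Wasserstein distance between two task-induced covariance matrices and that the merge is the closed-form two-Gaussian Wasserstein barycenter (a point on the Bures geodesic). The threshold, the interpolation weight t, and all names are assumptions, not the paper's exact procedure.

```python
# Assumed sketch: gate Wasserstein-barycenter merging of covariance
# structures on a measured geometry conflict. Inputs must be symmetric
# positive definite.
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(cov_a: np.ndarray, cov_b: np.ndarray) -> float:
    """Squared 2-Wasserstein distance between zero-mean Gaussians:
    tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})."""
    root_a = sqrtm(cov_a).real
    cross = sqrtm(root_a @ cov_b @ root_a).real
    return float(np.trace(cov_a) + np.trace(cov_b) - 2.0 * np.trace(cross))

def gated_barycenter_merge(cov_old, cov_new, threshold, t=0.5):
    """Merge only when the measured conflict is below threshold; the merge
    is the weight-t point on the Bures geodesic, which for two Gaussians
    coincides with their Wasserstein barycenter."""
    conflict = bures_wasserstein(cov_old, cov_new)
    if conflict > threshold:
        return cov_old, conflict  # high conflict: keep old geometry intact
    root = sqrtm(cov_old).real
    inv_root = np.linalg.inv(root)
    # Optimal transport map from N(0, cov_old) to N(0, cov_new)
    transport = inv_root @ sqrtm(root @ cov_new @ root).real @ inv_root
    blend = (1.0 - t) * np.eye(len(cov_old)) + t * transport
    return blend @ cov_old @ blend.T, conflict
```

In this reading of the abstract, gating on the distance is what separates compatible updates (merged cheaply) from conflicting ones (kept apart, preventing forgetting).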
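For the HEDP entry, here is a minimal sketch of hybrid energy-distance domain selection, assuming the energy term is the standard logsumexp free-energy score over a domain head's logits and the distance term is Euclidean distance to a per-domain feature prototype. The mixing weight alpha and every name here are assumptions, not HEDP's exact rule.

```python
# Assumed sketch: choose which domain's prompts to attach by a hybrid
# energy + distance score; lower is better on both terms.
import numpy as np

def energy_score(logits: np.ndarray) -> float:
    """Helmholtz-free-energy-style score E(x) = -logsumexp(logits);
    lower energy means the domain head is more confident."""
    m = logits.max()
    return float(-(m + np.log(np.exp(logits - m).sum())))

def select_domain(feature, domain_logits, prototypes, alpha=0.5):
    """Return the index of the domain with the lowest hybrid score."""
    scores = []
    for logits, proto in zip(domain_logits, prototypes):
        dist = np.linalg.norm(feature - proto)  # distance to domain prototype
        scores.append(alpha * energy_score(logits) + (1.0 - alpha) * dist)
    return int(np.argmin(scores))
```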
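For the clinical-update entry, one of the named risks (prediction flips) admits a simple monitor: the flip rate between the deployed model and its update, broken out per group so unequal churn is visible. This is a generic sketch of that metric, not the paper's evaluation code; all names are assumptions.

```python
# Assumed sketch: prediction-flip monitoring for a model update.
import numpy as np

def flip_rate(old_preds, new_preds) -> float:
    """Fraction of cases whose predicted label changes across the update."""
    old_preds, new_preds = np.asarray(old_preds), np.asarray(new_preds)
    return float(np.mean(old_preds != new_preds))

def flip_rates_by_group(old_preds, new_preds, groups) -> dict:
    """Per-group flip rates; large gaps between groups flag unfair churn."""
    old_preds, new_preds, groups = map(np.asarray, (old_preds, new_preds, groups))
    return {g: flip_rate(old_preds[groups == g], new_preds[groups == g])
            for g in np.unique(groups)}
```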
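For the proximal-decoupling entry, here is a toy operator-splitting loop, assuming the stability term is a quadratic penalty toward anchor weights from earlier tasks, whose proximal operator has a closed form. The penalty choice, step sizes, and names are assumptions, not the paper's scheme.

```python
# Assumed sketch: alternate a task-gradient step (plasticity) with an exact
# proximal step (stability) toward anchor weights.
import numpy as np

def task_step(w, grad_fn, lr=0.1):
    """Plasticity half-step: plain gradient descent on the current task."""
    return w - lr * grad_fn(w)

def proximal_step(w, w_anchor, lr=0.1, lam=10.0):
    """Stability half-step: prox of g(v) = (lam/2)*||v - w_anchor||^2,
    i.e. argmin_v g(v) + ||v - w||^2 / (2*lr), in closed form."""
    return (w + lr * lam * w_anchor) / (1.0 + lr * lam)

def train_task(w, grad_fn, w_anchor, steps=100):
    for _ in range(steps):
        w = task_step(w, grad_fn)       # optimize the new task
        w = proximal_step(w, w_anchor)  # enforce stability toward old solution
    return w

# Toy usage: quadratic task loss centered at 1, anchor at 0; the result
# lands between the two, trading plasticity against stability.
w_final = train_task(np.zeros(3), lambda w: 2.0 * (w - 1.0), np.zeros(3))
```

Splitting the two objectives into separate operators is what lets each half-step be simple: the task step never sees the stability constraint, and the proximal step is exact rather than approximated by a gradient.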