pith. machine review for the scientific record.

arXiv: 1904.07734 · v1 · submitted 2019-04-15 · 💻 cs.LG · cs.AI · cs.CV · stat.ML

Recognition: unknown

Three scenarios for continual learning

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.AI · cs.CV · stat.ML
keywords: learning, continual, methods, scenario, scenarios, task, three, differences
Original abstract

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and--in case it is not--whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.
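The distinction the abstract draws between the three scenarios comes down to what information is available at test time. A minimal sketch of that distinction, assuming the split MNIST protocol mentioned above (five tasks of two digits each); the function and variable names here are illustrative, not from the paper:

```python
# Illustrative sketch of the three continual-learning scenarios, assuming
# split MNIST: 5 tasks, each covering 2 of the 10 digit classes.
CLASSES_PER_TASK = 2

def predict(logits, scenario, task_id=None):
    """Select a prediction from 10 class logits under each scenario.

    scenario: 'task', 'domain', or 'class' incremental learning.
    task_id:  provided only in the task-incremental scenario.
    """
    if scenario == "task":
        # Task-IL: task identity is given, so choose only
        # among that task's own classes.
        lo = task_id * CLASSES_PER_TASK
        return max(range(lo, lo + CLASSES_PER_TASK), key=lambda c: logits[c])
    elif scenario == "domain":
        # Domain-IL: identity is not given and need not be inferred;
        # only the within-task label (e.g. first vs. second digit) is predicted.
        best = max(range(len(logits)), key=lambda c: logits[c])
        return best % CLASSES_PER_TASK
    elif scenario == "class":
        # Class-IL: identity must be inferred, so the model chooses
        # among all classes seen so far.
        return max(range(len(logits)), key=lambda c: logits[c])
    raise ValueError(f"unknown scenario: {scenario}")
```

This makes the difficulty ordering concrete: class-IL is hardest because the argmax runs over every class from every task, which is where the abstract reports regularization-based methods failing.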

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 10 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. PrimeKG-CL: A Continual Graph Learning Benchmark on Evolving Biomedical Knowledge Graphs

    cs.AI 2026-05 conditional novelty 8.0

    PrimeKG-CL supplies the first continual graph learning benchmark using authentic temporal snapshots from nine biomedical databases, showing strong interactions between embedding decoders and learning strategies plus l...

  2. MIST: Reliable Streaming Decision Trees for Online Class-Incremental Learning via McDiarmid Bound

    cs.LG 2026-05 unverdicted novelty 7.0

    MIST fixes unreliable splits in streaming decision trees for class-incremental learning by using a K-independent McDiarmid bound on Gini impurity, Bayesian moment projection for knowledge transfer, and KLL quantile sk...

  3. Characterizing and Correcting Effective Target Shift in Online Learning

    stat.ML 2026-05 unverdicted novelty 7.0

    Online kernel regression equals offline regression with shifted targets; correcting the targets lets online learning match offline performance and outperform true targets in continual image classification.

  4. Continual Learning for fMRI-Based Brain Disorder Diagnosis via Functional Connectivity Matrices Generative Replay

    q-bio.TO 2026-04 conditional novelty 7.0

    A structure-aware VAE generates realistic FC matrices for replay, combined with multi-level knowledge distillation and hierarchical contextual bandit sampling, to enable continual fMRI-based brain disorder diagnosis a...

  5. SLE-FNO: Single-Layer Extensions for Task-Agnostic Continual Learning in Fourier Neural Operators

    cs.LG 2026-03 unverdicted novelty 7.0

    SLE-FNO achieves zero forgetting and strong plasticity-stability balance in continual learning for FNO surrogate models of pulsatile blood flow by adding minimal single-layer extensions across four out-of-distribution tasks.

  6. NORACL: Neurogenesis for Oracle-free Resource-Adaptive Continual Learning

    cs.LG 2026-04 unverdicted novelty 6.0

    NORACL dynamically grows network capacity via neurogenesis-inspired signals to achieve oracle-level continual learning performance without pre-specifying architecture size.

  7. Cortex-Inspired Continual Learning: Unsupervised Instantiation and Recovery of Functional Task Networks

    cs.LG 2026-04 unverdicted novelty 6.0

    FTN achieves near-zero forgetting on continual learning benchmarks by isolating task subnetworks via self-organizing binary masks generated through gradient descent, smoothing, and k-winner-take-all.

  8. Fine-Tuning Regimes Define Distinct Continual Learning Problems

    cs.LG 2026-04 unverdicted novelty 6.0

    The relative rankings of continual learning methods are not preserved across different fine-tuning regimes defined by trainable parameter depth.

  9. HEDP: A Hybrid Energy-Distance Prompt-based Framework for Domain Incremental Learning

    cs.AI 2026-05 unverdicted novelty 5.0

    HEDP uses energy regularization inspired by Helmholtz free energy plus hybrid energy-distance weighting in prompts to improve domain selection and achieve a 2.57% accuracy gain on benchmarks like CORe50 while mitigati...

  10. Incremental learning for audio classification with Hebbian Deep Neural Networks

    eess.AS 2026-04 unverdicted novelty 5.0

    A kernel plasticity approach in Hebbian DNNs for incremental sound classification achieves 76.3% accuracy over five steps on ESC-50, outperforming the 68.7% baseline without plasticity.