pith. machine review for the scientific record.

Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets

45 Pith papers cite this work. Polarity classification is still indexing.

abstract

In this paper we propose to study generalization of neural networks on small algorithmically generated datasets. In this setting, questions about data efficiency, memorization, generalization, and speed of learning can be studied in great detail. In some situations we show that neural networks learn through a process of "grokking" a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting. We also study generalization as a function of dataset size and find that smaller datasets require increasing amounts of optimization for generalization. We argue that these datasets provide a fertile ground for studying a poorly understood aspect of deep learning: generalization of overparametrized neural networks beyond memorization of the finite training dataset.
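The datasets described in the abstract can be sketched minimally: enumerate every input pair for a binary operation such as addition mod p, label each pair with the operation's result, and split the finite table into train and test sets. The function and parameter names below (`make_modular_addition_split`, `train_frac`) are illustrative, not the paper's.

```python
import numpy as np

def make_modular_addition_split(p=97, train_frac=0.5, seed=0):
    """Enumerate all pairs (a, b) with label (a + b) mod p, then split randomly."""
    pairs = np.array([(a, b) for a in range(p) for b in range(p)])
    labels = (pairs[:, 0] + pairs[:, 1]) % p
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(pairs))
    n_train = int(train_frac * len(pairs))
    train_idx, test_idx = order[:n_train], order[n_train:]
    return (pairs[train_idx], labels[train_idx]), (pairs[test_idx], labels[test_idx])

(train_x, train_y), (test_x, test_y) = make_modular_addition_split(p=97)
```

Because the dataset is the full p×p operation table, shrinking `train_frac` directly controls how much of the table the network must infer rather than memorize.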


representative citing papers

Toy Models of Superposition

cs.LG · 2022-09-21 · accept · novelty 8.0

Toy models demonstrate that polysemanticity arises when neural networks store more sparse features than neurons via superposition, producing a phase transition tied to polytope geometry and increased adversarial vulnerability.

The Geometric Structure of Models Learning Sparse Data

cs.LG · 2026-05-08 · unverdicted · novelty 7.0

In sparse regimes, models exploit normal alignment of Jacobians to minimize loss and maximize robustness; GrokAlign induces this alignment to accelerate training and RFAMs improve adversarial robustness.

Topological Signatures of Grokking

cs.LG · 2026-05-07 · unverdicted · novelty 7.0

Persistent homology detects a sharp increase in maximum and total H1 persistence during grokking on modular arithmetic, offering a topological diagnostic that links representation geometry to generalization.

Grokking or Glitching? How Low-Precision Drives Slingshot Loss Spikes

cs.LG · 2026-05-07 · unverdicted · novelty 7.0 · 2 refs

Slingshot loss spikes arise from floating-point precision limits that round correct-class gradients to zero, breaking zero-sum constraints and driving exponential parameter growth through numerical feature inflation.
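As a hedged illustration of only the rounding step in the mechanism summarized above (the slingshot dynamics themselves are not reproduced here): float16's smallest subnormal is 2⁻²⁴ ≈ 6e-8, so a correct-class gradient contribution smaller than that rounds to exactly zero in half precision while surviving in single precision.

```python
import numpy as np

# A per-class gradient contribution below float16's smallest subnormal
# (2**-24, about 5.96e-8) underflows to exactly zero.
tiny_grad = 1e-9
g16 = np.float16(tiny_grad)  # rounds to 0.0 in half precision
g32 = np.float32(tiny_grad)  # still nonzero in single precision
```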

Layerwise LQR for Geometry-Aware Optimization of Deep Networks

cs.LG · 2026-05-05 · unverdicted · novelty 7.0

Steepest descent under divergence-induced quadratic models equals an LQR problem, enabling learning of diagonal or Kronecker-factored inverse preconditioners via a global layerwise objective for scalable geometry-aware training.

A Theory of Generalization in Deep Learning

cs.LG · 2026-05-02 · unverdicted · novelty 7.0

A theory shows SGD accumulates coherent signal via linear drift in NTK signal directions while trapping noise in orthogonal low-eigenvalue dimensions, enabling generalization even under O(1) kernel evolution and yielding an exact population-risk objective from one run that acts as an Adam SNR boost.

ILDR: Geometric Early Detection of Grokking

cs.LG · 2026-04-22 · unverdicted · novelty 7.0

ILDR detects the geometric reorganization preceding grokking by measuring when inter-class centroid separation exceeds intra-class scatter by 2.5 times its baseline in penultimate-layer representations.
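The criterion above can be sketched as a ratio of inter-class centroid separation to intra-class scatter computed on penultimate-layer features; the function name `ildr_ratio` and the exact distance definitions are illustrative, and the 2.5×-baseline threshold would be applied on top of this quantity.

```python
import numpy as np

def ildr_ratio(feats, labels):
    """Mean pairwise centroid distance divided by mean within-class scatter."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    # mean pairwise distance between class centroids (inter-class separation)
    diffs = centroids[:, None, :] - centroids[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    inter = d[np.triu_indices(len(classes), k=1)].mean()
    # mean distance of points to their own class centroid (intra-class scatter)
    intra = np.mean([np.linalg.norm(feats[labels == c] - centroids[i], axis=1).mean()
                     for i, c in enumerate(classes)])
    return inter / intra
```

On tightly clustered, well-separated classes this ratio is large; before representations reorganize it stays near its noise baseline, which is what makes it usable as an early signal.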

Grokking of Diffusion Models: Case Study on Modular Addition

cs.LG · 2026-04-20 · unverdicted · novelty 7.0

Diffusion models show grokking on modular addition by composing periodic operand representations in simple data regimes or by separating arithmetic computation from visual denoising across timesteps in varied regimes.

Is your algorithm unlearning or untraining?

cs.LG · 2026-04-09 · conditional · novelty 7.0

Machine unlearning conflates reversing the influence of specific training examples (untraining) with removing the full underlying distribution or behavior (unlearning).

Spectral Edge Dynamics Reveal Functional Modes of Learning

cs.LG · 2026-04-06 · unverdicted · novelty 7.0

Spectral edge dynamics during grokking reveal task-dependent low-dimensional functional modes over inputs, such as Fourier modes for modular addition and cross-term decompositions for x² + y².
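As an illustrative check of what a "Fourier mode over inputs" means here (the feature below is synthetic, not taken from any trained model): a feature that varies as cos(2π·f·a/p) across the input range concentrates its DFT energy at bin f.

```python
import numpy as np

p, freq = 97, 5
a = np.arange(p)
feature = np.cos(2 * np.pi * freq * a / p)  # synthetic periodic feature over inputs
spectrum = np.abs(np.fft.rfft(feature))
dominant = int(np.argmax(spectrum))         # index of the dominant Fourier mode
```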

Dimensional Criticality at Grokking Across MLPs and Transformers

cs.LG · 2026-04-06 · unverdicted · novelty 7.0

Effective cascade dimension D(t) crosses D=1 at the grokking transition in MLPs and Transformers, with opposite directions for modular addition versus XOR, consistent with attraction to a shared critical manifold.

In-context Learning and Induction Heads

cs.LG · 2022-09-24 · unverdicted · novelty 7.0

Induction heads, which implement pattern completion in attention, develop at the same training stage as a sudden rise in in-context learning, providing evidence they are the primary mechanism for in-context learning in transformers.

Overtrained, Not Misaligned

cs.LG · 2026-05-12 · unverdicted · novelty 6.0

Emergent misalignment arises from overtraining after primary task convergence and is preventable by early stopping, which retains 93% of task performance on average.

citing papers explorer

Showing 3 of 3 citing papers.

  • Progress measures for grokking via mechanistic interpretability cs.LG · 2023-01-12 · accept · none · ref 46

    Grokking arises from gradual amplification of a Fourier-based circuit in the weights followed by removal of memorizing components.

  • Toy Models of Superposition cs.LG · 2022-09-21 · accept · none · ref 11

    Toy models demonstrate that polysemanticity arises when neural networks store more sparse features than neurons via superposition, producing a phase transition tied to polytope geometry and increased adversarial vulnerability.

  • A Survey of Large Language Models cs.CL · 2023-03-31 · accept · none · ref 75

    This survey reviews the background, key techniques, and evaluation methods for large language models, emphasizing emergent abilities that appear at large scales.