pith. machine review for the scientific record.

arxiv: 2201.02177 · v1 · submitted 2022-01-06 · 💻 cs.LG

Recognition: no theorem link

Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets

Alethea Power, Harri Edwards, Igor Babuschkin, Vedant Misra, Yuri Burda

Pith reviewed 2026-05-11 19:22 UTC · model grok-4.3

classification 💻 cs.LG
keywords grokking · generalization · overfitting · neural networks · algorithmic datasets · memorization · deep learning · optimization

The pith

Neural networks can suddenly achieve perfect generalization on small algorithmic tasks long after they have overfitted the training data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines generalization in neural networks using small, algorithmically generated datasets that allow precise control over every aspect of the data. It shows that networks sometimes first overfit by memorizing training examples to the point of poor test performance, then later improve dramatically to perfect generalization through a process the authors call grokking. This jump occurs after far more optimization steps than needed to reach overfitting. The authors also find that smaller datasets require progressively more training to reach good generalization. These controlled settings are presented as a way to study how overparameterized networks can generalize beyond simply memorizing the finite training set.
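The data setup described above is small enough to sketch in full. The following is a minimal illustration, assuming the paper's modular-arithmetic setting (binary operations over a prime modulus such as p = 97); the function name and split convention are illustrative, not the authors' code:

```python
import itertools
import random

def modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """Enumerate every equation a + b = c (mod p), then split at random.

    The full operation table has p * p entries, so the dataset is tiny
    and fully controlled; train_frac is the dataset-size knob.
    """
    table = [((a, b), (a + b) % p)
             for a, b in itertools.product(range(p), repeat=2)]
    rng = random.Random(seed)
    rng.shuffle(table)
    cut = int(train_frac * len(table))
    return table[:cut], table[cut:]

train_set, val_set = modular_addition_dataset()
print(len(train_set) + len(val_set))  # 9409 equations in total for p = 97
```

Varying `train_frac` reproduces the dataset-size axis the authors study: the smaller the training fraction, the more optimization the paper reports is needed before generalization.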

Core claim

In some situations we show that neural networks learn through a process of grokking a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting. We also study generalization as a function of dataset size and find that smaller datasets require increasing amounts of optimization for generalization. These datasets provide a fertile ground for studying generalization of overparameterized neural networks beyond memorization of the finite training dataset.

What carries the argument

The grokking process on small algorithmic datasets, in which generalization jumps from chance to perfect performance after overfitting has already occurred.

If this is right

  • Generalization performance can continue to improve substantially after a model has already overfit the training data.
  • Smaller dataset sizes require more optimization steps before generalization is achieved.
  • These algorithmically generated datasets make it possible to isolate and measure the transition from memorization to pattern-based generalization.
  • Overparameterized networks are capable of generalizing beyond rote memorization of the training examples.
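The first two bullets suggest a simple measurement: the gap, in optimization steps, between training accuracy saturating and validation accuracy reaching the same level. A hedged sketch (the helper name and threshold are illustrative, not from the paper):

```python
def grokking_gap(train_acc, val_acc, threshold=0.99):
    """Steps between train accuracy crossing the threshold and validation
    accuracy crossing the same threshold; None if either never crosses."""
    t_fit = next((i for i, a in enumerate(train_acc) if a >= threshold), None)
    t_gen = next((i for i, a in enumerate(val_acc) if a >= threshold), None)
    if t_fit is None or t_gen is None:
        return None
    return t_gen - t_fit

# Synthetic curves with a delayed jump: train saturates at step 2,
# validation stays at chance until a sudden jump at step 8.
train_curve = [0.3, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
val_curve   = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
print(grokking_gap(train_curve, val_curve))  # 6
```

On the paper's account, this gap can span orders of magnitude more steps than reaching the fit itself, and it grows as the training set shrinks.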

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Tracking test accuracy over very long training runs could reveal similar late-stage improvements in other domains where only short training is usually examined.
  • The separation between overfitting and later generalization phases suggests optimization trajectories may contain distinct stages that current early-stopping practices might miss.
  • If grokking depends on the structure of algorithmic data, it may be possible to design synthetic training distributions that deliberately induce this delayed generalization in practical tasks.

Load-bearing premise

The grokking behavior observed on these specific small algorithmic datasets reveals a general mechanism of neural network generalization rather than an artifact limited to the chosen tasks, architectures, and optimization regimes.

What would settle it

Finding that on other datasets or with other architectures generalization either stays at chance level after overfitting or improves only gradually, with no sudden late jump, would undercut the claim that grokking is a distinct, general phenomenon rather than an artifact of the chosen tasks, architectures, and optimization regimes.

read the original abstract

In this paper we propose to study generalization of neural networks on small algorithmically generated datasets. In this setting, questions about data efficiency, memorization, generalization, and speed of learning can be studied in great detail. In some situations we show that neural networks learn through a process of "grokking" a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting. We also study generalization as a function of dataset size and find that smaller datasets require increasing amounts of optimization for generalization. We argue that these datasets provide a fertile ground for studying a poorly understood aspect of deep learning: generalization of overparametrized neural networks beyond memorization of the finite training dataset.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The manuscript proposes small, algorithmically generated datasets (e.g., modular arithmetic tasks) as a controlled testbed for studying neural network generalization. It reports the 'grokking' phenomenon in which test accuracy jumps from chance level to perfect generalization well after training accuracy has saturated at 100%, and shows that smaller training sets require substantially more optimization steps before generalization occurs. The authors argue these setups are useful for examining generalization beyond memorization in overparameterized models.

Significance. If the observations hold under the reported conditions, the work supplies a clean, fully reproducible experimental platform for dissecting delayed generalization. The use of perfectly structured, small-scale algorithmic tasks enables precise tracking of learning dynamics that are difficult to isolate on standard benchmarks. Credit is due for the emphasis on controlled, open setups that facilitate community follow-up rather than asserting a universal mechanism.

minor comments (3)
  1. [Figures 1-2] The x-axis (training steps) scaling and the absence of shaded variance bands across random seeds make it difficult to judge how consistently the grokking transition occurs and at what step count.
  2. [Section 3.1] The precise definition of 'overfitting' (e.g., whether it is the first step at which train accuracy reaches 1.0 or a sustained plateau) should be stated explicitly so readers can replicate the 'well past' timing claim.
  3. [Experimental details] The manuscript would be strengthened by a short table summarizing hyper-parameters (learning rate, weight decay, batch size, architecture depth/width) for each reported experiment.
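Minor comment 2 turns on which of two definitions of the "overfitting step" is meant; on noisy runs the two can disagree and shift the reported timing. A sketch of both candidates (function names are illustrative, not from the manuscript):

```python
def first_hit(train_acc, threshold=1.0):
    """Definition A: first step at which train accuracy reaches the threshold."""
    return next((i for i, a in enumerate(train_acc) if a >= threshold), None)

def sustained_plateau(train_acc, threshold=1.0, window=3):
    """Definition B: first step from which accuracy stays at or above the
    threshold for `window` consecutive steps."""
    for i in range(len(train_acc) - window + 1):
        if all(a >= threshold for a in train_acc[i:i + window]):
            return i
    return None

# A noisy run that touches 1.0 once, dips, then plateaus: the two
# definitions disagree, which changes any "well past overfitting" gap.
curve = [0.9, 1.0, 0.98, 1.0, 1.0, 1.0]
print(first_hit(curve), sustained_plateau(curve))  # 1 3
```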

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive assessment of our work and the recommendation for minor revision. We appreciate the recognition that small algorithmic datasets provide a clean, reproducible platform for studying delayed generalization in overparameterized models.

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper is an empirical observational study of neural network behavior on small algorithmic datasets. It reports the grokking phenomenon as an observed pattern without any derivation chain, equations, or fitted parameters that reduce the reported results to self-referential definitions or inputs. Claims are scoped to 'in some situations' on the chosen testbeds and do not rely on self-citations, uniqueness theorems, or ansatzes for their central content.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper is an empirical study relying on standard assumptions of neural network training; no free parameters, new axioms, or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption Gradient-based optimization of neural networks on finite datasets can produce both memorization and later generalization.
    Standard background assumption in machine learning invoked to frame the experiments.

pith-pipeline@v0.9.0 · 5437 in / 1107 out tokens · 58879 ms · 2026-05-11T19:22:31.706868+00:00 · methodology

discussion (0)


Forward citations

Cited by 44 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Spherical Boltzmann machines: a solvable theory of learning and generation in energy-based models

    cs.LG 2026-05 unverdicted novelty 8.0

    In the high-dimensional limit the spherical Boltzmann machine admits exact equations for training dynamics, Bayesian evidence, and cascades of phase transitions tied to mode alignment with data, which connect to gener...

  2. The Spectral Lifecycle of Transformer Training: Transient Compression Waves, Persistent Spectral Gradients, and the Q/K--V Asymmetry

    cs.LG 2026-04 unverdicted novelty 8.0

    Transformer weight spectra exhibit transient compression waves that propagate layer-wise, persistent non-monotonic depth gradients in power-law exponents, and Q/K-V asymmetry, with the spectral exponent alpha predicti...

  3. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

    cs.AI 2024-08 unverdicted novelty 8.0

    The AI Scientist framework enables LLMs to independently conduct the full scientific process from idea generation to paper writing and review, demonstrated across three ML subfields with papers costing under $15 each.

  4. Toy Models of Superposition

    cs.LG 2022-09 accept novelty 8.0

    Toy models demonstrate that polysemanticity arises when neural networks store more sparse features than neurons via superposition, producing a phase transition tied to polytope geometry and increased adversarial vulne...

  5. The Benefits of Temporal Correlations: SGD Learns k-Juntas from Random Walks Efficiently

    cs.LG 2026-05 unverdicted novelty 7.0

    Temporal correlations from lazy random walks enable efficient SGD learning of k-juntas via temporal-difference loss on ReLU networks, achieving linear sample complexity in d.

  6. The Geometric Structure of Models Learning Sparse Data

    cs.LG 2026-05 unverdicted novelty 7.0

    In sparse regimes, models exploit normal alignment of Jacobians to minimize loss and maximize robustness; GrokAlign induces this alignment to accelerate training and RFAMs improve adversarial robustness.

  7. Topological Signatures of Grokking

    cs.LG 2026-05 unverdicted novelty 7.0

    Persistent homology detects a sharp increase in maximum and total H1 persistence during grokking on modular arithmetic, offering a topological diagnostic that links representation geometry to generalization.

  8. Grokking or Glitching? How Low-Precision Drives Slingshot Loss Spikes

    cs.LG 2026-05 unverdicted novelty 7.0

    Slingshot loss spikes result from floating-point precision limits that round correct-class gradients to zero, triggering Numerical Feature Inflation and breaking gradient zero-sum constraints.

  10. Estimating Implicit Regularization in Deep Learning

    stat.ML 2026-05 unverdicted novelty 7.0

    Gradient matching empirically recovers implicit regularization effects such as l2 penalties from early stopping and dropout in neural networks.

  11. Layerwise LQR for Geometry-Aware Optimization of Deep Networks

    cs.LG 2026-05 unverdicted novelty 7.0

    Steepest descent under divergence-induced quadratic models equals an LQR problem, enabling learning of diagonal or Kronecker-factored inverse preconditioners via a global layerwise objective for scalable geometry-awar...

  12. A Theory of Generalization in Deep Learning

    cs.LG 2026-05 unverdicted novelty 7.0

    A theory shows SGD accumulates coherent signal via linear drift in NTK signal directions while trapping noise in orthogonal low-eigenvalue dimensions, enabling generalization even under O(1) kernel evolution and yield...

  13. ILDR: Geometric Early Detection of Grokking

    cs.LG 2026-04 unverdicted novelty 7.0

    ILDR detects the geometric reorganization preceding grokking by measuring when inter-class centroid separation exceeds intra-class scatter by 2.5 times its baseline in penultimate-layer representations.

  14. Grokking of Diffusion Models: Case Study on Modular Addition

    cs.LG 2026-04 unverdicted novelty 7.0

    Diffusion models show grokking on modular addition by composing periodic operand representations in simple data regimes or by separating arithmetic computation from visual denoising across timesteps in varied regimes.

  15. Scalable Neural Decoders for Practical Fault-Tolerant Quantum Computation

    quant-ph 2026-04 unverdicted novelty 7.0

    Neural decoder for quantum LDPC codes achieves ~10^{-10} logical error at 0.1% physical error with 17x improvement and high throughput, enabling practical fault tolerance at modest code sizes.

  16. Is your algorithm unlearning or untraining?

    cs.LG 2026-04 conditional novelty 7.0

    Machine unlearning conflates reversing the influence of specific training examples (untraining) with removing the full underlying distribution or behavior (unlearning).

  17. Spectral Edge Dynamics Reveal Functional Modes of Learning

    cs.LG 2026-04 unverdicted novelty 7.0

    Spectral edge dynamics during grokking reveal task-dependent low-dimensional functional modes over inputs, such as Fourier modes for modular addition and cross-term decompositions for x squared plus y squared.

  18. Dimensional Criticality at Grokking Across MLPs and Transformers

    cs.LG 2026-04 unverdicted novelty 7.0

    Effective cascade dimension D(t) crosses D=1 at the grokking transition in MLPs and Transformers, with opposite directions for modular addition versus XOR, consistent with attraction to a shared critical manifold.

  19. In-context Learning and Induction Heads

    cs.LG 2022-09 unverdicted novelty 7.0

    Induction heads, which implement pattern completion in attention, develop at the same training stage as a sudden rise in in-context learning, providing evidence they are the primary mechanism for in-context learning i...

  20. Detecting overfitting in Neural Networks during long-horizon grokking using Random Matrix Theory

    cs.LG 2026-05 unverdicted novelty 6.0

    A Random Matrix Theory method identifies growing Correlation Traps in neural network weight spectra during an 'anti-grokking' overfitting phase, and applies the same diagnostic to some foundation LLMs.

  21. Overtrained, Not Misaligned

    cs.LG 2026-05 unverdicted novelty 6.0

    Emergent misalignment arises from overtraining after primary task convergence and is preventable by early stopping, which retains 93% of task performance on average.

  22. The two clocks and the innovation window: When and how generative models learn rules

    cs.LG 2026-05 unverdicted novelty 6.0

    Generative models learn rules before memorizing data, creating an innovation window whose width depends on dataset size and rule complexity, observed in both diffusion and autoregressive architectures.

  23. Selection Plateau and a Sparsity-Dependent Hierarchy of Pruning Features

    cs.LG 2026-05 unverdicted novelty 6.0

    All rank-monotone pruning scorers converge to identical accuracy at fixed sparsity, but non-monotone features with sparsity-dependent complexity can escape this plateau, as shown by the SICS hypothesis on ViT-Small/CIFAR-10.

  24. Learning Large-Scale Modular Addition with an Auxiliary Modulus

    cs.LG 2026-05 unverdicted novelty 6.0

    An auxiliary modulus during training reduces wrap-around issues and preserves train-test input distributions, enabling better accuracy and sample efficiency for large N and q in modular addition learning.

  25. The Weight Gram Matrix Captures Sequential Feature Linearization in Deep Networks

    cs.LG 2026-05 unverdicted novelty 6.0

    Gradient descent in deep networks implicitly drives features toward target-linear structure as captured by the weight Gram matrix and a derived virtual covariance.

  26. Spectral Lens: Activation and Gradient Spectra as Diagnostics of LLM Optimization

    stat.ML 2026-05 unverdicted novelty 6.0

    Spectral analysis of activations and gradients provides new diagnostics that link batch size to representation geometry, early covariance tails to token efficiency, and spectral shifts to learning dynamics in decoder-...

  27. Critical Windows of Complexity Control: When Transformers Decide to Reason or Memorize

    cs.LG 2026-05 unverdicted novelty 6.0

    Transformers show a sharp, task-specific critical window for weight decay application that determines reasoning versus memorization, with middle placement optimal and boundaries as narrow as 100 steps.

  28. Can Transformers predict system collapse in dynamical systems?

    nlin.CD 2026-05 unverdicted novelty 6.0

    Transformers fail to predict catastrophic collapse in unseen parameter regimes of nonlinear dynamical systems, while reservoir computing reliably succeeds.

  29. Finite-Size Gradient Transport in Large Language Model Pretraining: From Cascade Size to Intensive Transport Efficiency

    cs.LG 2026-05 unverdicted novelty 6.0

    A gradient-transport framework with observables D, z, β, δ, v_rel applied to Pico-LM and Pythia datasets shows distinct scaling regimes in duration and efficiency while sharing a near-unity cascade-size backbone.

  30. Convergent Evolution: How Different Language Models Learn Similar Number Representations

    cs.CL 2026-04 unverdicted novelty 6.0

    Diverse language models converge on similar periodic number features with a two-tier hierarchy of Fourier sparsity and geometric separability, acquired via language co-occurrences or multi-token arithmetic.

  31. Generalization at the Edge of Stability

    cs.LG 2026-04 unverdicted novelty 6.0

    Training at the edge of stability causes neural network optimizers to converge on fractal attractors whose effective dimension, measured via a new sharpness dimension from the Hessian spectrum, bounds generalization e...

  32. Spectral Entropy Collapse as a Phase Transition in Delayed Generalisation: An Interventional and Predictive Framework for Grokking

    cs.LG 2026-04 unverdicted novelty 6.0

    Spectral entropy collapse in learned representations precedes and predicts grokking, with interventions showing it is not explained by parameter norm alone.

  34. Nexus: Same Pretraining Loss, Better Downstream Generalization via Common Minima

    cs.LG 2026-04 unverdicted novelty 6.0

    Nexus optimizer improves LLM downstream performance by converging to common minima across data sources despite identical pretraining loss.

  35. Training Deep Visual Networks Beyond Loss and Accuracy Through a Dynamical Systems Approach

    cs.CV 2026-04 unverdicted novelty 6.0

    Introduces integration, metastability, and dynamical stability index measures from layer activations and reports patterns distinguishing CIFAR-10 from CIFAR-100 difficulty plus early convergence signals across ResNet ...

  36. Grokking as Dimensional Phase Transition in Neural Networks

    cs.LG 2026-04 unverdicted novelty 6.0

    Grokking occurs as the effective dimensionality of the gradient field transitions from sub-diffusive to super-diffusive at the onset of generalization, exhibiting self-organized criticality.

  37. Autolearn: Learn by Surprise, Commit by Proof

    cs.LG 2026-04 unverdicted novelty 6.0

    Autolearn uses high-loss passages and self-generated Q&A training to drive the perturbation gap below baseline, improving novel fact acquisition while suppressing memorization in language models.

  38. Language Models (Mostly) Know What They Know

    cs.CL 2022-07 unverdicted novelty 6.0

    Language models show good calibration when asked to estimate the probability that their own answers are correct, with performance improving as models get larger.

  39. Constrained Stochastic Spectral Preconditioning Converges for Nonconvex Objectives

    math.OC 2026-05 unverdicted novelty 5.0

    Proximal stochastic spectral preconditioning converges for nonconvex constrained objectives under heavy-tailed noise, with a variance-reduced version achieving faster rates and a refined analysis of Muon iterations.

  40. Model Capacity Determines Grokking through Competing Memorisation and Generalisation Speeds

    cs.LG 2026-05 unverdicted novelty 5.0

    Grokking emerges near the model size where memorization timescale T_mem(P) intersects generalization timescale T_gen(P) on modular arithmetic.

  41. Gradient-Direction Sensitivity Reveals Linear-Centroid Coupling Hidden by Optimizer Trajectories

    cs.LG 2026-04 unverdicted novelty 5.0

    Gradient-based SVD diagnostic uncovers hidden SED-LCH coupling in single and multitask settings and shows rank-3 subspace constraints speed up grokking by 2.3x.

  42. Artificial Jagged Intelligence as Uneven Optimization Energy Allocation Capability Concentration, Redistribution, and Optimization Governance

    cs.AI 2026-05 unverdicted novelty 4.0

    AJI frames jagged AI capabilities as lower bounds on performance dispersion arising from concentrated optimization energy allocation under anisotropic objectives, with theorems on tradeoffs and redistribution interventions.

  43. Feature Repulsion and Spectral Lock-in: An Empirical Study of Two-Layer Network Grokking

    cs.LG 2026-04 unverdicted novelty 4.0

    Empirical tests confirm robust feature repulsion signs but reveal activation-dependent spectral lock-in in grokking, with x^2 yielding rank-2 updates at epoch ~174 and ReLU remaining rank-1.

  44. A Survey of Large Language Models

    cs.CL 2023-03 accept novelty 3.0

    This survey reviews the background, key techniques, and evaluation methods for large language models, emphasizing emergent abilities that appear at large scales.

Reference graph

Works this paper leans on

21 extracted references · 21 canonical work pages · cited by 42 Pith papers · 6 internal anchors

  1. [1]

    Reconciling Modern Machine-Learning Practice and the Classical Bias-Variance Trade-off

    Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias-variance trade-off. arXiv preprint arXiv:1812.11118

  2. [2]

    Triple Descent and the Two Kinds of Overfitting: Where & Why Do They Appear?

    Stéphane d'Ascoli, Levent Sagun, and Giulio Biroli. Triple descent and the two kinds of overfitting: Where & why do they appear? arXiv preprint arXiv:2006.03509

  3. [3]

    Universal Transformers

    Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819

  4. [4]

    Adaptive Computation Time for Recurrent Neural Networks

    Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983,

  5. [5]

    Neural Turing Machines

    Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401,

  6. [6]

    Fantastic Generalization Measures and Where to Find Them

    Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. arXiv preprint arXiv:1912.02178,

  7. [7]

    Neural GPUs Learn Algorithms

    Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228,

  8. [9]

    Decoupled Weight Decay Regularization

    Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101

  9. [10]

    Deep double descent: Where bigger models and more data hurt

    Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292,

  10. [11]

    Adding gradient noise improves learning for very deep networks

    Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807,

  11. [12]

    Neural programmer-interpreters

    Scott Reed and Nando De Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279,

  12. [13]

    Analysing mathematical reasoning abilities of neural models

    David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. arXiv preprint arXiv:1904.01557,

  13. [14]

    Dropout: a simple way to prevent neural networks from overfitting

    Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958,

  14. [15]

    Attention Is All You Need

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762,

  15. [16]
  16. [17]

    Towards ai-complete question answering: A set of prerequisite toy tasks

    Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698

  17. [18]

    Reinforcement learning neural turing machines-revised

    Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines-revised. arXiv preprint arXiv:1505.00521,

  18. [19]

    Understanding deep learning requires rethinking generalization

    Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530,

  19. [20]

    For each training run, we chose a fraction of all available equations at random and declared them to be the training set, with the rest of equations being the validation set

    Appendix A.1.1, Binary Operations (internal anchor): the binary operations tried, for a prime number p = 97: x ∘ y = x + y (mod p) for 0 ≤ x, y < p; x ∘ y = x − y (mod p) for 0 ≤ x, y < p; x ∘ y = x / y (mod p) for 0 ≤ x < p, 0 < y < p; x ∘ y = [x / y (mod p) if y is odd, otherwise x − y (mod p)] for 0 ≤ x, y < p; x ∘ y = x² + y² (mod p) ...

  20. [21]

    grok-like

    Appendix A.2-A.3 (internal anchor): Figure 5 shows an example of a binary operation table that the network can actually solve. Figure 4 shows the train and validation loss curves for modular division; the validation loss increases from 10² to about 10⁵ optimization steps before it begins a second descent. A.3 Related Work: In this paper we study training and generalization dynamics ...

  21. [22]

    We interpret this as additional evidence that the capacity of the network and optimization procedure is well beyond the capacity needed for memorizing all the labels on the training data, and that generalization happening at all requires a non-trivial explanation. A.5 Generalization Measures: We believe it is useful to explore how predictive common gene...