Recognition: 2 theorem links
Spectral Dynamics in Deep Networks: Feature Learning, Outlier Escape, and Learning Rate Transfer
Pith reviewed 2026-05-11 02:58 UTC · model grok-4.3
The pith
Spectral outliers in wide neural networks evolve predictably during gradient descent, with one scaling regime producing width-independent dynamics and hyperparameter transfer.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
A two-level dynamical mean-field theory jointly tracks bulk and outlier spectral dynamics for spiked ensembles with spike directions statistically dependent on the random bulk. In infinite-width nonlinear networks under mean-field scaling and in deep linear networks in the proportional high-dimensional limit, this predicts the evolution of outliers with training time, width, output scale, and initialization variance. Mean-field scaling produces width-consistent outlier dynamics and hyperparameter transfer in deep linear networks, with the leading neural tangent kernel mode growing toward the edge of stability in a width-stable manner, whereas neural tangent kernel parameterization shows strongly width-dependent outlier dynamics despite converging to a stable large-width limit.
What carries the argument
The two-level dynamical mean-field theory for spiked random matrix ensembles in which the spike directions remain statistically dependent on the random bulk, applied to track spectral evolution during gradient descent training.
Load-bearing premise
The two-level dynamical mean-field theory accurately captures the dynamics when spike directions stay statistically dependent on the random bulk and when infinite-width or proportional limits represent finite practical networks.
What would settle it
Measuring the growth rate of the leading neural tangent kernel eigenvalue toward the edge of stability in finite-width deep linear networks trained with mean-field scaling and checking whether it remains independent of width.
Figures
Original abstract
We study the evolution of hidden-weight spectra in wide neural networks trained by (stochastic) gradient descent. We develop a two-level dynamical mean-field theory (DMFT) that jointly tracks bulk and outlier spectral dynamics for spiked ensembles whose spike directions remain statistically dependent on the random bulk. We apply this framework to two settings: (1) infinite-width nonlinear networks in mean-field/$\mu$P scaling and (2) deep linear networks in the proportional high-dimensional limit, where width, input dimension, and sample size diverge with fixed ratios. Our theory predicts how outliers evolve with training time, width, output scale, and initialization variance. In deep linear networks, $\mu$P yields width-consistent outlier dynamics and hyperparameter transfer, including width-stable growth of the leading NTK mode toward the edge of stability (EoS). In contrast, NTK parameterization exhibits strongly width-dependent outlier dynamics, despite converging to a stable large-width limit. We show that this bulk+outlier picture is descriptive of simple tasks with small output channels, but that tasks involving large numbers of outputs (ImageNet classification or GPT language modeling) are better described by a restructuring of the spectral bulk. We develop a toy model with extensive output channels that recapitulates this phenomenon and show that edge of the spectrum still converges for sufficiently wide networks.
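For orientation, the bulk+outlier picture in the baseline case of a spike independent of the bulk (the classic BBP setting) can be reproduced in a few lines; the paper's harder setting, where the spike direction is statistically dependent on the bulk, is precisely what this sketch does not capture.

```python
import numpy as np

rng = np.random.default_rng(1)
N, theta = 2000, 3.0

G = rng.standard_normal((N, N))
W = (G + G.T) / np.sqrt(2 * N)        # Wigner bulk: semicircle law on [-2, 2]
v = rng.standard_normal(N)
v /= np.linalg.norm(v)                 # unit spike direction, independent of W

evals = np.linalg.eigvalsh(W + theta * np.outer(v, v))
outlier, bulk_edge = evals[-1], evals[-2]

# For theta > 1 the top eigenvalue escapes the bulk and sits near theta + 1/theta,
# while the rest of the spectrum keeps its edge near 2.
prediction = theta + 1 / theta
```

The reviewed paper's DMFT replaces this static picture with a dynamical one, tracking how training moves both the outlier and the bulk edge over time.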
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript develops a two-level dynamical mean-field theory (DMFT) to jointly track bulk and outlier spectral dynamics in wide neural networks trained by SGD, for spiked ensembles in which spike directions remain statistically dependent on the random bulk. The framework is applied to (1) infinite-width nonlinear networks under mean-field/μP scaling and (2) deep linear networks in the proportional high-dimensional limit (width, input dimension, and sample size diverging at fixed ratios). It derives predictions for outlier evolution with training time, width, output scale, and initialization variance. In the linear case, μP is claimed to yield width-consistent outlier dynamics and hyperparameter transfer, including stable growth of the leading NTK mode toward the edge of stability, whereas NTK parameterization produces strongly width-dependent dynamics. For tasks with large output channels (e.g., ImageNet or GPT), the paper argues that bulk spectral restructuring dominates and supports this with a toy model showing convergence of the spectral edge for sufficiently wide networks.
Significance. If the two-level DMFT closure is valid and the predictions are quantitatively confirmed, the work would provide a valuable dynamical theory linking feature learning, spectral outliers, and scaling behavior in deep networks. It offers a concrete explanation for why μP enables width-independent dynamics and learning-rate transfer, while distinguishing outlier-driven versus bulk-driven regimes according to output dimension. The technical extension of DMFT to the proportional limit for linear networks, together with the explicit dependence on initialization variance and output scale, constitutes a clear advance over static NTK analyses.
major comments (3)
- [§3] §3 (two-level DMFT derivation): the closure of the two-level truncation in the proportional high-dimensional limit is asserted without explicit control on higher-order spike-bulk moments. Non-vanishing triple or quadruple correlations between spike directions and bulk eigenvectors would modify the effective drift and diffusion terms for the outlier eigenvalues, directly affecting the claimed width-independence of μP dynamics.
- [§4] §4 (deep linear networks, proportional limit): the prediction that the leading NTK mode grows stably toward the edge of stability under μP (and does so in a width-independent manner) rests on the DMFT equations; however, the manuscript provides neither the explicit DMFT ODEs for the outlier trajectory nor quantitative finite-width simulations with error bars that would confirm the approximation remains accurate when spike-bulk dependence is present.
- [§5] §5 (large-output toy model): the claim that bulk restructuring rather than outlier escape dominates for extensive output channels is supported only by a qualitative toy model; no scaling relation with output dimension is derived or tested, leaving open whether the reported edge-of-spectrum convergence holds uniformly or only in a restricted regime.
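The edge-of-stability notion invoked in the second comment can be made concrete on the smallest factorized model, L(a, w) = ½(aw − 1)². This is a standard two-parameter toy, not the paper's DMFT setting; it shows gradient descent self-correcting away from minima whose sharpness exceeds 2/η.

```python
import numpy as np

# Loss L(a, w) = 0.5 * (a*w - 1)^2, the smallest "deep linear" factorization.
# At any minimum (a*w = 1) the Hessian has eigenvalues 0 and a^2 + w^2, so the
# sharpness is a^2 + w^2, and stable GD convergence requires sharpness < 2/eta.
eta = 0.1
a, w = 5.0, 0.0   # sharp start: nearby minima have sharpness ~ a^2 = 25 > 2/eta
for _ in range(500):
    e = a * w - 1.0
    a, w = a - eta * e * w, w - eta * e * a   # simultaneous gradient step
sharpness = a**2 + w**2
# GD reaches a minimum (a*w = 1) whose sharpness sits strictly below 2/eta,
# illustrating the stability constraint behind the EoS discussion.
```

The paper's claim is stronger than this toy: under muP the leading NTK eigenvalue is predicted to grow toward the 2/η boundary in a width-stable way, which is what the requested finite-width simulations would need to confirm.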
minor comments (2)
- [Abstract] The abstract states that 'edge of the spectrum still converges' without specifying the parameterization or the precise limit; this should be clarified for readers.
- [§3] Notation for the two-level DMFT variables (e.g., the decomposition into bulk and spike components) is introduced without a compact summary table; adding one would improve readability.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed comments, which help clarify the scope and limitations of our two-level DMFT analysis. We respond point-by-point to the major comments below, indicating where we will revise the manuscript.
Point-by-point responses
-
Referee: [§3] §3 (two-level DMFT derivation): the closure of the two-level truncation in the proportional high-dimensional limit is asserted without explicit control on higher-order spike-bulk moments. Non-vanishing triple or quadruple correlations between spike directions and bulk eigenvectors would modify the effective drift and diffusion terms for the outlier eigenvalues, directly affecting the claimed width-independence of μP dynamics.
Authors: The two-level closure is obtained by projecting the full DMFT onto the spike and bulk subspaces under the assumption that the spiked ensemble satisfies a spiked covariance model with Gaussian bulk fluctuations. In this setting, the higher-order (triple and quadruple) spike-bulk moments factorize and vanish at leading order in the proportional limit because of the orthogonality between the fixed spike directions and the delocalized bulk eigenvectors. We will add an explicit paragraph in §3 deriving the vanishing of these moments from the moment-generating function of the ensemble, thereby making the control on the truncation transparent. revision: partial
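The vanishing-moment argument can be spot-checked numerically in the independent-spike baseline; the dependent-spike case the paper actually treats would require the ensemble's specific coupling, so this sketch (with illustrative function name and parameters) only illustrates the factorization mechanism.

```python
import numpy as np

def spike_bulk_moments(N, trials=200, seed=5):
    """Monte Carlo estimates of E[(v'Wv)^2] and E[(v'Wv)^3] for a Wigner bulk W
    and an independent unit spike v. The quadratic moment scales as 2/N and the
    cubic moment vanishes, consistent with the claimed factorization."""
    rng = np.random.default_rng(seed)
    m2, m3 = 0.0, 0.0
    for _ in range(trials):
        G = rng.standard_normal((N, N))
        W = (G + G.T) / np.sqrt(2 * N)   # Wigner normalization, edge at 2
        v = rng.standard_normal(N)
        v /= np.linalg.norm(v)
        q = v @ W @ v                     # scalar spike-bulk overlap
        m2 += q**2 / trials
        m3 += q**3 / trials
    return m2, m3

m2_small, m3_small = spike_bulk_moments(N=50)
m2_large, m3_large = spike_bulk_moments(N=200)
```

The quadratic moment shrinks with N while the cubic moment stays consistent with zero; the promised paragraph in §3 would need to establish the analogous decay when v is built from the bulk's own eigenvectors.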
-
Referee: [§4] §4 (deep linear networks, proportional limit): the prediction that the leading NTK mode grows stably toward the edge of stability under μP (and does so in a width-independent manner) rests on the DMFT equations; however, the manuscript provides neither the explicit DMFT ODEs for the outlier trajectory nor quantitative finite-width simulations with error bars that would confirm the approximation remains accurate when spike-bulk dependence is present.
Authors: The closed DMFT ODEs for the outlier eigenvalue (including its explicit dependence on initialization variance and output scale) appear in Appendix B. We agree that direct numerical confirmation with error bars is valuable. In the revision we will augment Figure 4 with new panels that overlay finite-width SGD trajectories (N=10 independent seeds, shaded standard-error bands) against the DMFT solution for both μP and NTK parameterizations, confirming that the width-independent growth toward the edge of stability persists under the spike-bulk dependence present in the proportional limit. revision: yes
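The promised standard-error bands reduce to a per-step mean ± SEM over seeds. A generic sketch follows; the stacked trajectories here are synthetic stand-ins for per-seed NTK eigenvalue traces, not the authors' data.

```python
import numpy as np

def sem_band(runs):
    """Mean trajectory with a +/- standard-error band over independent seeds.
    `runs` has shape (n_seeds, n_steps); any per-seed scalar trajectory
    (e.g. a leading NTK eigenvalue per SGD step) can be stacked this way."""
    runs = np.asarray(runs, dtype=float)
    mean = runs.mean(axis=0)
    sem = runs.std(axis=0, ddof=1) / np.sqrt(runs.shape[0])
    return mean, mean - sem, mean + sem

# synthetic stand-in: 10 seeds of a noisy increasing trajectory
rng = np.random.default_rng(6)
runs = rng.normal(loc=np.linspace(0.0, 1.0, 50), scale=0.1, size=(10, 50))
mean, lo, hi = sem_band(runs)
```

Overlaying (mean, lo, hi) for finite-width SGD against the DMFT solution is exactly the comparison the referee requested.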
-
Referee: [§5] §5 (large-output toy model): the claim that bulk restructuring rather than outlier escape dominates for extensive output channels is supported only by a qualitative toy model; no scaling relation with output dimension is derived or tested, leaving open whether the reported edge-of-spectrum convergence holds uniformly or only in a restricted regime.
Authors: We will strengthen §5 by deriving the scaling relation for the spectral-edge location as a function of output dimension C and width N. The analysis shows that the edge converges to its infinite-width value whenever C = o(N), which is the regime relevant to ImageNet-scale and language-modeling tasks. We will also add a new figure that numerically tests this scaling across a range of C/N ratios, confirming uniform convergence for sufficiently wide networks. revision: yes
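A minimal surrogate for the proposed C/N test: a Gaussian bulk plus a rank-C perturbation with subcritical per-channel strength, checking that the spectral edge stays near its unperturbed location when C/N is small. The strength theta = 0.5 and the matrix sizes are arbitrary choices, not derived from the paper's toy model.

```python
import numpy as np

def bulk_edge_after_rank_C(N, C, theta=0.5, seed=7):
    """Largest singular value of bulk + rank-C perturbation with subcritical
    per-channel strength theta. For C = o(N) the edge should remain near the
    unperturbed edge at 2; at fixed C/N the bulk restructures and shifts it."""
    rng = np.random.default_rng(seed)
    bulk = rng.standard_normal((N, N)) / np.sqrt(N)
    U, _ = np.linalg.qr(rng.standard_normal((N, C)))   # orthonormal channels
    V, _ = np.linalg.qr(rng.standard_normal((N, C)))
    W = bulk + theta * U @ V.T
    return np.linalg.svd(W, compute_uv=False)[0]

edge_dense = bulk_edge_after_rank_C(N=400, C=100)    # C/N = 0.25
edge_sparse = bulk_edge_after_rank_C(N=1600, C=100)  # C/N = 0.0625
```

Sweeping C/N in a grid of such runs would give a numerical version of the scaling test the authors commit to adding.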
Circularity Check
No significant circularity; the derivation proceeds from DMFT assumptions to explicit dynamics.
full rationale
The paper develops a two-level DMFT from the infinite-width and proportional limits, then derives dynamical equations for bulk and outlier spectra under stated assumptions about spike-bulk dependence. No quoted step reduces a reported prediction to a fitted parameter or self-citation by construction; the central claims about width-consistent outlier evolution and EoS growth follow from solving the closed DMFT equations rather than from re-labeling inputs. Self-citations, if present, are not load-bearing for the uniqueness or closure of the two-level truncation.
Axiom & Free-Parameter Ledger
axioms (3)
- domain assumption Infinite-width limit permits closed dynamical mean-field equations for spectra
- domain assumption Spiked ensemble with spike directions statistically dependent on the random bulk
- domain assumption Proportional high-dimensional limit (width, input dim, sample size diverge with fixed ratios)
Lean theorems connected to this paper
-
IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · reality_from_one_distinction · tag: unclear
The relation between the paper passage and the cited Recognition theorem is unclear.
We develop a two-level dynamical mean-field theory (DMFT) that jointly tracks bulk and outlier spectral dynamics for spiked ensembles whose spike directions remain statistically dependent on the random bulk.
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
The relation between the paper passage and the cited Recognition theorem is unclear.
Result 1: Singular Values of Random Matrices with Statistically Coupled Spikes ... det A(z) = 0
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
-
[2]
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
-
[3]
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
-
[4]
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
-
[5]
Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32, 2019.
-
[6]
Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. Explaining neural scaling laws. arXiv preprint arXiv:2102.06701, 2021.
-
[7]
Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In International Conference on Machine Learning, pages 1024–1034. PMLR, 2020.
-
[8]
Bruno Loureiro, Cedric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mezard, and Lenka Zdeborová. Learning curves of generic features maps for realistic datasets with a teacher-student model. Advances in Neural Information Processing Systems, 34:18137–18151, 2021.
-
[9]
Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. In Conference on Learning Theory, pages 2388–2464. PMLR, 2019.
-
[10]
Mario Geiger, Stefano Spigler, Arthur Jacot, and Matthieu Wyart. Disentangling feature and lazy training in deep neural networks. Journal of Statistical Mechanics: Theory and Experiment, 2020(11):113301, 2020.
-
[11]
Greg Yang and Edward J Hu. Tensor programs IV: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, pages 11727–11737. PMLR, 2021.
-
[12]
Blake Bordelon and Cengiz Pehlevan. Self-consistent dynamical field theory of kernel evolution in wide neural networks. arXiv preprint arXiv:2205.09653, 2022.
-
[13]
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820–14830, 2020.
-
[14]
Nikhil Vyas, Yamini Bansal, and Preetum Nakkiran. Limitations of the NTK for understanding generalization in deep learning. arXiv preprint arXiv:2206.10012, 2022.
-
[15]
Andrea Montanari and Pierfrancesco Urbani. Dynamical decoupling of generalization and overfitting in large two-layer networks. arXiv preprint arXiv:2502.21269, 2025.
-
[16]
Michael W Mahoney. Practice, theory, and theorems for random matrix theory in modern machine learning. 2022.
-
[17]
Liam Hodgkinson, Zhichao Wang, and Michael W. Mahoney. Models of heavy-tailed mechanistic universality. In Forty-second International Conference on Machine Learning, 2025.
-
[18]
Vladimir A Marčenko and Leonid Andreevich Pastur. Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1(4):457–483, 1967.
-
[19]
Marc Potters and Jean-Philippe Bouchaud. A first course in random matrix theory: for physicists, engineers and data scientists. Cambridge University Press, 2020.
-
[20]
Jinho Baik, Gérard Ben Arous, and Sandrine Péché. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. 2005.
-
[21]
Greg Yang, Edward J Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, and Jianfeng Gao. Tensor programs V: Tuning large neural networks via zero-shot hyperparameter transfer. arXiv preprint arXiv:2203.03466, 2022.
-
[22]
Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, and Joel Hestness. Don't be lazy: CompleteP enables compute-efficient deep transformers. arXiv preprint arXiv:2505.01618, 2025.
-
[23]
Chen Xing, Devansh Arpit, Christos Tsirigotis, and Yoshua Bengio. A walk with SGD, 2018.
-
[24]
Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho*, and Krzysztof Geras*. The break-even point on optimization trajectories of deep neural networks. In International Conference on Learning Representations, 2020.
-
[25]
Jeremy M Cohen, Simran Kaur, Yuanzhi Li, J Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. arXiv preprint arXiv:2103.00065, 2021.
-
[26]
Jeremy M. Cohen, B. Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, David E. Cardoze, Zachary Nado, George E. Dahl, and Justin Gilmer. Adaptive gradient methods at the edge of stability. arXiv preprint arXiv:2207.14484, 2022.
-
[27]
Arseniy Andreyev and Pierfrancesco Beneventano. Edge of stochastic stability: Revisiting the edge of stability for SGD, 2025.
-
[28]
Kaiqi Jiang, Jeremy Cohen, and Yuanzhi Li. Understanding the evolution of the neural tangent kernel at the edge of stability. In The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2026.
-
[29]
Nikhil Ghosh, Denny Wu, and Alberto Bietti. Understanding the mechanisms of fast hyperparameter transfer. arXiv preprint arXiv:2512.22768, 2025.
-
[30]
Florent Benaych-Georges and Raj Rao Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices. Journal of Multivariate Analysis, 111:120–135, 2012.
-
[31]
Amelia Perry, Alexander S Wein, Afonso S Bandeira, and Ankur Moitra. Optimality and suboptimality of PCA I: Spiked random matrix models. The Annals of Statistics, 46(5):2416–2451, 2018.
-
[32]
Mireille Capitaine. Limiting eigenvectors of outliers for spiked information-plus-noise type matrices. In Séminaire de Probabilités XLIX, pages 119–164. Springer, 2018.
-
[33]
Jean Barbier, Francesco Camilli, Marco Mondelli, and Manuel Sáenz. Fundamental limits in structured principal component analysis and how to reach them. Proceedings of the National Academy of Sciences, 120(30):e2302028120, 2023.
-
[34]
Niklas Forner, Alexander Maloney, and Bernd Rosenow. BBP phase transition for an extensive number of outliers. arXiv preprint arXiv:2511.18501, 2025.
-
[35]
Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. Advances in Neural Information Processing Systems, 32, 2019.
-
[36]
Blake Woodworth, Suriya Gunasekar, Jason D Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, and Nathan Srebro. Kernel and rich regimes in overparametrized models. In Conference on Learning Theory, pages 3635–3673. PMLR, 2020.
-
[37]
Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, and Cengiz Pehlevan. Feature-learning networks are consistent across widths at realistic scales. arXiv preprint arXiv:2305.18411, 2023.
-
[38]
Haim Sompolinsky and Annette Zippelius. Dynamic theory of the spin-glass phase. Physical Review Letters, 47(5):359, 1981.
-
[39]
C De Dominicis. Dynamics as a substitute for replicas in systems with quenched random impurities. Physical Review B, 18(9):4913, 1978.
-
[40]
Blake Bordelon and Cengiz Pehlevan. Disordered dynamics in high dimensions: Connections to random matrices and machine learning. arXiv preprint arXiv:2601.01010, 2026.
-
[41]
A Crisanti and H Sompolinsky. Path integral approach to random neural networks. Physical Review E, 98(6):062120, 2018.
-
[42]
David G Clark, Blake Bordelon, Jacob A Zavatone-Veth, and Cengiz Pehlevan. Structure, disorder, and dynamics in task-trained recurrent neural circuits. bioRxiv, pages 2026–03, 2026.
-
[43]
Francesca Mignacco, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborová. Dynamical mean-field theory for stochastic gradient descent in gaussian mixture classification. Advances in Neural Information Processing Systems, 33:9540–9550, 2020.
-
[44]
Francesca Mignacco and Pierfrancesco Urbani. The effective noise of stochastic gradient descent. Journal of Statistical Mechanics: Theory and Experiment, 2022(8):083405, 2022.
-
[45]
Cedric Gerbelot, Emanuele Troiani, Francesca Mignacco, Florent Krzakala, and Lenka Zdeborova. Rigorous dynamical mean field theory for stochastic gradient descent methods. arXiv preprint arXiv:2210.06591, 2022.
-
[46]
Blake Bordelon, Alexander Atanasov, and Cengiz Pehlevan. A dynamical model of neural scaling laws. arXiv preprint arXiv:2402.01092, 2024.
-
[47]
Blake Bordelon and Cengiz Pehlevan. Dynamics of finite width kernel and prediction fluctuations in mean field neural networks. arXiv preprint arXiv:2304.03408, 2023.
-
[48]
Blake Bordelon, Hamza Tahir Chaudhry, and Cengiz Pehlevan. Infinite limits of multi-head transformer dynamics. arXiv preprint arXiv:2405.15712, 2024.
-
[49]
Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, and Cengiz Pehlevan. Depthwise hyperparameter transfer in residual networks: Dynamics and scaling limit. In The Twelfth International Conference on Learning Representations, 2024.
-
[50]
Tianze Jiang, Blake Bordelon, Cengiz Pehlevan, and Boris Hanin. Hyperparameter transfer with mixture-of-expert layers. arXiv preprint arXiv:2601.20205, 2026.
-
[51]
Louis-Pierre Chaintron, Lénaïc Chizat, and Javier Maas. Resnets of all shapes and sizes: Convergence of training dynamics in the large-scale limit. arXiv preprint arXiv:2603.18168, 2026.
-
[52]
Jinho Baik and Jack W Silverstein. Eigenvalues of large sample covariance matrices of spiked population models. Journal of Multivariate Analysis, 97(6):1382–1408, 2006.
-
[53]
Charles H Martin and Michael W Mahoney. Heavy-tailed universality predicts trends in test accuracies for very large pre-trained deep neural networks. In Proceedings of the 2020 SIAM International Conference on Data Mining, pages 505–513. SIAM, 2020.
-
[54]
Charles H Martin and Michael W Mahoney. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. Journal of Machine Learning Research, 22(165):1–73, 2021.
-
[55]
Matthias Thamm, Max Staats, and Bernd Rosenow. Random matrix analysis of deep neural network weight matrices. Physical Review E, 106(5):054124, 2022.
-
[56]
Zhichao Wang, Andrew Engel, Anand D Sarwate, Ioana Dumitriu, and Tony Chiang. Spectral evolution and invariance in linear-width neural networks. Advances in Neural Information Processing Systems, 36:20695–20728, 2023.
-
[57]
Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, and Ludovic Stephan. How two-layer neural networks learn, one (giant) step at a time. In NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning, 2023.
-
[58]
Behrad Moniri, Donghwan Lee, Hamed Hassani, and Edgar Dobriban. A theory of nonlinear feature learning with one gradient step in two-layer neural networks. arXiv preprint arXiv:2310.07891, 2023.
-
[59]
Yatin Dandi, Luca Pesce, Hugo Cui, Florent Krzakala, Yue M Lu, and Bruno Loureiro. A random matrix theory perspective on the spectrum of learned features and asymptotic generalization capabilities. arXiv preprint arXiv:2410.18938, 2024.
-
[60]
Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue M Lu, Lenka Zdeborová, and Bruno Loureiro. Asymptotics of feature learning in two-layer networks after one gradient-step. arXiv preprint arXiv:2402.04980, 2024.
-
[61]
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
-
[62]
Daniel Kunin, Allan Raventós, Clémentine Dominé, Feng Chen, David Klindt, Andrew Saxe, and Surya Ganguli. Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. arXiv preprint arXiv:2406.06158, 2024.
-
[63]
Clémentine CJ Dominé, Nicolas Anguita, Alexandra M Proca, Lukas Braun, Daniel Kunin, Pedro AM Mediano, and Andrew M Saxe. From lazy to rich: Exact learning dynamics in deep linear networks. arXiv preprint arXiv:2409.14623, 2024.
-
[64]
Blake Bordelon and Cengiz Pehlevan. Deep linear network training dynamics from random initialization: Data, width, depth, and hyperparameter transfer. arXiv preprint arXiv:2502.02531, 2025.
-
[65]
Alexander B Atanasov, Jacob A Zavatone-Veth, and Cengiz Pehlevan. Scaling and renormalization in high-dimensional regression. arXiv preprint arXiv:2405.00592, 2024.
-
[66]
Greg Yang, Edward J Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, and Jianfeng Gao. Tuning large neural networks via zero-shot hyperparameter transfer. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
-
[67]
David G. Clark and L. F. Abbott. Theory of coupled neuronal-synaptic dynamics. Phys. Rev. X, 14:021001, Apr 2024.
-
[68]
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
-
[69]
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-
[70]
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
-
[71]
Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, Willia...