4 Pith papers cite this work.
Representative citing papers (2026):
-
Deep Learning as Neural Low-Degree Filtering: A Spectral Theory of Hierarchical Feature Learning
Neural LoFi models deep learning as layer-wise spectral filtering that selects maximal low-degree correlations, yielding a tractable surrogate for hierarchical representation learning beyond the lazy regime.
-
Factual recall in linear associative memories: sharp asymptotics and mechanistic insights
Linear associative memories in dimension d store up to p_c associations, where p_c log(p_c) / d^2 = 1/2; optimal weights push each correct score just above the extreme value of the competing outputs.
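The capacity threshold above is implicit in p_c. As a hypothetical illustration (not the authors' code), the critical p_c for a given d can be recovered numerically, here by bisection on the increasing function p ↦ p log p:

```python
import math

def capacity(d: float) -> float:
    """Solve p * log(p) / d^2 = 1/2 for p by bisection.

    Illustrative sketch of the claimed scaling; assumes p > e so that
    p * log(p) is strictly increasing on the search interval.
    """
    target = 0.5 * d * d          # want p * log(p) = d^2 / 2
    lo, hi = math.e, target + 2.0  # bracket: lo*log(lo) < target < hi*log(hi)
    while hi - lo > 1e-9 * hi:
        mid = 0.5 * (lo + hi)
        if mid * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. for d = 100, the critical number of associations:
p_c = capacity(100.0)
```

So capacity grows almost quadratically in d, damped by the logarithmic factor.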
-
Characterization of Gaussian Universality Breakdown in High-Dimensional Empirical Risk Minimization
In high-dimensional convex ERM with non-Gaussian data, the projection of the estimator onto a test covariate asymptotically follows the convolution of a generally non-Gaussian term with an independent centered Gaussian whose variance is the trace of the estimator covariance times the data second moment.
-
High-Dimensional Statistics: Reflections on Progress and Open Problems
A survey synthesizing representative advances, common themes, and open problems in high-dimensional statistics while pointing to key entry-point works.