An analysis of attention via the lens of exchangeability and latent variable models
2 Pith papers cite this work. Polarity classification is still indexing.

Pith papers citing it: 2
Fields: stat.ML
Years: 2026
Verdicts: UNVERDICTED
Representative citing papers: 2
Citing papers explorer
- Learning Theory of Transformers: Local-to-Global Approximation via Softmax Partition of Unity
  A shallow dense Transformer achieves uniform ε-approximation of α-Hölder functions with O(ε^{-d/α}) parameters and near-minimax generalization error O(n^{-2α/(2α+d)} log n); see the worked sketch after this list.
- Spectrum-Adaptive Generalization Bounds for Trained Deep Transformers
  Spectrum-adaptive post-hoc generalization bounds for multi-layer Transformers are derived from layerwise Schatten quantities whose indices are chosen after training, based on each layer's singular-value profile; see the code sketch after this list.
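For the first citing paper, here is a minimal LaTeX sketch of the two rates its summary names. The setup (an α-Hölder target on [0,1]^d, n i.i.d. samples, squared L² risk) is an assumed standard nonparametric framing; the paper's exact constants, norms, and estimator are not reproduced.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumed setting: the target $f^*$ lies in an $\alpha$-H\"older ball on $[0,1]^d$.
\emph{Approximation.} For every $\varepsilon > 0$ there is a shallow dense
Transformer $T_\varepsilon$ with $O(\varepsilon^{-d/\alpha})$ parameters such that
\[
  \sup_{x \in [0,1]^d} \bigl| T_\varepsilon(x) - f^*(x) \bigr| \le \varepsilon .
\]
\emph{Generalization.} With $n$ i.i.d.\ samples, the trained estimator $\hat f_n$
satisfies
\[
  \mathbb{E}\, \bigl\| \hat f_n - f^* \bigr\|_{L^2}^2
  = O\!\bigl( n^{-2\alpha/(2\alpha+d)} \log n \bigr),
\]
matching the minimax rate $n^{-2\alpha/(2\alpha+d)}$ up to the logarithmic factor.
\end{document}
```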
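For the second citing paper, a hedged Python sketch of what a layerwise, spectrum-adaptive Schatten quantity could look like: compute each trained layer's singular values, then choose a Schatten index p for that layer from its singular-value profile. The candidate grid and the selection rule (smallest p whose spectrally normalized Schatten norm fits a fixed budget) are illustrative assumptions, not the paper's criterion.

```python
import numpy as np

def schatten_norm(sigma: np.ndarray, p: float) -> float:
    """Schatten-p norm from singular values: (sum_i sigma_i^p)^(1/p)."""
    return float(np.sum(sigma ** p) ** (1.0 / p))

def adaptive_schatten(weight: np.ndarray,
                      grid=(1.0, 4 / 3, 2.0, 4.0),
                      budget: float = 2.0):
    """Pick a Schatten index for one layer from its singular-value profile.

    Illustrative post-hoc rule (an assumption, not the paper's): take the
    smallest p on the grid whose Schatten-p norm, normalized by the spectral
    norm, stays within the budget. Fast-decaying spectra admit small p;
    heavy-tailed spectra push the choice toward larger p.
    """
    sigma = np.linalg.svd(weight, compute_uv=False)  # descending order
    spectral = sigma[0]
    for p in grid:
        if schatten_norm(sigma, p) / spectral <= budget:
            return p, schatten_norm(sigma, p)
    return grid[-1], schatten_norm(sigma, grid[-1])

# Demo on two hypothetical trained layers with very different spectra.
rng = np.random.default_rng(0)
dense = rng.standard_normal((64, 64))                   # slowly decaying spectrum
u = rng.standard_normal((64, 1))
v = rng.standard_normal((1, 64))
lowrank = u @ v + 0.01 * rng.standard_normal((64, 64))  # near rank-one spectrum
for name, W in [("dense", dense), ("low-rank", lowrank)]:
    p, q = adaptive_schatten(W)
    print(f"{name}: chosen index p = {p:.2f}, Schatten quantity = {q:.2f}")
```

A full spectrum-adaptive bound would then combine these per-layer quantities into a post-hoc complexity term; the sketch only illustrates choosing the index from the singular-value profile after training.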