Neural Networks Reveal a Universal Bias in Conformal Correlators
2 Pith papers cite this work.
Abstract
We propose that simple neural networks (NNs) trained on crossing symmetry can reconstruct conformal correlators restricted to a line to remarkable accuracy. The input is minimal: an external scaling dimension, a spectral gap, and the value of the correlator at a single point. We present evidence across a wide range of conformal theories and dimensions, for both four-point and thermal two-point functions. We attribute these observations to the spectral bias of gradient-based NN training, which appears to align with an intrinsic smoothness property of conformal field theory. This suggests a novel variational principle for conformal correlators and opens a path towards a powerful new computational framework for non-perturbative quantum field theory.
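The abstract's recipe can be illustrated with a toy sketch: fit a function on the line 0 < z < 1 by gradient descent so that it satisfies a crossing-style constraint plus a single-point normalization. This is not the paper's architecture or equations; the crossing relation below is the schematic identical-scalar form, the model is a frozen random-feature "network" trained only in its output weights, and every parameter value (d, z0, g0, K, learning rate) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs (names and values are assumptions, not the paper's):
d = 0.5            # external scaling dimension
z0, g0 = 0.5, 2.0  # value of the correlator at a single point

# Random-feature "network": g(z) = w . tanh(a*z + c), with a, c frozen
# so the loss is quadratic in the trainable weights w.
K = 64
a = rng.normal(scale=4.0, size=K)
c = rng.normal(scale=2.0, size=K)

def features(z):
    """Feature matrix of shape (len(z), K)."""
    return np.tanh(np.outer(z, a) + c)

# Crossing-style residual on a grid of the line, schematically
#   (1-z)^(2d) g(z) - z^(2d) g(1-z) = 0.
z = np.linspace(0.05, 0.95, 181)
A = ((1 - z) ** (2 * d))[:, None] * features(z) \
    - (z ** (2 * d))[:, None] * features(1 - z)
f0 = features(np.array([z0]))[0]  # features at the normalization point

# Plain gradient descent on: mean crossing residual^2 + lam*(g(z0)-g0)^2.
# Starting from w = 0, the normalization term forbids the trivial solution.
w = np.zeros(K)
lr, lam = 1e-3, 1.0
for _ in range(5000):
    grad = 2 * A.T @ (A @ w) / len(z) + 2 * lam * (f0 @ w - g0) * f0
    w -= lr * grad

print("g(z0) =", float(f0 @ w))
print("mean crossing residual^2 =", float(np.mean((A @ w) ** 2)))
```

The single-point value is what pins down the answer: without it, w = 0 solves the crossing constraint exactly. The paper's spectral-bias observation would correspond to gradient descent preferring the smooth, low-residual directions of this quadratic landscape.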
Fields: hep-th
Years: 2026
Verdicts: unverdicted
Representative citing papers: 2
Citing papers
- Descending into the Modular Bootstrap: Machine-learning optimization produces candidate truncated modular-invariant partition functions for 2d CFTs in the central-charge window 1 to 8/7, indicating a continuous solution space and a stricter spectral-gap bound than the prior c/6 + 1/3 limit.
- Neural Networks, Dispersion Relations and the Thermal Bootstrap: A neural-network approach with dispersion relations handles infinite OPE towers in thermal conformal correlators without positivity.