pith. machine review for the scientific record

arxiv: 2604.18686 · v1 · submitted 2026-04-20 · ✦ hep-th

Recognition: unknown

Neural Spectral Bias and Conformal Correlators I: Introduction and Applications

Andreas Stergiou, Kausik Ghosh, Sidhaarth Kumar, Vasilis Niarchos

Pith reviewed 2026-05-10 04:00 UTC · model grok-4.3

classification ✦ hep-th
keywords neural networks · conformal correlators · crossing symmetry · spectral bias · CFT bootstrap · 3d Ising model · minimal models · AdS Witten diagrams

The pith

Neural networks can reconstruct physical conformal correlators to within a few percent by optimizing only on crossing symmetry plus two scalar inputs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that simple feed-forward neural networks, when trained solely to enforce crossing symmetry, recover accurate conformal field theory correlators even when supplied with nothing more than the scaling dimension of the leading operator and the function value at one anchor point. This minimal-data procedure succeeds across generalized free fields, AdS Witten diagrams, 2d minimal models, the 3d Ising model, 4d super-Yang-Mills half-BPS correlators, and several thermal two-point functions. The authors trace the success to the spectral bias of gradient descent, which strongly favors smooth functions, and they quantify the smoothness of conformal correlators through Sobolev norms, Chebyshev expansions, and curvature measures. If the approach generalizes, it supplies a lightweight computational route to CFT data without solving the full bootstrap system.

Core claim

By optimizing a simple feed-forward neural network solely on the crossing symmetry condition, and supplying it with only the scaling dimension of the leading non-trivial operator and the correlator's value at a single anchor point, the network reconstructs target physical conformal correlators to within a few percent accuracy across a broad class of theories and dimensions.
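The training objective can be illustrated with a toy version of the crossing condition on the line. For a four-point function g(z) of identical scalars of dimension Δφ, crossing requires (1−z)^(2Δφ) g(z) = z^(2Δφ) g(1−z). A minimal sketch of the loss, using the generalized free field correlator as a stand-in for the network output (the paper's actual architecture, sampling grid, and loss weighting are not reproduced here):

```python
import numpy as np

def crossing_loss(g, delta_phi, z):
    """Mean-squared violation of (1-z)^(2*d) g(z) = z^(2*d) g(1-z)."""
    lhs = (1 - z) ** (2 * delta_phi) * g(z)
    rhs = z ** (2 * delta_phi) * g(1 - z)
    return np.mean((lhs - rhs) ** 2)

def anchor_loss(g, z0, g0):
    """Penalty pinning the correlator to a supplied value at one anchor point."""
    return (g(z0) - g0) ** 2

# Generalized free field correlator on the line: crossing-symmetric by construction.
delta_phi = 0.75
gff = lambda z: 1 + z ** (2 * delta_phi) + (z / (1 - z)) ** (2 * delta_phi)

z = np.linspace(0.05, 0.95, 181)
print(crossing_loss(gff, delta_phi, z))   # ~0: the physical correlator solves crossing
print(anchor_loss(gff, 0.5, gff(0.5)))    # 0 when the anchor value matches
```

In the paper's setup the network replaces `gff`, and gradient descent drives both loss terms toward zero; the sketch only verifies that a known physical correlator sits at the loss minimum.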

What carries the argument

The spectral bias of gradient-based neural-network training, which preferentially learns smooth functions that match the smoothness properties of conformal correlators.

If this is right

  • The same minimal-input procedure works for contact and one-loop Witten diagrams in AdS2, unitary and non-unitary 2d minimal models, half-BPS correlators in 4d N=4 SYM, and thermal two-point functions.
  • The method extends beyond diagonal kinematics on the line.
  • No additional data or assumptions beyond crossing symmetry and the two scalars are required for the reported accuracy.
  • Smoothness analysis via fractional Sobolev semi-norms and Chebyshev decompositions underpins why the networks succeed.
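The smoothness diagnostics can be mimicked numerically: for a function analytic on an interval, Chebyshev coefficients decay geometrically, which is the signature the paper associates with learnable correlators. A sketch using the generalized free field correlator on a sub-interval of (0, 1) (the interval and fit degree are illustrative choices, not taken from the paper):

```python
import numpy as np

delta_phi = 0.75
gff = lambda z: 1 + z ** (2 * delta_phi) + (z / (1 - z)) ** (2 * delta_phi)

# Map [0.2, 0.8] onto the Chebyshev domain [-1, 1] and fit.
a_lo, a_hi = 0.2, 0.8
x = np.cos(np.pi * np.arange(200) / 199)   # clustered nodes on [-1, 1]
z = a_lo + (a_hi - a_lo) * (x + 1) / 2
coeffs = np.polynomial.chebyshev.chebfit(x, gff(z), deg=30)

decay = np.abs(coeffs)
print(decay[:8])  # geometric fall-off: the correlator is "low-frequency"
```

The singularities of the correlator at z = 0 and z = 1 lie outside the mapped interval, so the coefficients fall off geometrically; high-degree coefficients are negligible compared to the leading ones.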

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If spectral bias is the operative mechanism, analogous networks could be applied to other symmetry-constrained problems whose solutions are known to be smooth.
  • Hybrid schemes that combine the network optimization with a small number of traditional bootstrap equations might further improve accuracy or reach higher-point functions.
  • Testing the same protocol on correlators in higher dimensions or with multiple exchanged operators would clarify the boundary of its applicability.

Load-bearing premise

That the combination of crossing symmetry plus the two supplied scalars selects the unique physical correlator among all smooth functions that satisfy the same constraints.

What would settle it

For the 3d Ising model, where an independent bootstrap result is known, run the neural-network optimization with the leading-operator dimension and one anchor value; a reconstruction error substantially larger than a few percent would falsify the claim.
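The pass/fail criterion can be made concrete with the error metric used in this review (maximum pointwise relative deviation over the sampled cross-ratio grid); the arrays below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

def max_rel_deviation(reconstructed, target):
    """Maximum pointwise relative deviation over the sampled grid."""
    return np.max(np.abs(reconstructed - target) / np.abs(target))

# Hypothetical stand-ins for network output vs. an independent bootstrap benchmark.
z = np.linspace(0.1, 0.9, 9)
target = 1 + z ** 2
reconstructed = target * (1 + 0.01 * np.sin(5 * z))  # a 1% modulation

err = max_rel_deviation(reconstructed, target)
print(err < 0.05)  # within "few percent": the claim survives this check
```

A reconstruction with `err` substantially above a few percent on the 3d Ising benchmark would falsify the core claim by this metric.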

Figures

Figures reproduced from arXiv: 2604.18686 by Andreas Stergiou, Kausik Ghosh, Sidhaarth Kumar, Vasilis Niarchos.

Figures 1–39 are full-page image reproductions; see the source listing for the images.
read the original abstract

We demonstrate that simple feed-forward neural networks (NNs) can accurately compute correlation functions of conformal field theories (CFTs) on a line. Strikingly, by optimising a NN solely on crossing symmetry and providing only the scaling dimension of the leading non-trivial operator and the correlator's value at a single "anchor point", we can reconstruct target physical correlators to within a few percent. We establish the robustness of this minimal-data approach across a broad class of theories and dimensions, including generalised free fields, contact and one-loop Witten diagrams in AdS$_2$, unitary and non-unitary 2d minimal models, the 3d Ising model, and half-BPS correlators in 4d $\mathcal{N}=4$ super-Yang-Mills theory, together with several thermal two-point functions, notably including those of the 3d Ising model. We argue that this remarkable alignment between NNs and CFTs stems from the spectral bias of gradient-based training, which heavily favours smooth functions. To ground this connection, we analyse the smoothness of conformal correlators using fractional Sobolev semi-norms, Chebyshev spectral decompositions, and a measure based on curvature. Finally, we establish the broader reconstructive power of this technique by extending it beyond the diagonal kinematics of the line.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript demonstrates that simple feed-forward neural networks can reconstruct conformal correlators by optimizing solely against crossing symmetry, using as input only the scaling dimension of the leading non-trivial operator and the correlator value at a single anchor point. It reports few-percent accuracy on generalized free fields, contact and one-loop Witten diagrams in AdS2, 2d minimal models, the 3d Ising model, half-BPS correlators in 4d N=4 SYM, and several thermal two-point functions, and extends the method beyond diagonal kinematics. The authors attribute the performance to the spectral bias of gradient descent toward smooth functions and support this with analyses based on fractional Sobolev semi-norms, Chebyshev decompositions, and curvature measures.

Significance. If the reconstruction is shown to systematically select the physical solution, the approach would provide a low-data, architecture-agnostic tool for computing CFT correlators that could complement bootstrap methods and AdS/CFT calculations. The breadth of examples tested and the independent mathematical characterization of correlator smoothness are genuine strengths that ground the connection to spectral bias.

major comments (3)
  1. [§4] §4 (Optimization and loss): Crossing symmetry is a linear functional equation on the cross-ratio. With only the leading Δ and one anchor point supplied, the manuscript provides no analysis of the dimension of the solution space or of whether other smooth functions satisfying the same constraints to within a few percent exist. No controls (multiple random initializations, comparison against deliberately constructed non-physical smooth solutions, or null-space probes) are reported to establish that gradient descent converges to the physical correlator rather than another admissible function.
  2. [§5] §5 (Numerical results): The reported accuracies are stated as “few percent” without error bars, statistics over independent training runs, or convergence diagnostics. For the 3d Ising and N=4 SYM cases in particular, it is unclear whether the network has captured the full operator spectrum or only the leading contributions within the tested cross-ratio range.
  3. [§6] §6 (Smoothness analysis): While Sobolev norms and Chebyshev coefficients demonstrate that physical correlators are smooth, the manuscript does not quantify how this bias, combined with the minimal constraints, excludes other smooth crossing-symmetric functions. A concrete test (e.g., injecting a known non-physical smooth solution and checking whether it is rejected) is missing.
minor comments (2)
  1. [§3] The choice and placement of the single anchor point are not varied systematically; a short robustness check with respect to anchor location would strengthen the minimal-data claim.
  2. [Figures] Figure captions and axis labels in the results section would benefit from explicit statements of the cross-ratio range and the precise definition of the reported error metric.

Simulated Authors' Rebuttal

3 responses · 1 unresolved

We thank the referee for the careful reading and constructive comments on our manuscript. The points raised regarding solution uniqueness, numerical robustness, and the quantitative link between spectral bias and constraint satisfaction are well-taken. We address each major comment below, indicating where revisions will be made to strengthen the presentation while defending the core claims on the basis of the empirical evidence already provided.

read point-by-point responses
  1. Referee: [§4] §4 (Optimization and loss): Crossing symmetry is a linear functional equation on the cross-ratio. With only the leading Δ and one anchor point supplied, the manuscript provides no analysis of the dimension of the solution space or of whether other smooth functions satisfying the same constraints to within a few percent exist. No controls (multiple random initializations, comparison against deliberately constructed non-physical smooth solutions, or null-space probes) are reported to establish that gradient descent converges to the physical correlator rather than another admissible function.

    Authors: We agree that an explicit analysis of the dimension of the space of admissible smooth functions is absent from the current manuscript. Our defense rests on the consistent empirical recovery of the known physical correlators across many independent examples (GFFs, minimal models, Ising, N=4 SYM, AdS diagrams), which would be statistically unlikely if gradient descent were routinely selecting unrelated smooth solutions. To address the referee’s concern directly, the revised manuscript will report: (i) statistics over multiple random initializations showing convergence to the same function within the quoted accuracy; (ii) an explicit attempt to optimize a deliberately constructed non-physical smooth crossing-symmetric function that matches the supplied Δ and anchor point, demonstrating that the optimization either fails to converge or produces a visibly different correlator. A rigorous computation of the full null-space dimension lies beyond the scope of the present work and would require new functional-analytic tools. revision: partial

  2. Referee: [§5] §5 (Numerical results): The reported accuracies are stated as “few percent” without error bars, statistics over independent training runs, or convergence diagnostics. For the 3d Ising and N=4 SYM cases in particular, it is unclear whether the network has captured the full operator spectrum or only the leading contributions within the tested cross-ratio range.

    Authors: The phrase “few percent” denotes the maximum pointwise relative deviation between the reconstructed and target correlators over the sampled cross-ratio interval. We will add error bars derived from independent training runs, loss-convergence histories, and a table of run-to-run variance in the revised version. On the spectrum question, the network outputs the full correlator function; any OPE data (including sub-leading operators) is implicitly encoded in that function. For the 3d Ising and N=4 SYM examples the agreement to a few percent in the tested kinematic window is consistent with the dominance of the leading operators in that range, as expected from the known spectra. We will clarify this point and, where feasible, extract and compare the leading OPE coefficients obtained from the reconstructed correlator. revision: yes

  3. Referee: [§6] §6 (Smoothness analysis): While Sobolev norms and Chebyshev coefficients demonstrate that physical correlators are smooth, the manuscript does not quantify how this bias, combined with the minimal constraints, excludes other smooth crossing-symmetric functions. A concrete test (e.g., injecting a known non-physical smooth solution and checking whether it is rejected) is missing.

    Authors: The Sobolev and Chebyshev analyses establish that physical correlators lie in the low-frequency regime favored by gradient descent. To quantify the exclusion of alternatives, the revised manuscript will include a concrete numerical test: we construct a smooth but non-physical function that satisfies crossing symmetry together with the supplied leading Δ and anchor-point value, then show that the same optimization procedure either diverges or converges to a visibly different correlator with substantially higher loss. This test will provide direct evidence that the combination of spectral bias and the minimal constraints is sufficient to reject at least some non-physical smooth solutions. revision: yes
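The proposed comparator is straightforward to construct: symmetrizing any smooth seed h over the crossing transformation yields a crossing-symmetric function, F(z) = h(z) + (z/(1−z))^(2Δφ) h(1−z), which can then be rescaled to match the anchor value. A minimal sketch (the seed h here is arbitrary and hypothetical, not the paper's construction):

```python
import numpy as np

delta_phi = 0.75

def symmetrize(h, z):
    """Crossing-symmetric F(z) = h(z) + (z/(1-z))^(2*d) h(1-z) from any seed h."""
    return h(z) + (z / (1 - z)) ** (2 * delta_phi) * h(1 - z)

# Arbitrary smooth seed: non-physical, yet crossing-symmetric after symmetrization.
h = lambda z: np.exp(-3 * z) + 0.5 * z ** 2

z = np.linspace(0.05, 0.95, 181)
lhs = (1 - z) ** (2 * delta_phi) * symmetrize(h, z)
rhs = z ** (2 * delta_phi) * symmetrize(h, 1 - z)
print(np.max(np.abs(lhs - rhs)))  # ~0: crossing holds for any seed h
```

This shows the space of smooth crossing-symmetric functions is large, which is precisely why the injection test — checking whether optimization rejects such comparators — is informative.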

standing simulated objections not resolved
  • A complete mathematical determination of the dimension of the space of smooth crossing-symmetric functions compatible with the minimal input data (leading Δ and one anchor point).

Circularity Check

0 steps flagged

Optimization on crossing symmetry with minimal inputs is empirical and self-contained

full rationale

The paper's central result is obtained by direct numerical optimization of a feed-forward NN to minimize a crossing-symmetry loss, supplied only with the leading operator dimension Δ and the correlator value at one anchor point. This produces an approximation to known physical correlators (Ising, minimal models, Witten diagrams, etc.) to within a few percent. The reconstruction is not equivalent to the inputs by construction; crossing symmetry is a linear functional constraint whose solution space is under-determined, and the NN's output is selected by gradient descent plus spectral bias toward smooth functions. The separate smoothness analysis (fractional Sobolev semi-norms, Chebyshev coefficients, curvature measures) is performed with independent mathematical tools and does not presuppose the target correlators. No load-bearing step reduces to a self-citation, fitted parameter renamed as prediction, or ansatz smuggled from prior work. The method is externally validated against standard CFT benchmarks and is therefore self-contained.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

The central claim rests on the empirical observation that neural networks converge to physical correlators when trained on crossing symmetry with two scalar inputs, plus the domain assumption that CFT correlators are sufficiently smooth for spectral bias to select them.

free parameters (1)
  • neural network architecture and hyperparameters
    Layer count, width, activation functions, and optimizer settings are chosen to enable convergence but are not derived from first principles.
axioms (1)
  • domain assumption Conformal correlators on the line are smooth functions that satisfy crossing symmetry and can be uniquely determined by the leading operator dimension plus one anchor value.
    Invoked throughout the training procedure and the claim of reconstruction to few-percent accuracy.

pith-pipeline@v0.9.0 · 5544 in / 1418 out tokens · 43554 ms · 2026-05-10T04:00:06.958547+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Descending into the Modular Bootstrap

    hep-th 2026-04 unverdicted novelty 7.0

    Machine-learning optimization produces candidate truncated modular-invariant partition functions for 2d CFTs in the central-charge window 1 to 8/7, indicating a continuous solution space and a stricter spectral-gap bo...

Reference graph

Works this paper leans on

67 extracted references · 57 canonical work pages · cited by 1 Pith paper · 3 internal anchors

  1. [1]

    Conformal invariance on the light cone and canonical dimensions

    S. Ferrara, R. Gatto & A. F. Grillo,“Conformal invariance on the light cone and canonical dimensions”, Nucl. Phys. B34, 349 (1971)

  2. [2]

    Nonhamiltonian approach to conformal quantum field theory

    A. M. Polyakov,“Nonhamiltonian approach to conformal quantum field theory”, Zh. Eksp. Teor. Fiz.66, 23 (1974)

  3. [3]

    Bounding scalar operator dimensions in 4D CFT

    R. Rattazzi, V. S. Rychkov, E. Tonni & A. Vichi,“Bounding scalar operator dimensions in 4D CFT”, JHEP0812, 031 (2008),arXiv:0807.0004 [hep-th]

  4. [4]

    The Conformal Bootstrap: Theory, Numerical Techniques, and Applications

    D. Poland, S. Rychkov & A. Vichi,“The Conformal Bootstrap: Theory, Numerical Techniques, and Applications”, Rev. Mod. Phys.91, 015002 (2019),arXiv:1805.04405 [hep-th]

  5. [5]

    Conformal four point functions and the operator product expansion

    F. A. Dolan & H. Osborn,“Conformal four point functions and the operator product expansion”, Nucl. Phys. B599, 459 (2001),hep-th/0011040

  6. [6]

    Conformal partial waves and the operator product expansion

    F. A. Dolan & H. Osborn,“Conformal partial waves and the operator product expansion”, Nucl. Phys. B678, 491 (2004),hep-th/0309180

  7. [7]

    Non-gaussianity of the critical 3d Ising model

    S. Rychkov, D. Simmons-Duffin & B. Zan,“Non-gaussianity of the critical 3d Ising model”, SciPost Phys.2, 001 (2017),arXiv:1612.02436 [hep-th]

  8. [8]

    Radial Coordinates for Conformal Blocks

    M. Hogervorst & S. Rychkov,“Radial Coordinates for Conformal Blocks”, Phys. Rev. D87, 106004 (2013),arXiv:1303.1111 [hep-th]

  9. [9]

    Analytic bounds and emergence of AdS2 physics from the conformal bootstrap

    D. Mazac,“Analytic bounds and emergence of AdS2 physics from the conformal bootstrap”, JHEP 1704, 146 (2017),arXiv:1611.10060 [hep-th]

  10. [10]

    The analytic functional bootstrap. Part I: 1D CFTs and 2D S-matrices

    D. Mazac & M. F. Paulos,“The analytic functional bootstrap. Part I: 1D CFTs and 2D S-matrices”, JHEP1902, 162 (2019),arXiv:1803.10233 [hep-th]

  11. [11]

    Charging up the functional bootstrap

    K. Ghosh, A. Kaviraj & M. F. Paulos,“Charging up the functional bootstrap”, JHEP2110, 116 (2021),arXiv:2107.00041 [hep-th]

  12. [12]

    Localized magnetic field in the O(N) model

    G. Cuomo, Z. Komargodski & M. Mezei,“Localized magnetic field in the O(N) model”, JHEP 2202, 134 (2022),arXiv:2112.10634 [hep-th]

  13. [13]

    Renormalization Group Flows on Line Defects

    G. Cuomo, Z. Komargodski & A. Raviv-Moshe,“Renormalization Group Flows on Line Defects”, Phys. Rev. Lett.128, 021603 (2022),arXiv:2108.01117 [hep-th]

  14. [14]

    Line defect RG flows in the ε expansion

    W. H. Pannell & A. Stergiou,“Line defect RG flows in the ε expansion”, JHEP2306, 186 (2023), arXiv:2302.14069 [hep-th]

  15. [15]

    A strong-weak duality for the 1d long-range Ising model

    D. Benedetti, E. Lauria, D. Mazac & P. van Vliet,“A strong-weak duality for the 1d long-range Ising model”, SciPost Phys.20, 029 (2026),arXiv:2509.05250 [hep-th]

  16. [16]

    Super Sum rules for Long-Range Models

    K. Ghosh, M. F. Paulos, N. Suchel & Z. Zheng,“Super Sum rules for Long-Range Models”, arXiv:2603.22395 [hep-th]

  17. [17]

    A study of quantum field theories in AdS at finite coupling,

    D. Carmi, L. Di Pietro & S. Komatsu,“A Study of Quantum Field Theories in AdS at Finite Coupling”, JHEP1901, 200 (2019),arXiv:1810.04185 [hep-th]

  18. [18]

    Adam: A Method for Stochastic Optimization

    D. P. Kingma & J. Ba,“Adam: A Method for Stochastic Optimization”,arXiv:1412.6980 [cs.LG], in“3rd International Conference on Learning Representations (ICLR 2015)”

  19. [19]

    Deep finite temperature bootstrap

    V. Niarchos, C. Papageorgakis, A. Stratoudakis & M. Woolley,“Deep finite temperature bootstrap”, Phys. Rev. D112, 126012 (2025),arXiv:2508.08560 [hep-th]

  20. [20]

    On the Spectral Bias of Neural Networks

    N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. A. Hamprecht, Y. Bengio & A. Courville,“On the Spectral Bias of Neural Networks”,arXiv:1806.08734 [stat.ML], in “Proceedings of the 36th International Conference on Machine Learning”, p. 5301–5310

  21. [21]

    Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks

    Z.-Q. J. Xu, Y. Zhang, T. Luo, Y. Xiao & Z. Ma,“Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks”, Communications in Computational Physics28, 1746 (2020), arXiv:1901.06523 [cs.LG]

  22. [22]

    Theory of the Frequency Principle for General Deep Neural Networks

    T. Luo, Z. Ma, Z.-Q. J. Xu & Y. Zhang,“Theory of the Frequency Principle for General Deep Neural Networks”, CSIAM Transactions on Applied Mathematics2, 484 (2021),arXiv:1906.09235 [cs.LG]

  23. [23]

    Neural tangent kernel: Convergence and generalization in neural networks

    A. Jacot, F. Gabriel & C. Hongler,“Neural Tangent Kernel: Convergence and Generalization in Neural Networks”,arXiv:1806.07572 [cs.LG], in“Advances in Neural Information Processing Systems 31”, p. 8571–8580

  24. [24]

    Thermal Bootstrap for the Critical O(N) Model

    J. Barrat, E. Marchetto, A. Miscioscia & E. Pomoni,“Thermal Bootstrap for the Critical O(N) Model”, Phys. Rev. Lett.134, 211604 (2025),arXiv:2411.00978 [hep-th]

  25. [25]

    The analytic bootstrap at finite temperature

    J. Barrat, D. N. Bozkurt, E. Marchetto, A. Miscioscia & E. Pomoni,“The analytic bootstrap at finite temperature”,arXiv:2506.06422 [hep-th]

  26. [26]

    Neural Spectral Bias and Conformal Correlators II: Modular and annulus bootstrap

    K. Ghosh, S. Kumar, V. Niarchos & A. Stergiou,“Neural Spectral Bias and Conformal Correlators II: Modular and annulus bootstrap”, In preparation

  27. [27]

    Neural Spectral Bias and Conformal Correlators III: Bootstrability and holomorphic bootstrap

    K. Ghosh, S. Kumar, V. Niarchos & A. Stergiou,“Neural Spectral Bias and Conformal Correlators III: Bootstrability and holomorphic bootstrap”, In preparation

  28. [28]

    Neural Networks Reveal a Universal Bias in Conformal Correlators

    K. Ghosh, S. Kumar, V. Niarchos & A. Stergiou,“Neural Networks Reveal a Universal Bias in Conformal Correlators”, In preparation

  29. [29]

    Solving Conformal Field Theories with Artificial Intelligence

    G. Kántor, V. Niarchos & C. Papageorgakis,“Solving Conformal Field Theories with Artificial Intelligence”, Phys. Rev. Lett.128, 041601 (2022),arXiv:2108.08859 [hep-th]

  30. [30]

    Conformal bootstrap with reinforcement learning

    G. Kántor, V. Niarchos & C. Papageorgakis,“Conformal bootstrap with reinforcement learning”, Phys. Rev. D105, 025018 (2022),arXiv:2108.09330 [hep-th]

  31. [31]

    6D (2,0) bootstrap with the soft-actor-critic algorithm

    G. Kántor, V. Niarchos, C. Papageorgakis & P. Richmond,“6D (2,0) bootstrap with the soft-actor-critic algorithm”, Phys. Rev. D107, 025005 (2023),arXiv:2209.02801 [hep-th]

  32. [32]

    Bootstrability in line-defect CFTs with improved truncation methods

    V. Niarchos, C. Papageorgakis, P. Richmond, A. G. Stapleton & M. Woolley,“Bootstrability in line-defect CFTs with improved truncation methods”, Phys. Rev. D108, 105027 (2023), arXiv:2306.15730 [hep-th]

  33. [33]

    Conformal fields from neural networks

    J. Halverson, J. Naskar & J. Tian,“Conformal fields from neural networks”, JHEP2510, 039 (2025),arXiv:2409.12222 [hep-th]

  34. [34]

    Monte Carlo approach to the conformal bootstrap

    A. Laio, U. L. Valenzuela & M. Serone,“Monte Carlo approach to the conformal bootstrap”, Phys. Rev. D106, 025019 (2022),arXiv:2206.05193 [hep-th]

  35. [35]

    Bootstrapping non-unitary CFTs

    Y.-t. Huang, S.-C. Lee, H. Liao & J. Rumbutis,“Bootstrapping non-unitary CFTs”, arXiv:2512.07706 [hep-th]

  36. [36]

    Descending into the Modular Bootstrap

    N. Benjamin, A. L. Fitzpatrick, W. Li & J. Thaler,“Descending into the Modular Bootstrap”, arXiv:2604.01275 [hep-th]

  37. [37]

    On the Inductive Bias of Neural Tangent Kernels

    A. Bietti & J. Mairal,“On the Inductive Bias of Neural Tangent Kernels”,arXiv:1905.12173 [stat.ML], in“Advances in Neural Information Processing Systems 32”, p. 12897–12908

  38. [38]

    The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies

    R. Basri, D. W. Jacobs, Y. Kasten & S. Kritchman,“The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies”,arXiv:1906.00425 [cs.LG]

  39. [39]

    Towards Understanding the Spectral Bias of Deep Learning

    Y. Cao, Z. Fang, Y. Wu, D. Zhou & Q. Gu,“Towards Understanding the Spectral Bias of Deep Learning”,arXiv:1912.01198 [cs.LG]

  40. [40]

    On the Similarity between the Laplace and Neural Tangent Kernels

    A. Geifman, A. Yadav, Y. Kasten, M. Galun, D. Jacobs & R. Basri,“On the Similarity between the Laplace and Neural Tangent Kernels”,arXiv:2007.01580 [cs.LG], in“Advances in Neural Information Processing Systems 33”

  41. [41]

    Characterizing the Spectrum of the NTK via a Power Series Expansion

    M. Murray, H. Jin, B. Bowman & G. Montufar,“Characterizing the Spectrum of the NTK via a Power Series Expansion”,arXiv:2211.07844 [cs.LG]

  42. [42]

    Eigenvalue Decay of the NTK: When and How Fast?

    Z. Li, Y. Wei & S. Huang,“Eigenvalue Decay of the NTK: When and How Fast?”, arXiv:2405.17556 [cs.LG]

  43. [43]

    Characterizing Implicit Bias in Terms of Optimization Geometry

    S. Gunasekar, J. Lee, D. Soudry & N. Srebro,“Characterizing Implicit Bias in Terms of Optimization Geometry”, in“Proceedings of the 35th International Conference on Machine Learning”, p. 1832–1841, PMLR (2018)

  44. [44]

    Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks

    S. Arora, S. S. Du, W. Hu, Z. Li & R. Wang,“Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks”, in“Proceedings of the 36th International Conference on Machine Learning”, p. 322–332, PMLR (2019)

  45. [45]

    The analytic functional bootstrap. Part II. Natural bases for the crossing equation,

    D. Mazac & M. F. Paulos,“The analytic functional bootstrap. Part II. Natural bases for the crossing equation”, JHEP1902, 163 (2019),arXiv:1811.10646 [hep-th]

  46. [46]

    Crossing symmetry, transcendentality and the Regge behaviour of 1d CFTs

    P. Ferrero, K. Ghosh, A. Sinha & A. Zahed,“Crossing symmetry, transcendentality and the Regge behaviour of 1d CFTs”, JHEP2007, 170 (2020),arXiv:1911.12388 [hep-th]

  47. [47]

    Bootstrapping the O(N) vector models

    F. Kos, D. Poland & D. Simmons-Duffin,“Bootstrapping the O(N) vector models”, JHEP1406, 091 (2014),arXiv:1307.6856 [hep-th]

  48. [48]

    Precision Islands in the Ising and O(N) Models

    F. Kos, D. Poland, D. Simmons-Duffin & A. Vichi,“Precision Islands in the Ising and O(N) Models”, JHEP1608, 036 (2016),arXiv:1603.04436 [hep-th]

  49. [49]

    The Lightcone Bootstrap and the Spectrum of the 3d Ising CFT

    D. Simmons-Duffin,“The Lightcone Bootstrap and the Spectrum of the 3d Ising CFT”, JHEP 1703, 086 (2017),arXiv:1612.08471 [hep-th]

  50. [50]

    Uncovering Conformal Symmetry in the 3D Ising Transition: State-Operator Correspondence from a Quantum Fuzzy Sphere Regularization

    W. Zhu, C. Han, E. Huffman, J. S. Hofmann & Y.-C. He,“Uncovering Conformal Symmetry in the 3D Ising Transition: State-Operator Correspondence from a Quantum Fuzzy Sphere Regularization”, Phys. Rev. X13, 021009 (2023),arXiv:2210.13482 [cond-mat.stat-mech]

  51. [51]

    Conformal four-point correlators of the 3D Ising transition via the quantum fuzzy sphere

    C. Han, L. Hu, W. Zhu & Y.-C. He,“Conformal four-point correlators of the three-dimensional Ising transition via the quantum fuzzy sphere”, Phys. Rev. B108, 235123 (2023), arXiv:2306.04681 [cond-mat.stat-mech]

  52. [52]

    Density matrix formulation for quantum renormalization groups

    S. R. White,“Density matrix formulation for quantum renormalization groups”, Phys. Rev. Lett. 69, 2863 (1992)

  53. [53]

    Dispersion Relation for CFT Four-Point Functions

    A. Bissi, P. Dey & T. Hansen,“Dispersion Relation for CFT Four-Point Functions”, JHEP 2004, 092 (2020),arXiv:1910.04661 [hep-th]

  54. [54]

    Analytic bootstrap of mixed correlators in the O(n) CFT

    F. Bertucci, J. Henriksson & B. McPeak,“Analytic bootstrap of mixed correlators in the O(n) CFT”, JHEP2210, 104 (2022),arXiv:2205.09132 [hep-th]

  55. [55]

    On four-point functions of 1/2-BPS operators in general dimensions

    F. A. Dolan, L. Gallot & E. Sokatchev,“On four-point functions of 1/2-BPS operators in general dimensions”, JHEP0409, 056 (2004),hep-th/0405180

  56. [56]

    Superconformal Ward identities and their solution

    M. Nirschl & H. Osborn,“Superconformal Ward identities and their solution”, Nucl. Phys. B 711, 409 (2005),hep-th/0407060

  57. [57]

    Superconformal symmetry, correlation functions and the operator product expansion

    F. A. Dolan & H. Osborn,“Superconformal symmetry, correlation functions and the operator product expansion”, Nucl. Phys. B629, 3 (2002),hep-th/0112251

  58. [58]

    Lessons from crossing symmetry at large N

    L. F. Alday, A. Bissi & T. Lukowski,“Lessons from crossing symmetry at large N”, JHEP1506, 074 (2015),arXiv:1410.4717 [hep-th]

  59. [59]

    The conformal bootstrap at finite temperature,

    L. Iliesiu, M. Kologlu, R. Mahajan, E. Perlmutter & D. Simmons-Duffin,“The Conformal Bootstrap at Finite Temperature”, JHEP1810, 070 (2018),arXiv:1802.10266 [hep-th]

  60. [60]

    Dynamics of Finite-Temperature Conformal Field Theories from Operator Product Expansion Inversion Formulas

    A. C. Petkou & A. Stergiou,“Dynamics of Finite-Temperature Conformal Field Theories from Operator Product Expansion Inversion Formulas”, Phys. Rev. Lett.121, 071602 (2018), arXiv:1806.02340 [hep-th]

  61. [61]

    Dispersion relations and exact bounds on CFT correlators

    M. F. Paulos,“Dispersion relations and exact bounds on CFT correlators”, JHEP2108, 166 (2021),arXiv:2012.10454 [hep-th]

  62. [62]

    King’s Computational Research, Engineering and Technology Environment (CREATE)

    King’s College London,“King’s Computational Research, Engineering and Technology Environment (CREATE)”,https://doi.org/10.18742/rnvf-m076

  63. [63]

    Sobolev Spaces

    R. A. Adams & J. J. F. Fournier,“Sobolev Spaces”, second edition, Academic Press (2003)

  64. [64]

    Hitchhiker’s guide to the fractional Sobolev spaces

    E. D. Nezza, G. Palatucci & E. Valdinoci,“Hitchhiker’s guide to the fractional Sobolev spaces”, arXiv:1104.4345 [math.FA],https://arxiv.org/abs/1104.4345

  65. [65]

    Another look at Sobolev spaces

    J. Bourgain, H. Brezis & P. Mironescu,“Another look at Sobolev spaces”, in“Optimal Control and Partial Differential Equations: Innovations and Applications”, ed: J. L. Menaldi, E. Rofman & A. Sulem, IOS Press (2001), Amsterdam, p. 439–455, A volume in honor of Alain Bensoussan’s 60th birthday,https://hal.archives-ouvertes.fr/hal-00747692

  66. [66]

    FuzzifiED : Julia Package for Numerics on the Fuzzy Sphere

    Z. Zhou,“FuzzifiED : Julia Package for Numerics on the Fuzzy Sphere”,arXiv:2503.00100 [cond-mat.str-el]

  67. [67]

    Operator Product Expansion Coefficients of the 3D Ising Criticality via Quantum Fuzzy Spheres

    L. Hu, Y.-C. He & W. Zhu,“Operator Product Expansion Coefficients of the 3D Ising Criticality via Quantum Fuzzy Spheres”, Phys. Rev. Lett.131, 031601 (2023),arXiv:2303.08844 [cond-mat.stat-mech]