pith. machine review for the scientific record.

arxiv: 2604.11404 · v1 · submitted 2026-04-13 · ✦ hep-th · cs.LG · math.AG

Recognition: unknown

GlobalCY I: A JAX Framework for Globally Defined and Symmetry-Aware Neural Kähler Potentials

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 16:02 UTC · model grok-4.3

classification ✦ hep-th · cs.LG · math.AG
keywords Calabi-Yau · Kähler potential · neural networks · global invariance · projective hypersurface · Cefalú family · JAX

The pith

Globally defined invariant models outperform local baselines on geometric metrics for neural Kähler potentials on Calabi-Yau hypersurfaces.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces GlobalCY, a JAX framework for neural Kähler-potential models that are globally defined and symmetry-aware on projective hypersurface Calabi-Yau geometries. It compares a local-input baseline against a globally defined invariant model and a symmetry-aware global variant on the difficult Cefalú family members at λ=0.75 and λ=1.0. The globally defined invariant model records the best performance on the key diagnostics of negative-eigenvalue frequency and projective-invariance drift, with larger improvements at λ=0.75. This shows that global invariant structure is a useful architectural constraint in regimes where local models train successfully yet still fail geometry-sensitive checks.
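
To make the architectural distinction concrete, the sketch below shows one natural way to build globally defined, projectively invariant inputs on CP³ alongside a chart-local baseline. Both feature maps are illustrative assumptions for this review, not the paper's reported implementation.

```python
import jax.numpy as jnp

def invariant_features(z):
    """Globally defined inputs: entries of the rank-1 projector
    P = z z† / |z|^2, which is unchanged under z -> c z for any c in C*,
    so the same feature vector is produced in every chart."""
    P = jnp.outer(z, jnp.conj(z)) / jnp.vdot(z, z).real
    return jnp.concatenate([P.real.ravel(), P.imag.ravel()])

def local_features(z, chart=0):
    """Local-input baseline analogue: affine coordinates on one chart.
    Well defined only where z[chart] != 0, and not projectively invariant."""
    w = jnp.delete(z, chart) / z[chart]
    return jnp.concatenate([w.real, w.imag])
```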

Core claim

The globally defined invariant model is the strongest overall family, outperforming the local baseline on negative-eigenvalue frequency and projective-invariance drift for both λ=0.75 and λ=1.0 in the Cefalú family; the symmetry-aware model improves drift relative to the local baseline but does not yet surpass the plain global invariant model.

What carries the argument

Comparison of three neural Kähler-potential families (local-input baseline, globally defined invariant model, symmetry-aware global model) on projective hypersurface Calabi-Yau geometries, evaluated via geometry-aware diagnostics including negative-eigenvalue frequency and projective-invariance drift.
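
The first diagnostic can be operationalized directly in JAX. The sketch below recovers the complex Hessian g_{ab̄} = ∂_a ∂_b̄ K of a real-valued potential via Wirtinger calculus and counts sample points whose metric has a negative eigenvalue. It evaluates the ambient Hessian, whereas the paper presumably pulls back to the hypersurface, so treat it as a schematic rather than the framework's code.

```python
import jax
import jax.numpy as jnp

def complex_hessian(K, w):
    """Complex Hessian g_{a b̄} of a real potential K(w), where the real
    vector w = [x; y] packs z = x + i y. Wirtinger calculus gives
    g = (K_xx + K_yy + i (K_xy - K_yx)) / 4, which is Hermitian."""
    n = w.shape[0] // 2
    H = jax.hessian(K)(w)
    Kxx, Kxy = H[:n, :n], H[:n, n:]
    Kyx, Kyy = H[n:, :n], H[n:, n:]
    return 0.25 * (Kxx + Kyy + 1j * (Kxy - Kyx))

def neg_eig_frequency(K, batch):
    """Fraction of sample points where the learned metric fails positivity."""
    g = jax.vmap(lambda w: complex_hessian(K, w))(batch)
    eigs = jnp.linalg.eigvalsh(g)  # safe: g is Hermitian by construction
    return jnp.mean(jnp.any(eigs < 0.0, axis=-1))
```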

If this is right

  • Global invariant inputs reduce the frequency of negative eigenvalues in the learned metric compared to local inputs.
  • Projective-invariance drift decreases when models use globally defined rather than local inputs (a minimal drift probe is sketched after this list).
  • Adding symmetry awareness improves invariance preservation but has not yet exceeded the gains from global definition alone.
  • The case λ=1.0 remains harder than λ=0.75 even for the strongest model family tested.
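
The abstract does not spell out how projective-invariance drift is measured. One plausible operationalization, sketched here as an assumption rather than the paper's procedure, rescales the homogeneous coordinates by random c ∈ C* and records how far the model's output moves; a parametrization built on genuinely invariant inputs drifts by exactly zero in this probe, while a chart-local model generally does not.

```python
import jax
import jax.numpy as jnp

def invariance_drift(model_K, z, key, n_scales=8):
    """Hypothetical drift probe: a potential built from projectively
    invariant inputs satisfies model_K(c * z) == model_K(z) for all
    c in C*, so any spread across random rescalings is pure drift."""
    kr, ki = jax.random.split(key)
    c = jax.random.normal(kr, (n_scales,)) + 1j * jax.random.normal(ki, (n_scales,))
    Ks = jax.vmap(lambda s: model_K(s * z))(c)
    return jnp.max(jnp.abs(Ks - model_K(z)))
```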

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same global-invariant constraint may improve neural approximations on other projective Calabi-Yau families beyond the Cefalú examples.
  • JAX-based implementations of these architectures could enable systematic scaling to higher-degree hypersurfaces or larger training regimes.

Load-bearing premise

The chosen diagnostics of negative-eigenvalue frequency and projective-invariance drift are sufficient to reveal the failure modes of local models in quartic regimes near singular points.
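
For readers who want the premise stated symbolically, the following are natural reviewer-supplied definitions consistent with the diagnostic names; the sample size N, the scale set C, and the matrix norm are illustrative choices, and the manuscript may normalize these quantities differently.

```latex
% Plausible definitions of the two diagnostics over a Monte Carlo
% sample {p_n}_{n=1}^N on the hypersurface (hedged reconstruction,
% not the paper's stated conventions).
\[
  f_{\mathrm{neg}}
  \;=\; \frac{1}{N}\sum_{n=1}^{N}
        \mathbf{1}\!\left[\lambda_{\min}\!\big(g_{a\bar b}(p_n)\big) < 0\right],
  \qquad
  g_{a\bar b} \;=\; \partial_a \partial_{\bar b}\, K_\theta ,
\]
\[
  d_{\mathrm{proj}}
  \;=\; \frac{1}{N}\sum_{n=1}^{N}\;
        \max_{c \,\in\, \mathcal{C}}
        \big\lVert\, g_{a\bar b}(c \cdot p_n) - g_{a\bar b}(p_n) \,\big\rVert_F .
\]
```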

What would settle it

A new test on an additional hard quartic Calabi-Yau hypersurface where the local baseline produces lower negative-eigenvalue frequency than the globally defined invariant model would contradict the reported superiority.

Figures

Figures reproduced from arXiv: 2604.11404 by Abdul Rahman.

Figure 1
Figure 1. High-level architecture of the GlobalCY I benchmark stack. The workflow is organized around a two-layer division of responsibilities. GeoCYData provides the geometric and protocol substrate, including quartic geometry cases, bundle exports with local-chart, invariant, and symmetry-aware views, and the canonical case identifiers, seeds, and benchmark presets used for controlled comparison. GlobalCY consum… view at source ↗
Figure 2
Figure 2. Benchmark-wide comparison across the two hard Cefalú cases, λ = 0.75 and λ = 1.0, for the local, globally invariant, and symmetry-aware model families. The three panels report mean negative-eigenvalue frequency, mean projective-invariance drift, and mean training loss, with error bars indicating cross-seed variation over the fixed three-seed protocol. The figure shows that the globally invariant model is… view at source ↗
Figure 3
Figure 3. Focused comparison for the hardest benchmark case, the Cefalú regime λ = 1.0. The three panels isolate the same diagnostics as in Figure 2. view at source ↗
read the original abstract

We present GlobalCY, a JAX-based framework for globally defined and symmetry-aware neural Kähler-potential models on projective hypersurface Calabi–Yau geometries. The central problem is that local-input neural Kähler-potential models can train successfully while still failing the geometry-sensitive diagnostics that matter in hard quartic regimes, especially near singular and near-singular members of the Cefalú family. To study this, we compare three model families -- a local-input baseline, a globally defined invariant model, and a symmetry-aware global model -- on the hard Cefalú cases λ=0.75 and λ=1.0 using a fixed multi-seed protocol and a geometry-aware diagnostic suite. In this benchmark, the globally defined invariant model is the strongest overall family, outperforming the local baseline on the two clearest geometric comparison metrics, negative-eigenvalue frequency and projective-invariance drift, in both cases. The gains are strongest at λ=0.75, while λ=1.0 remains more difficult. The current symmetry-aware model improves projective-invariance drift relative to the local baseline, but does not yet surpass the plain global invariant model. These results show that global invariant structure is a meaningful architectural constraint for learned Kähler-potential modeling in hard quartic Calabi–Yau settings.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces GlobalCY, a JAX-based framework for globally defined and symmetry-aware neural Kähler-potential models on projective hypersurface Calabi-Yau geometries. It compares three model families—a local-input baseline, a globally defined invariant model, and a symmetry-aware global model—on the hard Cefalú cases λ=0.75 and λ=1.0 using a fixed multi-seed protocol and a geometry-aware diagnostic suite. The central claim is that the globally defined invariant model is the strongest overall family, outperforming the local baseline on negative-eigenvalue frequency and projective-invariance drift in both cases, with gains strongest at λ=0.75.

Significance. If the reported outperformance on the chosen diagnostics corresponds to improved geometric fidelity, the work would establish that global invariant structure supplies a useful architectural constraint for neural Kähler-potential modeling in quartic regimes near singularities. The multi-seed protocol and JAX implementation provide a reproducible benchmark that can be extended to other Calabi-Yau families.

major comments (2)
  1. Abstract: the claim that the globally defined invariant model is strongest rests on outperformance versus the local baseline on negative-eigenvalue frequency and projective-invariance drift. The manuscript gives no indication that the models were cross-checked against standard Calabi-Yau diagnostics such as integrated Monge-Ampère residuals or det(g) deviation from the expected volume form; if large residuals remain in these quantities, the superiority on the reported metrics would not establish better approximation of the true Kähler potential. (A minimal version of this residual check is sketched after this report.)
  2. Benchmark description (abstract): the outperformance is asserted for both λ=0.75 and λ=1.0, yet without tabulated metric values, error bars from the multi-seed runs, or statistical significance tests, the robustness of the conclusion that the global invariant model is strongest overall cannot be fully assessed.
minor comments (1)
  1. The abstract refers to 'the two clearest geometric comparison metrics' without a brief inline definition; adding one sentence clarifying negative-eigenvalue frequency and projective-invariance drift would improve accessibility.
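
The referee's first major comment references a standard check from the numerical Calabi–Yau literature. A minimal σ-style Monge–Ampère residual (cf. Douglas et al. [8]) is sketched below, under the assumption that per-sample det(g) and |Ω|² values are already available from Monte Carlo integration; it is a reviewer's sketch, not the manuscript's code.

```python
import jax.numpy as jnp

def sigma_residual(det_g, omega_sq, weights):
    """Normalized Monge–Ampère residual: for a Ricci-flat metric,
    det(g) is proportional to |Omega|^2, so the ratio eta is constant
    and the weighted mean of |eta - 1| vanishes. Nonzero values
    quantify the failure to solve the Monge–Ampère equation."""
    eta = det_g / omega_sq
    eta = eta / jnp.average(eta, weights=weights)  # fix the proportionality constant
    return jnp.average(jnp.abs(eta - 1.0), weights=weights)
```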

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their careful reading of the manuscript and for the constructive comments. Below we respond to each major comment and outline the revisions we intend to make.

read point-by-point responses
  1. Referee: Abstract: the claim that the globally defined invariant model is strongest rests on outperformance versus the local baseline on negative-eigenvalue frequency and projective-invariance drift. The manuscript gives no indication that the models were cross-checked against standard Calabi-Yau diagnostics such as integrated Monge-Ampère residuals or det(g) deviation from the expected volume form; if large residuals remain in these quantities, the superiority on the reported metrics would not establish better approximation of the true Kähler potential.

    Authors: We agree that standard Calabi-Yau diagnostics such as the integrated Monge-Ampère residual and det(g) deviation from the expected volume form are important for confirming that outperformance on the reported metrics corresponds to improved approximation of the true Kähler potential. Our diagnostics were chosen for their sensitivity to geometric failures in the hard quartic regimes, but we acknowledge the value of the additional checks. In the revised manuscript we will add explicit computations and comparisons of the integrated Monge-Ampère residuals and det(g) deviations for all three model families on both λ=0.75 and λ=1.0. revision: yes

  2. Referee: Benchmark description (abstract): the outperformance is asserted for both λ=0.75 and λ=1.0, yet without tabulated metric values, error bars from the multi-seed runs, or statistical significance tests, the robustness of the conclusion that the global invariant model is strongest overall cannot be fully assessed.

    Authors: We appreciate the referee's point on the need for detailed numerical reporting to assess robustness. While the multi-seed protocol is described in the manuscript, the abstract summarizes the findings at a high level. To allow full evaluation of the claims, we will revise the manuscript to include tabulated metric values with error bars (standard deviations across seeds) and appropriate statistical significance tests supporting the relative performance of the model families. revision: yes

Circularity Check

0 steps flagged

Empirical architecture comparison on fixed benchmarks exhibits no circularity

full rationale

The paper's central claim is an empirical statement: on the Cefalú family at λ=0.75 and λ=1.0, the globally defined invariant model family records lower negative-eigenvalue frequency and lower projective-invariance drift than the local-input baseline under a fixed multi-seed protocol. This comparison is performed by training distinct neural architectures and measuring the two diagnostics directly on the resulting potentials; neither diagnostic is obtained by fitting a parameter to the target metric nor by re-expressing an input quantity. No equations, ansätze, or uniqueness theorems are invoked that would reduce the reported superiority to a self-definition or to a self-citation chain. The work is therefore self-contained against external benchmarks and receives a circularity score of zero.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper's central empirical claim depends on the validity of the diagnostic metrics and the assumption that the neural architectures correctly implement the global and symmetry properties as described.

axioms (1)
  • domain assumption Negative-eigenvalue frequency and projective-invariance drift are appropriate metrics for evaluating the quality of learned Kähler potentials.
    These are used as the clearest geometric comparison metrics in the benchmark.

pith-pipeline@v0.9.0 · 5555 in / 1290 out tokens · 77328 ms · 2026-05-10T16:02:21.103943+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Beyond Algebraic Superstring Compactification: Part II

    hep-th 2026-05 unverdicted novelty 4.0

    Deformations in algebraic superstring models indicate a non-algebraic generalization that aligns with mirror duality requirements.

Reference graph

Works this paper leans on

18 extracted references · 9 canonical work pages · cited by 1 Pith paper

  1. [1]

    Machine learned Calabi–Yau metrics and curvature

    Per Berglund et al. “Machine learned Calabi–Yau metrics and curvature”. In: Advances in Theoretical and Mathematical Physics 27.4 (2023), pp. 1107–1158. arXiv:2211.09801 [hep-th]

  2. [2]

    JAX: composable transformations of Python+NumPy programs

    James Bradbury et al. JAX: composable transformations of Python+NumPy programs. http://github.com/google/jax. Accessed: 2026-04-12

  3. [3]

    Flax: A neural network library and ecosystem for JAX

    Jonas Heek et al. Flax: A neural network library and ecosystem for JAX. http://github.com/google/flax. Accessed: 2026-04-12

  4. [4]

    The space of Kähler metrics

    Eugenio Calabi. “The space of Kähler metrics”. In: Proceedings of the International Congress of Mathematicians, Amsterdam 2 (1954), pp. 206–207

  5. [5]

    On the Ricci curvature of a compact Kähler manifold and the complex Monge–Ampère equation, I

    Shing-Tung Yau. “On the Ricci curvature of a compact Kähler manifold and the complex Monge–Ampère equation, I”. In: Communications on Pure and Applied Mathematics 31.3 (1978), pp. 339–411

  6. [6]

    Scalar Curvature and Projective Embeddings, II

    S. K. Donaldson. “Scalar Curvature and Projective Embeddings, II”. In: Q. J. Math. 56.3 (2005), pp. 345–356. doi: 10.1093/qmath/hah045

  7. [7]

    Some numerical results in complex differential geometry

    S. K. Donaldson. “Some numerical results in complex differential geometry”. In: Pure and Applied Mathematics Quarterly 5.2 (2009), pp. 571–618

  8. [8]

    Numerical Calabi–Yau metrics

    Michael R. Douglas et al. “Numerical Calabi–Yau metrics”. In: Journal of Mathematical Physics 49.3 (2008), p. 032302. arXiv:hep-th/0612075

  9. [9]

    Machine Learning Calabi–Yau Metrics

    Anthony Ashmore, Yang-Hui He, and Burt A. Ovrut. “Machine Learning Calabi–Yau Metrics”. In: Fortschritte der Physik 68.9 (2020), p. 2000068. doi: 10.1002/prop.202000068

  10. [10]

    Numerical Calabi–Yau Metrics from Holomorphic Networks

    Michael R. Douglas et al. “Numerical Calabi–Yau Metrics from Holomorphic Networks”. In: Proceedings of Machine Learning Research. Vol. 145. 2022, pp. 74–100

  11. [11]

    Learning Size and Shape of Calabi–Yau Spaces

    Magdalena Larfors, Robin Schneider, Fabian Rühle, et al. “Learning Size and Shape of Calabi–Yau Spaces”. In: Proceedings of Machine Learning and the Physical Sciences Workshop at NeurIPS (2021). arXiv:2111.01436 [hep-th]

  12. [12]

    CYJAX: A package for Calabi–Yau metrics with JAX

    Mathis Gerdes and Sven Krippendorf. “CYJAX: A package for Calabi–Yau metrics with JAX”. In: Machine Learning: Science and Technology 4.2 (2022), p. 025031. arXiv:2211.12520 [hep-th]

  13. [13]

    Learning Group Invariant Calabi–Yau Metrics by Fundamental Domain Projections

    Yacoub Hendi, Magdalena Larfors, and Moritz Walden. “Learning Group Invariant Calabi–Yau Metrics by Fundamental Domain Projections”. In: Machine Learning: Science and Technology (2025). arXiv:2407.06914 [hep-th]

  14. [14]

    Calabi–Yau metrics through Grassmannian learning and Donaldson’s algorithm

    Carl Henrik Ek, Oisin Kim, and Challenger Mishra. “Calabi–Yau metrics through Grassmannian learning and Donaldson’s algorithm”. In: arXiv preprint (2024). arXiv:2410.11284 [hep-th]

  15. [15]

    cymyc: Calabi–Yau Metrics, Yukawas, and Curvature

    Giorgi Butbaia et al. “cymyc: Calabi–Yau Metrics, Yukawas, and Curvature”. In: JHEP (2025). arXiv:2410.19728 [hep-th]

  16. [16]

    Symbolic Approximations to Ricci-flat Metrics via Extrinsic Symmetries of Calabi–Yau Hypersurfaces

    Viktor Mirjanić and Challenger Mishra. “Symbolic Approximations to Ricci-flat Metrics via Extrinsic Symmetries of Calabi–Yau Hypersurfaces”. In: arXiv preprint (2024). arXiv:2412.19778 [hep-th]

  17. [17]

    Interpretable Analytic Calabi–Yau Metrics via Symbolic Distillation

    D. Y. Eng. “Interpretable Analytic Calabi–Yau Metrics via Symbolic Distillation”. In: arXiv preprint (2026). arXiv:2602.07834 [cs.LG]

  18. [18]

    Calabi–Yau Metrics with Kähler Moduli Dependence

    Andrei Constantin, Andre Lukas, and Luca A. Nutricati. “Calabi–Yau Metrics with Kähler Moduli Dependence”. In: arXiv preprint (2026). arXiv:2603.12384 [hep-th]