pith. machine review for the scientific record.

arxiv: 2605.06061 · v1 · submitted 2026-05-07 · 💻 cs.LG · cs.CG · math.AT

Recognition: unknown

Geometry-Aware Simplicial Message Passing

Bastian Rieck, Elena Xinyi Wang


Pith reviewed 2026-05-08 14:00 UTC · model grok-4.3

classification 💻 cs.LG · cs.CG · math.AT
keywords simplicial message passing · geometric Weisfeiler-Lehman · Euler characteristic transform · expressivity bounds · geometric simplicial complexes · mesh classification · color refinement · neural network expressivity

The pith

Incorporating vertex coordinates into simplicial color refinement bounds the expressivity of geometry-aware message passing and pairs with a complete invariant for approximation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Standard Weisfeiler-Lehman tests and their simplicial extensions remain blind to geometry, so they treat two meshes as identical whenever their connectivity matches even if the vertex positions differ. The paper defines a Geometric Simplicial Weisfeiler-Lehman test that folds vertex coordinates directly into the color-refinement procedure. It proves that every geometry-aware simplicial message passing scheme is limited by what this test can distinguish, yet that suitable parameters let the schemes recover the full power of the test on any fixed finite collection of complexes. When the test is combined with the Euler Characteristic Transform, which separates all geometric simplicial complexes, the result supplies both a sharp expressivity characterization and a concrete approximation route for models operating on meshes and other embedded data.
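The blindness described above is easy to see on a toy example. The sketch below (our illustration, not the paper's construction) builds two triangle complexes with identical connectivity but different embeddings; a purely combinatorial WL-style initialization cannot separate them, while a coordinate-derived initialization can.

```python
# Illustrative only: feature choices, names, and rounding are ours.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]                           # shared 1-skeleton
coords_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # embedding A
coords_b = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])  # stretched embedding B

def combinatorial_init(edges, n):
    # A WL-style initialization sees only connectivity (here: vertex degrees),
    # so it is identical for both embeddings by construction.
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg)

def geometric_init(edges, coords):
    # A coordinate-aware initialization: each vertex gets the multiset of its
    # incident edge lengths, which changes when the embedding changes.
    feats = [[] for _ in coords]
    for u, v in edges:
        length = round(float(np.linalg.norm(coords[u] - coords[v])), 6)
        feats[u].append(length)
        feats[v].append(length)
    return sorted(tuple(sorted(f)) for f in feats)

print(combinatorial_init(edges, 3) == combinatorial_init(edges, 3))  # True: blind
print(geometric_init(edges, coords_a) == geometric_init(edges, coords_b))  # False
```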

Core claim

The expressivity of geometry-aware simplicial message passing schemes is bounded above by the Geometric Simplicial Weisfeiler-Lehman test. There exist parameters such that these schemes match the discriminating power of GSWL on any fixed finite family of geometric simplicial complexes. Combined with the Euler Characteristic Transform, a complete invariant for geometric simplicial complexes, this yields a geometric expressivity characterization together with an approximation framework.

What carries the argument

The Geometric Simplicial Weisfeiler-Lehman test, which extends combinatorial color refinement by incorporating vertex coordinates when labeling simplices.
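A hedged sketch of how such a refinement might look, restricted to the vertex level of the 1-skeleton (function names, feature choices, and rounding tolerance are ours, not the paper's definition):

```python
# Sketch only: vertices start from coordinate-derived labels (incident edge
# lengths, rounded for numerical stability) instead of a uniform color, then
# colors are iteratively hashed with the multiset of neighbor colors, as in
# combinatorial WL color refinement.
import numpy as np

def gswl_vertex_refinement(edges, coords, rounds=3, tol=6):
    n = len(coords)
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    # geometric initialization: a coordinate-derived label per vertex
    colors = [hash(tuple(sorted(round(float(np.linalg.norm(coords[v] - coords[u])), tol)
                                for u in nbrs[v])))
              for v in range(n)]
    # standard refinement loop on top of the geometric start
    for _ in range(rounds):
        colors = [hash((colors[v], tuple(sorted(colors[u] for u in nbrs[v]))))
                  for v in range(n)]
    return sorted(colors)  # complex-level signature: multiset of final colors
```

Two complexes with the same connectivity but different embeddings then receive different signatures as soon as their coordinate-derived initial labels differ.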

If this is right

  • No geometry-aware simplicial message passing scheme can distinguish complexes beyond the separations produced by GSWL.
  • On any fixed finite family of geometric simplicial complexes, parameters exist that let message passing achieve exactly the same distinctions as GSWL.
  • Pairing the GSWL bound with the Euler Characteristic Transform gives a complete characterization of which geometric differences the models can capture.
  • The same combination supplies an approximation framework for using message passing networks to learn geometric distinctions in simplicial data such as meshes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Practical mesh and shape-learning pipelines will need explicit coordinate integration to move past purely topological distinctions.
  • The bounding argument supplies a template for proving expressivity limits on other geometry-aware architectures that operate on embedded data.
  • The coordinate-folding idea may extend to point-cloud or manifold settings where standard combinatorial tests likewise lose geometric information.

Load-bearing premise

Vertex coordinates can be stably incorporated into the color-refinement procedure while preserving the combinatorial bounding properties of the original simplicial Weisfeiler-Lehman test, and the Euler Characteristic Transform remains a complete invariant under the same geometric embedding assumptions.
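The ECT side of this premise can be made concrete. Below is a minimal sketch under our own discretization choices (direction and height grids are illustrative; the paper's exact construction may differ): a simplex "enters" at the maximum height of its vertices, and each entry of the transform is the Euler characteristic of the corresponding sublevel complex.

```python
# Minimal ECT sketch for an embedded simplicial complex; illustrative only.
import numpy as np

def ect(simplices, coords, directions, heights):
    # simplices: vertex tuples of every dimension, e.g. (0,), (0, 1), (0, 1, 2)
    out = np.zeros((len(directions), len(heights)), dtype=int)
    dims = np.array([len(s) - 1 for s in simplices])
    signs = (-1) ** dims                      # +vertices, -edges, +triangles, ...
    for i, d in enumerate(directions):
        h = coords @ d                        # vertex heights along direction d
        entry = np.array([max(h[v] for v in s) for s in simplices])
        for j, t in enumerate(heights):
            out[i, j] = int(signs[entry <= t].sum())  # chi of sublevel complex
    return out
```

On a single filled triangle, for instance, the last column of the output recovers chi = 3 - 3 + 1 = 1 in every direction once all simplices have entered.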

What would settle it

A pair of geometric simplicial complexes that receive identical GSWL colors yet possess distinct Euler Characteristic Transforms, or a finite family of complexes on which no choice of message-passing parameters recovers the full separation achieved by GSWL.

Figures

Figures reproduced from arXiv: 2605.06061 by Bastian Rieck, Elena Xinyi Wang.

Figure 1. Coordinate recovery in GSWL. We need k propagation steps to recover k-simplices. The key difference to SWL [6] is the initialization: each simplex receives features derived from its vertex coordinates rather than a uniform color. In particular, vertices carry their coordinates, and higher-dimensional simplices carry Φk, a permutation-invariant function of their vertex coordinates such as edge midpoint and …
Figure 2. Simplicial message passing update for a target simplex.
Original abstract

The Weisfeiler-Lehman (WL) test and its simplicial extension (SWL) characterize the combinatorial expressivity of message passing networks, but they are blind to geometry, i.e., meshes with identical connectivity but different embeddings are indistinguishable. We introduce the Geometric Simplicial Weisfeiler-Lehman (GSWL) test, which incorporates vertex coordinates into color refinement for geometric simplicial complexes. In addition, we show that (i) the expressivity of geometry-aware simplicial message passing schemes is bounded above by GSWL, and (ii) that there exist parameters such that the discriminating power of GSWL is matched by these schemes on any fixed finite family of geometric simplicial complexes. Combined with the Euler Characteristic Transform (ECT), a complete invariant for geometric simplicial complexes, this yields a geometric expressivity characterization together with an approximation framework. Experiments on synthetic and mesh datasets serve to validate our theory, showing a clear hierarchy from combinatorial to geometry-aware models.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper introduces the Geometric Simplicial Weisfeiler-Lehman (GSWL) test, which augments standard SWL color refinement by incorporating vertex coordinates for geometric simplicial complexes. It proves that the expressivity of geometry-aware simplicial message-passing schemes is upper-bounded by GSWL and that, for any fixed finite family of such complexes, there exist parameters under which the schemes achieve the same discriminating power as GSWL. Combining this with the Euler Characteristic Transform (ECT) as a complete invariant yields a geometric expressivity characterization together with an approximation framework. Experiments on synthetic data and mesh datasets are used to illustrate a hierarchy from combinatorial to geometry-aware models.

Significance. If the central bounds and matching results hold, the work provides a clean theoretical extension of WL-style analysis to geometric simplicial settings, which is relevant for mesh and point-cloud applications. The explicit finite-family matching result and the ECT-based approximation framework are concrete strengths that could guide the design of geometry-aware networks while preserving combinatorial guarantees.

minor comments (3)
  1. §3.2: the definition of the GSWL update rule should explicitly state how coordinate features are aggregated (e.g., via concatenation or a learned projection) to avoid ambiguity when comparing to the combinatorial SWL baseline.
  2. Figure 4: the caption and axis labels for the mesh-dataset results should clarify whether the reported accuracy is mean ± std over the same fixed train/test splits used in the theory section or over random splits.
  3. The proof of the matching result (Theorem 4.3) assumes a finite family; a brief remark on whether the construction extends to compact families under suitable discretization would strengthen the approximation-framework claim.
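On minor comment 1, one concrete disambiguation (our illustration, not the paper's §3.2 definition) is to concatenate the simplex's coordinate-derived feature with the pooled neighbor messages before a single learned projection; shapes and names below are hypothetical.

```python
# Hedged sketch of a coordinate-aware update: permutation-invariant pooling
# of neighbor messages, explicit concatenation of the coordinate feature,
# then one learned projection. Dimensions are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_coord, d_out = 8, 3, 8
W = rng.standard_normal((d_out, 2 * d_feat + d_coord))

def update(h_sigma, neighbor_msgs, phi_sigma):
    pooled = np.sum(neighbor_msgs, axis=0)            # permutation-invariant
    z = np.concatenate([h_sigma, pooled, phi_sigma])  # explicit coordinate path
    return np.tanh(W @ z)

h_new = update(rng.standard_normal(d_feat),
               rng.standard_normal((4, d_feat)),      # 4 neighboring simplices
               rng.standard_normal(d_coord))
```

A learned projection of the coordinate feature before concatenation would be the other natural reading; the point of the comment is that the paper should fix one.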

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive assessment of the manuscript and for recommending minor revision. The summary correctly captures the core contributions of the GSWL test, the upper bound on geometry-aware simplicial message passing, the finite-family matching result, and the combination with the Euler Characteristic Transform to obtain a geometric expressivity characterization.

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

full rationale

The paper directly defines GSWL by extending color refinement to incorporate vertex coordinates on geometric simplicial complexes, then establishes standard expressivity bounds (MP schemes bounded by GSWL, and matching on finite families) that follow from the definition without reducing to fitted parameters or self-referential inputs. The combination with ECT treats the latter as an externally known complete invariant rather than deriving it internally. No load-bearing step relies on self-citation chains, ansatz smuggling, or renaming; the central claims remain independent of the paper's own fitted quantities or prior author results.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract provides no explicit free parameters, axioms, or invented entities; the full text is required to audit any hidden fitting or background assumptions.

pith-pipeline@v0.9.0 · 5468 in / 1113 out tokens · 31095 ms · 2026-05-08T14:00:09.243566+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

33 extracted references · 2 canonical work pages

  1. [1]

A. Akbari, A. H. Souza, and V. Garg. The logical expressiveness of topological neural networks. In International Conference on Learning Representations, 2026.

  2. [2]

J. Amboage, E. Röell, P. Schnider, and B. Rieck. Leap: Local ECT-based learnable positional encodings for graphs. In International Conference on Learning Representations, 2026.

  3. [3]

E. J. Amézquita, M. Y. Quigley, T. Ophelders, J. B. Landis, D. Koenig, E. Munch, and D. H. Chitwood. Measuring hidden phenotype: quantifying the shape of barley seeds using the Euler characteristic transform. in silico Plants, 4(1), 2021.

  4. [4]

R. Ballester, E. Röell, D. B. Schmid, M. Alain, S. Escalera, C. Casacuberta, and B. Rieck. MANTRA: The Manifold Triangulations Assemblage. In International Conference on Learning Representations, 2025.

  5. [5]

C. Bodnar, F. Frasca, N. Otter, Y. Wang, P. Liò, G. Montufar, and M. Bronstein. Weisfeiler and Lehman go cellular: CW networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 2625–2640. Curran Associates, Inc., 2021.

  6. [6]

C. Bodnar, F. Frasca, Y. G. Wang, N. Otter, G. Montufar, P. Lio, and M. M. Bronstein. Weisfeiler and Lehman go topological: Message passing simplicial networks. In International Conference on Machine Learning, pages 1026–1037. PMLR, 2021.

  7. [7]

F. Bogo, J. Romero, M. Loper, and M. J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3794–3801, 2014.

  8. [8]

J. Brandstetter, R. Hesselink, E. van der Pol, E. Bekkers, and M. Welling. Geometric and physical quantities improve E(3) equivariant message passing. In International Conference on Learning Representations, 2022.

  9. [9]

L. Crawford, A. Monod, A. X. Chen, S. Mukherjee, and R. Rabadán. Predicting clinical outcomes in glioblastoma: An application of topological and functional data analysis. Journal of the American Statistical Association, 115(531):1139–1150, 2020.

  10. [10]

J. Curry, S. Mukherjee, and K. Turner. How many directions determine a shape and other sufficiency results for two topological transforms. Transactions of the American Mathematical Society, Series B, 9:1006–1043, 2022.

  11. [11]

J. George, O. L. Osborn, E. Munch, M. R. II, and E. X. Wang. On the stability of the Euler characteristic transform for a perturbed embedding, 2025. URL https://arxiv.org/abs/2506.19991.

  12. [12]

R. Ghrist, R. Levanger, and H. Mai. Persistent homology and Euler integral transforms. Journal of Applied and Computational Topology, 2(1):55–60, 2018.

  13. [13]

J. Han, Y. Rong, T. Xu, and W. Huang. Geometrically equivariant graph neural networks: A survey, 2022. URL https://arxiv.org/abs/2202.07230.

  14. [14]

A. Hatcher. Algebraic topology. Cambridge University Press, Cambridge, UK, 2000.

  15. [15]

C. K. Joshi, C. Bodnar, S. V. Mathis, T. Cohen, and P. Liò. On the expressive power of geometric graph neural networks. In International Conference on Machine Learning, 2023.

  16. [16]

T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.

  17. [17]

H. Maron, H. Ben-Hamu, H. Serviansky, and Y. Lipman. Provably powerful graph networks. In Advances in Neural Information Processing Systems, 2019.

  18. [18]

C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI Conference on Artificial Intelligence, 2019.

  19. [19]

C. Morris, G. Rattan, and P. Mutzel. Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. In Advances in Neural Information Processing Systems, 2020.

  20. [20]

C. Morris, Y. Lipman, H. Maron, B. Rieck, N. M. Kriege, M. Grohe, M. Fey, and K. Borgwardt. Weisfeiler and Leman go machine learning: The story so far. Journal of Machine Learning Research, 24(333):1–59, 2023.

  21. [21]

E. Munch. An invitation to the Euler characteristic transform. The American Mathematical Monthly, 132(1):15–25, 2025.

  22. [22]

T. Papamarkou, T. Birdal, M. Bronstein, G. Carlsson, J. Curry, Y. Gao, M. Hajij, R. Kwitt, P. Liò, P. Di Lorenzo, V. Maroulas, N. Miolane, F. Nasrin, K. N. Ramamurthy, B. Rieck, S. Scardapane, M. T. Schaub, P. Veličković, B. Wang, Y. Wang, G. W. Wei, and G. Zamzmi. Position: Topological deep learning is the new frontier for relational learning. In Int...

  23. [23]

C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, 2017.

  24. [24]

B. Rieck. Topology meets machine learning: An introduction using the Euler characteristic transform. Notices of the American Mathematical Society, 72(7):719–727, 2025.

  25. [25]

E. Röell and B. Rieck. Differentiable Euler characteristic transforms for shape classification. In International Conference on Learning Representations, 2024.

  26. [26]

W. S. Tang, G. M. da Silva, H. Kirveslahti, E. Skeens, B. Feng, T. Sudijono, K. K. Yang, S. Mukherjee, B. Rubenstein, and L. Crawford. A topological data analytic approach for discovering biophysical signatures in protein dynamics. PLoS Comput. Biol., 18(5):e1010045, 2022.

  27. [27]

K. Turner, S. Mukherjee, and D. M. Boyer. Persistent homology transform for modeling shapes and surfaces. Information and Inference: A Journal of the IMA, 3(4):310–344, 2014.

  28. [28]

S. Villar, D. W. Hogg, K. Storey-Fisher, W. Yao, and B. Blum-Smith. Scalars are universal: Equivariant machine learning, structured like classical physics. In Advances in Neural Information Processing Systems, 2021.

  29. [29]

J. von Rohrscheidt and B. Rieck. Diss-l-ECT: Dissecting graph data with local Euler characteristic transforms. In International Conference on Machine Learning, 2025.

  30. [30]

Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 38(5):1–12, 2019.

  31. [31]

B. Weisfeiler and A. Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI Series, 2(9):12–16, 1968.

  32. [32]

K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.

  33. [33]

M. Zaheer, S. Kottur, S. Ravanbhakhsh, B. Póczos, R. Salakhutdinov, and A. J. Smola. Deep sets. In Advances in Neural Information Processing Systems, 2017.