Pith · machine review for the scientific record

arxiv: 2605.10393 · v1 · submitted 2026-05-11 · 💻 cs.LG · cs.LO

Recognition: 2 theorem links · Lean Theorem

The Polynomial Counting Capabilities of Message Passing Neural Networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 05:11 UTC · model grok-4.3

classification 💻 cs.LG cs.LO
keywords message passing neural networks · graph neural networks · polynomial counting · expressivity · modal logic · aggregation functions · node-labelled graphs

The pith

Mean aggregations in message passing neural networks suffice to verify global polynomial counting constraints on node-labelled graphs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines the limits of what message passing neural networks can count when they move past simple linear thresholds into polynomial constraints. It establishes that mean-based aggregation alone can enforce global polynomial counting rules on graphs whose nodes carry labels, provided the graphs meet basic structural conditions. Local polynomial constraints become expressible when formulas avoid nested modalities and the model is allowed sum or max aggregation or the graph is regular; nested cases are handled on tree-like graphs under similar rules. These results matter because they map out concrete conditions under which standard MPNN architectures already capture higher-order counting logic that arises in many graph tasks.

Core claim

Global polynomial counting constraints in node-labelled graphs can be checked using mean MPNN under mild assumptions. Checking local constraints is also possible if formulas have no nested modalities and we additionally either permit sum or max aggregations or restrict to regular graphs. Formulas with nested modalities can be captured by mean MPNN over graphs with tree-like structures and similar assumptions.

What carries the argument

mean aggregation in MPNNs, which replaces sum or max pooling to enforce polynomial counting extensions of graded modal logic on graphs
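As an illustrative sketch of this machinery (ours, not the paper's construction; the function and weight names are hypothetical), a single mean-aggregation MPNN layer replaces the usual sum or max pooling over neighbours with their mean:

```python
import numpy as np

def mean_mpnn_layer(features, adjacency, w_self, w_neigh, bias):
    """One mean-aggregation MPNN layer (illustrative sketch).

    features : (n, d) node feature matrix
    adjacency: (n, n) 0/1 adjacency matrix
    Each node combines its own state with the *mean* of its
    neighbours' states, then applies a ReLU nonlinearity.
    """
    deg = adjacency.sum(axis=1, keepdims=True)   # node degrees, shape (n, 1)
    deg = np.maximum(deg, 1)                     # guard isolated nodes against /0
    neigh_mean = (adjacency @ features) / deg    # mean over neighbours
    return np.maximum(0.0, features @ w_self + neigh_mean @ w_neigh + bias)

# Triangle graph with one-hot labels; identity weights expose the raw
# neighbour mean: node 0's neighbours both carry label 1.
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
out = mean_mpnn_layer(X, A, np.zeros((2, 2)), np.eye(2), np.zeros(2))
```

With zero self-weights and identity neighbour weights, the output is exactly the per-node neighbour-label mean, which is the quantity the paper's constructions build on.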

If this is right

  • Standard mean MPNNs become sufficient to decide global polynomial extensions of graded modal logic on labelled graphs.
  • Local polynomial constraints without nesting become decidable once sum or max aggregation is allowed or the graph is forced to be regular.
  • Nested polynomial modalities remain expressible provided the underlying graph is tree-like.
  • The separation between global and local constraints shows that aggregation choice and graph symmetry directly control counting power.
  • These conditions give explicit recipes for training or verifying MPNNs on counting-heavy graph problems.
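The local, non-nested regime in the bullets above can be made concrete with a toy sketch (ours, assuming a d-regular graph, where multiplying the neighbour mean by the known degree d recovers absolute neighbour counts):

```python
import numpy as np

def local_poly_check(labels, adjacency, degree, poly):
    """Per node, check p(k) == 0 where k = #{neighbours with label 1}.

    On a d-regular graph, mean aggregation yields the fraction of
    labelled neighbours; scaling by the known degree d recovers the
    absolute count, so mean aggregation alone suffices here.
    """
    neigh_fraction = (adjacency @ labels) / degree   # mean aggregation per node
    neigh_count = np.rint(neigh_fraction * degree)   # recover counts via regularity
    return np.polyval(poly, neigh_count) == 0        # boolean verdict per node

# 4-cycle (2-regular), alternating labels; p(x) = x - 2 holds exactly
# at nodes whose two neighbours both carry label 1.
labels = np.array([1.0, 0.0, 1.0, 0.0])
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
verdicts = local_poly_check(labels, cycle, 2, [1, -2])
```

On a non-regular graph the per-node degrees differ, so a single known scaling factor no longer exists; that is exactly where the paper falls back on sum/max aggregation.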

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Real-world graphs that are nearly regular may inherit the local counting guarantees without needing non-mean aggregations.
  • The tree-like restriction for nested cases suggests that message-passing on hierarchical or acyclic structures already covers many practical nested counting tasks.
  • One could test the boundary by constructing small non-regular graphs with local nested polynomial constraints and measuring where mean MPNNs fail.
  • The results hint that many counting queries in chemistry or social-network analysis may not require architectural extensions beyond mean pooling.

Load-bearing premise

The input graphs must be node-labelled and the network must rely on mean aggregation, with extra restrictions on graph regularity or modality nesting for local and nested cases.

What would settle it

A concrete node-labelled graph containing a global polynomial counting constraint that no mean MPNN, regardless of depth or width, can correctly decide under the paper's stated conditions.

Figures

Figures reproduced from arXiv: 2605.10393 by Anthony W. Lin, Marco Sälzer, Pascal Bergsträßer.

Figure 1
Figure 1. Overview of results. The MPNN class Mmean,x refers to MPNNs that use mean and either sum or max as aggregation functions; ≤ refers to the notion of recognition with some certainty defined in Sec. 2; and ⪯ denotes the lower bound on the certainty of MPNNs in checking modal formulas, which is exclusive to Lem. 1.
Figure 2
Figure 2. Let φ be a formula with md(φ) = 2 that only uses the modality Eout. The top two graphs are members of (T⟳φ)p•, where the blue node indicates the focus. The bottom graphs are not included in (T⟳φ)p• since multiple predecessors of the red node have the same trace set. For clarity, self-loops are not depicted.
Original abstract

The counting power of Message Passing Neural Networks (MPNN) has been the subject of many recent papers, showing that they can express logic that involves counting up to a threshold or more generally satisfy a linear arithmetic constraint. In this paper, we study the counting capabilities of MPNN beyond linear arithmetic, primarily utilising local and global mean aggregations. In particular, our goal is to tease out conditions required to express extensions of graded modal logic with polynomial counting constraints. We show that global polynomial counting constraints in node-labelled graphs can be checked using mean MPNN under mild assumptions. Checking local constraints is also possible, if we consider formulas with no nested modalities and additionally either (i) permit sum/max aggregations, or (ii) only restrict to regular graphs. We also show how formulas with nested modalities can be captured by mean MPNN over graphs with tree-like structures and similar assumptions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that MPNNs with mean aggregations can express and check global polynomial counting constraints (extensions of graded modal logic) over node-labelled graphs under mild assumptions. Local polynomial constraints without nested modalities are expressible if sum/max aggregations are allowed or graphs are regular; nested modalities require tree-like graph structures with analogous assumptions. This extends prior results limited to linear arithmetic constraints.

Significance. If the derivations hold, the results meaningfully extend the known logical expressivity of MPNNs from linear to polynomial counting, providing concrete conditions under which mean-based aggregations suffice. This could inform both theoretical analyses of GNN limitations and the design of architectures for counting-heavy tasks in graph reasoning.

major comments (2)
  1. [Abstract] Abstract: The central claim that 'global polynomial counting constraints ... can be checked using mean MPNN under mild assumptions' is load-bearing for the paper's contribution. Mean aggregation computes label fractions, so recovering absolute counts needed for a polynomial p(count)=0 requires either fixed |V| across inputs or |V| supplied as a node feature. The abstract explicitly lists size/aggregation caveats only for the local case, leaving it unclear whether the global 'mild assumptions' include a hidden size restriction; this must be stated explicitly with a concrete construction or counter-example for variable-order graphs.
  2. [Section 4] Section 4 (global constraints) and the main theorem: The reduction from MPNN layers to polynomial logic extensions must be checked for dependence on graph order. If the proof assumes |V| constant (or encodes it via an extra feature), the result is narrower than stated and the contrast with linear-arithmetic prior work collapses for general inputs.
minor comments (2)
  1. [Introduction] Notation for the polynomial predicates (e.g., how p is applied to label counts) should be introduced with a small concrete example early in the paper.
  2. [Throughout] The 'mild assumptions' are referenced repeatedly but never collected in one place; a dedicated paragraph or table listing them for each regime (global, local non-nested, nested) would improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their careful reading and for highlighting the need to clarify assumptions about graph order in our claims regarding global polynomial counting constraints. We will revise the manuscript to make these conditions explicit in the abstract and Section 4, including a concrete construction and discussion of limitations for variable-size graphs.

Point-by-point responses
  1. Referee: [Abstract] Abstract: The central claim that 'global polynomial counting constraints ... can be checked using mean MPNN under mild assumptions' is load-bearing for the paper's contribution. Mean aggregation computes label fractions, so recovering absolute counts needed for a polynomial p(count)=0 requires either fixed |V| across inputs or |V| supplied as a node feature. The abstract explicitly lists size/aggregation caveats only for the local case, leaving it unclear whether the global 'mild assumptions' include a hidden size restriction; this must be stated explicitly with a concrete construction or counter-example for variable-order graphs.

    Authors: We agree that the abstract requires greater explicitness on this point. The mild assumptions for global polynomial constraints in our framework include that |V| is either fixed across inputs or supplied as a constant node feature; this allows mean aggregation to recover absolute counts (by scaling the fraction) before evaluating the polynomial. We will update the abstract to list this caveat alongside the local-case conditions. We will also add a concrete construction in the main text (with a small example graph) showing how the MPNN computes the required polynomial when |V| is provided as a feature, and a brief counter-example illustrating failure on variable-order graphs without size information. revision: yes
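The scaling argument in this response can be sketched in a few lines (a toy illustration under the stated assumption that |V| is available; function and parameter names are ours): a global mean readout exposes the fraction of nodes carrying a label, and multiplying by |V| recovers the absolute count before the polynomial is evaluated.

```python
import numpy as np

def check_polynomial_constraint(labels, n_nodes, poly):
    """Check p(count) == 0 where count = #{nodes with label 1}.

    labels : (n,) 0/1 array of node labels
    n_nodes: |V|, assumed available (fixed across inputs, or supplied
             as a constant node feature)
    poly   : coefficients of p, highest degree first (np.polyval order)
    """
    fraction = labels.mean()             # what a global mean readout exposes
    count = round(fraction * n_nodes)    # recover the absolute count from |V|
    return np.polyval(poly, count) == 0

# p(x) = x^2 - 4: satisfied iff exactly 2 nodes carry label 1.
p = [1, 0, -4]
```

The failure mode without size information is visible in the same sketch: the graphs with labels [1,1,0,0] and [1,0] both produce fraction 0.5 but counts 2 and 1, so p evaluates to 0 in one case and -3 in the other; the mean readout alone cannot tell them apart.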

  2. Referee: [Section 4] Section 4 (global constraints) and the main theorem: The reduction from MPNN layers to polynomial logic extensions must be checked for dependence on graph order. If the proof assumes |V| constant (or encodes it via an extra feature), the result is narrower than stated and the contrast with linear-arithmetic prior work collapses for general inputs.

    Authors: The reduction in Section 4 does rely on |V| being accessible (constant or via feature) to convert mean-based fractions into absolute counts for polynomial evaluation. We will revise Section 4 to state this dependence explicitly, add a remark on the resulting scope for variable-order inputs, and discuss the comparison to linear-arithmetic results (noting that many such results also require sum aggregation or size normalization). While this narrows the claim relative to completely unrestricted variable-size inputs, the polynomial extension remains meaningful under the stated mild assumptions and does not fully collapse the contrast with prior linear work. revision: yes

Circularity Check

0 steps flagged

No circularity in derivation chain

full rationale

The paper derives the ability of mean MPNN to check global and local polynomial counting constraints from explicit relations between aggregation functions (mean, sum, max) and extensions of graded modal logic, under stated assumptions on graph structure and formula nesting. No load-bearing step reduces by construction to a fitted parameter, self-definition, or self-citation chain; the central results follow from case analysis on aggregations and graph regularity rather than renaming known patterns or smuggling ansatzes via prior work. The derivation remains self-contained against the paper's own equations and assumptions.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper relies on standard prior definitions of MPNN architectures and graded modal logic; no new free parameters, invented entities, or ad-hoc axioms are introduced in the abstract.

axioms (2)
  • domain assumption Standard definitions of message passing neural networks with mean, sum, and max aggregations
    Builds directly on existing MPNN model definitions from prior work.
  • domain assumption Well-defined extensions of graded modal logic with polynomial counting constraints
    Assumes the target logic fragment is formally specified.

pith-pipeline@v0.9.0 · 5454 in / 1349 out tokens · 60734 ms · 2026-05-12T05:11:51.895670+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

27 extracted references · 27 canonical work pages

  1. [1] Abboud, R.; Ceylan, İ. İ.; Grohe, M.; and Lukasiewicz, T. 2021. The surprising power of graph neural networks with random node initialization. In Zhou, Z., ed., Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, 2112--2118. ijcai.org

  2. [2] Ahvonen, V.; Heiman, D.; Kuusisto, A.; and Lutz, C. 2024. Logical characterizations of recurrent graph neural networks with reals and floats. In Globersons, A.; Mackey, L.; Belgrave, D.; Fan, A.; Paquet, U.; Tomczak, J. M.; and Zhang, C., eds., Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems...

  3. [3] Barceló, P.; Kozachinskiy, A.; Lin, A. W.; and Podolskii, V. V. 2024. Logical languages accepted by transformer encoders with hard attention. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net

  4. [4] Barceló, P.; Kostylev, E. V.; Monet, M.; Pérez, J.; Reutter, J.; and Silva, J. P. 2020. The logical expressiveness of graph neural networks. In International Conference on Learning Representations

  5. [5] Benedikt, M.; Lu, C.; Motik, B.; and Tan, T. 2024. Decidability of graph neural networks via logical characterizations. In 51st International Colloquium on Automata, Languages, and Programming, ICALP

  6. [6] Bongini, P.; Bianchini, M.; and Scarselli, F. 2021. Molecular generative graph neural networks for drug discovery. Neurocomputing 450:242--252

  7. [7] Chiang, D.; Cholak, P.; and Pillay, A. 2023. Tighter bounds on the expressivity of transformer encoders. In Krause, A.; Brunskill, E.; Cho, K.; Engelhardt, B.; Sabato, S.; and Scarlett, J., eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, 5544--...

  8. [8] Cucala, D. T.; Grau, B. C.; Motik, B.; and Kostylev, E. V. 2023. On the correspondence between monotonic max-sum GNNs and datalog. In Marquis, P.; Son, T. C.; and Kern-Isberner, G., eds., Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning, KR 2023, Rhodes, Greece, September 2-8, 2023, 658--667

  9. [9] Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural message passing for quantum chemistry. In Precup, D., and Teh, Y. W., eds., Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, 1263--1272. PMLR

  10. [10] Grohe, M. 2021. The logic of graph neural networks. In 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021, Rome, Italy, June 29 - July 2, 2021, 1--17. IEEE

  11. [11] Hague, M.; Lin, A. W.; and Hong, C. 2019. CSS minification via constraint solving. ACM Trans. Program. Lang. Syst. 41(2):12:1--12:76

  12. [12] Jaakkola, R.; Janhunen, T.; Kuusisto, A.; Ortiz, M.; Selin, M.; and Simkus, M. 2025. Graph learning via logic-based Weisfeiler-Leman variants and tabularization. CoRR abs/2508.10651

  13. [13] Kipf, T.; Fetaya, E.; Wang, K.-C.; Welling, M.; and Zemel, R. 2018. Neural relational inference for interacting systems. In Dy, J., and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 2688--2697. PMLR

  14. [14] Kriege, N. M.; Johansson, F. D.; and Morris, C. 2020. A survey on graph kernels. Appl. Netw. Sci. 5(1):6

  15. [15] Nunn, P.; Sälzer, M.; Schwarzentruber, F.; and Troquard, N. 2024. A logic for reasoning about aggregate-combine graph neural networks. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024, Jeju, South Korea, August 3-9, 2024, 3532--3540. ijcai.org

  16. [16] Rosenbluth, E.; Tönshoff, J.; and Grohe, M. 2023. Some might say all you need is sum. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, 4172--4179. ijcai.org

  17. [17] Schönherr, M., and Lutz, C. 2025. Logical characterizations of GNNs with mean aggregation. CoRR abs/2507.18145

  18. [18] Seidl, H.; Schwentick, T.; and Muscholl, A. 2008. Counting in trees. In Flum, J.; Grädel, E.; and Wilke, T., eds., Logic and Automata: History and Perspectives [in Honor of Wolfgang Thomas], Texts in Logic and Games, 575--612. Amsterdam University Press

  19. [19] Shervashidze, N.; Schweitzer, P.; van Leeuwen, E. J.; Mehlhorn, K.; and Borgwardt, K. M. 2011. Weisfeiler-Lehman graph kernels. J. Mach. Learn. Res. 12:2539--2561

  20. [20] Shlomi, J.; Battaglia, P.; and Vlimant, J.-R. 2020. Graph neural networks in particle physics. Machine Learning: Science and Technology 2(2):021001

  21. [21] Sälzer, M.; Köcher, C.; Kozachinskiy, A.; Zetzsche, G.; and Lin, A. W. 2026. The counting power of transformers. In The Fourteenth International Conference on Learning Representations

  22. [22] Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph attention networks

  23. [23] Wu, S.; Sun, F.; Zhang, W.; Xie, X.; and Cui, B. 2022. Graph neural networks in recommender systems: A survey. ACM Comput. Surv. 55(5)

  24. [24] Yang, A.; Chiang, D.; and Angluin, D. 2024. Masked hard-attention transformers recognize exactly the star-free languages. In Globersons, A.; Mackey, L.; Belgrave, D.; Fan, A.; Paquet, U.; Tomczak, J. M.; and Zhang, C., eds., Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024,...

  25. [25] Zhang, M.; Li, P.; Xia, Y.; Wang, K.; and Jin, L. 2021. Labeling trick: A theory of using graph neural networks for multi-node representation learning. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y. N.; Liang, P.; and Vaughan, J. W., eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, ...

  26. [26] Zhang, J.; Liu, C.; and Jiang, X. 2024. Polynomial kernel learning for interpolation kernel machines with application to graph classification. Pattern Recognit. Lett. 186:7--13

  27. [27] Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; and Sun, M. 2020. Graph neural networks: A review of methods and applications. AI Open 1:57--81