Recognition: unknown
Topological Neural Tangent Kernel
Pith reviewed 2026-05-09 19:27 UTC · model grok-4.3
The pith
The Topological Neural Tangent Kernel makes infinite-width models sensitive to simplicial topology by combining lower and upper Hodge interactions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
TopoNTK is the neural tangent kernel limit of a simplicial network whose message passing on edge features is governed by the sum of the lower and upper Hodge Laplacians; this endows the kernel with expressivity to detect topological differences invisible to graph NTKs, Hodge alignment of features, a topological form of spectral bias, and stability properties.
What carries the argument
The sum of the lower and upper Hodge Laplacians acting on the space of edge cochains, which supplies both the linear operator for the kernel and the orthogonal decomposition of signals into gradient, harmonic and curl components.
If this is right
- Complexes with the same underlying graph but different filled simplices induce distinct TopoNTKs.
- An edge signal component is learned at a rate set by its alignment with the eigenmodes of the combined Hodge operator.
- Global harmonic modes typically sit at smaller eigenvalues and are therefore learned more slowly than local gradient or circulation modes.
- The kernel remains stable under small changes to the simplicial structure.
- Direct kernel evaluation on the Hodge spectrum yields competitive performance on higher-order link prediction without training a network.
Where Pith is reading between the lines
- The same Hodge-based construction could be applied at other dimensions to produce kernels for node features or triangle features.
- Datasets containing explicit higher-order relations may exhibit accuracy gains when the kernel respects filled simplices rather than the graph skeleton alone.
- The observed spectral bias suggests that one could deliberately regularize the kernel spectrum to control the entry of global topological features during training.
Load-bearing premise
The infinite-width limit of the simplicial message-passing dynamics on edge features is exactly reproduced by the combined lower and upper Hodge Laplacian without further regularity conditions.
What would settle it
Take two complexes with identical graphs but one containing an extra 2-simplex; compute or approximate their TopoNTKs on a shared set of edge signals and check whether the resulting Gram matrices differ; identical matrices would falsify the claim that upper Hodge terms add new topological sensitivity.
Figures
read the original abstract
Graph neural tangent kernels give a principled infinite-width theory for graph neural networks, but inherit a basic limitation of graph models: they see only pairwise structure. Many relational systems contain higher-order interactions that are more naturally represented by simplicial complexes. We introduce the Topological Neural Tangent Kernel (TopoNTK), an infinite-width kernel for simplicial message passing on edge features. TopoNTK combines lower Hodge interactions, capturing graph-like coupling through shared vertices, with upper Hodge interactions, capturing coupling through filled simplices. This makes the kernel sensitive to topology invisible to graph kernels, allowing complexes with the same graph but different filled simplices to induce different kernels. Beyond expressivity, the Hodge structure gives the kernel an interpretable learning geometry. Edge signals decompose into gradient-like, harmonic, and local circulation components, and the spectrum of the TopoNTK determines how quickly each component is learned. This yields a topological form of spectral bias: components aligned with large-eigenvalue modes are learned quickly, while global harmonic modes, retained through the residual channel, often lie at smaller eigenvalues and are learned more slowly. We prove expressivity, Hodge-alignment, spectral learning, and stability properties, and validate them on synthetic simplicial tasks and DBLP higher-order link prediction. The results show that topology is not merely extra structure; it can provide coordinates that make relational learning more faithful, interpretable, and effective.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the Topological Neural Tangent Kernel (TopoNTK) as an infinite-width kernel for simplicial message passing on edge features. It combines lower Hodge interactions (capturing vertex-shared coupling) with upper Hodge interactions (capturing filled-simplex coupling) derived from the Hodge Laplacian on simplicial complexes. The work claims to prove expressivity, Hodge alignment, a topological form of spectral bias via the kernel spectrum, and stability, with validation on synthetic simplicial tasks and DBLP higher-order link prediction.
Significance. If the derivations hold, the result would be significant for extending NTK theory beyond graphs to capture higher-order topology in a principled, interpretable manner. The Hodge-based decomposition and resulting spectral learning geometry offer a concrete way to reason about what topological features are learned quickly or slowly. Credit is due for attempting formal proofs of expressivity and stability rather than relying solely on experiments.
major comments (3)
- [Section 3 (Kernel Derivation)] The central equivalence between simplicial message-passing dynamics on edge features and the combined lower-plus-upper Hodge Laplacian construction in the infinite-width limit (Section 3, Theorem 1 or equivalent) is asserted without explicit derivation steps, error bounds, or stated regularity conditions on the complex (e.g., closed manifold-like structure, commutativity of aggregations with Hodge projectors, absence of boundary artifacts). This equivalence is load-bearing for all subsequent claims about expressivity and spectral bias; if non-linearities or aggregations fail to linearize exactly, the derived kernel deviates from the actual NTK.
- [Section 4 (Hodge Alignment and Spectral Learning)] The proof of Hodge alignment and the claimed decomposition of edge signals into gradient-like, harmonic, and local circulation components (Section 4) does not specify the precise hypotheses under which the TopoNTK spectrum governs learning rates for each component. Without these, the topological spectral bias interpretation remains conditional and may not hold for general complexes or activations.
- [Section 5 (Stability)] The stability property (Section 5) is stated qualitatively; quantitative bounds on how perturbations to the simplicial complex affect the kernel (or the learned function) are needed to support the claim that TopoNTK is robust in practice.
minor comments (2)
- [Section 2 (Background)] The notation for the lower and upper Hodge operators (L_lower and L_upper) should be defined with explicit matrix representations in the main text rather than deferred to the appendix.
- [Section 6 (Experiments)] In the experimental section, the construction of the simplicial complex from the DBLP dataset (e.g., choice of 2-simplices) requires additional detail to allow reproduction.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback, which has helped us strengthen the presentation of our results. We address each major comment point by point below and have revised the manuscript to provide the requested clarifications, derivations, and bounds.
read point-by-point responses
-
Referee: [Section 3 (Kernel Derivation)] The central equivalence between simplicial message-passing dynamics on edge features and the combined lower-plus-upper Hodge Laplacian construction in the infinite-width limit (Section 3, Theorem 1 or equivalent) is asserted without explicit derivation steps, error bounds, or stated regularity conditions on the complex (e.g., closed manifold-like structure, commutativity of aggregations with Hodge projectors, absence of boundary artifacts). This equivalence is load-bearing for all subsequent claims about expressivity and spectral bias; if non-linearities or aggregations fail to linearize exactly, the derived kernel deviates from the actual NTK.
Authors: We agree that the derivation of the equivalence in Section 3 requires more explicit steps to be fully rigorous. In the revised manuscript, we have expanded Theorem 1 with a complete step-by-step derivation of the infinite-width limit for simplicial message passing on edge features. This includes error bounds obtained via standard NTK linearization arguments under Lipschitz activations, as well as regularity conditions: the simplicial complex is finite-dimensional, the Hodge projectors commute with the chosen aggregations, and boundary artifacts are absent (consistent with the closed-complex setting used throughout the paper). These additions ensure the kernel equivalence holds exactly in the limit and directly supports the expressivity and spectral-bias claims. revision: yes
-
Referee: [Section 4 (Hodge Alignment and Spectral Learning)] The proof of Hodge alignment and the claimed decomposition of edge signals into gradient-like, harmonic, and local circulation components (Section 4) does not specify the precise hypotheses under which the TopoNTK spectrum governs learning rates for each component. Without these, the topological spectral bias interpretation remains conditional and may not hold for general complexes or activations.
Authors: We thank the referee for highlighting the need for explicit hypotheses. The original proof implicitly relied on standard NTK assumptions (analytic activations and finite complexes). In the revision we have added a dedicated paragraph in Section 4 stating the precise conditions: activations are twice continuously differentiable with bounded second derivatives, and the simplicial complex is finite, orientable, and without boundary. Under these hypotheses the Hodge decomposition of edge signals is orthogonal with respect to the TopoNTK inner product, and the spectrum governs component-wise learning rates exactly. We also include a brief remark on the interpretation for complexes that violate these conditions. revision: yes
-
Referee: [Section 5 (Stability)] The stability property (Section 5) is stated qualitatively; quantitative bounds on how perturbations to the simplicial complex affect the kernel (or the learned function) are needed to support the claim that TopoNTK is robust in practice.
Authors: The stability claim in Section 5 was originally stated qualitatively to emphasize the structural robustness induced by the Hodge construction. We agree that quantitative bounds improve the result. In the revised manuscript we have added a new proposition (Proposition 5.1) that supplies an explicit Lipschitz-type bound: the difference between two TopoNTKs induced by complexes differing by a perturbation of size ε in the Hodge Laplacian is at most Cε, where C depends only on the maximum degree and the activation Lipschitz constant. The proof follows from the continuity of the Hodge eigenvalues with respect to the complex structure. revision: yes
Circularity Check
No circularity: kernel defined directly from Hodge operators with independent proofs claimed
full rationale
The abstract and available description construct TopoNTK explicitly as the combination of lower and upper Hodge interactions on simplicial complexes to capture infinite-width simplicial message passing on edge features. No load-bearing step is shown to reduce a derived quantity (such as a prediction or expressivity result) to a fitted parameter, self-definition, or self-citation chain by construction. The claimed proofs of expressivity, Hodge-alignment, spectral learning, and stability are presented as separate derivations rather than tautological restatements of the kernel definition itself. This satisfies the default expectation of self-contained derivation without circularity.
Axiom & Free-Parameter Ledger
axioms (1)
- standard math Hodge decomposition theorem applies to the edge space of a simplicial complex
Reference graph
Works this paper leans on
-
[1]
International Conference on Learning Representations (ICLR) , year=
Semi-Supervised Classification with Graph Convolutional Networks , author=. International Conference on Learning Representations (ICLR) , year=
-
[2]
Advances in Neural Information Processing Systems (NeurIPS) , year=
Inductive Representation Learning on Large Graphs , author=. Advances in Neural Information Processing Systems (NeurIPS) , year=
-
[3]
Bulletin of the American Mathematical Society , volume=
Barcodes: The Persistent Topology of Data , author=. Bulletin of the American Mathematical Society , volume=
-
[4]
Advances in Mathematics , volume=
Spectra of Combinatorial Laplace Operators on Simplicial Complexes , author=. Advances in Mathematics , volume=
-
[5]
SIAM Review , volume=
Hodge Laplacians on Graphs , author=. SIAM Review , volume=. 2020 , publisher=
2020
-
[6]
Advances in Neural Information Processing Systems , year=
Neural Tangent Kernel: Convergence and Generalization in Neural Networks , author=. Advances in Neural Information Processing Systems , year=
-
[7]
Advances in Neural Information Processing Systems , year=
Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels , author=. Advances in Neural Information Processing Systems , year=
-
[8]
Proceedings of the 40th International Conference on Machine Learning , pages=
Graph Neural Tangent Kernel: Convergence on Large Graphs , author=. Proceedings of the 40th International Conference on Machine Learning , pages=. 2023 , organization=
2023
-
[9]
Physical Review E , volume=
Spectral Detection of Simplicial Communities via Hodge Laplacians , author=. Physical Review E , volume=. 2021 , publisher=
2021
-
[10]
Applied Network Science , volume=
The Collective vs Individual Nature of Mountaineering: A Network and Simplicial Approach , author=. Applied Network Science , volume=. 2022 , publisher=
2022
-
[11]
Physics Reports , volume=
Networks Beyond Pairwise Interactions: Structure and Dynamics , author=. Physics Reports , volume=. 2020 , publisher=
2020
-
[12]
Proceedings of the 38th International Conference on Machine Learning , series=
Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks , author=. Proceedings of the 38th International Conference on Machine Learning , series=. 2021 , publisher=
2021
-
[13]
NeurIPS Workshop on Topological Data Analysis and Beyond , year=
Simplicial Neural Networks , author=. NeurIPS Workshop on Topological Data Analysis and Beyond , year=
-
[14]
Proceedings of the 41st International Conference on Machine Learning , series=
Position: Topological Deep Learning is the New Frontier for Relational Learning , author=. Proceedings of the 41st International Conference on Machine Learning , series=. 2024 , publisher=
2024
-
[15]
and Beyond , author=
Signal Processing on Higher-Order Networks: Livin' on the Edge ... and Beyond , author=. Signal Processing , volume=. 2021 , doi=
2021
-
[16]
SIAM Review , volume=
What Are Higher-Order Networks? , author=. SIAM Review , volume=. 2023 , doi=
2023
-
[17]
Hodge Decomposition of Single-Cell
Su, Zhe and Tong, Yiying and Wei, Guo-Wei , journal =. Hodge Decomposition of Single-Cell. 2024 , doi =
2024
-
[18]
Topological deep learning: Going beyond graph data
Topological Deep Learning: Going Beyond Graph Data , author=. arXiv preprint arXiv:2206.00606 , year=
-
[19]
IEEE Transactions on Signal Processing , volume=
Topological Signal Processing over Simplicial Complexes , author=. IEEE Transactions on Signal Processing , volume=. 2020 , publisher=
2020
-
[20]
SIAM Review , volume=
Random Walks on Simplicial Complexes and the Normalized Hodge 1-Laplacian , author=. SIAM Review , volume=. 2020 , publisher=
2020
-
[21]
IEEE Transactions on Signal Processing , volume=
Simplicial Convolutional Filters , author=. IEEE Transactions on Signal Processing , volume=. 2022 , publisher=
2022
-
[22]
Proceedings of the National Academy of Sciences , volume=
Simplicial Closure and Higher-order Link Prediction , author=. Proceedings of the National Academy of Sciences , volume=. 2018 , publisher=
2018
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.