pith. machine review for the scientific record.

arxiv: 2604.21094 · v1 · submitted 2026-04-22 · 💻 cs.LG

Recognition: unknown

Spectral Embeddings Leak Graph Topology: Theory, Benchmark, and Adaptive Reconstruction

Thang N. Dinh, Thinh Nguyen-Cong, Truong-Son Hy


Pith reviewed 2026-05-10 00:20 UTC · model grok-4.3

classification 💻 cs.LG
keywords spectral embeddings · graph reconstruction · federated graph learning · differential privacy · topology leakage · adaptive stitching · benchmark fragmentation

The pith

Spectral embeddings of graph fragments leak the original graph topology and enable its reconstruction.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that spectral embeddings shared from localized graph fragments can reveal the underlying topology, even when the data are fragmented, noisy, and privacy-protected. It introduces LoGraB, which builds realistic benchmarks by splitting graphs under neighborhood-radius, spectral-quality, noise, and coverage controls. The Adaptive Fidelity-driven Reconstruction method scores fragment quality and stitches fragments adaptively to recover large, coherent parts of the graph. Theoretical results show that under spectral-gap and separation conditions, edges can be recovered via heat kernels and Bayesian methods can exploit the leakage. This matters because it highlights privacy risks in distributed graph learning and provides a practical way to reconstruct graphs from partial embeddings.

Core claim

The authors claim that spectral embeddings leak graph topology. Under a spectral-gap assumption, polynomial-time Bayesian recovery is feasible once enough eigenvectors are shared. They prove heat-kernel edge recovery under a separation condition, along with Davis-Kahan perturbation stability and bounded alignment error. The AFR method, which scores patch quality via a fidelity measure and assembles fragments using RANSAC-Procrustes alignment and bundle adjustment, achieves the best F1 on 7 of 9 datasets and retains 75% of its undefended F1 at ε=2 under Gaussian differential privacy.
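The heat-kernel recovery step can be sketched in a few lines. This is an illustrative reconstruction, not the paper's algorithm: the truncation point, diffusion time `t`, and the mean-threshold rule `tau` are all assumptions.

```python
import numpy as np

def heat_kernel_edges(L, k, t=0.5, tau=None):
    # Sketch of heat-kernel edge recovery: build the truncated heat kernel
    # H_k = U_k exp(-t Lambda_k) U_k^T from the k smoothest eigenpairs of
    # the Laplacian, then keep off-diagonal entries above a threshold tau.
    # The threshold rule here (mean off-diagonal heat) is hypothetical.
    w, U = np.linalg.eigh(L)                 # ascending Laplacian spectrum
    Uk, wk = U[:, :k], w[:k]
    H = Uk @ np.diag(np.exp(-t * wk)) @ Uk.T
    n = L.shape[0]
    if tau is None:
        tau = H[~np.eye(n, dtype=bool)].mean()
    return ((H > tau) & ~np.eye(n, dtype=bool)).astype(int)

# 4-cycle 0-1-2-3-0: with the full spectrum, heat values for edges and
# non-edges are well separated, so thresholding recovers the adjacency.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A
A_hat = heat_kernel_edges(L, k=4)
```

On this toy graph the separation between the heat on edges (distance 1) and non-edges (distance 2) plays the role of the paper's separation condition.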

What carries the argument

Adaptive Fidelity-driven Reconstruction (AFR), which uses a fidelity score combining a gap-to-truncation stability ratio with structural entropy to guide the alignment and stitching of spectral fragments into faithful graph islands.
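The fidelity measure itself is not spelled out in the review, so the following is a hypothetical stand-in that combines the two named ingredients; the specific stability ratio, entropy normalization, and the mixing weight `alpha` are assumptions, not the paper's formula.

```python
import numpy as np

def fidelity_score(eigvals, k=None, alpha=0.5, eps=1e-12):
    # Hypothetical fidelity score for a spectral patch:
    #   stability = relative eigengap at the truncation point k
    #   entropy   = normalized Shannon entropy of the retained spectrum
    # Both terms are illustrative stand-ins for AFR's gap-to-truncation
    # stability ratio and structural entropy.
    lam = np.sort(np.asarray(eigvals, float))
    k = k if k is not None else len(lam) // 2     # assumed truncation point
    stability = (lam[k] - lam[k - 1]) / (lam[k] + eps)
    p = lam[:k][lam[:k] > eps]
    if p.size == 0:
        entropy = 0.0
    else:
        p = p / p.sum()
        entropy = -(p * np.log(p)).sum() / np.log(max(p.size, 2))
    return alpha * stability + (1.0 - alpha) * (1.0 - entropy)

# A spectrum with a clean eigengap at the cut scores higher than a flat one.
s_gap = fidelity_score([0.0, 0.1, 0.2, 2.0, 2.1, 2.2])
s_flat = fidelity_score([0.0, 1.0, 1.0, 1.0, 1.0, 1.0])
```

The point of the sketch is the interface: a scalar per patch, higher for patches whose truncated spectrum is stable, which AFR can then use to prioritize stitching.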

If this is right

  • Heat-kernel edges can be recovered whenever node distances satisfy the separation condition.
  • Davis-Kahan bounds ensure stability of the embeddings under perturbation.
  • LoGraB benchmarks show how fragmentation affects model performance in distributed settings.
  • AFR maintains good reconstruction quality even with added differential privacy noise at moderate epsilon levels.
  • Bayesian recovery becomes polynomial-time feasible with shared eigenvectors under the gap assumption.
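The differential-privacy setting in the bullets above can be made concrete with a per-embedding Gaussian mechanism. This is a sketch under stated assumptions: it uses the classic calibration σ = Δ·√(2 ln(1.25/δ))/ε, which is only tight for ε ≤ 1, so at ε = 2 it is an approximation; the paper may use a sharper analytic calibration.

```python
import numpy as np

def privatize_embedding(P, eps, delta, sens=1.0, seed=None):
    # Per-embedding Gaussian mechanism with the classic noise calibration
    # sigma = sens * sqrt(2 ln(1.25/delta)) / eps. Treat the eps = 2 case
    # as a heuristic: the classic bound is stated for eps <= 1.
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    rng = np.random.default_rng(seed)
    return P + rng.normal(0.0, sigma, size=P.shape), sigma

P = np.zeros((5, 3))                              # toy 5-node spectral patch
P2, sigma2 = privatize_embedding(P, eps=2.0, delta=1e-5, seed=0)
P1, sigma1 = privatize_embedding(P, eps=1.0, delta=1e-5, seed=0)
```

Halving ε doubles σ, which is why graceful degradation as ε shrinks (the property AFR is credited with) is the relevant axis to measure.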

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • In federated settings, sharing spectral embeddings may inadvertently disclose more topology than intended, suggesting need for additional safeguards.
  • The reconstruction approach might extend to other embedding types like node2vec if similar leakage properties hold.
  • Testing AFR on real-world distributed systems with actual fragmentation could validate its practicality beyond synthetic controls.

Load-bearing premise

The central proofs rely on the graph having a large enough spectral gap and nodes satisfying a separation condition.
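Checking that premise on a concrete instance is cheap: the spectral gap in question is the algebraic connectivity, the second-smallest eigenvalue of the graph Laplacian. A minimal sketch (dense eigensolver; fine for small fragments):

```python
import numpy as np

def spectral_gap(A):
    # Algebraic connectivity: second-smallest eigenvalue of the
    # combinatorial Laplacian L = D - A. The paper's guarantees assume
    # this stays bounded away from zero on each fragment.
    A = np.asarray(A, float)
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)[1]

K4 = np.ones((4, 4)) - np.eye(4)                  # complete graph: gap = 4
P4 = np.diag(np.ones(3), 1); P4 = P4 + P4.T       # path graph: much smaller gap
```

A densely connected fragment like K4 has a large gap; a path barely clears zero, which is exactly where the guarantees weaken.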

What would settle it

A counterexample graph with a vanishing spectral gap where the AFR method or the Bayesian recovery fails to outperform random guessing would falsify the leakage claim.
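Path graphs are a natural family for such a test: their gap 2(1 − cos(π/n)) ≈ π²/n² vanishes as n grows. The sketch below only generates the family and confirms the vanishing gap; the paper does not run this falsification experiment, so whether AFR or the Bayesian recovery degrades to chance on it is an open question.

```python
import numpy as np

def path_gap(n):
    # Algebraic connectivity of the path graph P_n, known in closed form
    # as 2(1 - cos(pi/n)); it vanishes as n grows, i.e. exactly the
    # regime outside the spectral-gap assumption.
    A = np.zeros((n, n))
    i = np.arange(n - 1)
    A[i, i + 1] = A[i + 1, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)[1]

gaps = [path_gap(n) for n in (4, 16, 64)]         # monotonically shrinking
```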

Figures

Figures reproduced from arXiv: 2604.21094 by Thang N. Dinh, Thinh Nguyen-Cong, Truong-Son Hy.

Figure 1. Fragmented graph scenario in LoGraB. Each of the four coloured panels is a local fragment observed by a different client: a cycle, a star, a path, and a complete graph K4. Solid edges are observed intra-fragment links; dashed curves are the unobserved inter-fragment links that Task 3 (inter-fragment link prediction) must recover.

Figure 2. Adaptive Fidelity-driven Reconstruction (AFR). The pipeline reconstructs graphs in three stages: local reconstruction with fidelity scoring, adaptive island assembly using RANSAC-Procrustes, and global refinement via Bundle Adjustment and inter-island link inference.

Figure 3. Relative F1-score performance (normalized by the ε = ∞ score) as DP noise increases (ε decreases) across nine benchmarks. AFR's fidelity-aware design consistently yields the most graceful degradation.

Figure 4. Impact of patch radius (d) and coverage (p) on node classification across GNN models on Cora. Increasing either parameter provides more local context and consistently improves F1-Micro scores.

Figure 5. Performance comparison across d-hop, cluster, and random fragmentation strategies on CiteSeer. The d-hop strategy yields the highest node classification performance, but the hierarchy inverts for link prediction.

Figure 6. Sensitivity analysis for the fidelity score parameter α: F1 score on the Cora validation set as a function of α. The vertical dashed line indicates the chosen value and the shaded region highlights the robust range.

Figure 7. Sensitivity analysis for additional hyperparameters on the Cora validation set. (Left) Impact of s_min. (Right) Impact of k_base. Both exhibit convex performance curves with clear robust operating ranges (shaded areas).
read the original abstract

Graph Neural Networks (GNNs) excel on relational data, but standard benchmarks unrealistically assume the graph is centrally available. In practice, settings such as Federated Graph Learning, distributed systems, and privacy-sensitive applications involve graph data that are localized, fragmented, noisy, and privacy-leaking. We present a unified framework for this setting. We introduce LoGraB (Local Graph Benchmark), which decomposes standard datasets into fragmented benchmarks using three strategies and four controls: neighborhood radius $d$, spectral quality $k$, noise level $\sigma$, and coverage ratio $p$. LoGraB supports graph reconstruction, localized node classification, and inter-fragment link prediction, with Island Cohesion. We propose AFR (Adaptive Fidelity-driven Reconstruction), a method for noisy spectral fragments. AFR scores patch quality via a fidelity measure combining a gap-to-truncation stability ratio and structural entropy, then assembles fragments using RANSAC-Procrustes alignment, adaptive stitching, and Bundle Adjustment. Rather than forcing a single global graph, AFR recovers large faithful islands. We prove heat-kernel edge recovery under a separation condition, Davis--Kahan perturbation stability, and bounded alignment error. We establish a Spectral Leakage Proposition: under a spectral-gap assumption, polynomial-time Bayesian recovery is feasible once enough eigenvectors are shared, complementing AFR's deterministic guarantees. Experiments on nine benchmarks show that LoGraB reveals model strengths and weaknesses under fragmentation, AFR achieves the best F1 on 7/9 datasets, and under per-embedding $(\epsilon,\delta)$-Gaussian differential privacy, AFR retains 75% of its undefended F1 at $\epsilon=2$. Our anonymous code is available at https://anonymous.4open.science/r/JMLR_submission

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces LoGraB, a benchmark that fragments standard graph datasets into localized, noisy components via three strategies controlled by neighborhood radius d, spectral quality k, noise level σ, and coverage ratio p. It proposes the AFR method, which scores fragment quality using a fidelity measure (gap-to-truncation stability ratio plus structural entropy), performs RANSAC-Procrustes alignment, adaptive stitching, and bundle adjustment to recover large faithful islands rather than a single global graph. Theoretical results include a proof of heat-kernel edge recovery under a separation condition together with Davis-Kahan perturbation stability and bounded alignment error, plus a Spectral Leakage Proposition asserting that a spectral-gap assumption permits polynomial-time Bayesian recovery from shared eigenvectors. Experiments on nine benchmarks report that AFR obtains the highest F1 on 7/9 datasets and retains 75% of its undefended F1 score at ε=2 under per-embedding (ε,δ)-Gaussian differential privacy.

Significance. If the central claims hold, the work is significant for federated graph learning and privacy-sensitive applications because it supplies both a controllable benchmark for realistic fragmentation and a reconstruction method with explicit deterministic and Bayesian guarantees. The reproducible anonymous code link and the concrete DP retention result (75% F1 at ε=2) are concrete strengths that facilitate follow-up work. The combination of a new benchmark, alignment-based stitching, and spectral-leakage theory could influence how the community evaluates GNNs under distributed constraints.

major comments (2)
  1. [Abstract and theoretical sections] Abstract / Theoretical claims: The heat-kernel edge recovery proof and the Spectral Leakage Proposition both rest on a separation condition and a spectral-gap assumption. LoGraB fragments are generated with explicit controls on d, k, σ, and p that can shrink the gap or violate separation; the manuscript does not report verifying that these conditions hold on the nine experimental instances. Because the proofs are positioned as complementing AFR's deterministic stitching, the absence of this verification makes it unclear whether the theoretical guarantees apply to the reported F1 scores.
  2. [Experimental evaluation] Experiments (abstract): The claim that AFR achieves the best F1 on 7/9 datasets and retains 75% undefended F1 at ε=2 is load-bearing for the practical contribution, yet no details are given on run-to-run variance, statistical significance testing, or whether the fidelity measure inside AFR was recomputed after the Gaussian noise was added. Without these controls it is difficult to assess whether the superiority is robust or an artifact of the particular fragmentation parameters chosen post-hoc.
minor comments (2)
  1. [Benchmark description] The term 'Island Cohesion' is introduced in the abstract without a formal definition or formula; a short paragraph or equation in the benchmark section would clarify how it is computed and used to evaluate recovered islands.
  2. [Method description] The abstract states that AFR 'rather than forcing a single global graph, recovers large faithful islands,' but does not specify the quantitative criterion used to decide when an island is 'faithful' versus when further stitching is attempted.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the constructive review. We address each major comment below and will revise the manuscript accordingly to improve clarity and rigor.

read point-by-point responses
  1. Referee: [Abstract and theoretical sections] Abstract / Theoretical claims: The heat-kernel edge recovery proof and the Spectral Leakage Proposition both rest on a separation condition and a spectral-gap assumption. LoGraB fragments are generated with explicit controls on d, k, σ, and p that can shrink the gap or violate separation; the manuscript does not report verifying that these conditions hold on the nine experimental instances. Because the proofs are positioned as complementing AFR's deterministic stitching, the absence of this verification makes it unclear whether the theoretical guarantees apply to the reported F1 scores.

    Authors: We agree that explicit verification of the separation condition and spectral-gap assumption is necessary for the theoretical results to directly support the empirical claims. The proofs are stated under these assumptions, and while the fragmentation controls in LoGraB allow parameters that could violate them, the reported experiments used values intended to preserve a sufficient gap and separation. In the revised manuscript we will add a dedicated subsection (and supplementary table) that computes the spectral gap for each of the nine datasets under the chosen (d, k, σ, p) settings and verifies the separation condition holds for the fragments used in the F1 evaluation. This will make the link between theory and reported results explicit. revision: yes

  2. Referee: [Experimental evaluation] Experiments (abstract): The claim that AFR achieves the best F1 on 7/9 datasets and retains 75% undefended F1 at ε=2 is load-bearing for the practical contribution, yet no details are given on run-to-run variance, statistical significance testing, or whether the fidelity measure inside AFR was recomputed after the Gaussian noise was added. Without these controls it is difficult to assess whether the superiority is robust or an artifact of the particular fragmentation parameters chosen post-hoc.

    Authors: We acknowledge the need for statistical controls. AFR's RANSAC-Procrustes step is stochastic, so the revised experimental section will report mean F1 and standard deviation over 10 independent runs with different random seeds, together with pairwise statistical significance tests (e.g., paired t-tests with Bonferroni correction) against the strongest baseline on each dataset. Regarding the fidelity measure under differential privacy: the fidelity score is an integral part of AFR and is recomputed on the noisy spectral patches after Gaussian noise is added; we will add an explicit statement and a short algorithmic note clarifying this recomputation. Finally, the fragmentation parameters follow the fixed LoGraB protocol described in Section 4; we will include a sensitivity plot over a small grid of (d, k, σ, p) values to demonstrate that the reported superiority is not an artifact of a single post-hoc choice. revision: yes
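The statistical protocol the rebuttal commits to is standard and easy to pin down in code. A minimal sketch with scipy; the per-seed score arrays below are synthetic placeholders (the paper reports only point estimates), and the baseline-selection step is assumed to have happened upstream.

```python
import numpy as np
from scipy import stats

def paired_tests_bonferroni(afr, base, alpha=0.05):
    # Paired t-test of AFR vs. the strongest baseline on each dataset,
    # with a Bonferroni-corrected significance threshold alpha / m,
    # where m is the number of datasets (comparisons).
    m = len(afr)
    results = {}
    for name in afr:
        t, p = stats.ttest_rel(afr[name], base[name])
        results[name] = (t, p, bool(p < alpha / m))
    return results

# Hypothetical per-seed F1 scores over 10 runs on three datasets.
rng = np.random.default_rng(0)
afr = {f"ds{i}": rng.normal(0.80, 0.01, size=10) for i in range(3)}
base = {f"ds{i}": rng.normal(0.70, 0.01, size=10) for i in range(3)}
results = paired_tests_bonferroni(afr, base)
```

Pairing by seed matters here because AFR's RANSAC step makes runs stochastic; pairing removes the shared seed-level variance before testing.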

Circularity Check

0 steps flagged

No circularity: proofs rest on explicit assumptions; experiments independent of theory

full rationale

The paper introduces LoGraB fragmentation controls and AFR reconstruction with fidelity scoring, RANSAC-Procrustes alignment, and Bundle Adjustment. Theoretical results are stated as proofs of heat-kernel edge recovery under a separation condition plus Davis-Kahan and bounded alignment error, and a Spectral Leakage Proposition under a spectral-gap assumption enabling Bayesian recovery. These are presented as conditional derivations rather than reductions of outputs to inputs by construction. Experiments report F1 scores on nine datasets and DP retention at ε=2 as empirical tests, with no renaming of fitted quantities as predictions and no load-bearing self-citations invoked to justify uniqueness or ansatz choices. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

4 free parameters · 3 axioms · 3 invented entities

The framework depends on standard perturbation theorems and new control parameters plus assumptions for the recovery guarantees; new entities are introduced for the benchmark and method.

free parameters (4)
  • neighborhood radius d
    Controls local fragment size in LoGraB decomposition
  • spectral quality k
    Number of eigenvectors retained per fragment
  • noise level σ
    Gaussian noise added to spectral embeddings
  • coverage ratio p
    Fraction of graph covered by each fragment
axioms (3)
  • domain assumption heat-kernel edge recovery under a separation condition
    Invoked for proving edge recovery in AFR
  • standard math Davis--Kahan perturbation stability
    Used to bound stability of spectral embeddings under noise
  • domain assumption spectral-gap assumption
    Required for the Spectral Leakage Proposition on Bayesian recovery
invented entities (3)
  • LoGraB benchmark no independent evidence
    purpose: Decomposes datasets into fragmented noisy spectral benchmarks
    New benchmark with three strategies and four controls
  • AFR method no independent evidence
    purpose: Adaptive reconstruction via fidelity scoring and alignment
    Core proposed algorithm with RANSAC-Procrustes and Bundle Adjustment
  • Island Cohesion no independent evidence
    purpose: Metric supporting localized reconstruction evaluation
    New evaluation concept for faithful islands
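Of the four free parameters, the neighborhood radius d is the most mechanical, and sketching it clarifies what a "fragment" is. Illustrative only: this models just the d-hop strategy around chosen centers, not LoGraB's other two strategies or the k, σ, p controls.

```python
import numpy as np

def dhop_fragments(A, centers, d):
    # Extract a d-hop fragment around each center node, mimicking LoGraB's
    # neighborhood-radius control. Each fragment is the induced subgraph on
    # all nodes within d hops of the center.
    A = np.asarray(A, dtype=bool)
    n = A.shape[0]
    frags = []
    for c in centers:
        reach = np.zeros(n, dtype=bool)
        reach[c] = True
        for _ in range(d):                    # one BFS round per hop
            reach |= A[reach].any(axis=0)
        nodes = np.flatnonzero(reach)
        frags.append((nodes, A[np.ix_(nodes, nodes)].astype(int)))
    return frags

# 6-cycle: a 1-hop fragment around any node contains 3 nodes and 2 edges.
A = np.zeros((6, 6), dtype=int)
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
frags = dhop_fragments(A, centers=[0, 3], d=1)
```

Growing d enlarges fragments and, per Figure 4's reported trend, gives downstream GNNs more local context to work with.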

pith-pipeline@v0.9.0 · 5625 in / 1706 out tokens · 46137 ms · 2026-05-10T00:20:36.630043+00:00 · methodology

