pith. machine review for the scientific record.

arxiv: 2605.06260 · v1 · submitted 2026-05-07 · 💻 cs.LG

Recognition: unknown

Beyond Rigid Alignment: Graph Federated Learning via Dual Manifold Calibration

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 13:12 UTC · model grok-4.3

classification 💻 cs.LG
keywords graph federated learning · manifold calibration · semantic heterogeneity · structural heterogeneity · local personalization · heterophilic graphs · equidistant anchors

The pith

FedGMC replaces rigid alignment in graph federated learning with dual manifold calibration to keep both global commonalities and local personalization intact.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that standard methods for graph federated learning force rigid alignment of model parameters or prototypes across clients and server, which rests on an implicit global linearity assumption and therefore squeezes each client's own representation space. To remove that restriction, it introduces Federated Graph Manifold Calibration, which separates semantic heterogeneity from structural heterogeneity and handles each with its own global manifold. The server builds a geometrically optimal semantic manifold from equidistant anchors and a global structural manifold from templates; these manifolds then steer the calibration of each client's local manifolds. Local manifolds are aggregated back to refine the global ones over rounds. Experiments across eleven homophilic and heterophilic graphs show the resulting balance yields higher accuracy than prior rigid-alignment baselines.

Core claim

Instead of enforcing rigid alignment of parameters or prototypes, FedGMC constructs a geometrically optimal semantic manifold via equidistant semantic anchors to guide local semantic calibration and a global structural manifold via structural templates to guide local structural calibration; the server then dynamically refines both global manifolds by aggregating the calibrated local manifolds, thereby preserving diverse local graph distributions while maintaining global commonalities.
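The phrase "geometrically optimal semantic manifold via equidistant semantic anchors" points toward a simplex equiangular tight frame, a standard construction for class anchors with equal, maximal pairwise separation. A minimal sketch of what such anchors could look like (an illustration of the concept, not the paper's actual construction):

```python
import numpy as np

def simplex_etf_anchors(num_classes: int) -> np.ndarray:
    """Return `num_classes` unit vectors whose pairwise distances are all
    equal (a simplex equiangular tight frame), one standard way to realize
    'equidistant semantic anchors'. Row i is the anchor for class i,
    living in R^{num_classes}."""
    C = num_classes
    return np.sqrt(C / (C - 1)) * (np.eye(C) - np.ones((C, C)) / C)

anchors = simplex_etf_anchors(5)
# every off-diagonal pairwise distance equals sqrt(2C / (C - 1))
pairwise = np.linalg.norm(anchors[:, None, :] - anchors[None, :, :], axis=-1)
```

For C classes the common pairwise distance is sqrt(2C/(C-1)) and the pairwise cosine similarity is -1/(C-1), the most spread-out configuration that C unit vectors can achieve.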

What carries the argument

dual manifold calibration mechanism that uses equidistant semantic anchors for semantic heterogeneity and global structural templates for structural heterogeneity to steer local manifold adjustments

If this is right

  • Clients retain distinct local graph distributions instead of having them compressed into one shared linear space.
  • Semantic and structural heterogeneity are handled uniformly through manifold guidance rather than separate ad-hoc fixes.
  • Dynamic aggregation of local manifolds continuously updates the global templates and anchors across communication rounds.
  • The approach applies equally to homophilic graphs, where neighbors share labels, and heterophilic graphs, where they do not.
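The dynamic aggregation in the third bullet could be sketched as a server-side update. This is a sketch under assumptions: the weighted-mean rule, the client weights, and the unit-norm re-projection are all illustrative, since the paper's exact refinement rule is not given above.

```python
import numpy as np

def refine_global_anchors(local_anchors, client_weights):
    """Hypothetical server-side refinement: fold each client's calibrated
    class anchors (each of shape [C, d]) back into the global anchors via
    a weighted mean, then re-project onto the unit sphere so the
    refreshed global manifold keeps unit-norm anchors across rounds."""
    stacked = np.stack(local_anchors)            # [K, C, d] for K clients
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()                              # e.g. by client sample counts
    merged = np.tensordot(w, stacked, axes=1)    # [C, d] weighted mean
    return merged / np.linalg.norm(merged, axis=1, keepdims=True)
```

A round would then broadcast the refreshed anchors back to clients as the next calibration target.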

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The separation of semantic and structural manifolds suggests a template that could be tested in non-graph federated settings where data distributions differ along multiple independent axes.
  • If equidistant anchors prove stable across rounds, they could be pre-computed once and reused, lowering communication cost in later training stages.
  • The method's success on both homophilic and heterophilic graphs indicates it may generalize to other domains that mix dense and sparse connectivity patterns.

Load-bearing premise

Constructing geometrically optimal semantic manifolds via equidistant anchors and global structural templates can guide local calibration while preserving diverse local distributions without the restrictive global linearity assumption.

What would settle it

The claim would be undermined if, on the eleven homophilic and heterophilic graphs used in the evaluation, FedGMC failed to show statistically significant gains over rigid-alignment baselines, or if local client distributions turned out to be compressed rather than preserved.
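Concretely, "statistically significant gains" on matched data splits usually comes down to a paired test over random seeds; a sketch with made-up accuracy numbers (nothing below is from the paper):

```python
import numpy as np

def paired_t(acc_a, acc_b):
    """Paired t-statistic over matched random seeds, the kind of check
    that would settle whether gains over a baseline are significant."""
    d = np.asarray(acc_a, dtype=float) - np.asarray(acc_b, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

fedgmc   = [76.2, 75.8, 76.5, 75.9, 76.1]   # hypothetical accuracies, 5 seeds
baseline = [74.9, 75.1, 74.7, 75.3, 74.8]   # same seeds, same splits
t = paired_t(fedgmc, baseline)
# compare against the two-sided 5% critical value t_{0.025, 4} ≈ 2.776
```

With five seeds, a t-statistic above roughly 2.776 would reject the null of no improvement at the 5% level; reporting means, standard deviations, and these p-values per dataset is what the referee report below asks for.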

Figures

Figures reproduced from arXiv: 2605.06260 by Bo Han, Chen Gong, Jie Yang, Wentao Yu.

Figure 1
Figure 1. The overview of our proposed FedGMC. view at source ↗
Figure 2
Figure 2. Ablation studies under the non-overlapping partitioning setting with 10 clients; panels plot accuracy and AUC against communication rounds (panel (a): PubMed) for FedAvg, FedProx, FedPer, GCFL, FedGNN, FedSage+, FED-PUB, FedGTA, AdaFGL, FedTAD, FedIIH, FedICI, FedSSA, and FedGMC (Ours). view at source ↗
Figure 4
Figure 4. Accuracy curves accompanied by standard deviation bands. view at source ↗
Figure 5
Figure 5. Case studies on Cora dataset under non-overlapping partitioning setting with 10 clients. view at source ↗
Figure 6
Figure 6. Convergence curves on eight datasets under non-overlapping partitioning setting with 10 clients. view at source ↗
Figure 7
Figure 7. Convergence curves on six datasets under overlapping partitioning settings with 30 clients. view at source ↗
Figure 8
Figure 8. Accuracy curves with variance bars on Cora dataset under different values of Q, B, τ, and η. view at source ↗
Figure 9
Figure 9. Accuracy curves with variance bars on Roman-empire dataset under different values of Q, B, τ, and η. view at source ↗
Figure 10
Figure 10. Case studies on CiteSeer dataset under non-overlapping partitioning setting with 10 clients: (a) global semantic manifolds, (b) semantic manifolds after refinement, (c) global structural manifolds, (d) structural manifolds after refinement. view at source ↗
Figure 11
Figure 11. Case studies on PubMed dataset under non-overlapping partitioning setting with 10 clients: (a) global semantic manifolds, (b) semantic manifolds after refinement, (c) global structural manifolds, (d) structural manifolds after refinement. view at source ↗
Figure 12
Figure 12. Case studies on Amazon-Computer dataset under non-overlapping partitioning setting with 10 clients: (a) global semantic manifolds, (b) semantic manifolds after refinement, (c) global structural manifolds, (d) structural manifolds after refinement. view at source ↗
Figure 13
Figure 13. Case studies on Amazon-Photo dataset under non-overlapping partitioning setting with 10 clients: (a) global semantic manifolds, (b) semantic manifolds after refinement, (c) global structural manifolds, (d) structural manifolds after refinement. view at source ↗
Figure 14
Figure 14. Case studies on Roman-empire dataset under non-overlapping partitioning setting with 10 clients. view at source ↗
Figure 15
Figure 15. Case studies on Amazon-ratings dataset under non-overlapping partitioning setting with 10 clients. view at source ↗
Original abstract

Graph Federated Learning (GFL) enables collaborative representation learning across distributed subgraphs while preserving privacy. However, heterogeneity remains a critical challenge, as subgraphs across clients typically differ significantly in both semantics and structures. Existing methods address heterogeneity by enforcing the rigid alignment of model parameters or prototypes between clients and the server. However, these alignments implicitly rely on a restrictive global linearity assumption that summarizes local data distributions using a single and globally consistent representation space. This severely compresses the personalized representation space of clients and fails to preserve diverse local graph distributions. To overcome these limitations, we propose Federated Graph Manifold Calibration (FedGMC), a novel paradigm that tackles semantic heterogeneity and structural heterogeneity from a unified manifold perspective. Instead of enforcing rigid alignment, FedGMC introduces a dual manifold calibration mechanism that preserves global commonalities while maximizing the personalized representation space of local clients. Specifically, for semantic heterogeneity, the server constructs a geometrically optimal semantic manifold via equidistant semantic anchors, so as to guide the calibration of local semantic manifolds. For structural heterogeneity, the server constructs a global structural manifold by building global structural templates, so as to guide the calibration of local structural manifolds. Finally, the server dynamically refines both global semantic manifolds and structural manifolds by aggregating local manifolds. Extensive experiments on eleven homophilic and heterophilic graphs demonstrate that FedGMC effectively balances global commonality and local personalization, thereby significantly outperforming state-of-the-art baseline methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes Federated Graph Manifold Calibration (FedGMC) as a new paradigm for graph federated learning to address semantic and structural heterogeneity. Instead of rigid alignment of parameters or prototypes (which relies on a global linearity assumption), FedGMC uses a dual manifold calibration mechanism: the server builds a geometrically optimal semantic manifold via equidistant semantic anchors to calibrate local semantic manifolds, and a global structural manifold via structural templates to calibrate local structural manifolds. Local manifolds are then dynamically aggregated to refine the global ones. Experiments on eleven homophilic and heterophilic graphs are reported to show that FedGMC balances global commonality and local personalization better than state-of-the-art baselines.

Significance. If the empirical results and manifold constructions hold under scrutiny, this work offers a coherent relaxation of the restrictive linearity assumption common in prior GFL methods. The manifold-based perspective for handling both semantic and structural heterogeneity could meaningfully advance personalized federated learning on graphs, particularly for heterophilic settings where rigid global representations compress local diversity.

major comments (2)
  1. [§3.1–3.2] The construction of the 'geometrically optimal semantic manifold via equidistant semantic anchors' is described at a high level but lacks explicit equations showing how equidistance is enforced (e.g., via a specific loss or constraint) and how it avoids introducing hidden parameters that would effectively reintroduce a global linearity assumption. This point is load-bearing for the central claim that the method is free of the restrictive global linearity of prior work.
  2. [§4, Experiments] While outperformance on eleven graphs is asserted, the manuscript provides insufficient detail on the precise baselines, the metrics (e.g., node classification accuracy vs. other measures), ablation studies isolating the dual calibration components, and statistical significance testing. Without these, the claim that FedGMC 'significantly outperforms' cannot be fully evaluated.
minor comments (2)
  1. Notation for local vs. global manifolds is introduced without a clear summary table or diagram early in the paper, making it harder to track the dual calibration flow.
  2. [Abstract, §1] The abstract and introduction repeat the phrase 'balances global commonality and local personalization' without quantifying what 'maximizing the personalized representation space' means in terms of a measurable quantity.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and insightful comments on our manuscript. We have carefully reviewed each point and provide detailed responses below. Where appropriate, we will revise the manuscript to incorporate additional technical details and experimental clarifications, which we believe will strengthen the presentation without altering the core contributions.

Point-by-point responses
  1. Referee: [§3.1–3.2] The construction of the 'geometrically optimal semantic manifold via equidistant semantic anchors' is described at a high level but lacks explicit equations showing how equidistance is enforced (e.g., via a specific loss or constraint) and how it avoids introducing hidden parameters that would effectively reintroduce a global linearity assumption. This point is load-bearing for the central claim that the method is free of the restrictive global linearity of prior work.

    Authors: We appreciate the referee's emphasis on this foundational aspect of our approach. The manuscript intentionally presents the dual manifold calibration at a conceptual level in §3.1–3.2 to highlight the departure from rigid alignment methods. However, we acknowledge that explicit formulations would better substantiate the claims. In the revised version, we will add precise equations in §3.1 detailing the construction of the geometrically optimal semantic manifold, including the specific regularization term or optimization constraint (e.g., a pairwise distance variance minimization objective) used to enforce equidistance among semantic anchors. We will also include a clarifying discussion and supporting argument demonstrating that this manifold calibration avoids reintroducing a global linearity assumption: unlike prior methods that enforce a single shared linear representation space across clients, our anchors serve only as calibration references on a non-linear manifold, permitting each local client to retain its own curved, personalized semantic structure. This distinction will be illustrated with a brief theoretical comparison to linear prototype alignment. revision: yes

  2. Referee: [§4, Experiments] While outperformance on eleven graphs is asserted, the manuscript provides insufficient detail on the precise baselines, the metrics (e.g., node classification accuracy vs. other measures), ablation studies isolating the dual calibration components, and statistical significance testing. Without these, the claim that FedGMC 'significantly outperforms' cannot be fully evaluated.

    Authors: We agree that expanded experimental details are essential for full reproducibility and evaluation of the performance claims. In the revised §4, we will provide: a complete enumeration of all baselines with citations, key hyperparameters, and how they were adapted to the graph federated setting; explicit confirmation that node classification accuracy is the primary metric (with any supplementary metrics such as macro-F1 noted); dedicated ablation studies that isolate the semantic manifold calibration and structural manifold calibration components individually and in combination; and statistical analysis including means and standard deviations over multiple random seeds, along with paired t-test p-values to assess significance of improvements over baselines. These additions will be drawn from the existing experimental protocol on the eleven homophilic and heterophilic graphs and will not require new runs. revision: yes
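The "pairwise distance variance minimization objective" floated in response 1 admits a direct reading; one plausible form, sketched below (the revised paper's actual regularizer is unknown, so this is illustrative only):

```python
import numpy as np

def equidistance_loss(anchors: np.ndarray) -> float:
    """One plausible form of the rebuttal's pairwise distance variance
    minimization objective: the variance of all pairwise anchor
    distances, which is zero exactly when every pair of anchors is the
    same distance apart."""
    diffs = anchors[:, None, :] - anchors[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    upper = dists[np.triu_indices(anchors.shape[0], k=1)]  # each pair once
    return float(upper.var())
```

Minimizing this term alongside the task loss would push anchors toward equidistance without pinning them to any single shared linear subspace, which is the distinction the rebuttal leans on.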

Circularity Check

0 steps flagged

No significant circularity; derivation self-contained

Full rationale

The paper introduces FedGMC as a new paradigm using dual manifold calibration (equidistant semantic anchors for semantic heterogeneity and global structural templates for structural heterogeneity, with dynamic aggregation). The abstract and high-level description present these as constructive mechanisms operating on manifolds to relax the global linearity assumption of prior rigid-alignment methods. No equations, fitted parameters renamed as predictions, self-definitional reductions, or load-bearing self-citations appear in the provided text. The central claim rests on the proposed construction and experimental validation rather than any step that reduces by construction to its own inputs. This is the normal case of an independent proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 3 invented entities

The central claim rests on several domain assumptions about heterogeneity and manifold geometry plus newly introduced entities for calibration; no free parameters are explicitly fitted in the abstract description.

axioms (2)
  • domain assumption Subgraphs across clients differ significantly in both semantics and structures
    Stated as the critical challenge that existing methods fail to address adequately.
  • domain assumption Rigid alignment implicitly relies on a restrictive global linearity assumption
    Used to critique prior work and motivate the manifold approach.
invented entities (3)
  • Dual manifold calibration mechanism no independent evidence
    purpose: Preserves global commonalities while maximizing personalized local representation spaces
    Core novel paradigm introduced to replace rigid alignment.
  • Geometrically optimal semantic manifold via equidistant semantic anchors no independent evidence
    purpose: Guides calibration of local semantic manifolds for semantic heterogeneity
    Constructed by the server as part of the proposed method.
  • Global structural manifold via global structural templates no independent evidence
    purpose: Guides calibration of local structural manifolds for structural heterogeneity
    Constructed by the server as part of the proposed method.

pith-pipeline@v0.9.0 · 5554 in / 1421 out tokens · 47172 ms · 2026-05-08T13:12:32.434120+00:00 · methodology


Reference graph

Works this paper leans on

45 extracted references · 3 canonical work pages · 1 internal anchor
