pith. machine review for the scientific record.

arxiv: 2604.15370 · v1 · submitted 2026-04-15 · 💻 cs.CR · cs.LG


TopFeaRe: Locating Critical State of Adversarial Resilience for Graphs Regarding Topology-Feature Entanglement


Pith reviewed 2026-05-10 13:25 UTC · model grok-4.3

classification 💻 cs.CR cs.LG
keywords graph adversarial defense · complex dynamic systems · equilibrium point theory · topology-feature entanglement · adversarial resilience · graph neural networks · critical state analysis

The pith

Modeling a graph as a complex dynamic system locates its critical state of resilience to adversarial attacks via the equilibrium points of a topology-feature entanglement function.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Adversarial attacks on graphs exploit both structure and node features, yet defenses rarely explain their combined effect. This work maps graphs to complex dynamic systems and models attacks as oscillations within them. Topology and features are projected into two characteristic spaces whose variances under perturbation are captured by a two-dimensional entangled perturbation function. Equilibrium-point theory then identifies the graph's critical resilience state. Experiments on five datasets show that defending at this state outperforms existing methods against multiple attack types.

Core claim

By mapping a graph regime into a complex dynamic system and using oscillations to model adversarial perturbations, the paper defines a two-dimensional topology-feature-entangled perturbation function to represent dynamic variance. Equilibrium-point theory applied to this function locates the critical state of the graph's adversarial resilience.

What carries the argument

The 2D Topology-Feature-Entangled Perturbation Function, which represents the dynamic variance of graph topology and node features under adversarial attacks in two characteristic spaces.
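The abstract does not give this function in closed form. As a hedged sketch, suppose the perturbation state is a pair (γ_t, γ_f) of topology and feature variances, and the entangled function V couples them through a cross term; both V and its coefficients below are illustrative assumptions, not the paper's definitions. The equilibrium point is then located where the gradient of V vanishes:

```python
import numpy as np

# Hypothetical 2D topology-feature-entangled perturbation function V(gt, gf).
# The cross term 0.2 * gt * gf is what "entangles" the two coordinates;
# the true form in the paper is not given in the abstract.
def V(g):
    gt, gf = g
    return (gt - 0.3) ** 2 + (gf - 0.5) ** 2 + 0.2 * gt * gf

def grad_V(g, eps=1e-6):
    # Central-difference gradient, so the sketch works for any smooth V.
    g = np.asarray(g, dtype=float)
    out = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = eps
        out[i] = (V(g + e) - V(g - e)) / (2 * eps)
    return out

def find_equilibrium(g0, lr=0.1, steps=2000):
    # Gradient flow dg/dt = -grad V(g); its fixed point (grad V = 0)
    # plays the role of the equilibrium point the paper analyzes.
    g = np.asarray(g0, dtype=float)
    for _ in range(steps):
        g = g - lr * grad_V(g)
    return g

eq = find_equilibrium([1.0, 1.0])
print(eq)
```

For this toy quadratic the equilibrium can be checked by hand from the linear system ∇V = 0; the point of the sketch is only that "locating the critical state" reduces to a root-finding problem once the entangled function is fixed.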

If this is right

  • If the critical state is correctly located, defenses can proactively adjust graphs toward higher resilience without knowing the specific attack.
  • The unified modeling of topology and feature perturbations allows for joint analysis of attacks from both perspectives.
  • Equilibrium points provide a theoretical anchor for measuring and improving graph robustness in dynamic terms.
  • Validation on multiple datasets and attacks suggests the method generalizes across common graph learning scenarios.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Such dynamical modeling could be applied to other structured data like social networks or molecular graphs to predict vulnerability.
  • If the 2D function can be computed efficiently, it might enable real-time monitoring of graph resilience in evolving systems.
  • The approach opens the possibility of designing attack-agnostic defenses based on system stability rather than specific threat models.

Load-bearing premise

The assumption that a graph can be mapped to a complex dynamic system in such a way that (i) adversarial perturbations behave like oscillations, and (ii) topology and features project into two characteristic spaces admitting a meaningful 2D entangled function whose equilibrium points indicate resilience.
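Equilibrium-point theory itself is standard; what is load-bearing is the mapping. A minimal illustration of the test the theory supplies, under my own simplifying assumption that the perturbed state follows linearized dynamics near a candidate critical state (the Jacobian J and the state pairing below are placeholders, not the paper's construction):

```python
import numpy as np

# Linearized toy dynamics near a candidate critical state s_eq:
#   ds/dt = J @ (s - s_eq),  s = (topology variance, feature variance).
J = np.array([[-1.0,  0.4],
              [ 0.4, -1.5]])

def is_stable_equilibrium(J):
    # Standard equilibrium-point test: the fixed point is asymptotically
    # stable iff every Jacobian eigenvalue has strictly negative real part.
    return bool(np.all(np.linalg.eigvals(J).real < 0))

print(is_stable_equilibrium(J))  # this J yields a stable equilibrium
```

If the premise holds, this is the kind of check that separates a resilient critical state (perturbations decay back) from a fragile one (perturbations grow).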

What would settle it

Running the method on a graph, identifying the critical state, and then applying attacks to show that resilience does not peak there would falsify the location claim.
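Operationally, that falsification test could be sketched as a sweep of the defense's operating point around the predicted critical state, checking whether measured robustness actually peaks there. Everything below is a placeholder: `robustness_under_attack` stands in for real attack experiments and is mocked with a curve peaking at 0.25 purely so the protocol is runnable.

```python
# Mock robustness curve; a real version would run attacks against a GNN
# defended at each operating point and report accuracy under attack.
def robustness_under_attack(operating_point):
    return 1.0 - (operating_point - 0.25) ** 2  # mock: peak at 0.25

def critical_state_is_falsified(predicted, tol=0.05, grid_step=0.01):
    # The location claim survives only if the empirical robustness peak
    # lies within `tol` of the predicted critical state.
    grid = [i * grid_step for i in range(101)]  # operating points in [0, 1]
    empirical_peak = max(grid, key=robustness_under_attack)
    return abs(empirical_peak - predicted) > tol

print(critical_state_is_falsified(0.25))  # prediction matches the peak
print(critical_state_is_falsified(0.60))  # prediction far from the peak
```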

Figures

Figures reproduced from arXiv: 2604.15370 by Chi Lin, Quanliang Jing, Shaoye Luo, Wenbo Song, Wenxiong Chen, Xinxin Fan, Yunfeng Lu.

Figure 1: Variations of adjacency-matrix rank, singular value, and feature smoothness under Metattack.
Figure 2: ASEP surface with rGra on Cora_ML: (a) Metattack, (b) CE-PGD, (c) DICE.
Figure 3: ASEP surface with qGra on Cora_ML.
Figure 4: Rank variation of the adjacency matrix on Cora_ML.
Figure 5: Rank variation of the adjacency matrix on Citeseer.
Figure 6: Singular-value variation on Cora_ML.
Figure 8: Feature-smoothness variation on Cora_ML.
Figure 9: Feature-smoothness variation on Citeseer.
Figure 10: Time overhead as graph (Citeseer) size enlarges.
Original abstract

Graph adversarial attacks are usually produced from the two perspectives of topology/structure and node feature, both of them represent the paramount characteristics learned by today's deep learning models. Although some defense countermeasures are proposed at present, they fails to disclose the intrinsic reasons why these two aspects necessitate and how they are adequately fused to co-learn the graph representation. Towards this question, we in this paper propose an adversarial defense approach through locating the graph's critical state of adversarial resilience, resorting to the equilibrium-point theory in the discipline of complex dynamic system (CDS). In brief, our work has three novelties: i) Adversarial-Attack Modeling, i.e. map a graph regime into CDS, and use the oscillation of dynamic system to model the behavior of adversarial perturbation; ii) 2D Topology-Feature-Entangled Function Design for Perturbed Graph, i.e. project graph topology and node feature as two characteristic spaces, and define two-dimensional entangled perturbation functions to represent the dynamic variance under adversarial attacks; and iii) Location of Critical State of Adversarial Resilience, i.e. utilize the equilibrium-point theory to locate the graph's critical state of attack resilience resorting to the perturbation-reflected 2D function. Finally, multi-facet experiments on five commonly-used realistic datasets validate the effectiveness of our proposed approach, and the results show our approach can significantly outperform the state-of-the-art baselines under four representative graph adversarial attacks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes TopFeaRe, an adversarial defense method for graphs that locates the critical state of adversarial resilience by modeling the graph as a complex dynamic system (CDS). It maps adversarial perturbations to oscillations, projects topology and node features into a two-dimensional entangled perturbation function representing dynamic variance, and applies equilibrium-point theory to identify the resilience critical state. Multi-facet experiments on five realistic datasets demonstrate that the approach significantly outperforms state-of-the-art baselines under four representative graph adversarial attacks.

Significance. If the central modeling holds, the work provides a novel theoretical framework linking graph adversarial attacks to CDS theory, potentially enabling more interpretable and robust defenses by identifying critical states rather than relying solely on empirical robustness. The experimental results on multiple datasets against various attacks represent a strength, offering empirical support for the practical utility of the proposed method.

major comments (3)
  1. The claim that mapping a graph regime into a CDS and modeling adversarial perturbations as oscillations adequately captures attack behavior is not sufficiently justified; it remains unclear why this dynamic system analogy corresponds to how topology and feature attacks degrade GNN performance, as opposed to direct perturbation analysis.
  2. The definition of the 2D entangled perturbation function and the subsequent use of equilibrium-point theory to locate the critical state lack a demonstration that these equilibrium points meaningfully mark the boundary of adversarial resilience; without showing that deviations around these points lead to the expected sharp instability, the location may not be load-bearing for the defense.
  3. While experiments claim outperformance, there is no ablation or analysis to confirm that the CDS-based critical state location is responsible for the gains, rather than incidental to the overall defense procedure; this undermines the assertion that the equilibrium theory is key to the effectiveness.
minor comments (2)
  1. Grammatical error in abstract: 'they fails to disclose' should be 'they fail to disclose'.
  2. The abstract provides a high-level overview but lacks any specific quantitative metrics, equations, or details on the 2D function or equilibrium calculations, making it difficult to assess the technical contributions.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major comment point by point below, indicating the revisions we will incorporate to strengthen the manuscript.

Point-by-point responses
  1. Referee: The claim that mapping a graph regime into a CDS and modeling adversarial perturbations as oscillations adequately captures attack behavior is not sufficiently justified; it remains unclear why this dynamic system analogy corresponds to how topology and feature attacks degrade GNN performance, as opposed to direct perturbation analysis.

    Authors: We agree that the motivation for the CDS analogy requires stronger elaboration. In the revised manuscript, we will expand the introduction and methodology sections to explicitly justify the mapping: adversarial perturbations on topology and features are modeled as oscillations because they induce dynamic variance that shifts the graph representation away from its learned equilibrium, analogous to how external forces drive instability in complex systems. This perspective is chosen to enable equilibrium-point analysis for resilience boundaries, which static direct perturbation methods do not inherently provide. We will add a discussion contrasting the dynamic view with purely empirical perturbation analysis to clarify the intended contribution. revision: yes

  2. Referee: The definition of the 2D entangled perturbation function and the subsequent use of equilibrium-point theory to locate the critical state lack a demonstration that these equilibrium points meaningfully mark the boundary of adversarial resilience; without showing that deviations around these points lead to the expected sharp instability, the location may not be load-bearing for the defense.

    Authors: We acknowledge the need for explicit validation of the equilibrium points. The revised manuscript will include additional analysis, such as sensitivity plots and instability metrics, demonstrating that small deviations from the located critical states produce sharp increases in attack success rates or representation degradation. This will be presented in a new subsection under the critical state location method to confirm that the points are load-bearing for the defense mechanism. revision: yes

  3. Referee: While experiments claim outperformance, there is no ablation or analysis to confirm that the CDS-based critical state location is responsible for the gains, rather than incidental to the overall defense procedure; this undermines the assertion that the equilibrium theory is key to the effectiveness.

    Authors: We recognize that isolating the contribution of the equilibrium-point component is essential. We will add ablation studies in the experiments section, comparing the full TopFeaRe method against variants that omit or replace the CDS-based critical state location with heuristic alternatives. These results will quantify the performance drop when the equilibrium theory is not used, thereby demonstrating its role in the observed improvements. revision: yes
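The promised ablation could be scaffolded as below. Variant names, accuracies, and the `run_defense` stub are all placeholders standing in for real train/attack/evaluate runs; the only point is the shape of the comparison.

```python
import random

# Mock per-variant base accuracies (placeholders, not the paper's numbers).
BASE_ACC = {
    "TopFeaRe (full)": 0.84,
    "w/o CDS critical-state location": 0.79,
    "heuristic critical state": 0.80,
}

def run_defense(variant, seed):
    # Stub for one training/evaluation run; real code would attack a GNN.
    random.seed(seed)
    return BASE_ACC[variant] + random.uniform(-0.01, 0.01)

def ablation(variants, seeds=tuple(range(10))):
    # Mean accuracy per variant; the gap to the full method quantifies
    # how much the equilibrium-based location contributes.
    return {v: sum(run_defense(v, s) for s in seeds) / len(seeds)
            for v in variants}

results = ablation(list(BASE_ACC))
```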

Circularity Check

0 steps flagged

No circularity: modeling choices and external CDS theory remain independent of fitted outputs.

Full rationale

The paper defines a 2D topology-feature entangled perturbation function as an explicit modeling step to represent dynamic variance, then applies standard equilibrium-point theory from complex dynamic systems to locate a critical state. This is a forward construction rather than a reduction: the equilibrium is computed from the defined function, not fitted to the target resilience metric and then renamed as a prediction. No self-citation chain is load-bearing for the central claim, no parameter is fitted on a data subset and then called a prediction of a closely related quantity, and the abstract plus described novelties show no self-definitional loop where the output is presupposed in the input definition. Experiments on five datasets under four attacks provide external validation, keeping the derivation self-contained against the listed circularity patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

Only the abstract is available, so the ledger is limited to assumptions explicitly invoked in the three novelties; no numerical free parameters are mentioned.

axioms (2)
  • domain assumption A graph regime under adversarial attack can be mapped into a complex dynamic system whose behavior is modeled by oscillation of the system.
    Invoked in novelty i) Adversarial-Attack Modeling.
  • domain assumption Equilibrium-point theory from complex dynamic systems can locate the critical state of attack resilience once the 2D entangled perturbation function is defined.
    Invoked in novelty iii) Location of Critical State of Adversarial Resilience.
invented entities (1)
  • 2D Topology-Feature-Entangled Perturbation Function no independent evidence
    purpose: To project graph topology and node features as two characteristic spaces and represent the dynamic variance under adversarial attacks.
    Defined in novelty ii) 2D Topology-Feature-Entangled Function Design for Perturbed Graph.

pith-pipeline@v0.9.0 · 5578 in / 1510 out tokens · 82713 ms · 2026-05-10T13:25:32.286113+00:00 · methodology


Reference graph

Works this paper leans on

39 extracted references · 6 canonical work pages · 2 internal anchors

  1. [1] Uri Alon. Design principles of biological circuits. FEBS J, 277:11, 2007.

  2. [2] Ben Chamberlain, James Rowbottom, Maria I Gorinova, Michael Bronstein, Stefan Webb, and Emanuele Rossi. Grand: Graph neural diffusion. In International Conference on Machine Learning, pages 1407–1418. PMLR, 2021.

  3. [3] Benjamin Chamberlain, James Rowbottom, Davide Eynard, Francesco Di Giovanni, Xiaowen Dong, and Michael Bronstein. Beltrami flow and neural diffusion on graphs. Advances in Neural Information Processing Systems, 34:1594–1609, 2021.

  4. [4] Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In International Conference on Machine Learning, pages 1115–1124. PMLR, 2018.

  5. [5] Negin Entezari, Saba A Al-Sayouri, Amirali Darvishzadeh, and Evangelos E Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 169–177, 2020.

  6. [6] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social recommendation. In The World Wide Web Conference, pages 417–426, 2019.

  7. [7] Xinxin Fan, Wenxiong Chen, Mengfan Li, Wenqi Wei, and Ling Liu. Adverseness vs. equilibrium: Exploring graph adversarial resilience through dynamic equilibrium. arXiv preprint arXiv:2505.14463, 2025.

  8. [8] Xinxin Fan, Ling Liu, Mingchu Li, and Zhiyuan Su. Grouptrust: Dependable trust management. IEEE Trans. Parallel Distributed Syst., 28(4):1076–1090, 2017.

  9. [9] Xinxin Fan, Ling Liu, Rui Zhang, Quanliang Jing, and Jingping Bi. Decentralized trust management: Risk analysis and trust aggregation. ACM Comput. Surv., 53(1):2:1–2:33, 2021.

  10. [10] Jianxi Gao, Baruch Barzel, and Albert-László Barabási. Universal resilience patterns in complex networks. Nature, 530:307–312, 2016.

  11. [11] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.

  12. [12] Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. Stealing links from graph neural networks. In 30th USENIX Security Symposium, pages 2669–2686, 2021.

  13. [13] Jincheng Huang, Lun Du, Xu Chen, Qiang Fu, Shi Han, and Dongmei Zhang. Robust mid-pass filtering graph convolutional networks. In Proceedings of the ACM Web Conference 2023, pages 328–338, 2023.

  14. [14] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. Graph structure learning for robust graph neural networks. In The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 66–74, 2020.

  15. [16] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

  16. [17] Prosenjit Kundu, Hiroshi Kori, and Naoki Masuda. Accuracy of a one-dimensional reduction of dynamical systems on networks. Physical Review E, 105:024305, 2022.

  17. [18] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.

  18. [19] Yunfeng Lu, Xinxin Fan, and Quanliang Jing. Taeffect: Quantifying interaction risks in trust-enabled communication systems. Int. J. Commun. Syst., 36(4), 2023.

  19. [20] Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, and Alessandro Vespignani. Epidemic processes in complex networks. Reviews of Modern Physics, 87:925–979, 2015.

  20. [21] Hao Qian, Hongting Zhou, Qian Zhao, Hao Chen, Hongxiang Yao, Jingwei Wang, Ziqi Liu, Fei Yu, Zhiqiang Zhang, and Jun Zhou. Mdgnn: Multi-relational dynamic graph neural network for comprehensive and dynamic stock investment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 14642–14650, 2024.

  21. [22] T Konstantin Rusch, Ben Chamberlain, James Rowbottom, Siddhartha Mishra, and Michael Bronstein. Graph-coupled oscillator networks. In International Conference on Machine Learning, pages 18888–18909. PMLR, 2022.

  22. [23] Matthew Thorpe, Tan Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher, and Bao Wang. Grand++: Graph neural diffusion with a source term. ICLR, 2022.

  23. [24] Vincent A Traag and Jeroen Bruggeman. Community detection in networks with positive and negative links. Physical Review E, 80:036115, 2009.

  24. [25] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.

  25. [26] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. Graph attention networks. stat, 1050:10–48550, 2017.

  26. [27] Binghui Wang and Neil Zhenqiang Gong. Attacking graph-based classification via manipulating the graph structure. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 2023–2040, 2019.

  27. [28] Xiuling Wang and Wendy Hui Wang. Group property inference attacks against graph neural networks. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, pages 2871–2884, 2022.

  28. [29] Xiuling Wang and Wendy Hui Wang. Subgraph structure membership inference attacks against graph neural networks. Proceedings on Privacy Enhancing Technologies, 2024.

  29. [30] Marcin Waniek, Tomasz P Michalak, Michael J Wooldridge, and Talal Rahwan. Hiding individuals and communities in a social network. Nature Human Behaviour, 2:139–147, 2018.

  30. [32] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples on graph data: Deep insights into attack and defense. arXiv preprint arXiv:1903.01610, 2019.

  31. [33] Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214, 2019.

  32. [34] Shuo Yang, Zhiqiang Zhang, Jun Zhou, Yang Wang, Wang Sun, Xingyu Zhong, Yanming Fang, Quan Yu, and Yuan Qi. Financial risk analysis for SMEs with graph-based supply chain mining. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 4661–4667. ijcai.org, 2020.

  33. [35] Xiang Zhang and Marinka Zitnik. Gnnguard: Defending graph neural networks against adversarial attacks. Advances in Neural Information Processing Systems, 33:9263–9275, 2020.

  34. [36] Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. IEEE Transactions on Knowledge and Data Engineering, 34:249–270, 2020.

  35. [37] Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, and Wee Peng Tay. Adversarial robustness in graph neural networks: A Hamiltonian approach. Advances in Neural Information Processing Systems, 36, 2024.

  36. [38] Dingyuan Zhu, Ziwei Zhang, Peng Cui, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1399–1407, 2019.

  37. [40] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856, 2018.

  38. [41] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.

  39. [42] For each graph, we randomly select 10% of the nodes for model training, 10% for validation, and 80% for testing. The accuracy of node classification will be averaged over ten-time experiments. For baselines' settings, GCN [16], GAT [26], and HANG [37] all use their default parameters. For GCN-SVD [5], we choose the optimal rank reduction number from {20,...