pith. machine review for the scientific record.

arxiv: 2604.19186 · v1 · submitted 2026-04-21 · 💻 cs.LG · cs.AI

Recognition: unknown

Inductive Subgraphs as Shortcuts: Causal Disentanglement for Heterophilic Graph Learning

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 02:50 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords heterophilic graphs · graph neural networks · causal inference · inductive subgraphs · spurious shortcuts · node classification · disentanglement · debiased learning

The pith

Recurring inductive subgraphs act as spurious shortcuts that mislead GNNs in heterophilic graphs, which causal disentanglement corrects by blocking non-causal paths.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that recurring small subgraph patterns in heterophilic graphs create false shortcuts for GNNs, leading them to learn non-causal correlations instead of genuine predictors of node labels. Prior adaptations to heterophily through neighbor extension or architecture changes fail to address this root cause. The authors model the problem with causal inference, constructing a debiased causal graph that severs confounding and spillover paths from these shortcuts. This guides the CD-GNN framework to separate spurious inductive subgraphs from true causal ones, resulting in improved node classification accuracy and robustness on real datasets.

Core claim

Recurring inductive subgraphs act as spurious shortcuts that mislead GNNs and reinforce non-causal correlations in heterophilic graphs. A debiased causal graph explicitly blocks confounding and spillover paths, guiding the Causal Disentangled GNN (CD-GNN) to disentangle spurious inductive subgraphs from true causal subgraphs and thereby improve robustness and accuracy in node classification.

What carries the argument

The debiased causal graph that blocks non-causal paths induced by shortcut inductive subgraphs, enabling CD-GNN to disentangle spurious from causal structures.

Load-bearing premise

The debiased causal graph correctly identifies and blocks all confounding and spillover paths from shortcut inductive subgraphs without discarding useful causal information or introducing new biases.

What would settle it

If CD-GNN shows no accuracy gain over baselines on heterophilic datasets where recurring inductive subgraphs are removed or randomized while preserving other structure, the central claim would be undermined.
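A minimal version of this falsification test can be sketched as a degree-preserving rewiring that breaks up recurring subgraph patterns while keeping every node's degree fixed. This is a generic sketch, not the authors' protocol; the function names are illustrative.

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving rewiring: pick two disjoint edges (a, b), (c, d)
    and replace them with (a, d), (c, b), skipping swaps that would create
    a self-loop or a duplicate edge. Node degrees never change, so global
    structure is preserved while recurring local patterns are disrupted."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint would give a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would duplicate an existing edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    """Degree of each node, computed from the undirected edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg
```

Running CD-GNN and the baselines on a graph rewired this way, and checking whether the reported gains persist or vanish, is one concrete way to operationalize the test above.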

Figures

Figures reproduced from arXiv: 2604.19186 by Guandong Xu, Haiyang Xia, Hao Miao, Qian Li, Qing Li, Xiangmeng Wang.

Figure 1: A toy example shows that an inductive subgraph …
Figure 2: Prediction explanations: y and ŷ are the ground-truth and predicted labels for the node instance being explained, respectively. The node being explained is highlighted with a larger node size for clarity. For consistency, the same node is selected for explanation across both homophilic and heterophilic settings within a single dataset.
Figure 3: Training losses of the GNN model on homophilic …
Figure 4: (a) Causal graph of existing GNN training. (b) Our …
Figure 5: Ablation studies on seven datasets.
Figure 6: Parameter analysis on seven datasets. λ1 = 10, showing that a moderate L_cf weight best enforces shortcut–causal embedding independence. Setting λ1 too high leads to excessive counterfactual shortcuts, inadvertently reintroducing harmful bias and degrading model performance. These results highlight that a properly weighted L_cf is crucial for blocking spillover and show that shortcut reliance (eve…
read the original abstract

Heterophily is a prevalent property of real-world graphs and is well known to impair the performance of homophilic Graph Neural Networks (GNNs). Prior work has attempted to adapt GNNs to heterophilic graphs through non-local neighbor extension or architecture refinement. However, the fundamental reasons behind misclassifications remain poorly understood. In this work, we take a novel perspective by examining recurring inductive subgraphs, empirically and theoretically showing that they act as spurious shortcuts that mislead GNNs and reinforce non-causal correlations in heterophilic graphs. To address this, we adopt a causal inference perspective to analyze and correct the biased learning behavior induced by shortcut inductive subgraphs. We propose a debiased causal graph that explicitly blocks confounding and spillover paths responsible for these shortcuts. Guided by this causal graph, we introduce Causal Disentangled GNN (CD-GNN), a principled framework that disentangles spurious inductive subgraphs from true causal subgraphs by explicitly blocking non-causal paths. By focusing on genuine causal signals, CD-GNN substantially improves the robustness and accuracy of node classification in heterophilic graphs. Extensive experiments on real-world datasets not only validate our theoretical findings but also demonstrate that our proposed CD-GNN outperforms state-of-the-art heterophily-aware baselines.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that recurring inductive subgraphs function as spurious shortcuts in heterophilic graphs, causing GNNs to learn non-causal correlations. It introduces a debiased causal graph that explicitly blocks confounding and spillover paths, and proposes CD-GNN to disentangle spurious inductive subgraphs from true causal ones, yielding improved node classification accuracy and robustness on real-world heterophilic datasets.

Significance. If the debiased causal graph correctly severs all non-causal paths induced by inductive subgraphs while preserving causal signals, the work offers a principled causal-inference lens on heterophily that goes beyond existing architectural adaptations. The empirical outperformance over heterophily-aware baselines is a concrete strength, and the focus on falsifiable shortcut mechanisms could guide future robust GNN design.

major comments (3)
  1. [Abstract] Abstract: the assertion of 'empirical and theoretical validation' of the shortcut claim and CD-GNN gains is not accompanied by any derivations, identifiability conditions, or proof sketches; the central claim that the debiased causal graph exhaustively blocks all confounding/spillover paths therefore lacks load-bearing formal support.
  2. [§3] Debiased causal graph construction (likely §3): no general identification procedure or do-calculus rule is supplied for mapping arbitrary recurring inductive subgraphs to blocking sets; if the mapping is heuristic or incomplete, residual non-causal correlations remain and the disentanglement guarantee fails.
  3. [§4] CD-GNN framework (likely §4, Eq. for path blocking): the procedure for explicitly blocking non-causal paths must be shown not to discard useful causal information or introduce new biases; without such a check, the robustness claim on heterophilic graphs is not yet substantiated.
minor comments (2)
  1. Add error bars with standard deviations and dataset statistics (number of nodes, edges, homophily ratio) to all experimental tables and figures for reproducibility.
  2. Clarify notation for 'inductive subgraphs' versus 'causal subgraphs' on first use to avoid ambiguity in the causal graph diagrams.
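The formal support the referee asks for in major comment 1 would, at minimum, instantiate Pearl's backdoor adjustment for the shortcut variable. A sketch in generic notation (the symbols S, G_c, and Y are illustrative; the paper's own notation is not visible in this review):

```latex
P\bigl(Y \mid \mathrm{do}(G_c)\bigr) \;=\; \sum_{s} P\bigl(Y \mid G_c,\; S = s\bigr)\, P(S = s)
```

Here S ranges over shortcut inductive-subgraph configurations and G_c is the candidate causal subgraph. The adjustment is valid only if S closes every backdoor path from G_c to Y and contains no descendant of G_c, which is precisely the exhaustiveness condition the referee flags as unproven.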

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the insightful and constructive comments. Below we respond point-by-point to the major comments, clarifying the current theoretical support in the manuscript and committing to specific revisions that strengthen the formal grounding without overstating the existing results.

read point-by-point responses
  1. Referee: [Abstract] the assertion of 'empirical and theoretical validation' of the shortcut claim and CD-GNN gains is not accompanied by any derivations, identifiability conditions, or proof sketches; the central claim that the debiased causal graph exhaustively blocks all confounding/spillover paths therefore lacks load-bearing formal support.

    Authors: The manuscript's theoretical contribution centers on constructing a debiased causal graph that identifies confounding and spillover paths induced by recurring inductive subgraphs, followed by an analysis showing how these paths produce non-causal correlations. We agree that no explicit derivations, identifiability conditions, or proof sketches appear in the current version. In the revision we will add a dedicated subsection containing a proof sketch that applies the backdoor criterion to the specific path structure of inductive subgraphs and demonstrates blocking. We will also revise the abstract to read 'empirical validation together with causal-graph analysis' to avoid overstating the formal results. A complete general identifiability theorem is not provided and remains future work. revision: partial

  2. Referee: [§3] no general identification procedure or do-calculus rule is supplied for mapping arbitrary recurring inductive subgraphs to blocking sets; if the mapping is heuristic or incomplete, residual non-causal correlations remain and the disentanglement guarantee fails.

    Authors: Section 3 presents a data-driven procedure that first detects recurring subgraphs via frequency statistics and then selects blocking sets based on structural patterns (common neighbors and motif connectivity). We acknowledge that this procedure is not derived from a general do-calculus rule applicable to arbitrary graphs. In the revision we will rewrite the construction using explicit do-calculus notation, stating the backdoor and front-door adjustments applied to each identified path type, and we will add a paragraph discussing the conditions under which residual correlations could remain. These changes make the mapping more transparent while preserving the original empirical detection step. revision: yes

  3. Referee: [§4] the procedure for explicitly blocking non-causal paths must be shown not to discard useful causal information or introduce new biases; without such a check, the robustness claim on heterophilic graphs is not yet substantiated.

    Authors: CD-GNN implements path blocking through a disentanglement objective that maximizes mutual information between the causal subgraph representation and the node label while minimizing information with the spurious subgraph. To address the concern we will add (i) an information-theoretic argument showing that the causal component retains at least the label-predictive information present in the original graph and (ii) controlled experiments on synthetic heterophilic graphs with known ground-truth causal structures. These additions will substantiate that the blocking step does not discard causal signals or introduce measurable new biases. revision: yes
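The detection step described in response 2 (frequency statistics over recurring subgraphs) can be illustrated at its smallest scale by an induced-triad census. This is a generic sketch, not the authors' procedure; the edge-count class stands in for whatever subgraph taxonomy the paper actually uses.

```python
from collections import Counter
from itertools import combinations

def induced_triad_counts(edges):
    """Census of induced 3-node subgraphs in an undirected graph.
    For each node triple, the number of edges among the three nodes
    (0, 1, 2, or 3) identifies its isomorphism class: class 3 is a
    triangle, class 2 an open wedge, class 1 a single edge."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    counts = Counter()
    for a, b, c in combinations(sorted(adj), 3):
        k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        counts[k] += 1
    return counts
```

On a triangle 0–1–2 with a pendant node 3 attached to node 2, the census finds one triangle, two wedges, and one single-edge triple; frequency statistics of this kind would feed the blocking-set selection the rebuttal describes.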

standing simulated objections not resolved
  • A fully general, assumption-free identification procedure that maps any set of recurring inductive subgraphs to blocking sets in arbitrary graphs is not developed in the manuscript.
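The independence term in response 3 could be realized with HSIC, the kernel dependence measure already present in the paper's reference list (Gretton et al., 2005). This NumPy sketch assumes Gaussian kernels and the biased estimator; it is not the authors' L_cf.

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    """Gaussian (RBF) kernel matrix over the row vectors of x."""
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimator, tr(KHLH) / (n - 1)^2, where H centers the
    kernel matrices. Values near zero indicate kernel independence
    between the two sets of embeddings."""
    n = x.shape[0]
    k, l = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
    h = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(k @ h @ l @ h)) / (n - 1) ** 2
```

A penalty of this form, added to the classification loss with weight λ1, would push the causal and spurious embeddings toward statistical independence, matching the role the Figure 6 discussion assigns to L_cf.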

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper introduces recurring inductive subgraphs as spurious shortcuts in heterophilic graphs and proposes a debiased causal graph plus CD-GNN to block non-causal paths. The abstract and available text present this as a new causal-inference-guided framework validated by experiments on real-world datasets. No equations, definitions, or self-citations are shown that reduce the debiased graph, path-blocking procedure, or performance claims to fitted inputs or prior results by construction. The derivation chain introduces independent concepts and relies on external empirical validation rather than self-referential mappings.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 1 invented entity

Abstract-only review; no explicit free parameters, axioms, or invented entities are detailed beyond the high-level introduction of a debiased causal graph and CD-GNN framework.

invented entities (1)
  • debiased causal graph (no independent evidence)
    purpose: to explicitly block confounding and spillover paths from shortcut inductive subgraphs
    Introduced to model and correct biased learning behavior in heterophilic graphs.

pith-pipeline@v0.9.0 · 5536 in / 1238 out tokens · 58508 ms · 2026-05-10T02:50:28.546631+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

64 extracted references · 9 canonical work pages · 4 internal anchors

  1. Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. 2019. MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In International Conference on Machine Learning. PMLR, 21–29.
  2. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. 2018. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261 (2018).
  3. Deyu Bo, Xiao Wang, Chuan Shi, and Huawei Shen. 2021. Beyond low-frequency information in graph convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 3950–3957.
  4. Alexander Brown, Nenad Tomasev, Jan Freyberg, Yuan Liu, Alan Karthikesalingam, and Jessica Schrouff. 2023. Detecting shortcut learning for fair medical AI using shortcut testing. Nature Communications 14, 1 (2023), 4314.
  5. Guoxin Chen, Yongqing Wang, Fangda Guo, Qinglang Guo, Jiangli Shao, Huawei Shen, and Xueqi Cheng. 2023. Causality and independence enhancement for biased node classification. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 203–212.
  6. Jie Chen, Shouzhen Chen, Mingyuan Bai, Jian Pu, Junping Zhang, and Junbin Gao. 2022. Graph decoupling attention Markov networks for semi-supervised graph node classification. IEEE Transactions on Neural Networks and Learning Systems 34, 12 (2022), 9859–9873.
  7. Jie Chen, Shouzhen Chen, Junbin Gao, Zengfeng Huang, Junping Zhang, and Jian Pu. 2023. Exploiting neighbor effect: Conv-agnostic GNN framework for graphs with heterophily. IEEE Transactions on Neural Networks and Learning Systems 35, 10 (2023), 13383–13396.
  8. Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. 2020. Adaptive universal generalized PageRank graph neural network. arXiv preprint arXiv:2006.07988 (2020).
  9. Jingyuan Chou, Jiangzhuo Chen, and Madhav Marathe. 2024. State-of-the-art and challenges in causal inference on graphs: Confounders and interferences. In 2024 IEEE 6th International Conference on Cognitive Machine…
  10. Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, and Jian Tang. 2022. Debiasing graph neural networks via learning disentangled causal substructure. Advances in Neural Information Processing Systems 35 (2022), 24934–24946.
  11. Yuan Gao, Xiang Wang, Xiangnan He, Zhenguang Liu, Huamin Feng, and Yongdong Zhang. 2023. Addressing heterophily in graph anomaly detection: A perspective of graph spectrum. In Proceedings of the ACM Web Conference 2023. 1528–1538.
  12. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence 2, 11 (2020), 665–673.
  13. Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. 2005. Measuring statistical dependence with Hilbert-Schmidt norms. In International Conference on Algorithmic Learning Theory. Springer, 63–77.
  14. Jingwei Guo, Kaizhu Huang, Xinping Yi, and Rui Zhang. 2023. Graph neural networks with diverse spectral filtering. In Proceedings of the ACM Web Conference.
  15. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems 30 (2017).
  16. Mingguo He, Zhewei Wei, Hongteng Xu, et al. 2021. BernNet: Learning arbitrary graph spectral filters via Bernstein approximation. Advances in Neural Information Processing Systems 34 (2021), 14239–14251.
  17. Silu He, Qinyao Luo, Xinsha Fu, Ling Zhao, Ronghua Du, and Haifeng Li. 2024. CAT: A causal graph attention network for trimming heterophilic graphs. Information Sciences 677 (2024), 120916.
  18. Sirui Huang, Qian Li, Xiangmeng Wang, Dianer Yu, Guandong Xu, and Qing Li. 2024. Counterfactual debiasing for multi-behavior recommendations. In International Conference on Database Systems for Advanced Applications. Springer, 164–179.
  19. Yan Jiang, Guannan Liu, Junjie Wu, and Hao Lin. 2022. Telecom fraud detection via Hawkes-enhanced sequence model. IEEE Transactions on Knowledge and Data Engineering 35, 5 (2022), 5311–5324.
  20. Di Jin, Rui Wang, Meng Ge, Dongxiao He, Xiang Li, Wei Lin, and Weixiong Zhang. 2022. RAW-GNN: Random walk aggregation based graph neural network. arXiv preprint arXiv:2206.13953 (2022).
  21. Di Jin, Zhizhi Yu, Cuiying Huo, Rui Wang, Xiao Wang, Dongxiao He, and Jiawei Han. 2021. Universal graph convolutional networks. Advances in Neural Information Processing Systems 34 (2021), 10654–10664.
  22. Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu, and Jiliang Tang. 2021. Node similarity preserving graph convolutional networks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 148–156.
  23. Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
  24. Han Lei, Jiaxing Xu, Xia Dong, and Yiping Ke. 2025. Divergent paths: Separating homophilic and heterophilic learning for enhanced graph-level representations. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2. 1286–1295.
  25. Qian Li, Xiangmeng Wang, Zhichao Wang, and Guandong Xu. 2023. Be causal: De-biasing social network confounding in recommendation. ACM Transactions on Knowledge Discovery from Data 17, 1 (2023), 1–23.
  26. Meng Liu, Zhengyang Wang, and Shuiwang Ji. 2021. Non-local graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 12 (2021), 10270–10276.
  27. Yixin Liu, Yizhen Zheng, Daokun Zhang, Vincent CS Lee, and Shirui Pan. 2023. Beyond smoothing: Unsupervised graph representation learning with edge heterophily discriminating. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. 4516–4524.
  28. Sitao Luan, Chenqing Hua, Qincheng Lu, Liheng Ma, Lirong Wu, Xinyu Wang, Minkai Xu, Xiao-Wen Chang, Doina Precup, Rex Ying, et al. 2024. The heterophilic graph learning handbook: Benchmarks, models, theoretical analysis, applications and challenges. arXiv preprint arXiv:2407.09618 (2024).
  29. Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, and Doina Precup. 2022. Revisiting heterophily for graph neural networks. Advances in Neural Information Processing Systems 35 (2022), 1362–1375.
  30. Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. 2020. Parameterized explainer for graph neural network. Advances in Neural Information Processing Systems 33 (2020), 19620–19631.
  31. Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. 2021. Is homophily a necessity for graph neural networks? arXiv preprint arXiv:2106.06134 (2021).
  32. Yunpu Ma and Volker Tresp. 2021. Causal inference under networked interference and intervention policy enhancement. In International Conference on Artificial Intelligence and Statistics. PMLR, 3700–3708.
  33. Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. 2020. Learning from failure: De-biasing classifier from biased classifier. Advances in Neural Information Processing Systems 33 (2020), 20673–20684.
  34. Judea Pearl. 1998. Why there is no statistical test for confounding, why many think there is, and why they are almost right. (1998).
  35. Judea Pearl. 2009. Causality. Cambridge University Press.
  36. Judea Pearl et al. 2000. Models, reasoning and inference. Cambridge, UK: Cambridge University Press 19, 2 (2000).
  37. Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. 2020. Geom-GCN: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287 (2020).
  38. Lingfei Ren, Ruimin Hu, Zheng Wang, Yilin Xiao, Dengshi Li, Junhang Wu, Yilong Zang, Jinzhang Hu, and Zijun Huang. 2024. Heterophilic graph invariant learning for out-of-distribution of fraud detection. In Proceedings of the 32nd ACM International Conference on Multimedia. 11032–11040.
  39. Zhixiang Shen and Zhao Kang. 2025. When heterophily meets heterogeneous graphs: Latent graphs guided unsupervised representation learning. IEEE Transactions on Neural Networks and Learning Systems (2025).
  40. Yongduo Sui, Caizhi Tang, Zhixuan Chu, Junfeng Fang, Yuan Gao, Qing Cui, Longfei Li, Jun Zhou, and Xiang Wang. 2024. Invariant graph learning for causal effect estimation. In Proceedings of the ACM on Web Conference 2024. 2552–2562.
  41. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903 (2017).
  42. Xiangmeng Wang, Qian Li, Dianer Yu, Peng Cui, Zhichao Wang, and Guandong Xu. 2022. Causal disentanglement for semantic-aware intent learning in recommendation. IEEE Transactions on Knowledge and Data Engineering 35, 10 (2022), 9836–9849.
  43. Xiangmeng Wang, Qian Li, Dianer Yu, Qing Li, and Guandong Xu. 2024. Constrained off-policy learning over heterogeneous information for fairness-aware recommendation. ACM Transactions on Recommender Systems 2, 4 (2024), 1–27.
  44. Xiangmeng Wang, Qian Li, Dianer Yu, Qing Li, and Guandong Xu. 2024. Counterfactual explanation for fairness in recommendation. ACM Transactions on Information Systems 42, 4 (2024), 1–30.
  45. Xiangmeng Wang, Qian Li, Dianer Yu, Qing Li, and Guandong Xu. 2024. Reinforced path reasoning for counterfactual explainable recommendation. IEEE Transactions on Knowledge and Data Engineering 36, 7 (2024), 3443–3459.
  46. Xiangmeng Wang, Qian Li, Dianer Yu, and Guandong Xu. 2022. Off-policy learning over heterogeneous information for recommendation. In Proceedings of the ACM Web Conference 2022. 2348–2359.
  47. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 (2018).
  48. Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, and Danai Koutra. 2022. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. In 2022 IEEE International Conference on Data Mining (ICDM). IEEE, 1287–1292.
  49. Liang Yang, Mengzhe Li, Liyang Liu, Chuan Wang, Xiaochun Cao, Yuanfang Guo, et al. 2021. Diverse message passing for attribute with heterophily. Advances in Neural Information Processing Systems 34 (2021), 4751–4763.
  50. Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating explanations for graph neural networks. Advances in Neural Information Processing Systems 32 (2019).
  51. Dianer Yu, Qian Li, Xiangmeng Wang, Qing Li, and Guandong Xu. 2023. Counterfactual explainable conversational recommendation. IEEE Transactions on Knowledge and Data Engineering 36, 6 (2023), 2388–2400.
  52. Dianer Yu, Qian Li, Xiangmeng Wang, and Guandong Xu. 2023. Deconfounded recommendation via causal intervention. Neurocomputing 529 (2023), 128–139.
  53. Dianer Yu, Qian Li, Xiangmeng Wang, and Guandong Xu. 2025. A causal-based attribute selection strategy for conversational recommender systems. IEEE Transactions on Knowledge and Data Engineering (2025).
  54. En Yu, Jie Lu, Kun Wang, Xiaoyu Yang, and Guangquan Zhang. 2026. Drift-aware collaborative assistance mixture of experts for heterogeneous multistream learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 40. 16199–16207.
  55. Yuan Yuan, Kristen Altenburger, and Farshad Kooti. 2021. Causal network motifs: Identifying heterogeneous spillover effects in A/B tests. In Proceedings of the Web Conference 2021. 3359–3370.
  56. Tong Zhang and Bin Yu. 2005. Boosting with early stopping: Convergence and consistency. (2005).
  57. Shuai Zheng, Zhenfeng Zhu, Zhizhe Liu, Youru Li, and Yao Zhao. 2023. Node-oriented spectral filtering for graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 46, 1 (2023), 388–402.
  58. Jiong Zhu, Gaotang Li, Yao-An Yang, Jing Zhu, Xuehao Cui, and Danai Koutra. 2024. On the impact of feature heterophily on link prediction with graph neural networks. Advances in Neural Information Processing Systems 37 (2024), 65823–65851.