pith. machine review for the scientific record.

arxiv: 2605.08074 · v1 · submitted 2026-05-08 · 💻 cs.LG

Recognition: 2 theorem links · Lean Theorem

GRAPHLCP: Structure-Aware Localized Conformal Prediction on Graphs

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 02:00 UTC · model grok-4.3

classification 💻 cs.LG
keywords conformal prediction · graph neural networks · uncertainty quantification · localized conformal prediction · graph structure · personalized pagerank · prediction sets

The pith

GRAPHLCP uses graph topology via densification and PageRank kernels to localize conformal prediction and produce tighter sets with coverage guarantees.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops GRAPHLCP to apply conformal prediction to graph neural networks by moving beyond embedding proximity to explicitly include graph structure. It first densifies sparse graphs in a feature-aware way, then builds a kernel from personalized PageRank to quantify structural closeness between nodes. This kernel drives the selection of calibration anchors and the weighting of their nonconformity scores so that both nearby and distant dependencies are considered. The result is a procedure that retains finite-sample marginal coverage while aiming for smaller, more adaptive prediction sets on graph data. Experiments across regression and classification tasks show the method meets coverage targets and improves conditional performance in varied scenarios.
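To make the mechanics concrete, here is a minimal sketch of the weighted-quantile step that any localized conformal method, GRAPHLCP included, must perform once kernel weights relative to a test node are in hand. The function name and the uniform-weight example are ours, not the paper's; this is the generic weighted-CP calculation, not the paper's full pipeline.

```python
import numpy as np

def weighted_conformal_quantile(scores, weights, alpha=0.1):
    """Weighted (1 - alpha) quantile of calibration nonconformity scores.

    Localized CP replaces the uniform calibration weights of split CP with
    weights derived from a proximity kernel; here the kernel values are
    passed in directly. A +inf score with unit weight stands in for the
    test point, as in weighted conformal prediction.
    """
    scores = np.append(scores, np.inf)      # augment with the test point
    weights = np.append(weights, 1.0)       # weight on the test point itself
    p = weights / weights.sum()             # normalize to a distribution
    order = np.argsort(scores)
    cum = np.cumsum(p[order])
    k = np.searchsorted(cum, 1 - alpha)     # first index covering 1 - alpha mass
    return scores[order][k]

# Toy usage: with uniform weights this recovers the usual split-CP quantile.
rng = np.random.default_rng(0)
cal_scores = rng.normal(size=200)
q = weighted_conformal_quantile(cal_scores, np.ones(200), alpha=0.1)
```

With non-uniform kernel weights the threshold, and hence the prediction set, adapts to the test node's structural neighborhood.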

Core claim

GRAPHLCP performs localized conformal prediction on graphs by first applying feature-aware densification to reduce locality bias in sparse topologies and then computing a Personalized PageRank kernel that encodes structural proximity; the resulting kernel determines topology-dependent anchor sampling and calibration weighting, thereby capturing both local and long-range inter-node dependencies while preserving distribution-free coverage guarantees.

What carries the argument

Feature-aware densification step followed by a Personalized PageRank kernel that models structural proximity for anchor sampling and weighting.
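The PPR vector at the heart of that kernel can be computed by simple power iteration. The sketch below is our own simplified implementation (dense adjacency, single seed), not the paper's code; it shows how the restart probability trades off local against long-range structure.

```python
import numpy as np

def personalized_pagerank(A, seed, restart=0.15, iters=100):
    """Personalized PageRank vector for one seed node via power iteration.

    A is a dense adjacency matrix; `restart` is the teleport probability
    back to the seed. The resulting vector scores every node's structural
    proximity to the seed, mixing immediate neighbors with longer paths.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = A / np.where(deg > 0, deg, 1)[:, None]   # row-stochastic transitions
    e = np.zeros(n)
    e[seed] = 1.0                                # teleport distribution
    pi = e.copy()
    for _ in range(iters):
        pi = restart * e + (1 - restart) * P.T @ pi
    return pi

# 4-node path graph 0-1-2-3: mass from seed 0 decays beyond its neighborhood.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
ppr = personalized_pagerank(A, seed=0)
```

A smaller restart probability spreads mass further along the graph; a larger one concentrates it near the seed, which is exactly the locality knob the method tunes.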

Load-bearing premise

Graph topology supplies additional reliable signal for localization and weighting that is not already present in the node embeddings, and the added densification and PageRank steps preserve the exchangeability needed for conformal guarantees.

What would settle it

A direct falsifier: on a held-out graph dataset, GRAPHLCP either fails to achieve the promised marginal coverage rate or produces larger average prediction sets than an embedding-only localized conformal baseline.
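That test reduces to comparing two numbers, empirical coverage and mean set size, between GRAPHLCP and the baseline on the same held-out nodes. A minimal sketch of the metric, with illustrative names and a toy oracle interval rather than either method's actual output:

```python
import numpy as np

def coverage_and_width(y_true, lower, upper):
    """Empirical marginal coverage and mean interval width on held-out data.

    Coverage below the nominal level, or wider sets at equal coverage
    relative to an embedding-only baseline, would count against GRAPHLCP.
    """
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean(), (upper - lower).mean()

# Toy check: Gaussian targets against the oracle 90% interval [-1.645, 1.645].
rng = np.random.default_rng(1)
y = rng.normal(size=1000)
lo = np.full(1000, -1.645)
hi = np.full(1000, 1.645)
cov, width = coverage_and_width(y, lo, hi)
```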

Figures

Figures reproduced from arXiv: 2605.08074 by Debmalya Mandal, Fangxin Wang, Peyman Baghershahi, Sourav Medya.

Figure 1
Figure 1: GRAPHLCP. Given the embeddings of a frozen GNN for an input graph, GRAPHLCP goes through four stages: 1) PCA Transformation: high-dimensional embeddings are decorrelated and the largest eigenvalues (corresponding to directions of highest variance) yield an adaptive bandwidth for an anisotropic Gaussian kernel; 2) Graph Densification: adding informative edges via a homophily-based dynamic threshold; 3) PPR Inte…
Figure 2
Figure 2: Results for WSC on regression and classification datasets. Miscoverage rate…
Figure 4
Figure 4: Results for group-based conditional coverage. It shows the minimum coverage across…
Figure 3
Figure 3: Results for WSC and efficiency on all datasets. While GRAPHLCP achieves the second-best WSC, it outperforms CalLCP in normalized prediction length. Next, we design an experiment to simultaneously compare validity and efficiency. Specifically, we collect worst-case coverage (WSC) and prediction length across datasets…
Figure 5
Figure 5: GOODCBAS dataset: Impact of Gaussian kernel bandwidth…
Figure 6
Figure 6: Impact of node homophily on the marginal coverage and prediction length/size of SCP…
Figure 7
Figure 7: Sensitivity analysis of the PPR restart probability…
Figure 8
Figure 8: Sensitivity analysis of initial densification threshold…
Figure 9
Figure 9: Results for group-based conditional coverage. It shows the minimum coverage across…
Original abstract

Conformal prediction (CP) provides a distribution-free approach to uncertainty quantification with finite-sample guarantees. However, applying CP to graph neural networks (GNNs) remains challenging as the combinatorial nature of graphs often leads to insufficiently certain predictions and indiscriminative embeddings. Existing methods primarily rely on embedding-space proximity for localization, which can be unreliable for graphs and yield inefficient prediction sets. We propose GRAPHLCP, a proximity-based localized CP framework that explicitly incorporates graph topology and inter-node dependencies into localization and weighting. Our approach introduces a feature-aware densification step to mitigate locality bias in sparse graphs, followed by a Personalized PageRank-based kernel computation to model structural proximity. This enables topology-dependent anchor sampling and calibration weighting that captures both local and long-range dependencies. Extensive experiments on several regression and classification datasets demonstrate that GRAPHLCP guarantees marginal coverage with finite samples while efficiently attaining favorable test conditional coverage across various conditioning scenarios.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper proposes GRAPHLCP, a localized conformal prediction method for GNNs on graphs. It adds a feature-aware densification step to address sparsity, computes a Personalized PageRank kernel on the full graph (including test nodes) to capture structural proximity, and uses the resulting topology-dependent anchor sampling and calibration weighting to produce prediction sets. The central claims are finite-sample marginal coverage together with improved test-conditional coverage relative to embedding-only baselines, demonstrated on regression and classification graph datasets.

Significance. If the finite-sample marginal coverage guarantee is rigorously established despite the test-dependent PPR construction, the work would meaningfully extend conformal prediction to graph-structured data by incorporating explicit topology rather than relying solely on embedding proximity. This could improve efficiency and conditional coverage in domains where graph structure carries signal beyond node features.

major comments (2)
  1. [§4, Theorem 1] §4 (Theoretical Analysis), Theorem 1 and surrounding derivation: the finite-sample marginal coverage claim relies on exchangeability (or a weighting that preserves super-uniformity of the conformity p-value). The construction computes the PPR kernel and selects anchors on the full graph including the test node, so both the sampled calibration set and the weights are functions of the test instance. The proof must explicitly derive that the resulting weighted p-value remains super-uniform; if it only invokes the standard CP argument without addressing this dependence, the guarantee does not follow.
  2. [§3.3] §3.3 (Anchor Sampling and Weighting): the topology-dependent sampling and reweighting step is presented as preserving the CP guarantee, but no leave-one-out or fixed-pool variant is described that would restore exchangeability. If the calibration pool is recomputed for each test node, an explicit correction (e.g., via a test-independent kernel or importance weighting that accounts for the selection bias) is required for the coverage statement to hold.
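The super-uniformity property at issue in both major comments can be stated operationally: under exchangeability, the conformal p-value satisfies P(p ≤ α) ≤ α. The simulation below is ours and covers only the plain unweighted case; it shows the baseline guarantee the proof must show survives GRAPHLCP's test-dependent weighting.

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """Standard conformal p-value: proportion of the augmented score set
    (calibration scores plus the test score itself) at least as large as
    the test score."""
    return (1 + np.sum(cal_scores >= test_score)) / (len(cal_scores) + 1)

# Monte-Carlo check of super-uniformity, P(p <= alpha) <= alpha, when the
# n + 1 scores really are exchangeable (here: iid normal draws).
rng = np.random.default_rng(2)
n, reps, alpha = 49, 2000, 0.1
hits = 0
for _ in range(reps):
    s = rng.normal(size=n + 1)            # exchangeable scores
    if conformal_p_value(s[:n], s[n]) <= alpha:
        hits += 1
rejection_rate = hits / reps
```

The referee's concern is precisely that once the weights and anchor set are functions of the test node, this rejection rate is no longer automatically bounded by alpha.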
minor comments (3)
  1. [§3] Notation for the densification threshold and PPR teleport probability is introduced without a consolidated table of symbols; readers must hunt across §3.1–3.2.
  2. [Figure 2] Figure 2 (coverage vs. conditioning variable) would benefit from error bars or multiple random seeds to show variability of the conditional coverage curves.
  3. [Abstract and §5] The abstract states 'guarantees marginal coverage with finite samples' but the experiments only report empirical coverage; a short statement clarifying that the reported numbers are consistent with but do not prove the theorem would avoid overstatement.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful and constructive review of our manuscript on GRAPHLCP. The comments on the theoretical analysis are well-taken and highlight the need for greater explicitness regarding test-dependence in the coverage proof. We address each major comment point by point below, indicating the revisions we will make.

Point-by-point responses
  1. Referee: [§4, Theorem 1] §4 (Theoretical Analysis), Theorem 1 and surrounding derivation: the finite-sample marginal coverage claim relies on exchangeability (or a weighting that preserves super-uniformity of the conformity p-value). The construction computes the PPR kernel and selects anchors on the full graph including the test node, so both the sampled calibration set and the weights are functions of the test instance. The proof must explicitly derive that the resulting weighted p-value remains super-uniform; if it only invokes the standard CP argument without addressing this dependence, the guarantee does not follow.

    Authors: We agree that the test-dependent nature of the PPR kernel computation (performed on the full graph including the test node) requires an explicit argument that the weighted p-value remains super-uniform. The current proof of Theorem 1 invokes the standard conformal prediction marginal coverage result after establishing the form of the weights and sampling, but does not spell out the joint distribution argument needed to confirm super-uniformity under this dependence. We will revise §4 to include a detailed derivation: we will show that the PPR kernel is a symmetric function of the graph, that the anchor sampling probabilities are determined by a fixed (test-independent) feature-aware densification step followed by a test-augmented but exchangeable kernel evaluation, and that the resulting weighted scores satisfy the super-uniformity condition marginally over the joint distribution of calibration and test points. This will be presented as a self-contained lemma supporting Theorem 1. revision: yes

  2. Referee: [§3.3] §3.3 (Anchor Sampling and Weighting): the topology-dependent sampling and reweighting step is presented as preserving the CP guarantee, but no leave-one-out or fixed-pool variant is described that would restore exchangeability. If the calibration pool is recomputed for each test node, an explicit correction (e.g., via a test-independent kernel or importance weighting that accounts for the selection bias) is required for the coverage statement to hold.

    Authors: The referee is correct that recomputing the anchor set and weights for each test node via the full-graph PPR introduces a dependence that is not automatically covered by a standard exchangeability argument. The manuscript presents the weighting as preserving coverage through the structural proximity measure, but does not provide an auxiliary fixed-pool or leave-one-out construction for comparison. In the revision we will add to §3.3 both (i) a fixed-pool variant in which the PPR kernel is computed on the training graph only (excluding the test node) and (ii) an explicit importance-weighting correction that accounts for the selection bias induced by test-dependent sampling. We will prove that the corrected weights restore the required super-uniformity, thereby making the coverage statement rigorous for both the original and the fixed-pool procedures. revision: yes
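The fixed-pool variant promised here has a direct sketch: delete the test node before any PPR computation, so no kernel quantity can depend on the test instance. Everything below (names, dense matrices, per-node power iteration) is our illustrative reconstruction under that description, not the authors' code.

```python
import numpy as np

def fixed_pool_kernel(A_full, test_idx, restart=0.15, iters=100):
    """PPR kernel over the calibration pool with the test node removed.

    Because the test node is deleted before any computation, the kernel
    entries are a function of the training graph alone, restoring the
    test-independence needed for a standard exchangeability argument.
    """
    keep = np.delete(np.arange(A_full.shape[0]), test_idx)
    A = A_full[np.ix_(keep, keep)]               # training-only adjacency
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = A / np.where(deg > 0, deg, 1)[:, None]   # row-stochastic transitions
    K = np.zeros((n, n))
    for seed in range(n):                        # one PPR vector per node
        e = np.zeros(n)
        e[seed] = 1.0
        pi = e.copy()
        for _ in range(iters):
            pi = restart * e + (1 - restart) * P.T @ pi
        K[seed] = pi
    return K

# Toy graph: edges 0-1, 0-2, 1-2, 1-3; node 3 is the held-out test node.
A_full = np.array([[0, 1, 1, 0],
                   [1, 0, 1, 1],
                   [1, 1, 0, 0],
                   [0, 1, 0, 0]], dtype=float)
K = fixed_pool_kernel(A_full, test_idx=3)
```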

Circularity Check

0 steps flagged

No circularity: GRAPHLCP extends CP with independent graph components

Full rationale

The derivation chain starts from standard conformal prediction finite-sample guarantees and augments them with explicitly introduced steps (feature-aware densification and PPR kernel computation) whose definitions and motivations are external to the target coverage result. No equation reduces a claimed prediction or guarantee to a fitted quantity defined by the same procedure, no self-citation supplies a load-bearing uniqueness theorem, and no ansatz is smuggled in via prior work. The method is self-contained against external CP benchmarks and graph kernels; any coverage claim rests on the adaptation of exchangeability rather than internal redefinition.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Review performed on abstract only; no explicit free parameters, axioms, or invented entities are stated in the provided text. Standard conformal prediction assumptions are implicitly used.

axioms (2)
  • standard math Conformal prediction provides finite-sample coverage guarantees under exchangeability of calibration and test points
    Core background assumption for all CP methods referenced in the abstract
  • domain assumption Graph topology supplies useful proximity information beyond embedding-space distances for localization
    Central premise justifying the new densification and PPR steps
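The first axiom is the textbook split-conformal guarantee; stated precisely (our formulation of the standard result, as in Vovk et al.):

```latex
% Split conformal prediction: given calibration scores s_1, \dots, s_n and
% an exchangeable test pair (X_{n+1}, Y_{n+1}), set
\[
  \hat{q} = s_{(\lceil (1-\alpha)(n+1) \rceil)}, \qquad
  C(x) = \{\, y : s(x, y) \le \hat{q} \,\}.
\]
% Then the prediction set has finite-sample marginal coverage:
\[
  \mathbb{P}\bigl( Y_{n+1} \in C(X_{n+1}) \bigr) \ge 1 - \alpha .
\]
```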

pith-pipeline@v0.9.0 · 5466 in / 1337 out tokens · 33377 ms · 2026-05-11T02:00:41.197483+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages · 1 internal anchor

  1. [1] Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.
  2. [2] Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability. The Annals of Statistics, 2022.
  3. [3] Aleksandar Bojchevski and Stephan Günnemann. Deep Gaussian embedding of graphs: Unsupervised inductive learning via ranking. In International Conference on Learning Representations, 2018.
  4. [4] Sacha Braun, David Holzmüller, Michael I. Jordan, and Francis Bach. Conditional coverage diagnostics for conformal prediction, 2025.
  5. [5] Maxime Cauchois, Suyash Gupta, and John C. Duchi. Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction. Journal of Machine Learning Research, 22(1), January 2021.
  6. [6] Dawei Cheng, Yao Zou, Sheng Xiang, and Changjun Jiang. Graph neural networks for financial fraud detection: a review. Frontiers of Computer Science, 19(9):199609, 2025.
  7. [7] Jase Clarkson. Distribution free prediction sets for node classification. In Proceedings of the 40th International Conference on Machine Learning, ICML '23. JMLR.org, 2023.
  8. [8] Evan N. Feinberg, Harsh Suratia, and Amir Saffari. PotentialNet for molecular property prediction. Journal of Chemical Information and Modeling, 58(6):1194–1201, 2018.
  9. [9] Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. The limits of distribution-free conditional predictive inference. Information and Inference: A Journal of the IMA, 10(2):455–482, 2020.
  10. [10] Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. Combining neural networks with personalized pagerank for classification on graphs. In International Conference on Learning Representations, 2019.
  11. [11] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML '17, pages 1263–1272. JMLR.org, 2017.
  12. [12] Francesco Di Giovanni, T. Konstantin Rusch, Michael Bronstein, Andreea Deac, Marc Lackenby, Siddhartha Mishra, and Petar Veličković. How does over-squashing affect the power of GNNs? Transactions on Machine Learning Research, 2024.
  13. [13] Jhony H. Giraldo, Konstantinos Skianis, Thierry Bouwmans, and Fragkiskos D. Malliaros. On the trade-off between over-smoothing and over-squashing in deep graph neural networks. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM '23, pages 566–576. Association for Computing Machinery, 2023.
  14. [14] Leying Guan. Localized conformal prediction: a generalized inference framework for conformal prediction. Biometrika, 110(1):33–50, 2022.
  15. [15] Shurui Gui, Xiner Li, Limei Wang, and Shuiwang Ji. GOOD: A graph out-of-distribution benchmark. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.
  16. [16] Rohan Hore and Rina Foygel Barber. Conformal prediction with local weights: randomization enables robust guarantees. Journal of the Royal Statistical Society Series B: Statistical Methodology, 87(2):549–578, 2024.
  17. [17] Hans Hao-Hsun Hsu, Yuesong Shen, Christian Tomani, and Daniel Cremers. What makes graph neural networks miscalibrated? In Advances in Neural Information Processing Systems, volume 35, pages 13775–13786. Curran Associates, Inc., 2022.
  18. [18] Kexin Huang, Ying Jin, Emmanuel Candès, and Jure Leskovec. Uncertainty quantification over graph with conformalized graph neural networks. In NeurIPS, 2023.
  19. [19] Glen Jeh and Jennifer Widom. Scaling personalized web search. In Proceedings of the 12th International Conference on World Wide Web, WWW '03, pages 271–279. Association for Computing Machinery, 2003.
  20. [20] Junteng Jia and Austin R. Benson. Residual correlation in graph neural network regression. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, pages 588–598. Association for Computing Machinery, 2020.
  21. [21] Sunay Joshi, Shayan Kiyani, George J. Pappas, Edgar Dobriban, and Hamed Hassani. Conformal inference under high-dimensional covariate shifts via likelihood-ratio regularization. In The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025.
  22. [22] Nicolas Keriven. Not too little, not too much: a theoretical analysis of graph (over)smoothing. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22. Curran Associates Inc., 2022.
  23. [23] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
  24. [24] Arun Kumar Kuchibhotla. Exchangeability, conformal prediction, and rank tests, 2021.
  25. [25] Siddhartha Laghuvarapu, Zhen Lin, and Jimeng Sun. CoDrug: conformal drug property prediction with density estimation under covariate shift. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23. Curran Associates Inc., 2023.
  26. [26] Xixun Lin, Yanan Cao, Nan Sun, Lixin Zou, Chuan Zhou, Peng Zhang, Shuai Zhang, Ge Zhang, and Jia Wu. Conformal graph-level out-of-distribution detection with adaptive data augmentation. In Proceedings of the ACM on Web Conference 2025, WWW '25, pages 4755–4765. Association for Computing Machinery, 2025.
  27. [27] Robert Lunde. On the validity of conformal prediction for network data under non-uniform sampling, 2023.
  28. [28] Robert Lunde, Elizaveta Levina, and Ji Zhu. Conformal prediction for network-assisted regression. Journal of the American Statistical Association, 120(551):1633–1644, 2025.
  29. [29] Minbo Ma, Peng Xie, Fei Teng, Bin Wang, Shenggong Ji, Junbo Zhang, and Tianrui Li. HiSTGNN: Hierarchical spatio-temporal graph neural network for weather forecasting. Information Sciences, 648:119580, 2023.
  30. [30] Ariane Marandon. Conformal link prediction for false discovery rate control. TEST, 33:1062–1083, 2023.
  31. [31] Huda Nassar, Kyle Kloster, and David F. Gleich. Strong localization in personalized pagerank vectors. In Proceedings of the 12th International Workshop on Algorithms and Models for the Web Graph, WAW 2015, pages 190–202. Springer-Verlag, 2015.
  32. [32] Athanasios N. Nikolakopoulos, Xia Ning, Christian Desrosiers, and George Karypis. Trust your neighbors: A comprehensive survey of neighborhood-based methods for recommender systems. arXiv preprint arXiv:2109.04584, 2021.
  33. [33] Oyebade K. Oyedotun, Kassem Al Ismaeil, and Djamila Aouada. Why is everyone training very deep neural network with skip connections? IEEE Transactions on Neural Networks and Learning Systems, 34(9):5961–5975, 2023.
  34. [34] Yaniv Romano, Matteo Sesia, and Emmanuel J. Candès. Classification with valid and adaptive coverage. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20. Curran Associates Inc., 2020.
  35. [35] T. Konstantin Rusch, Michael M. Bronstein, and Siddhartha Mishra. A survey on oversmoothing in graph neural networks, 2023.
  36. [36] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008.
  37. [37] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation, 2019.
  38. [38] Wei Shen, Mang Ye, and Wenke Huang. Resisting over-smoothing in graph neural networks via dual-dimensional decoupling. In ACM Multimedia 2024, 2024.
  39. [39] Jonathon Shlens. A tutorial on principal component analysis, 2014.
  40. [40] Jianqing Song, Jianguo Huang, Wenyu Jiang, Baoming Zhang, Shuangjie Li, and Chongjun Wang. Similarity-navigated conformal prediction for graph neural networks. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
  41. [41] Ryan J. Tibshirani, Rina Foygel Barber, Emmanuel Candès, and Aaditya Ramdas. Conformal prediction under covariate shift. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
  42. [42] Vladimir Vovk, Alex Gammerman, and Glenn Shafer. Algorithmic Learning in a Random World. Springer.
  43. [43] Xiao Wang, Hongrui Liu, Chuan Shi, and Cheng Yang. Be confident! towards trustworthy graph neural networks via confidence calibration. In Advances in Neural Information Processing Systems, 2021.
  44. [44] Soroush H. Zargarbashi, Simone Antonelli, and Aleksandar Bojchevski. Conformal prediction sets for graph neural networks. In International Conference on Machine Learning, pages 12292–12318. PMLR, 2023.
  45. [45] Soroush H. Zargarbashi and Aleksandar Bojchevski. Conformal inductive graph neural networks. In The Twelfth International Conference on Learning Representations, 2024.
  46. [46] Zheng Zhang, Jie Bao, Zhixin Zhou, Nicolo Colombo, Lixin Cheng, and Rui Luo. Residual reweighted conformal prediction for graph neural networks. In Proceedings of the Forty-First Conference on Uncertainty in Artificial Intelligence, UAI '25. JMLR.org, 2025.
  47. [47] Tianyi Zhao, Jian Kang, and Lu Cheng. Conformalized link prediction on graph neural networks. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pages 4490–4499. Association for Computing Machinery, 2024.