pith · machine review for the scientific record

arxiv: 2604.19028 · v1 · submitted 2026-04-21 · 💻 cs.LG

Recognition: unknown

Learning Posterior Predictive Distributions for Node Classification from Synthetic Graph Priors

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 03:46 UTC · model grok-4.3

classification 💻 cs.LG
keywords NodePFN · posterior predictive distributions · synthetic graph priors · node classification · graph neural networks · in-context learning · universal generalization · homophily

The pith

A single pre-trained NodePFN model learns posterior predictive distributions from synthetic graph priors and classifies nodes on arbitrary graphs without any graph-specific training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents NodePFN as a way to overcome the need to retrain graph neural networks separately for every new dataset. It pre-trains one model on thousands of synthetic graphs whose homophily levels and feature-label relationships are controlled to span the range seen in real data. This pre-training teaches the model to produce posterior predictive distributions that support in-context node classification on unseen graphs. A reader would care because the approach removes the per-graph training step that currently blocks universal application of graph methods. The dual-branch design pairs attention over context examples with local message passing to make the predictions graph-aware.
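
To make the prior concrete, here is a minimal sketch of what one homophily-controlled synthetic training graph could look like. The paper does not publish its generator, so every knob below (class count, degree target, noise scale, the label-to-prototype map standing in for the structural causal model) is an illustrative assumption, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_graph(n=200, k=4, d=16, h=0.8, avg_deg=8.0):
    """Sample one synthetic graph with (approximately) edge homophily h."""
    y = rng.integers(0, k, size=n)                  # node labels
    # Calibrate same-/cross-class edge probabilities so the expected degree
    # is avg_deg and the expected fraction of same-class edges is h.
    p_bar = avg_deg / (n - 1)
    p_in = h * k * p_bar
    p_out = (1.0 - h) * k / (k - 1) * p_bar
    same = y[:, None] == y[None, :]
    probs = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    adj = upper | upper.T                           # undirected, no self-loops
    # Feature-label SCM stand-in: label -> class prototype -> noisy features.
    prototypes = rng.normal(size=(k, d))
    x = prototypes[y] + 0.5 * rng.normal(size=(n, d))
    return x, adj, y

x, adj, y = sample_prior_graph(h=0.2)               # a heterophilous draw
src, dst = np.nonzero(np.triu(adj, k=1))
print("realized edge homophily:", (y[src] == y[dst]).mean())
```

Varying h from draw to draw is what lets a single prior span both homophilous and heterophilous regimes.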

Core claim

NodePFN learns posterior predictive distributions for node classification by training exclusively on synthetic graphs generated from random networks with controllable homophily and structural causal models for feature-label relationships; once pre-trained, the single model generalizes to arbitrary real graphs without further training and reaches 71.27% average accuracy across 23 benchmarks.
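
A hedged sketch of the PFN-style pre-training objective that would yield such a posterior predictive distribution: sample a fresh synthetic task each step, split its nodes into context and query, and minimize cross-entropy on the queries. The stand-in classifier below ignores the context set and the graph (a real PFN conditions on both); it exists only to keep the loop runnable, and it reuses sample_prior_graph from the sketch above.

```python
import numpy as np
import torch
import torch.nn.functional as F

k, d = 4, 16
model = torch.nn.Linear(d, k)   # stand-in; the paper's model is the dual-branch network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Fresh synthetic task each iteration, homophily drawn at random so the
    # prior covers both regimes (the range here is illustrative, not the paper's).
    x_np, adj, y_np = sample_prior_graph(h=np.random.uniform(0.05, 0.95))
    x = torch.tensor(x_np, dtype=torch.float32)
    y = torch.tensor(y_np)
    perm = torch.randperm(x.shape[0])
    ctx, qry = perm[: len(perm) // 2], perm[len(perm) // 2:]
    # ctx is unused by this stand-in; a real PFN conditions its query
    # predictions on (x[ctx], y[ctx]). Cross-entropy on held-out query nodes,
    # averaged over many sampled tasks, is the loss whose minimizer
    # approximates the posterior predictive distribution under the prior.
    loss = F.cross_entropy(model(x[qry]), y[qry])
    opt.zero_grad()
    loss.backward()
    opt.step()
```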

What carries the argument

Dual-branch architecture that combines context-query attention mechanisms with local message passing to perform graph-aware in-context learning from the learned posterior predictive distributions.
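
The paper supplies no pseudocode (the referee notes this below), so the following is only a schematic of the dual-branch idea: one branch where every node attends to the labeled context nodes, one branch doing mean-aggregation message passing over the graph, and a linear mix of the two. Layer sizes, normalization, and the label encoding are all assumptions.

```python
import torch
import torch.nn as nn

class DualBranchLayer(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # Branch 1: context-query attention (all nodes attend to context nodes).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Branch 2 (mean aggregation) has no parameters; a linear layer mixes both.
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, h, adj, ctx_idx):
        # h: (n, dim) node states; adj: (n, n) dense 0/1 adjacency;
        # ctx_idx: indices of the labeled (context) nodes.
        ctx = h[ctx_idx].unsqueeze(0)                      # (1, c, dim)
        attn_out, _ = self.attn(h.unsqueeze(0), ctx, ctx)  # in-context branch
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        mp_out = (adj @ h) / deg                           # local message passing
        return self.mix(torch.cat([attn_out.squeeze(0), mp_out], dim=-1))

layer = DualBranchLayer(dim=16)
h = torch.randn(10, 16)
adj = (torch.rand(10, 10) < 0.3).float()
out = layer(h, adj, ctx_idx=torch.arange(4))               # (10, 16)
```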

If this is right

  • Node classification no longer requires separate training runs for each new graph dataset.
  • Synthetic graph priors can substitute for real labeled data when learning general graph patterns.
  • A single model can be deployed across graphs that differ in homophily and community structure.
  • The posterior predictive approach enables in-context learning directly on graph data.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same synthetic-prior strategy could be tested on other graph tasks such as link prediction or graph classification.
  • Scaling the number and diversity of synthetic graphs might further close the gap to graph-specific models.
  • If the method works, practitioners could maintain one shared model instead of many per-dataset checkpoints.

Load-bearing premise

The chosen process for generating synthetic graphs with controllable homophily and structural causal models covers the full range of homophily levels, community structures, and feature distributions that appear in real-world graphs.

What would settle it

If a new real-world graph whose homophily or feature distribution lies outside the synthetic priors yields accuracy far below graph-specific baselines when fed to the pre-trained NodePFN, the universal generalization claim would be refuted.
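
That test is easy to operationalize along the homophily axis at least: compute a real graph's edge homophily and check it against the range the synthetic prior was drawn from. The range constant below is a placeholder; the paper's actual prior range would go there.

```python
import numpy as np

def edge_homophily(edge_index: np.ndarray, y: np.ndarray) -> float:
    """Fraction of edges joining same-label nodes.

    edge_index: (2, num_edges) int array; y: (num_nodes,) labels.
    """
    src, dst = edge_index
    return float((y[src] == y[dst]).mean())

PRIOR_H_RANGE = (0.05, 0.95)   # placeholder, not the paper's actual range

# Toy 4-cycle with two classes: edges (0,1) and (2,3) are homophilous.
edge_index = np.array([[0, 1, 2, 3], [1, 2, 3, 0]])
y = np.array([0, 0, 1, 1])
h = edge_homophily(edge_index, y)
print(h, PRIOR_H_RANGE[0] <= h <= PRIOR_H_RANGE[1])        # 0.5 True
```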

Figures

Figures reproduced from arXiv: 2604.19028 by Jeongwhan Choi, Jongwoo Kim, Noseong Park, Woosung Kang.

Figure 1: NodePFN enables universal node classification. (a) Each real-world graph requires its own …

Figure 2: Network examples with various graph priors used in NodePFN training. (a) Sparse ER. (b) Dense ER. (c) Low-homophily cSBM. (d) High-homophily cSBM. For simplicity, class-color-coded nodes are not shown in ER. We consider two random networks. The Erdős–Rényi (ER) model (Erdős & Rényi, 1959) generates graphs where each edge appears independently with probability p_er. This creates graphs with binomial degr…

Figure 3: NodePFN overview. (a) Generation of diverse synthetic graph priors with varying struc…

Figure 4: Experiments on the synthetic Cora dataset. Setup: to evaluate classification capability on various homophily ratios, the synthetic Cora generator of Li et al. (2021) is used; the detailed synthetic datasets are in Section B.

Figure 5: Structural node classification results (GraphAny trained on Wisconsin). Test accuracy: TabPFN mean 55.5% vs. NodePFN mean 71.2%.

Figure 7: Impact of prior data scale on NodePFN performance. Analyzes how the number of synthetic prior graphs affects performance and the data efficiency of PFN training, i.e., the trade-off between computational cost and performance gains when scaling prior data; in pre-training, each training iteration generates new synthetic graphs based on our rand…

Figure 8: Distribution of synthetic prior datasets used for pre-training.
Original abstract

One of the most challenging problems in graph machine learning is generalizing across graphs with diverse properties. Graph neural networks (GNNs) face a fundamental limitation: they require separate training for each new graph, preventing universal generalization across diverse graph datasets. A critical challenge facing GNNs lies in their reliance on labeled training data for each individual graph, a requirement that hinders the capacity for universal node classification due to the heterogeneity inherent in graphs -- differences in homophily levels, community structures, and feature distributions across datasets. Inspired by the success of large language models (LLMs) that achieve in-context learning through massive-scale pre-training on diverse datasets, we introduce NodePFN. This universal node classification method generalizes to arbitrary graphs without graph-specific training. NodePFN learns posterior predictive distributions (PPDs) by training only on thousands of synthetic graphs generated from carefully designed priors. Our synthetic graph generation covers real-world graphs through the use of random networks with controllable homophily levels and structural causal models for complex feature-label relationships. We develop a dual-branch architecture combining context-query attention mechanisms with local message passing to enable graph-aware in-context learning. Extensive evaluation on 23 benchmarks demonstrates that a single pre-trained NodePFN achieves 71.27 average accuracy. These results validate that universal graph learning patterns can be effectively learned from synthetic priors, establishing a new paradigm for generalization in node classification.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces NodePFN, a dual-branch neural architecture that pre-trains exclusively on thousands of synthetic graphs (generated via controllable-homophily random networks and structural causal models for feature-label relations) to learn posterior predictive distributions for node classification. It claims that one fixed model, without any graph-specific training or fine-tuning, achieves 71.27% average accuracy across 23 real-world benchmarks by performing graph-aware in-context learning.

Significance. If the central empirical claim holds after verification, the work would constitute a meaningful shift in graph ML by replacing per-graph supervised training with synthetic-prior pre-training, analogous to in-context learning in language models. It directly targets the heterogeneity problem (homophily, community structure, feature distributions) that currently forces separate GNN training per dataset. The reported cross-benchmark accuracy is the primary evidence offered for the new paradigm.

major comments (2)
  1. [§3] §3 (Synthetic Graph Generation): The manuscript asserts that the chosen priors 'cover real-world graphs' but provides no quantitative distributional comparison (e.g., Wasserstein distance on degree sequences, clustering coefficients, homophily statistics, or feature-label mutual information) between the synthetic ensemble and the 23 evaluation graphs. This coverage assumption is load-bearing for the generalization claim; without it, success on the benchmarks could reflect spurious correlations rather than true support over real-graph heterogeneity. (A sketch of such a comparison follows these comments.)
  2. [§5] §5 (Experimental Results): The headline 71.27% average accuracy is reported without error bars, standard deviations across random seeds, or ablation tables varying the synthetic prior parameters (homophily range, SCM complexity, number of graphs). In the absence of these controls it is impossible to assess whether the result is robust to prior choice or sensitive to the specific generation process described in §3.
minor comments (2)
  1. [§4] The abstract and §4 (Architecture) introduce the dual-branch context-query attention plus local message passing design, yet no pseudocode or precise tensor shapes are supplied, hindering immediate reproducibility.
  2. Table captions and axis labels in the experimental figures should explicitly state the number of synthetic graphs used for pre-training and the exact hyperparameter ranges of the priors.
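
For major comment 1, the kind of comparison the referee asks for could start with something like the sketch below: summary statistics per graph, then a one-dimensional Wasserstein distance between degree sequences. The two generators and the modular labels are toy stand-ins; in the actual audit they would be the paper's prior samples and the 23 benchmarks.

```python
import networkx as nx
import numpy as np
from scipy.stats import wasserstein_distance

def graph_stats(G: nx.Graph, y: dict) -> dict:
    """Degree sequence, average clustering, and edge homophily for one graph."""
    degrees = np.array([deg for _, deg in G.degree()])
    homophily = float(np.mean([y[u] == y[v] for u, v in G.edges()]))
    return {"degrees": degrees,
            "clustering": nx.average_clustering(G),
            "edge_homophily": homophily}

# Toy stand-ins for one synthetic draw and one real benchmark.
G_syn = nx.erdos_renyi_graph(200, 0.04, seed=0)
G_real = nx.barabasi_albert_graph(200, 4, seed=0)
y_syn = {v: v % 4 for v in G_syn}       # placeholder labels
y_real = {v: v % 4 for v in G_real}

s, r = graph_stats(G_syn, y_syn), graph_stats(G_real, y_real)
# Small distances across all 23 benchmarks would be quantitative evidence
# for the coverage assumption; large ones would localize the gap.
print("degree W1:", wasserstein_distance(s["degrees"], r["degrees"]))
print("clustering gap:", abs(s["clustering"] - r["clustering"]))
print("homophily gap:", abs(s["edge_homophily"] - r["edge_homophily"]))
```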

Simulated Authors' Rebuttal

2 responses · 0 unresolved

Thank you for the opportunity to respond to the referee's comments. We appreciate the constructive feedback, which helps us strengthen the manuscript. Below, we provide point-by-point responses to the major comments and indicate the revisions we will make.

Point-by-point responses
  1. Referee: [§3] §3 (Synthetic Graph Generation): The manuscript asserts that the chosen priors 'cover real-world graphs' but provides no quantitative distributional comparison (e.g., Wasserstein distance on degree sequences, clustering coefficients, homophily statistics, or feature-label mutual information) between the synthetic ensemble and the 23 evaluation graphs. This coverage assumption is load-bearing for the generalization claim; without it, success on the benchmarks could reflect spurious correlations rather than true support over real-graph heterogeneity.

    Authors: We agree that quantitative distributional comparisons would provide stronger evidence for the coverage claim. The synthetic priors were designed to control key heterogeneities (homophily levels and causal feature-label structures) known to vary across real graphs, but the submitted manuscript does not include explicit metrics such as Wasserstein distances or direct statistical comparisons. In the revised version, we will add an analysis in §3 (or a dedicated appendix) comparing statistics including degree sequences, clustering coefficients, homophily values, and feature-label mutual information between the synthetic ensemble and the 23 benchmarks. revision: yes

  2. Referee: [§5] §5 (Experimental Results): The headline 71.27% average accuracy is reported without error bars, standard deviations across random seeds, or ablation tables varying the synthetic prior parameters (homophily range, SCM complexity, number of graphs). In the absence of these controls it is impossible to assess whether the result is robust to prior choice or sensitive to the specific generation process described in §3.

    Authors: We acknowledge that error bars and ablations are essential for assessing robustness. The 71.27% average is the mean accuracy of the fixed pre-trained model across the 23 benchmarks. In the revision, we will report standard deviations computed over multiple random seeds for pre-training and evaluation. We will also add ablation studies varying the homophily range, SCM complexity, and number of synthetic graphs, with results shown in §5 and the supplementary material. revision: yes

Circularity Check

0 steps flagged

No significant circularity in empirical claims or derivation

full rationale

The paper presents an empirical pre-training approach on synthetic graphs followed by evaluation on real benchmarks. No mathematical derivations, equations, or 'predictions' appear that reduce by construction to fitted inputs, self-definitions, or self-citation chains. The 71.27 accuracy is reported as an observed experimental outcome rather than a tautological result. The coverage assumption on synthetic priors is a modeling choice open to external falsification, not a load-bearing circular step.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on the domain assumption that synthetic graphs capture real-world heterogeneity; no free parameters or invented entities beyond the model itself are described in the abstract.

axioms (1)
  • domain assumption Synthetic graphs generated from random networks with controllable homophily levels and structural causal models for feature-label relationships cover the diversity of real-world graphs.
    Invoked to justify training only on synthetic data for generalization to arbitrary real graphs.
invented entities (1)
  • NodePFN · no independent evidence
    purpose: Universal node classifier that performs in-context learning via posterior predictive distributions
    New model introduced in the paper; no independent evidence outside the claimed empirical results.

pith-pipeline@v0.9.0 · 5555 in / 1347 out tokens · 41756 ms · 2026-05-10T03:46:44.493039+00:00 · methodology

