pith. machine review for the scientific record.

arxiv: 2605.06154 · v1 · submitted 2026-05-07 · 💻 cs.AI · cs.LG

Recognition: unknown

Graphlets as Building Blocks for Structural Vocabulary in Knowledge Graph Foundation Models

Authors on Pith no claims yet

Pith reviewed 2026-05-08 10:24 UTC · model grok-4.3

classification 💻 cs.AI cs.LG
keywords graphlets · knowledge graph foundation models · structural vocabulary · link prediction · zero-shot learning · pattern matching · inductive learning · transductive learning

The pith

Graphlets act as reusable structural tokens that enable knowledge graph foundation models to transfer across different graphs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Knowledge graphs lack the fixed grid that makes tokens work in language or vision models, so their foundation models need a way to capture recurring structures that hold across graphs. This paper introduces graphlets—small connected subgraphs such as closed and open 2- and 3-paths plus stars—as the building blocks of a shared vocabulary. The vocabulary is constructed in a model-agnostic manner by pattern matching on relations, which supplies structural invariances without requiring a common geometry. When added to existing KGFMs, the approach is tested on zero-shot inductive and transductive link prediction over 51 knowledge graphs from many domains. If the claim holds, foundation models for graphs could finally rely on a discrete, transferable alphabet of local patterns the way text models rely on words.
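The mining step described above can be made concrete with a toy sketch. This is an illustrative reconstruction, not the paper's code: the relation names are invented, and only the simplest pattern (a directed 2-path, marked closed when a shortcut edge exists) is enumerated.

```python
from collections import defaultdict

# Toy KG as (head, relation, tail) triples. Relation names are
# illustrative assumptions, not taken from the paper's datasets.
triples = [
    ("a", "parent_of", "b"),
    ("b", "parent_of", "c"),
    ("a", "grandparent_of", "c"),
    ("a", "works_at", "x"),
]

def two_path_graphlets(triples):
    """Mine directed 2-paths r1;r2 (edges u->v->w) and mark each
    occurrence closed if a direct edge u->w also exists."""
    out_edges = defaultdict(list)          # u -> [(relation, v), ...]
    edge_set = {(h, t) for h, _, t in triples}
    for h, r, t in triples:
        out_edges[h].append((r, t))
    found = defaultdict(int)               # (r1, r2, kind) -> count
    for u, r1, v in triples:
        for r2, w in out_edges[v]:
            if w == u:                     # skip 2-cycles
                continue
            kind = "closed" if (u, w) in edge_set else "open"
            found[(r1, r2, kind)] += 1
    return dict(found)

result = two_path_graphlets(triples)
# The grandparent shortcut closes the parent_of;parent_of path:
# → {('parent_of', 'parent_of', 'closed'): 1}
```

A real vocabulary would also enumerate 3-paths, stars, and reverse-edge variants, but the pattern-matching core is this same nested scan over adjacent triples.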

Core claim

We introduce a model-agnostic framework based on a vocabulary of graphlets, mined from a KG over its relations via pattern matching. In particular, we consider closed and open 2- and 3-path graphlets, together with star graphlets, to obtain robust invariances. The framework is evaluated on 51 KGs from a wide range of domains for zero-shot inductive and transductive link prediction. Experiments show that adding simple graphlets to the vocabulary yields models that outperform prior KGFMs.

What carries the argument

A vocabulary of graphlets (closed/open 2- and 3-paths and stars) extracted by pattern matching to supply model-agnostic structural invariances.

If this is right

  • KGFMs equipped with the graphlet vocabulary can perform zero-shot link prediction on previously unseen graphs.
  • The gains appear in both inductive and transductive link-prediction settings.
  • The same vocabulary works across 51 graphs drawn from diverse domains.
  • Pattern matching makes the extraction independent of any particular downstream model architecture.
  • Simple low-order graphlets already deliver measurable gains over earlier KGFM baselines.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same graphlet vocabulary could be tested on node-classification or graph-classification tasks to check whether the structural tokens transfer beyond link prediction.
  • Automatically selecting or learning which graphlets to retain might reduce the vocabulary size while preserving performance.
  • If the pattern-matching step scales, the approach could be extended to temporal or dynamic knowledge graphs that evolve over time.

Load-bearing premise

The chosen graphlets supply sufficiently robust and transferable structural invariances for heterogeneous KGs, and pattern matching can extract them reliably.

What would settle it

Evaluating the same graphlet-augmented models against prior KGFMs on a new collection of unseen knowledge graphs and finding no improvement or a drop in zero-shot link prediction performance.

Figures

Figures reproduced from arXiv: 2605.06154 by Jens Lehmann, Kossi Amouzouvi, Robert Wardenga, Sahar Vahdati.

Figure 1
Figure 1: The KGFM model Ultra+, pretrained on a large collection of KGs, including the Family KG, recognizes the Corporate and Academic KGs as instances of the same graphlet patterns. … trained GNNs or LLMs to inductively generalize to new KGs in zero or few-shot paradigms (Wang et al., 2025; Liu et al., 2023). ULTRA (Galkin et al., 2023), a KGFM for KG reasoning, constructs a relation graph whose nodes are the rela… view at source ↗
Figure 2
Figure 2: Graphlets of size less than 5. f and r denote forward and reverse edges, and subscripts c and o indicate closed and open paths. The green head arrows (shown with a light gray halo for clarity) form alternative graphlets, which are also indicated by the green labels to the right of the black text labels. The golden arrows, together with the black arrows, form distinct topological graphlets. Each vertex is m… view at source ↗
Figure 3
Figure 3: (a) A toy Knowledge Graph (IKG) with five relations and seven entities, illustrating the underlying relational structure. (b) The corresponding relation graph constructed from the structural vocabulary of open paths {ffo, fffo}, where relations are nodes and edges capture their co-occurrence within paths. Theorem 4.3 states that if no edge exists between two relations in the Ultra+ relation graph, then t… view at source ↗
Figure 4
Figure 4: Average Performance over 51 Graphs of Ultra and … view at source ↗
Figure 5
Figure 5: Cyclic Knowledge Graph and Relation Graphs: (a) A cyclic knowledge graph with three relations. view at source ↗
Figure 6
Figure 6: Average performance on 18 inductive (e) datasets of our … view at source ↗
Figure 7
Figure 7: Average performance on 23 inductive (e,r) datasets of our … view at source ↗
Figure 8
Figure 8: Average performance on 10 transductive datasets of our … view at source ↗
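The relation-graph construction shown in Figure 3 (relations as nodes, edges from co-occurrence within open paths) reduces to a short sketch. The triples below are invented for illustration, and only the ffo pattern (a forward-forward open 2-path) is handled; fffo and reverse-edge variants follow the same scheme.

```python
from collections import defaultdict

# Toy triples, invented for illustration (not a dataset from the paper).
triples = [
    ("u", "r1", "v"),
    ("v", "r2", "w"),
    ("w", "r3", "x"),
]

def relation_graph_ffo(triples):
    """Connect two relations in the relation graph whenever they
    co-occur on a forward 2-path u -r1-> v -r2-> w (the ffo graphlet)."""
    out_edges = defaultdict(list)   # head -> [(relation, tail), ...]
    for h, r, t in triples:
        out_edges[h].append((r, t))
    edges = set()
    for h, r1, t in triples:
        for r2, _ in out_edges[t]:
            edges.add(frozenset({r1, r2}))
    return edges

# r1--r2 and r2--r3 co-occur on 2-paths; r1 and r3 never do, so they
# stay disconnected in the relation graph, which is the kind of absence
# the caption's Theorem 4.3 reasons about.
relation_edges = relation_graph_ffo(triples)
```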
read the original abstract

Foundation models excel at language, where sentences become tokens, and vision, where images become pixels, because both reduce to discrete symbols on a shared, fixed grid. Knowledge Graphs share the discreteness, but not the geometry. Their entities and relations are discrete symbols, yet their arrangement is relational and lacks a common, fixed grid. Knowledge Graphs (KGs) share the discreteness, but not the geometry. They form irregular, non-Euclidean topologies whose local neighborhoods differ from graph to graph. Therefore, Knowledge Graph Foundation Models (KGFMs) rely on identifying structural invariances to produce transferable representations. Without a universal token set, KGFMs are limited in their ability to transfer representations across unseen KGs. We close this gap by treating graphlets, small connected graphs, as structural tokens that recur in heterogeneous KGs. In this paper, We introduce a model-agnostic framework based on a vocabulary of graphlets that mines a KG between relations via pattern matching. In particular, we considered closed and open 2- and 3-path, and star graphlets, to obtain robust invariances. The framework is evaluated on 51 KGs from a wide range of domains, for zero-shot inductive and transductive link prediction. Experiments show that adding simple graphlets to the vocabulary yields models that outperform prior KGFMs.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes treating small graphlets (closed and open 2- and 3-paths plus stars) as reusable structural tokens to build a vocabulary for Knowledge Graph Foundation Models (KGFMs). A model-agnostic pipeline mines these motifs from heterogeneous KGs via pattern matching; the resulting vocabulary is then used to produce transferable representations. The framework is tested on zero-shot inductive and transductive link prediction across 51 KGs drawn from diverse domains, with the central empirical claim that adding these graphlets yields models that outperform prior KGFMs.

Significance. If the reported gains are reproducible and attributable to the graphlet vocabulary rather than other modeling choices, the work would supply a concrete, discrete structural token set for KGFMs, analogous to sub-word tokens in language models. This could improve zero-shot transfer across KGs that lack a shared geometry. The model-agnostic extraction step is a potential strength if it proves reliable and does not introduce KG-specific biases.

major comments (2)
  1. [§3] §3 (Graphlet Vocabulary Construction): The claim that closed/open 2- and 3-paths and stars supply 'robust invariances' for heterogeneous KGs is load-bearing for the headline result, yet these motifs are at most 3 nodes and ignore edge labels/relation types. The manuscript provides no ablation comparing them to 4-cycles, labeled motifs, or higher-order graphlets, nor a theoretical argument that they are complete or superior. Without such evidence, it is unclear whether performance gains stem from the chosen graphlets or from other components of the vocabulary or encoder.
  2. [§5] §5 (Experiments): The abstract states that models 'outperform prior KGFMs' on 51 KGs, but the manuscript supplies no table or section detailing the exact baselines, evaluation metrics (e.g., MRR, Hits@K), statistical tests, number of runs, or ablation removing the graphlet component. This absence prevents verification that the reported improvement is due to the structural vocabulary rather than implementation details or dataset selection.
minor comments (2)
  1. [Abstract] The abstract contains a repeated sentence ('Knowledge Graphs share the discreteness, but not the geometry.') that should be removed for clarity.
  2. [§3] Notation for the pattern-matching procedure is introduced without a formal definition or pseudocode; a small algorithm box would improve reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The comments highlight important aspects of our graphlet vocabulary construction and experimental reporting. We address each major comment below, indicating planned revisions to strengthen the paper while maintaining the core contributions.

read point-by-point responses
  1. Referee: [§3] §3 (Graphlet Vocabulary Construction): The claim that closed/open 2- and 3-paths and stars supply 'robust invariances' for heterogeneous KGs is load-bearing for the headline result, yet these motifs are at most 3 nodes and ignore edge labels/relation types. The manuscript provides no ablation comparing them to 4-cycles, labeled motifs, or higher-order graphlets, nor a theoretical argument that they are complete or superior. Without such evidence, it is unclear whether performance gains stem from the chosen graphlets or from other components of the vocabulary or encoder.

    Authors: We appreciate this observation on the centrality of our motif selection. Closed and open 2-/3-paths and stars were selected as minimal, computationally tractable motifs that recur across heterogeneous KGs and capture fundamental invariances: paths encode sequential relational patterns, while stars model high-degree hubs prevalent in real-world graphs. Their relation-agnostic design supports transferability without requiring shared edge labels. We acknowledge the absence of direct ablations against 4-cycles or labeled variants and the lack of an explicit theoretical completeness argument in the current draft. In revision, we will add a new subsection in §3 providing motivation grounded in motif analysis literature (e.g., why these small motifs suffice for local structural transfer in zero-shot settings) and explicitly discuss trade-offs with higher-order structures. We will also include a limited ablation on a representative subset of the 51 KGs comparing performance with and without alternative motifs, to better isolate the contribution of the chosen vocabulary. revision: partial

  2. Referee: [§5] §5 (Experiments): The abstract states that models 'outperform prior KGFMs' on 51 KGs, but the manuscript supplies no table or section detailing the exact baselines, evaluation metrics (e.g., MRR, Hits@K), statistical tests, number of runs, or ablation removing the graphlet component. This absence prevents verification that the reported improvement is due to the structural vocabulary rather than implementation details or dataset selection.

    Authors: We agree that clearer experimental documentation is needed to substantiate the claims. The manuscript evaluates against prior KGFMs using standard link prediction metrics (MRR and Hits@K) on the 51 graphs for both inductive and transductive zero-shot settings. Results are aggregated across multiple runs, and an ablation isolating the graphlet component is present in the full text. To improve transparency, we will introduce a dedicated summary table in §5 listing all baselines, exact metrics, number of runs (with standard deviations), statistical significance tests, and an explicit ablation removing the graphlet vocabulary while holding other components fixed. This will allow direct verification that gains derive from the structural tokens. revision: yes
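For reference, the metrics invoked in this exchange compute directly from the rank of each true answer among scored candidates. This is a generic sketch of the standard definitions, not the authors' evaluation code:

```python
def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Mean Reciprocal Rank and Hits@K from filtered ranks
    (rank 1 = the true entity scored highest)."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mrr, hits

# Three test triples whose true tails ranked 1st, 2nd, and 10th:
mrr, hits = mrr_and_hits([1, 2, 10])
# mrr == (1 + 0.5 + 0.1) / 3 ≈ 0.533; hits[10] == 1.0
```

In practice the "filtered" convention removes other known true triples from the candidate list before ranking, which is the detail a summary table would need to state.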

Circularity Check

0 steps flagged

No significant circularity; claims rest on empirical evaluation of chosen graphlets

full rationale

The paper introduces graphlets (closed/open 2- and 3-paths and stars) as a design choice for a model-agnostic vocabulary, then evaluates the resulting framework on zero-shot link prediction across 51 KGs. The abstract presents the graphlet selection as an input motivated by the need for transferable structural invariances, with outperformance demonstrated experimentally rather than derived by definition or self-citation. No load-bearing equations, fitted parameters renamed as predictions, or uniqueness theorems reduce the central claim to its own inputs. The derivation chain is therefore self-contained and externally falsifiable via the reported benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that a small set of graphlets captures transferable structural features; no free parameters or invented entities are mentioned in the abstract.

axioms (1)
  • domain assumption Graphlets (closed/open 2- and 3-paths, stars) provide robust structural invariances across heterogeneous KGs
    Invoked to justify the vocabulary as a solution to the lack of fixed geometry in KGs.

pith-pipeline@v0.9.0 · 5548 in / 1230 out tokens · 33050 ms · 2026-05-08T10:24:37.610464+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

70 extracted references · 16 canonical work pages · 3 internal anchors

  1. [1] Zhang, Bohang; Gai, Jingchu; Du, Yiheng; Ye, Qiwei; He, Di; Wang, Liwei. arXiv preprint.
  2. [2] Wang, Heng; Feng, Shangbin; He, Tianxing; Tan, Zhaoxuan; Han, Xiaochuang; Tsvetkov, Yulia.
  3. [3] Yasunaga, Michihiro; Bosselut, Antoine; Ren, Hongyu; Zhang, Xikun; Manning, Christopher D.; Liang, Percy; Leskovec, Jure. 2022.
  4. [4] Vignac, Clément; Krawczuk, Igor; Cevher, Volkan.
  5. [5] Sun, Xiangguo; Zhang, Jiawen; Wu, Xixi; Cheng, Hong; Xiong, Yun; Li, Jia.
  6. [6] Liu, Yixin; Jin, Ming; Pan, Shirui; Zhou, Chuan; Zheng, Yu; Xia, Feng; Yu, Philip S. 2023. doi:10.1109/TKDE.2022.3172903.
  7. [7] Zhao, Jianan; Mostafa, Hesham; Galkin, Mikhail; Bronstein, Michael; Zhu, Zhaocheng; Tang, Jian. arXiv preprint.
  8. [8] Zhao, Qifang; Ren, Weidong; Li, Tianyu; Xu, Xiaoxiao; Liu, Hong.
  9. [9] Liu, Zemin; Yu, Xingtong; Fang, Yuan; Zhang, Xinming. doi:10.1145/3543507.3583386.
  10. [10] Bloem, Peter; de Rooij, Steven. 2020. doi:10.1007/S10618-020-00691-Y.
  11. [11] (Unresolvable entry: ACM conference-template placeholder in the source bibliography.)
  12. [12] Zheng, Shuxin; He, Jiyan; Liu, Chang; Shi, Yu; Lu, Ziheng; Feng, Weitao; Ju, Fusong; Wang, Jiaxi; Zhu, Jianwei; Min, Yaosen; Zhang, He; Tang, Shidi; Hao, Hongxia; Jin, Peiran; Chen, Chi; et al. 2024.
  13. [13] Zhu, Zhaocheng; Zhang, Zuobai; Xhonneux, Louis-Pascal; Tang, Jian. 2021.
  14. [14] Huang, Yinan; Lu, William; Robinson, Joshua; Yang, Yu; Zhang, Muhan; Jegelka, Stefanie; Li, Pan.
  15. [15] Liu, Hao; Feng, Jiarui; Kong, Lecheng; Liang, Ningyue; Tao, Dacheng; Chen, Yixin; Zhang, Muhan. 2024.
  16. [16] Huang, Qian; Ren, Hongyu; Chen, Peng; Kr… (entry truncated at source).
  17. [17] Liu, Zhiyuan; Shi, Yaorui; Zhang, An; Zhang, Enzhi; Kawaguchi, Kenji; Wang, Xiang; Chua, Tat-Seng.
  18. [19] Liang, Fan; Qian, Cheng; Yu, Wei; Griffith, David; Golmie, Nada. 2022. doi:10.1155/2022/9261537.
  19. [20] Fatemi, Bahare; Halcrow, Jonathan; Perozzi, Bryan (Google Research). arXiv preprint.
  20. [21] Modeling relational data with graph convolutional networks. The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018.
  21. [22] TransGCN: Coupling transformation assumptions with graph convolutional networks for link prediction. Proceedings of the 10th International Conference on Knowledge Capture.
  22. [23] Zeb, Adnan; Saif, Summaya; Chen, Junde; Ul Haq, Anwar; Gong, Zhiguo; Zhang, Defu. Complex graph convolutional network for link prediction in knowledge graphs. Expert Systems with Applications, 2022. doi:10.1016/j.eswa.2022.116796.
  23. [24] Learning knowledge graph embedding with multi-granularity relational augmentation network. Expert Systems with Applications, 2023.
  24. [25] Graph attention networks. arXiv preprint arXiv:1710.10903.
  25. [26] How attentive are graph attention networks? arXiv preprint arXiv:2105.14491.
  26. [27] Heterogeneous graph attention network. The World Wide Web Conference.
  27. [28] Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems.
  28. [29] Relphormer: Relational graph transformer for knowledge graph representations. Neurocomputing, 2024.
  29. [30] AnyGraph: Graph Foundation Model in the Wild. 2024.
  30. [31] How Expressive are Knowledge Graph Foundation Models? Proceedings of the Forty-second International Conference on Machine Learning.
  31. [32] Xia, Jun; Zhao, Chengshuai; Hu, Bozhen; Gao, Zhangyang; Tan, Cheng; Liu, Yue; Li, Siyuan; Li, Stan Z. Mole-… 2023 (entry truncated at source).
  32. [33] node2vec: Scalable feature learning for networks. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  33. [34] DeepWalk: Online learning of social representations. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  34. [35] A Structural Graph Representation Learning Framework. Proceedings of the 13th International Conference on Web Search and Data Mining.
  35. [36] Unifying structural proximity and equivalence for network embedding. IEEE Access, 2019.
  36. [37] Pay Attention to Relations: Multi-embeddings for Attributed Multiplex Networks. arXiv preprint arXiv:2203.01903.
  37. [38] Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems.
  38. [39] Mao, Haitao; Chen, Zhikai; Tang, Wenzhuo; Zhao, Jianan; Ma, Yao; Zhao, Tong; Shah, Neil; Galkin, Mikhail; Tang, Jiliang. arXiv preprint.
  39. [40] Towards foundation models for knowledge graph reasoning. arXiv preprint arXiv:2310.04562.
  40. [41] Liu, Jiawei; Yang, Cheng; Fang, Yuan; Yu, Philip S.; Lu, Zhiyuan; Chen, Junze; Li, Yibo; Zhang, Mengmei; Bai, Ting; Sun, Lichao; Shi, Chuan. 2023.
  41. [42] Galkin, Mikhail; Zhou, Jincheng; Ribeiro, Bruno; Tang; Zhu, Zhaocheng. 2024.
  42. [43] Zero-shot logical query reasoning on any knowledge graph. CoRR.
  43. [44] A Foundation Model for Zero-shot Logical Query Reasoning. The Thirty-eighth Annual Conference on Neural Information Processing Systems.
  44. [45] On the Opportunities and Risks of Foundation Models. arXiv preprint arXiv:2108.07258.
  45. [46] A survey of large language models.
  46. [47] Emergent Abilities of Large Language Models. arXiv preprint arXiv:2206.07682.
  47. [48] A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 2024.
  48. [49] Language models are few-shot learners. Advances in Neural Information Processing Systems.
  49. [50] What language model architecture and pretraining objective works best for zero-shot generalization? International Conference on Machine Learning, 2022.
  50. [51] Liu, Qian; Zheng, Xiaosen; Muennighoff, Niklas; Zeng, Guangtao; Dou, Longxu; Pang, Tianyu; Jiang, Jing; Lin, Min. Few-shot learning with multilingual language models. arXiv preprint arXiv:2112.10668.
  51. [52] Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems.
  52. [53] Knowledge graph embedding by translating on hyperplanes. Proceedings of the AAAI Conference on Artificial Intelligence.
  53. [54] Learning entity and relation embeddings for knowledge graph completion. Twenty-Ninth AAAI Conference on Artificial Intelligence.
  54. [55] RotatE: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197.
  55. [56] Knowledge graphs. ACM Computing Surveys, 2021.
  56. [57] Industry-scale Knowledge Graphs: Lessons and Challenges. Queue, 2019.
  57. [58] Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS).
  58. [59] Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering, 2024.
  59. [60] Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 2017.
  60. [61] Hamaguchi, Takuo; Oiwa, Hidekazu; Shimbo, Masashi; Matsumoto, Yuji. Knowledge Base Completion with Out-of-Knowledge-Base Entities: A Graph Neural Network Approach. Transactions of the Japanese Society for Artificial Intelligence. doi:10.1527/tjsai.f-h72.
  61. [62] Inductive relation prediction by subgraph reasoning. International Conference on Machine Learning, 2020.
  62. [63] Indigo: GNN-based inductive knowledge graph completion using pair-wise encoding. Advances in Neural Information Processing Systems.
  63. [64] Krech, Daniel; Grimnes, Gunnar Aastrand; Higgins, Graham; Car, Nicholas; Hees, Jörn; Aucamp, Iwan; Lindström, Niklas; Arndt, Natanael; Sommer, Ashley; Chuc, Edmond; Herman, Ivan; Nelson, Alex; McCusker, Jamie; Gillespie, Tom; Kluyver, Thomas; Ludwig, Florian; Champin, Pierre-Antoine; Watts, Mark; Holze… (author list truncated at source).
  64. [65] Pellissier Tanon, Thomas.
  65. [66] Semantics and Complexity of SPARQL. International Semantic Web Conference, 2006.
  66. [67] Graph Foundation Models: A Comprehensive Survey. arXiv preprint arXiv:2505.15116.
  67. [68] A survey on subgraph counting: concepts, algorithms, and applications to network motifs and graphlets. ACM Computing Surveys, 2021.
  68. [69] Network motifs: simple building blocks of complex networks. Science, 2002.
  69. [70] Biological network comparison using graphlet degree distribution. Bioinformatics, 2007.
  70. [71] TRIX: A More Expressive Model for Zero-Shot Domain Transfer in Knowledge Graphs. Proceedings of the Third Learning on Graphs Conference, 2025.