Pith · machine review for the scientific record

arxiv: 1710.10903 · v3 · submitted 2017-10-30 · 📊 stat.ML · cs.AI · cs.LG · cs.SI

Recognition: 2 theorem links · Lean Theorem

Graph Attention Networks

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio

Pith reviewed 2026-05-10 17:59 UTC · model grok-4.3

classification 📊 stat.ML · cs.AI · cs.LG · cs.SI
keywords graph attention networks · GAT · self-attention · graph neural networks · node classification · transductive learning · inductive learning

The pith

Graph attention networks allow nodes to assign different importance weights to their neighbors using masked self-attention layers.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces graph attention networks (GATs), a neural network architecture for graph-structured data. It leverages masked self-attentional layers so that nodes can attend to their neighbors and implicitly assign them different weights. This avoids costly matrix operations and any requirement to know the graph structure up front, addressing limitations of prior graph convolution methods. The approach supports both transductive tasks on fixed graphs and inductive tasks on graphs unseen during training. GAT models achieve or match state-of-the-art results on citation network benchmarks and protein-protein interaction data.

Core claim

By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems.

What carries the argument

Masked self-attentional layers that allow nodes to attend over their neighborhoods' features and assign varying weights without costly operations or upfront graph knowledge.
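
A minimal NumPy sketch of one such attention head, for orientation only: it is not the authors' TensorFlow implementation, and it computes the masked scores densely over an N x N adjacency for clarity, where the paper's implementation works edge-wise.

    import numpy as np

    def leaky_relu(x, negative_slope=0.2):
        # LeakyReLU with the negative slope the paper uses (0.2).
        return np.where(x > 0, x, negative_slope * x)

    def gat_layer(H, A, W, a):
        # H: (N, F) node features; A: (N, N) binary adjacency with self-loops;
        # W: (F, F') shared linear map; a: (2F',) attention vector over [Wh_i || Wh_j].
        Z = H @ W
        f_out = Z.shape[1]
        # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) splits into a source term and a target term.
        e = leaky_relu((Z @ a[:f_out])[:, None] + (Z @ a[f_out:])[None, :])
        # Masking: coefficients are only defined over each node's neighborhood.
        e = np.where(A > 0, e, -np.inf)
        # Softmax over neighbors yields the attention coefficients alpha_ij.
        e = e - e.max(axis=1, keepdims=True)
        alpha = np.exp(e)
        alpha = alpha / alpha.sum(axis=1, keepdims=True)
        # Aggregate: h_i' = sigma(sum_j alpha_ij * W h_j), here with an ELU nonlinearity.
        out = alpha @ Z
        return np.where(out > 0, out, np.expm1(np.minimum(out, 0))), alpha

The only learned parameters are W and a, whose shapes depend on the feature dimensions rather than on the number of nodes; that is what allows the same layer, once trained, to run on graphs unseen during training.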

If this is right

  • GAT models apply to inductive problems where test graphs are not seen during training.
  • The architecture achieves or matches state-of-the-art on Cora, Citeseer, and Pubmed citation networks.
  • It performs well on protein-protein interaction datasets with unseen test graphs.
  • Stacking attentional layers enables learning different node importances in neighborhoods.
  • The method overcomes shortcomings of prior graph convolution approaches.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This attention mechanism could improve interpretability by showing which connections matter most in a graph.
  • It may generalize to other graph-based tasks like link prediction or graph classification.
  • Combining GAT with other neural network components might yield further gains on complex datasets.

Load-bearing premise

That masked self-attentional layers can implicitly specify different weights to different nodes in a neighborhood without any costly matrix operation or knowing the graph structure upfront.

What would settle it

Running the GAT model on the Cora dataset and finding that its accuracy no longer reaches or exceeds that of previous methods once the attention mechanism is removed (for example, replaced by uniform neighbor weights) or once knowledge of the graph structure is withheld.
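
One hedged way to run the first half of that test, under the same assumptions as the NumPy sketch above: swap the learned coefficients for uniform neighborhood averaging and compare Cora accuracy against the full model. The uniform_layer below is an illustrative stand-in, not code from the paper.

    import numpy as np

    def uniform_layer(H, A, W):
        # Ablation baseline: h_i' = sigma(mean over j in N_i of W h_j).
        # H: (N, F) features; A: (N, N) adjacency with self-loops; W: (F, F') linear map.
        Z = H @ W
        deg = A.sum(axis=1, keepdims=True)      # neighborhood sizes
        alpha = A / np.maximum(deg, 1.0)        # uniform alpha_ij = 1 / |N_i|
        out = alpha @ Z
        return np.where(out > 0, out, np.expm1(np.minimum(out, 0)))  # ELU

If a two-layer model built from uniform_layer matches GAT on Cora, the gains come from overall capacity rather than attention; if it falls short, the learned coefficients are doing real work.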

read the original abstract

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity check, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper introduces Graph Attention Networks (GATs), a neural architecture for graph-structured data that stacks masked self-attentional layers. Nodes attend over neighborhood features via a shared linear transformation followed by LeakyReLU and softmax normalization restricted to the adjacency neighborhood, enabling implicit per-neighbor weighting without matrix inversion or global graph knowledge. The approach is positioned as addressing limitations of spectral GNNs while supporting both transductive and inductive settings. Empirical evaluation shows the models match or exceed prior state-of-the-art on the Cora, Citeseer, and Pubmed citation networks (transductive) and a protein-protein interaction dataset (inductive, with unseen test graphs).
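
For reference, the coefficient computation and aggregation the summary describes correspond, up to equation numbering and notation, to:

    \alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}h_i \,\|\, \mathbf{W}h_j]\big)\big)}
                       {\sum_{k \in \mathcal{N}_i} \exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}h_i \,\|\, \mathbf{W}h_k]\big)\big)},
    \qquad
    h_i' = \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}h_j\Big)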

Significance. If the reported results hold, GATs provide a practical, inductive-capable alternative to spectral methods by replacing fixed convolution weights with learned attention coefficients computed locally from node features. The architecture requires only the provided adjacency at each forward pass and avoids precomputed bases or costly operations, directly enabling the inductive claim on PPI. Credit is due for the clear experimental protocols, use of public benchmarks, and the multi-head attention formulation that stabilizes training.

major comments (2)
  1. [§3.2] §3.2, Eq. (3)–(5): the masked self-attention coefficient computation is defined using a shared weight vector a and LeakyReLU, but the manuscript provides no ablation that isolates the contribution of the attention mechanism (e.g., replacing it with uniform or degree-based weights) from other design choices such as the two-layer architecture or the specific activation. This leaves open whether the performance gains on Cora/Citeseer/Pubmed are attributable to attention or to the overall model capacity.
  2. [§4.3] §4.3, Table 2: the inductive PPI results report micro-F1 scores, but the paper does not report variance across multiple random seeds or graph splits for the unseen test graphs, making it difficult to assess whether the claimed matching of SOTA is statistically robust given the inductive setting.
minor comments (3)
  1. [§2] §2: the related-work discussion of spectral methods could more explicitly contrast the O(|V|FF' + |E|F') per-layer cost of a GAT attention head with the eigendecomposition requirements of earlier spectral approaches.
  2. Notation: the symbol W is reused for both the linear transformation in the attention mechanism and the output projection; a distinct symbol would improve readability.
  3. Figure 1: the diagram of the attentional layer would benefit from an explicit arrow or label indicating the masking step that restricts attention to the neighborhood.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the positive assessment, the recommendation for minor revision, and the constructive comments. We address each major comment below.

read point-by-point responses
  1. Referee: [§3.2] §3.2, Eq. (3)–(5): the masked self-attention coefficient computation is defined using a shared weight vector a and LeakyReLU, but the manuscript provides no ablation that isolates the contribution of the attention mechanism (e.g., replacing it with uniform or degree-based weights) from other design choices such as the two-layer architecture or the specific activation. This leaves open whether the performance gains on Cora/Citeseer/Pubmed are attributable to attention or to the overall model capacity.

    Authors: We appreciate the referee's point. While the manuscript demonstrates that GAT outperforms strong non-attentional baselines such as GCN (which uses fixed, degree-normalized weights) on the citation networks, we acknowledge that a direct ablation replacing the learned attention coefficients with uniform weights would more cleanly isolate the mechanism's contribution. We will add this ablation (GAT with uniform aggregation) to the revised manuscript. revision: yes

  2. Referee: [§4.3] §4.3, Table 2: the inductive PPI results report micro-F1 scores, but the paper does not report variance across multiple random seeds or graph splits for the unseen test graphs, making it difficult to assess whether the claimed matching of SOTA is statistically robust given the inductive setting.

    Authors: We agree that variance estimates would strengthen the inductive results. The reported numbers follow the single-run protocol used by the PPI dataset creators and prior inductive GNN papers. In the revision we will rerun the PPI experiments across multiple random seeds, reporting mean micro-F1 and standard deviation in the updated Table 2. revision: yes
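
A minimal sketch of the promised multi-seed summary, assuming a hypothetical run_ppi(seed) wrapper that stands in for one full train/evaluate cycle and returns (y_true, y_pred) label-indicator matrices for the unseen test graphs:

    import numpy as np
    from sklearn.metrics import f1_score

    def summarize_over_seeds(run_ppi, seeds=(0, 1, 2, 3, 4)):
        # Each run_ppi(seed) call is assumed to retrain the model and return
        # (y_true, y_pred) for the held-out PPI graphs; only the aggregation is shown here.
        scores = [f1_score(*run_ppi(seed=seed), average="micro") for seed in seeds]
        return float(np.mean(scores)), float(np.std(scores))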

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper introduces GAT as a novel architecture whose core masked self-attention mechanism is explicitly defined via new equations (e.g., the attention coefficient computation using concatenated transformed features, LeakyReLU, and neighborhood softmax). This definition does not reduce to any prior fitted parameter, self-citation, or input by construction. Performance claims are purely empirical evaluations against external public benchmarks (Cora, Citeseer, Pubmed, PPI) rather than internal predictions. Prior work is cited only for context and shortcomings; the central construction stands independently and is not load-bearing on self-referential steps.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

The model rests on standard neural-network differentiability and back-propagation assumptions plus the domain assumption that local neighborhood attention is sufficient to capture graph structure. No new physical entities are postulated; the attention layer is an architectural invention whose independent evidence is the reported benchmark gains.

free parameters (1)
  • attention weight matrices
    Learned parameters of the attention mechanism that are fitted during training on the target task.
axioms (1)
  • domain assumption Graph neighborhoods can be processed by differentiable attention functions without requiring global graph knowledge
    Invoked when the paper states the model is applicable to inductive problems where test graphs are unseen.
invented entities (1)
  • masked self-attentional layer no independent evidence
    purpose: To compute normalized attention coefficients over only the immediate neighbors of each node
    New architectural component introduced to replace fixed convolution weights.

pith-pipeline@v0.9.0 · 5477 in / 1273 out tokens · 52291 ms · 2026-05-10T17:59:21.789067+00:00 · methodology

discussion (0)


Forward citations

Cited by 60 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Weather-Robust Cross-View Geo-Localization via Prototype-Based Semantic Part Discovery

    cs.CV 2026-05 unverdicted novelty 7.0

    SkyPart uses learnable prototypes for patch grouping, altitude modulation only in training, graph-attention readout, and Kendall-weighted loss to set new state-of-the-art single-pass performance on SUES-200, Universit...

  2. TopoU-Net: a U-Net architecture for topological domains

    cs.LG 2026-05 unverdicted novelty 7.0

    TopoU-Net is a rank-path U-Net for combinatorial complexes that encodes by lifting cochains upward along incidences, decodes by transporting downward, and merges via skip connections at matched ranks.

  3. CTQWformer: A CTQW-based Transformer for Graph Classification

    cs.LG 2026-05 unverdicted novelty 7.0

    CTQWformer fuses continuous-time quantum walks into a graph transformer and recurrent module to outperform standard GNNs and graph kernels on classification benchmarks.

  4. Structural Interpretations of Protein Language Model Representations via Differentiable Graph Partitioning

    cs.LG 2026-05 unverdicted novelty 7.0

    SoftBlobGIN combines ESM-2 representations with protein contact graphs via a lightweight GNN and differentiable substructure pooling to achieve 92.8% accuracy on enzyme classification, raise binding-site AUROC to 0.98...

  5. SGC-RML: A reliable and interpretable longitudinal assessment for PD in real-world DNS

    cs.LG 2026-05 unverdicted novelty 7.0

    SGC-RML creates an 8D symptom atlas from multimodal PD data and integrates conformal calibration to deliver reliable, rejectable longitudinal assessments.

  6. Graphlets as Building Blocks for Structural Vocabulary in Knowledge Graph Foundation Models

    cs.AI 2026-05 unverdicted novelty 7.0

    Graphlets mined as structural tokens improve zero-shot inductive and transductive link prediction in knowledge graph foundation models across 51 diverse graphs.

  7. Robustness of Graph Self-Supervised Learning to Real-World Noise: A Case Study on Text-Driven Biomedical Graphs

    cs.LG 2026-05 unverdicted novelty 7.0

    Feature reconstruction in GSSL is robust to noise in text-driven biomedical graphs while relation reconstruction is sensitive, with bidirectional GNN architectures performing better on noisy data and yielding up to 7%...

  8. LUMINA: A Grid Foundation Model for Benchmarking AC Optimal Power Flow Surrogate Learning

    cs.LG 2026-05 unverdicted novelty 7.0

    LUMINA-Bench is a standardized evaluation framework for ACOPF surrogate models that tests generalization across multiple grid topologies using accuracy and physics-constraint metrics.

  9. Graph Transformers and Stabilized Reinforcement Learning for Large-Scale Dynamic Routing Modulation and Spectrum Allocation in Elastic Optical Networks

    cs.NI 2026-05 conditional novelty 7.0

    Graph transformer RL for dynamic RMSA supports up to 13% more traffic than benchmarks on networks up to 143 nodes and 362 links.

  10. Empowering Heterogeneous Graph Foundation Models via Decoupled Relation Alignment

    cs.SI 2026-05 unverdicted novelty 7.0

    DRSA provides a plug-and-play alignment framework that decouples features and relations to prevent type collapse and relation confusion in heterogeneous graph foundation models.

  11. Advancing Edge Classification through High-Dimensional Causal Modeling of Node-Edge Interplay

    cs.LG 2026-05 unverdicted novelty 7.0

    CECF is a new causal framework for edge classification that balances high-dimensional edge features against node influences via GNN embeddings and cross-attention to achieve better performance than standard methods.

  12. PiGGO: Physics-Guided Learnable Graph Kalman Filters for Virtual Sensing of Nonlinear Dynamic Structures under Uncertainty

    cs.LG 2026-04 unverdicted novelty 7.0

    PiGGO integrates a learned graph neural ODE as the continuous-time dynamics model within an extended Kalman filter to enable online virtual sensing and uncertainty-aware state estimation for nonlinear dynamic systems ...

  13. Hamiltonian Graph Inference Networks: Joint structure discovery and dynamics prediction for lattice Hamiltonian systems from trajectory data

    cs.LG 2026-04 unverdicted novelty 7.0

    HGIN jointly recovers interaction graphs and predicts trajectories for lattice Hamiltonian systems from data, achieving six to thirteen orders of magnitude lower long-time errors than baselines on Klein-Gordon and dis...

  14. Continual Learning for fMRI-Based Brain Disorder Diagnosis via Functional Connectivity Matrices Generative Replay

    q-bio.TO 2026-04 conditional novelty 7.0

    A structure-aware VAE generates realistic FC matrices for replay, combined with multi-level knowledge distillation and hierarchical contextual bandit sampling, to enable continual fMRI-based brain disorder diagnosis a...

  15. CapBench: A Multi-PDK Dataset for Machine-Learning-Based Post-Layout Capacitance Extraction

    cs.AR 2026-04 accept novelty 7.0

    CapBench is a new multi-PDK dataset of post-layout 3D windows with high-fidelity capacitance labels and multiple ML-ready representations, plus baseline results showing CNN accuracy versus GNN speed trade-offs.

  16. Graph-RHO: Critical-path-aware Heterogeneous Graph Network for Long-Horizon Flexible Job-Shop Scheduling

    cs.LG 2026-04 unverdicted novelty 7.0

    Graph-RHO is a critical-path-aware heterogeneous graph network for rolling horizon optimization in flexible job-shop scheduling that achieves state-of-the-art solution quality and over 30% faster solve times on large ...

  17. SCOT: Multi-Source Cross-City Transfer with Optimal-Transport Soft-Correspondence Objective

    cs.LG 2026-04 unverdicted novelty 7.0

    SCOT uses Sinkhorn entropic optimal transport to learn explicit soft correspondences between unequal region sets for multi-source cross-city transfer, adding contrastive sharpening and cycle reconstruction for stabili...

  18. SCOT: Multi-Source Cross-City Transfer with Optimal-Transport Soft-Correspondence Objective

    cs.LG 2026-04 unverdicted novelty 7.0

    SCOT learns explicit soft region correspondences via entropic optimal transport and a shared prototype hub to improve multi-source cross-city transfer accuracy and robustness.

  19. Graph Topology Information Enhanced Heterogeneous Graph Representation Learning

    cs.LG 2026-04 unverdicted novelty 7.0

    ToGRL learns high-quality graph structures from raw heterogeneous graphs via a two-stage topology extraction process and prompt tuning, outperforming prior methods on five datasets.

  20. Hierarchical Mesh Transformers with Topology-Guided Pretraining for Morphometric Analysis of Brain Structures

    cs.CV 2026-04 unverdicted novelty 7.0

    A hierarchical mesh transformer using topology-guided pretraining on simplicial complexes achieves state-of-the-art results on Alzheimer's classification, amyloid prediction, and focal cortical dysplasia detection fro...

  21. GRASP -- Graph-Based Anomaly Detection Through Self-Supervised Classification

    cs.CR 2026-05 unverdicted novelty 6.0

    GRASP detects anomalies in system provenance graphs via self-supervised executable prediction from two-hop neighborhoods, outperforming prior PIDS on DARPA datasets by identifying all documented attacks where behavior...

  22. GCCM: Enhancing Generative Graph Prediction via Contrastive Consistency Model

    cs.AI 2026-05 unverdicted novelty 6.0

    GCCM prevents shortcut collapse in consistency models for graph prediction by using contrastive negative pairs and input feature perturbation, leading to better performance than deterministic baselines.

  23. A Unified Benchmark for Evaluating Knowledge Graph Construction Methods and Graph Neural Networks

    cs.LG 2026-05 unverdicted novelty 6.0

    A dual-purpose benchmark supplies two text-derived knowledge graphs and one expert reference graph on the same biomedical corpus to jointly measure construction method quality and GNN robustness via semi-supervised no...

  24. GEM: Graph-Enhanced Mixture-of-Experts with ReAct Agents for Dialogue State Tracking

    cs.CL 2026-05 unverdicted novelty 6.0

    GEM achieves 65.19% joint goal accuracy on MultiWOZ 2.2 by routing between a graph neural network expert for dialogue structure and a T5 expert for sequences, plus ReAct agents for value generation, outperforming prio...

  25. Exploring Sparse Matrix Multiplication Kernels on the Cerebras CS-3

    cs.DC 2026-04 unverdicted novelty 6.0

    Cerebras CS-3 achieves up to 100x speedup over CPU for SpMM and 20x for SDDMM at 90% sparsity, with performance improving for larger matrices, but becomes slower than CPU beyond 99% sparsity.

  26. Qubit-Scalable CVRP via Lagrangian Knapsack Decomposition and Noise-Aware Quantum Execution

    quant-ph 2026-04 unverdicted novelty 6.0

    A hybrid quantum framework decomposes CVRP into bounded-width knapsack subproblems, trains a reinforcement learning controller for Lagrangian multipliers, and uses a contextual bandit to adapt quantum hardware executi...

  27. Robustness of Spatio-temporal Graph Neural Networks for Fault Location in Partially Observable Distribution Grids

    cs.LG 2026-04 unverdicted novelty 6.0

    Measured-only graph topologies enable STGNNs to achieve up to 11-point F1 gains and 6x faster training versus full-topology GNNs and RNN baselines for fault location in partially observable distribution grids.

  28. ACT: Anti-Crosstalk Learning for Cross-Sectional Stock Ranking via Temporal Disentanglement and Structural Purification

    cs.LG 2026-04 unverdicted novelty 6.0

    ACT disentangles temporal scales in stock sequences and purifies structural relations in graphs to achieve state-of-the-art cross-sectional stock ranking on CSI300 and CSI500 with up to 74.25% improvement.

  29. TACENR: Task-Agnostic Contrastive Explanations for Node Representations

    cs.LG 2026-04 unverdicted novelty 6.0

    TACENR introduces a contrastive-learning method that identifies the most influential attribute, proximity, and structural features in node representations in a task-agnostic manner.

  30. LoReC: Rethinking Large Language Models for Graph Data Analysis

    cs.LG 2026-04 unverdicted novelty 6.0

    LoReC enhances LLMs for graph tasks via attention redistribution, graph re-injection into FFN, and logit rectification, yielding improvements over GraphLLM and GNN baselines on diverse datasets.

  31. Program Structure-aware Language Models: Targeted Software Testing beyond Textual Semantics

    cs.SE 2026-04 unverdicted novelty 6.0

    GLMTest integrates code property graphs and GNNs with LLMs to steer test case generation toward targeted branches, raising branch accuracy from 27.4% to 50.2% on the TestGenEval benchmark.

  32. TransXion: A High-Fidelity Graph Benchmark for Realistic Anti-Money Laundering

    cs.LG 2026-04 unverdicted novelty 6.0

    TransXion supplies a 3-million-transaction graph benchmark with profile-aware normal activity and stochastic illicit subgraphs that produces lower detection scores than prior AML datasets.

  33. DuConTE: Dual-Granularity Text Encoder with Topology-Constrained Attention for Text-attributed Graphs

    cs.CL 2026-04 unverdicted novelty 6.0

    DuConTE is a dual-granularity text encoder that incorporates graph topology into language model attention for improved node representations in text-attributed graphs.

  34. Region-Affinity Attention for Whole-Slide Breast Cancer Classification in Deep Ultraviolet Imaging

    cs.CV 2026-04 unverdicted novelty 6.0

    A novel Region-Affinity Attention mechanism classifies breast cancer on whole deep ultraviolet slides, achieving 92.67% accuracy and 95.97% AUC on 136 samples while outperforming standard attention methods.

  35. Graph self-supervised learning based on frequency corruption

    cs.LG 2026-04 unverdicted novelty 6.0

    FC-GSSL improves graph SSL by generating high-frequency biased corrupted graphs via low-frequency contribution-based corruption, reconstructing low-frequency features in an autoencoder, and aligning multi-view represe...

  36. NK-GAD: Neighbor Knowledge-Enhanced Unsupervised Graph Anomaly Detection

    cs.LG 2026-04 unverdicted novelty 6.0

    NK-GAD improves unsupervised graph anomaly detection on heterophilic graphs by combining a joint encoder for similar and dissimilar neighbors, neighbor reconstruction, center aggregation, and dual decoders, yielding a...

  37. A Structure-Preserving Graph Neural Solver for Parametric Hyperbolic Conservation Laws

    physics.comp-ph 2026-04 unverdicted novelty 6.0

    A structure-preserving GNN solver for parametric hyperbolic conservation laws achieves superior long-horizon stability and orders-of-magnitude speedups over high-resolution simulations on supersonic flow benchmarks.

  38. Learning Ad Hoc Network Dynamics via Graph-Structured World Models

    cs.LG 2026-04 unverdicted novelty 6.0

    G-RSSM learns per-node dynamics in wireless ad hoc networks via graph attention and trains clustering policies through imagined rollouts, generalizing from N=50 training to larger networks.

  39. TopFeaRe: Locating Critical State of Adversarial Resilience for Graphs Regarding Topology-Feature Entanglement

    cs.CR 2026-04 unverdicted novelty 6.0

    TopFeaRe models graph adversarial attacks as oscillations in a complex dynamic system and locates the critical resilience state via equilibrium-point theory applied to a two-dimensional topology-feature entangled function.

  40. Verify Before You Fix: Agentic Execution Grounding for Trustworthy Cross-Language Code Analysis

    cs.SE 2026-04 unverdicted novelty 6.0

    A framework combining universal AST normalization, hybrid graph-LLM embeddings, and strict execution-grounded validation achieves 89-92% intra-language accuracy and 74-80% cross-language F1 while resolving 70% of vuln...

  41. Relational Probing: LM-to-Graph Adaptation for Financial Prediction

    cs.CL 2026-04 unverdicted novelty 6.0

    Relational Probing replaces the LM output head with a trainable relation head that induces graphs from hidden states and optimizes them end-to-end for stock trend prediction, showing gains over co-occurrence baselines.

  42. AFGNN: API Misuse Detection using Graph Neural Networks and Clustering

    cs.SE 2026-04 unverdicted novelty 6.0

    AFGNN detects API misuses in Java code more effectively than prior methods by representing usage as graphs and clustering learned embeddings from self-supervised training.

  43. BiScale-GTR: Fragment-Aware Graph Transformers for Multi-Scale Molecular Representation Learning

    cs.LG 2026-04 unverdicted novelty 6.0

    BiScale-GTR achieves claimed state-of-the-art results on MoleculeNet, PharmaBench and LRGB by combining improved fragment tokenization with a parallel GNN-Transformer architecture that operates at both atom and fragme...

  44. Can We Trust a Black-box LLM? LLM Untrustworthy Boundary Detection via Bias-Diffusion and Multi-Agent Reinforcement Learning

    cs.AI 2026-04 unverdicted novelty 6.0

    GMRL-BD detects untrustworthy topic boundaries for black-box LLMs by combining bias-diffusion on a Wikipedia KG with multi-agent RL, supported by a released dataset labeling biases in models like Llama2 and Qwen2.

  45. Feature-Aware Anisotropic Local Differential Privacy for Utility-Preserving Graph Representation Learning in Metal Additive Manufacturing

    cs.LG 2026-04 unverdicted novelty 6.0

    FI-LDP-HGAT applies feature-importance-aware anisotropic local differential privacy to a hierarchical graph attention network, recovering 81.5% utility at epsilon=4 and 0.762 defect recall at epsilon=2 on a DED porosi...

  46. Towards Predicting Multi-Vulnerability Attack Chains in Software Supply Chains from Software Bill of Materials Graphs

    cs.SE 2026-04 unverdicted novelty 6.0

    The paper shows that heterogeneous graph attention networks can classify vulnerable components in real SBOMs at 91% accuracy and that a simple MLP can predict documented multi-vulnerability chains with 0.93 ROC-AUC.

  47. MMP-Refer: Multimodal Path Retrieval-augmented LLMs For Explainable Recommendation

    cs.IR 2026-04 conditional novelty 6.0

    MMP-Refer augments LLMs with multimodal retrieval paths and a trainable collaborative adapter to produce more accurate and explainable recommendations.

  48. Attention U-Net: Learning Where to Look for the Pancreas

    cs.CV 2018-04 unverdicted novelty 6.0

    Attention gates added to U-Net automatically focus on target organs in CT images and improve segmentation performance on abdominal datasets.

  49. Learning to Compress and Transmit: Adaptive Rate Control for Semantic Communications over LEO Satellite-to-Ground Links

    cs.NI 2026-05 unverdicted novelty 5.0

    RL agent adaptively controls compression rate in semantic satellite communications to achieve 95% qualified image frames with no packet loss by using SNR predictions and queue management.

  50. DCVD: Dual-Channel Cross-Modal Fusion for Joint Vulnerability Detection and Localization

    cs.CR 2026-05 unverdicted novelty 5.0

    DCVD performs joint function-level vulnerability detection and statement-level localization by extracting control-dependency and semantic features in parallel branches, fusing them with contrastive alignment and bidir...

  51. Multi-Level Graph Attention Network Contrastive Learning for Knowledge-Aware Recommendation

    cs.IR 2026-05 unverdicted novelty 5.0

    A multi-level graph attention network with contrastive learning outperforms prior methods on knowledge-aware recommendation by improving generalization across three comparison perspectives.

  52. Mid-Circuit Measurements for Clifford Noise Reduction in Hamiltonian Simulations

    quant-ph 2026-05 unverdicted novelty 5.0

    Mid-circuit measurements in Generalized Superfast Encoding combined with Clifford Noise Reduction reduce logical error rates by up to 54% in a six-qubit Clifford Trotter step for fermionic Hamiltonian simulation on ba...

  53. PLMGH: What Matters in PLM-GNN Hybrids for Code Classification and Vulnerability Detection

    cs.SE 2026-04 unverdicted novelty 5.0

    Controlled experiments show PLM-GNN hybrids improve code tasks over GNN-only baselines, with PLM source having larger impact than GNN backbone.

  54. Crystal Fractional Graph Neural Network for Energy Prediction of High-Entropy Alloys

    physics.comp-ph 2026-04 unverdicted novelty 5.0

    A crystal fractional graph neural network fuses local graph attention on 16-atom environments with global composition fractions to predict high-entropy alloy energies at RMSE levels comparable to first-principles calc...

  55. Robustness of Spatio-temporal Graph Neural Networks for Fault Location in Partially Observable Distribution Grids

    cs.LG 2026-04 unverdicted novelty 5.0

    Measured-only STGNNs (RGATv2, RGSAGE) achieve up to 11 F1 points higher and 6x faster training than RNN baselines for fault location on the IEEE 123-bus feeder under partial observability.

  56. Multi-Perspective Evidence Synthesis and Reasoning for Unsupervised Multimodal Entity Linking

    cs.CL 2026-04 unverdicted novelty 5.0

    MSR-MEL synthesizes instance-centric, group-level, lexical, and statistical evidence with LLMs and asymmetric teacher-student GNNs to outperform prior unsupervised methods on multimodal entity linking benchmarks.

  57. AROMA: Augmented Reasoning Over a Multimodal Architecture for Virtual Cell Genetic Perturbation Modeling

    q-bio.QM 2026-04 unverdicted novelty 5.0

    AROMA combines text, graph topology, and protein sequences with augmented reasoning and two-stage optimization to deliver more accurate and interpretable predictions of genetic perturbation effects in virtual cells, o...

  58. Inductive Subgraphs as Shortcuts: Causal Disentanglement for Heterophilic Graph Learning

    cs.LG 2026-04 unverdicted novelty 5.0

    Inductive subgraphs serve as shortcuts in heterophilic graphs, and CD-GNN disentangles spurious from causal subgraphs by blocking non-causal paths to improve robustness and accuracy.

  59. TabEmb: Joint Semantic-Structure Embedding for Table Annotation

    cs.LG 2026-04 unverdicted novelty 5.0

    TabEmb decouples LLM-based semantic column embeddings from graph-based structural modeling to produce joint representations that improve table annotation tasks.

  60. How Embeddings Shape Graph Neural Networks: Classical vs Quantum-Oriented Node Representations

    cs.LG 2026-04 unverdicted novelty 5.0

    Quantum-oriented embeddings deliver consistent gains on structure-driven graph datasets while classical baselines perform adequately on attribute-limited social graphs, under identical training pipelines across five T...

Reference graph

Works this paper leans on

17 extracted references · 17 canonical work pages · cited by 68 Pith papers · 3 internal anchors

  1. [1]

    Software available from tensorflow.org

    URL https://www.tensorflow.org/. James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993–2001, 2016.

  2. [2]

    Long short-term memory-networks for machine reading

    Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.

  3. [3]

    Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

    Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

  4. [4]

    Programmable agents

    Misha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017.

  5. [5]

    One-shot imitation learning

    Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.

  6. [6]

    A general framework for adaptive processing of data structures

    Paolo Frasconi, Marco Gori, and Alessandro Sperduti. A general framework for adaptive processing of data structures. IEEE Transactions on Neural Networks, 9(5):768–786, 1998.

  7. [8]

    Understanding the difficulty of training deep feedforward neural networks

    Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.

  8. [9]

    Deep convolutional networks on graph-structured data

    Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.

  9. [10]

    Adam: A Method for Stochastic Optimization

    Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

  10. [11]

    A structured self-attentive sentence embedding

    Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.

  11. [12]

    Geometric deep learning on graphs and manifolds using mixture model CNNs

    Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. arXiv preprint arXiv:1611.08402, 2016.

  12. [13]

    Learning convolutional neural networks for graphs

    Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp. 2014–2023, 2016.

  13. [14]

    DeepWalk: Online learning of social representations

    Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710. ACM, 2014.

  14. [15]

    A simple neural network module for relational reasoning

    Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017.

  15. [16]

    Dropout: a simple way to prevent neural networks from overfitting

    Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

  16. [17]

    Attention Is All You Need

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.

  17. [19]

    Revisiting semi-supervised learning with graph embeddings

    Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40–48, 2016.