Graph Attention Networks
Published as a conference paper at ICLR 2018
Recognition: 2 theorem links · Lean Theorem
Pith reviewed 2026-05-10 17:59 UTC · model grok-4.3
The pith
Graph attention networks allow nodes to assign different importance weights to their neighbors using masked self-attention layers.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems.
What carries the argument
Masked self-attentional layers that allow nodes to attend over their neighborhoods' features and assign varying weights without costly operations or upfront graph knowledge.
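Concretely, the attention head at the heart of this machinery (following the paper's standard formulation) computes:

```latex
e_{ij} = \mathrm{LeakyReLU}\left(\vec{a}^{\,\top}\left[\mathbf{W}\vec{h}_i \,\|\, \mathbf{W}\vec{h}_j\right]\right),
\qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})},
\qquad
\vec{h}'_i = \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}\vec{h}_j\right)
```

The softmax runs only over the neighborhood \(\mathcal{N}_i\); that restriction is the masking, and it is what removes any need for global structure knowledge or dense matrix operations.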
If this is right
- GAT models apply to inductive problems where test graphs are not seen during training.
- The architecture achieves or matches state-of-the-art on Cora, Citeseer, and Pubmed citation networks.
- It performs well on protein-protein interaction datasets with unseen test graphs.
- Stacking attentional layers enables learning different node importances in neighborhoods.
- The method overcomes shortcomings of prior graph convolution approaches.
Where Pith is reading between the lines
- This attention mechanism could improve interpretability by showing which connections matter most in a graph.
- It may generalize to other graph-based tasks like link prediction or graph classification.
- Combining GAT with other neural network components might yield further gains on complex datasets.
Load-bearing premise
That masked self-attentional layers can implicitly specify different weights to different nodes in a neighborhood without any costly matrix operation or knowing the graph structure upfront.
What would settle it
An ablation on the Cora dataset: remove the attention mechanism, or withhold knowledge of the graph structure, and check whether accuracy still reaches or exceeds previous methods. If it no longer does, the attention mechanism is carrying the result; if it does, the load-bearing premise is undermined.
original abstract
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Graph Attention Networks (GATs), a neural architecture for graph-structured data that stacks masked self-attentional layers. Nodes attend over neighborhood features via a shared linear transformation followed by LeakyReLU and softmax normalization restricted to the adjacency neighborhood, enabling implicit per-neighbor weighting without matrix inversion or global graph knowledge. The approach is positioned as addressing limitations of spectral GNNs while supporting both transductive and inductive settings. Empirical evaluation shows the models match or exceed prior state-of-the-art on the Cora, Citeseer, and Pubmed citation networks (transductive) and a protein-protein interaction dataset (inductive, with unseen test graphs).
Significance. If the reported results hold, GATs provide a practical, inductive-capable alternative to spectral methods by replacing fixed convolution weights with learned attention coefficients computed locally from node features. The architecture requires only the provided adjacency at each forward pass and avoids precomputed bases or costly operations, directly enabling the inductive claim on PPI. Credit is due for the clear experimental protocols, use of public benchmarks, and the multi-head attention formulation that stabilizes training.
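The coefficient computation the summary describes (shared linear map, LeakyReLU scoring, neighborhood-masked softmax) is compact enough to sketch. A minimal single-head NumPy illustration, not the authors' implementation: dense adjacency, no multi-head attention, no dropout.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_head(H, A, W, a):
    """One masked self-attention head in the GAT style.

    H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, Fp) shared linear transform; a: (2*Fp,) attention vector.
    Returns the attention coefficients and the updated features.
    """
    Z = H @ W                          # shared transform: W h_i for every node
    Fp = Z.shape[1]
    # a^T [Wh_i || Wh_j] splits into a source term and a target term
    src = Z @ a[:Fp]                   # contribution of node i
    dst = Z @ a[Fp:]                   # contribution of node j
    e = leaky_relu(src[:, None] + dst[None, :])
    e = np.where(A > 0, e, -np.inf)    # mask: attend only over the neighborhood
    e -= e.max(axis=1, keepdims=True)  # numerical stability for the softmax
    alpha = np.exp(e)
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha, alpha @ Z            # h'_i = sum_j alpha_ij W h_j
```

Stacking such heads and concatenating or averaging their outputs gives the multi-head variant the report credits with stabilizing training.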
major comments (2)
- [§3.2] §3.2, Eq. (3)–(5): the masked self-attention coefficient computation is defined using a shared weight vector a and LeakyReLU, but the manuscript provides no ablation that isolates the contribution of the attention mechanism (e.g., replacing it with uniform or degree-based weights) from other design choices such as the two-layer architecture or the specific activation. This leaves open whether the performance gains on Cora/Citeseer/Pubmed are attributable to attention or to the overall model capacity.
- [§4.3] §4.3, Table 2: the inductive PPI results report micro-F1 scores, but the paper does not report variance across multiple random seeds or graph splits for the unseen test graphs, making it difficult to assess whether the claimed matching of SOTA is statistically robust given the inductive setting.
minor comments (3)
- [§2] §2: the related-work discussion of spectral methods could more explicitly contrast the O(N) per-layer cost of GAT attention with the eigen-decomposition requirements of earlier approaches.
- Notation: the symbol W is reused for both the linear transformation in the attention mechanism and the output projection; a distinct symbol would improve readability.
- Figure 1: the diagram of the attentional layer would benefit from an explicit arrow or label indicating the masking step that restricts attention to the neighborhood.
Simulated Author's Rebuttal
We thank the referee for the positive assessment, the recommendation for minor revision, and the constructive comments. We address each major comment below.
point-by-point responses
-
Referee: [§3.2] §3.2, Eq. (3)–(5): the masked self-attention coefficient computation is defined using a shared weight vector a and LeakyReLU, but the manuscript provides no ablation that isolates the contribution of the attention mechanism (e.g., replacing it with uniform or degree-based weights) from other design choices such as the two-layer architecture or the specific activation. This leaves open whether the performance gains on Cora/Citeseer/Pubmed are attributable to attention or to the overall model capacity.
Authors: We appreciate the referee's point. While the manuscript demonstrates that GAT outperforms strong non-attentional baselines such as GCN (which uses fixed, degree-normalized weights) on the citation networks, we acknowledge that a direct ablation replacing the learned attention coefficients with uniform weights would more cleanly isolate the mechanism's contribution. We will add this ablation (GAT with uniform aggregation) to the revised manuscript. revision: yes
-
Referee: [§4.3] §4.3, Table 2: the inductive PPI results report micro-F1 scores, but the paper does not report variance across multiple random seeds or graph splits for the unseen test graphs, making it difficult to assess whether the claimed matching of SOTA is statistically robust given the inductive setting.
Authors: We agree that variance estimates would strengthen the inductive results. The reported numbers follow the single-run protocol used by the PPI dataset creators and prior inductive GNN papers. In the revision we will rerun the PPI experiments across multiple random seeds, reporting mean micro-F1 and standard deviation in the updated Table 2. revision: yes
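The aggregation the authors commit to is simple to specify. A sketch, assuming multi-label 0/1 prediction arrays as in PPI; micro_f1 here is a generic definition for illustration, not the paper's evaluation code.

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool true/false positives and false negatives
    over all (node, label) pairs before computing precision and recall."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def summarize_runs(scores):
    """Mean and sample standard deviation across random seeds,
    as promised for the updated Table 2."""
    scores = np.asarray(scores, dtype=float)
    return float(scores.mean()), float(scores.std(ddof=1))
```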
Circularity Check
No significant circularity detected
full rationale
The paper introduces GAT as a novel architecture whose core masked self-attention mechanism is explicitly defined via new equations (e.g., the attention coefficient computation using concatenated transformed features, LeakyReLU, and neighborhood softmax). This definition does not reduce to any prior fitted parameter, self-citation, or input by construction. Performance claims are purely empirical evaluations against external public benchmarks (Cora, Citeseer, Pubmed, PPI) rather than internal predictions. Prior work is cited only for context and shortcomings; the central construction stands independently and is not load-bearing on self-referential steps.
Axiom & Free-Parameter Ledger
free parameters (1)
- attention weight matrices
axioms (1)
- domain assumption: graph neighborhoods can be processed by differentiable attention functions without requiring global graph knowledge
invented entities (1)
- masked self-attentional layer (no independent evidence)
Forward citations
Cited by 60 Pith papers
-
Weather-Robust Cross-View Geo-Localization via Prototype-Based Semantic Part Discovery
SkyPart uses learnable prototypes for patch grouping, altitude modulation only in training, graph-attention readout, and Kendall-weighted loss to set new state-of-the-art single-pass performance on SUES-200, Universit...
-
TopoU-Net: a U-Net architecture for topological domains
TopoU-Net is a rank-path U-Net for combinatorial complexes that encodes by lifting cochains upward along incidences, decodes by transporting downward, and merges via skip connections at matched ranks.
-
CTQWformer: A CTQW-based Transformer for Graph Classification
CTQWformer fuses continuous-time quantum walks into a graph transformer and recurrent module to outperform standard GNNs and graph kernels on classification benchmarks.
-
Structural Interpretations of Protein Language Model Representations via Differentiable Graph Partitioning
SoftBlobGIN combines ESM-2 representations with protein contact graphs via a lightweight GNN and differentiable substructure pooling to achieve 92.8% accuracy on enzyme classification, raise binding-site AUROC to 0.98...
-
SGC-RML: A reliable and interpretable longitudinal assessment for PD in real-world DNS
SGC-RML creates an 8D symptom atlas from multimodal PD data and integrates conformal calibration to deliver reliable, rejectable longitudinal assessments.
-
Graphlets as Building Blocks for Structural Vocabulary in Knowledge Graph Foundation Models
Graphlets mined as structural tokens improve zero-shot inductive and transductive link prediction in knowledge graph foundation models across 51 diverse graphs.
-
Robustness of Graph Self-Supervised Learning to Real-World Noise: A Case Study on Text-Driven Biomedical Graphs
Feature reconstruction in GSSL is robust to noise in text-driven biomedical graphs while relation reconstruction is sensitive, with bidirectional GNN architectures performing better on noisy data and yielding up to 7%...
-
LUMINA: A Grid Foundation Model for Benchmarking AC Optimal Power Flow Surrogate Learning
LUMINA-Bench is a standardized evaluation framework for ACOPF surrogate models that tests generalization across multiple grid topologies using accuracy and physics-constraint metrics.
-
Graph Transformers and Stabilized Reinforcement Learning for Large-Scale Dynamic Routing Modulation and Spectrum Allocation in Elastic Optical Networks
Graph transformer RL for dynamic RMSA supports up to 13% more traffic than benchmarks on networks up to 143 nodes and 362 links.
-
Empowering Heterogeneous Graph Foundation Models via Decoupled Relation Alignment
DRSA provides a plug-and-play alignment framework that decouples features and relations to prevent type collapse and relation confusion in heterogeneous graph foundation models.
-
Advancing Edge Classification through High-Dimensional Causal Modeling of Node-Edge Interplay
CECF is a new causal framework for edge classification that balances high-dimensional edge features against node influences via GNN embeddings and cross-attention to achieve better performance than standard methods.
-
PiGGO: Physics-Guided Learnable Graph Kalman Filters for Virtual Sensing of Nonlinear Dynamic Structures under Uncertainty
PiGGO integrates a learned graph neural ODE as the continuous-time dynamics model within an extended Kalman filter to enable online virtual sensing and uncertainty-aware state estimation for nonlinear dynamic systems ...
-
Hamiltonian Graph Inference Networks: Joint structure discovery and dynamics prediction for lattice Hamiltonian systems from trajectory data
HGIN jointly recovers interaction graphs and predicts trajectories for lattice Hamiltonian systems from data, achieving six to thirteen orders of magnitude lower long-time errors than baselines on Klein-Gordon and dis...
-
Continual Learning for fMRI-Based Brain Disorder Diagnosis via Functional Connectivity Matrices Generative Replay
A structure-aware VAE generates realistic FC matrices for replay, combined with multi-level knowledge distillation and hierarchical contextual bandit sampling, to enable continual fMRI-based brain disorder diagnosis a...
-
CapBench: A Multi-PDK Dataset for Machine-Learning-Based Post-Layout Capacitance Extraction
CapBench is a new multi-PDK dataset of post-layout 3D windows with high-fidelity capacitance labels and multiple ML-ready representations, plus baseline results showing CNN accuracy versus GNN speed trade-offs.
-
Graph-RHO: Critical-path-aware Heterogeneous Graph Network for Long-Horizon Flexible Job-Shop Scheduling
Graph-RHO is a critical-path-aware heterogeneous graph network for rolling horizon optimization in flexible job-shop scheduling that achieves state-of-the-art solution quality and over 30% faster solve times on large ...
-
SCOT: Multi-Source Cross-City Transfer with Optimal-Transport Soft-Correspondence Objective
SCOT uses Sinkhorn entropic optimal transport to learn explicit soft correspondences between unequal region sets for multi-source cross-city transfer, adding contrastive sharpening and cycle reconstruction for stabili...
-
SCOT: Multi-Source Cross-City Transfer with Optimal-Transport Soft-Correspondence Objective
SCOT learns explicit soft region correspondences via entropic optimal transport and a shared prototype hub to improve multi-source cross-city transfer accuracy and robustness.
-
Graph Topology Information Enhanced Heterogeneous Graph Representation Learning
ToGRL learns high-quality graph structures from raw heterogeneous graphs via a two-stage topology extraction process and prompt tuning, outperforming prior methods on five datasets.
-
Hierarchical Mesh Transformers with Topology-Guided Pretraining for Morphometric Analysis of Brain Structures
A hierarchical mesh transformer using topology-guided pretraining on simplicial complexes achieves state-of-the-art results on Alzheimer's classification, amyloid prediction, and focal cortical dysplasia detection fro...
-
GRASP -- Graph-Based Anomaly Detection Through Self-Supervised Classification
GRASP detects anomalies in system provenance graphs via self-supervised executable prediction from two-hop neighborhoods, outperforming prior PIDS on DARPA datasets by identifying all documented attacks where behavior...
-
GCCM: Enhancing Generative Graph Prediction via Contrastive Consistency Model
GCCM prevents shortcut collapse in consistency models for graph prediction by using contrastive negative pairs and input feature perturbation, leading to better performance than deterministic baselines.
-
A Unified Benchmark for Evaluating Knowledge Graph Construction Methods and Graph Neural Networks
A dual-purpose benchmark supplies two text-derived knowledge graphs and one expert reference graph on the same biomedical corpus to jointly measure construction method quality and GNN robustness via semi-supervised no...
-
GEM: Graph-Enhanced Mixture-of-Experts with ReAct Agents for Dialogue State Tracking
GEM achieves 65.19% joint goal accuracy on MultiWOZ 2.2 by routing between a graph neural network expert for dialogue structure and a T5 expert for sequences, plus ReAct agents for value generation, outperforming prio...
-
Exploring Sparse Matrix Multiplication Kernels on the Cerebras CS-3
Cerebras CS-3 achieves up to 100x speedup over CPU for SpMM and 20x for SDDMM at 90% sparsity, with performance improving for larger matrices, but becomes slower than CPU beyond 99% sparsity.
-
Qubit-Scalable CVRP via Lagrangian Knapsack Decomposition and Noise-Aware Quantum Execution
A hybrid quantum framework decomposes CVRP into bounded-width knapsack subproblems, trains a reinforcement learning controller for Lagrangian multipliers, and uses a contextual bandit to adapt quantum hardware executi...
-
Robustness of Spatio-temporal Graph Neural Networks for Fault Location in Partially Observable Distribution Grids
Measured-only graph topologies enable STGNNs to achieve up to 11-point F1 gains and 6x faster training versus full-topology GNNs and RNN baselines for fault location in partially observable distribution grids.
-
ACT: Anti-Crosstalk Learning for Cross-Sectional Stock Ranking via Temporal Disentanglement and Structural Purification
ACT disentangles temporal scales in stock sequences and purifies structural relations in graphs to achieve state-of-the-art cross-sectional stock ranking on CSI300 and CSI500 with up to 74.25% improvement.
-
TACENR: Task-Agnostic Contrastive Explanations for Node Representations
TACENR introduces a contrastive-learning method that identifies the most influential attribute, proximity, and structural features in node representations in a task-agnostic manner.
-
LoReC: Rethinking Large Language Models for Graph Data Analysis
LoReC enhances LLMs for graph tasks via attention redistribution, graph re-injection into FFN, and logit rectification, yielding improvements over GraphLLM and GNN baselines on diverse datasets.
-
Program Structure-aware Language Models: Targeted Software Testing beyond Textual Semantics
GLMTest integrates code property graphs and GNNs with LLMs to steer test case generation toward targeted branches, raising branch accuracy from 27.4% to 50.2% on the TestGenEval benchmark.
-
TransXion: A High-Fidelity Graph Benchmark for Realistic Anti-Money Laundering
TransXion supplies a 3-million-transaction graph benchmark with profile-aware normal activity and stochastic illicit subgraphs that produces lower detection scores than prior AML datasets.
-
DuConTE: Dual-Granularity Text Encoder with Topology-Constrained Attention for Text-attributed Graphs
DuConTE is a dual-granularity text encoder that incorporates graph topology into language model attention for improved node representations in text-attributed graphs.
-
Region-Affinity Attention for Whole-Slide Breast Cancer Classification in Deep Ultraviolet Imaging
A novel Region-Affinity Attention mechanism classifies breast cancer on whole deep ultraviolet slides, achieving 92.67% accuracy and 95.97% AUC on 136 samples while outperforming standard attention methods.
-
Graph self-supervised learning based on frequency corruption
FC-GSSL improves graph SSL by generating high-frequency biased corrupted graphs via low-frequency contribution-based corruption, reconstructing low-frequency features in an autoencoder, and aligning multi-view represe...
-
NK-GAD: Neighbor Knowledge-Enhanced Unsupervised Graph Anomaly Detection
NK-GAD improves unsupervised graph anomaly detection on heterophilic graphs by combining a joint encoder for similar and dissimilar neighbors, neighbor reconstruction, center aggregation, and dual decoders, yielding a...
-
A Structure-Preserving Graph Neural Solver for Parametric Hyperbolic Conservation Laws
A structure-preserving GNN solver for parametric hyperbolic conservation laws achieves superior long-horizon stability and orders-of-magnitude speedups over high-resolution simulations on supersonic flow benchmarks.
-
Learning Ad Hoc Network Dynamics via Graph-Structured World Models
G-RSSM learns per-node dynamics in wireless ad hoc networks via graph attention and trains clustering policies through imagined rollouts, generalizing from N=50 training to larger networks.
-
TopFeaRe: Locating Critical State of Adversarial Resilience for Graphs Regarding Topology-Feature Entanglement
TopFeaRe models graph adversarial attacks as oscillations in a complex dynamic system and locates the critical resilience state via equilibrium-point theory applied to a two-dimensional topology-feature entangled function.
-
Verify Before You Fix: Agentic Execution Grounding for Trustworthy Cross-Language Code Analysis
A framework combining universal AST normalization, hybrid graph-LLM embeddings, and strict execution-grounded validation achieves 89-92% intra-language accuracy and 74-80% cross-language F1 while resolving 70% of vuln...
-
Relational Probing: LM-to-Graph Adaptation for Financial Prediction
Relational Probing replaces the LM output head with a trainable relation head that induces graphs from hidden states and optimizes them end-to-end for stock trend prediction, showing gains over co-occurrence baselines.
-
AFGNN: API Misuse Detection using Graph Neural Networks and Clustering
AFGNN detects API misuses in Java code more effectively than prior methods by representing usage as graphs and clustering learned embeddings from self-supervised training.
-
BiScale-GTR: Fragment-Aware Graph Transformers for Multi-Scale Molecular Representation Learning
BiScale-GTR achieves claimed state-of-the-art results on MoleculeNet, PharmaBench and LRGB by combining improved fragment tokenization with a parallel GNN-Transformer architecture that operates at both atom and fragme...
-
Can We Trust a Black-box LLM? LLM Untrustworthy Boundary Detection via Bias-Diffusion and Multi-Agent Reinforcement Learning
GMRL-BD detects untrustworthy topic boundaries for black-box LLMs by combining bias-diffusion on a Wikipedia KG with multi-agent RL, supported by a released dataset labeling biases in models like Llama2 and Qwen2.
-
Feature-Aware Anisotropic Local Differential Privacy for Utility-Preserving Graph Representation Learning in Metal Additive Manufacturing
FI-LDP-HGAT applies feature-importance-aware anisotropic local differential privacy to a hierarchical graph attention network, recovering 81.5% utility at epsilon=4 and 0.762 defect recall at epsilon=2 on a DED porosi...
-
Towards Predicting Multi-Vulnerability Attack Chains in Software Supply Chains from Software Bill of Materials Graphs
The paper shows that heterogeneous graph attention networks can classify vulnerable components in real SBOMs at 91% accuracy and that a simple MLP can predict documented multi-vulnerability chains with 0.93 ROC-AUC.
-
MMP-Refer: Multimodal Path Retrieval-augmented LLMs For Explainable Recommendation
MMP-Refer augments LLMs with multimodal retrieval paths and a trainable collaborative adapter to produce more accurate and explainable recommendations.
-
Attention U-Net: Learning Where to Look for the Pancreas
Attention gates added to U-Net automatically focus on target organs in CT images and improve segmentation performance on abdominal datasets.
-
Learning to Compress and Transmit: Adaptive Rate Control for Semantic Communications over LEO Satellite-to-Ground Links
RL agent adaptively controls compression rate in semantic satellite communications to achieve 95% qualified image frames with no packet loss by using SNR predictions and queue management.
-
DCVD: Dual-Channel Cross-Modal Fusion for Joint Vulnerability Detection and Localization
DCVD performs joint function-level vulnerability detection and statement-level localization by extracting control-dependency and semantic features in parallel branches, fusing them with contrastive alignment and bidir...
-
Multi-Level Graph Attention Network Contrastive Learning for Knowledge-Aware Recommendation
A multi-level graph attention network with contrastive learning outperforms prior methods on knowledge-aware recommendation by improving generalization across three comparison perspectives.
-
Mid-Circuit Measurements for Clifford Noise Reduction in Hamiltonian Simulations
Mid-circuit measurements in Generalized Superfast Encoding combined with Clifford Noise Reduction reduce logical error rates by up to 54% in a six-qubit Clifford Trotter step for fermionic Hamiltonian simulation on ba...
-
PLMGH: What Matters in PLM-GNN Hybrids for Code Classification and Vulnerability Detection
Controlled experiments show PLM-GNN hybrids improve code tasks over GNN-only baselines, with PLM source having larger impact than GNN backbone.
-
Crystal Fractional Graph Neural Network for Energy Prediction of High-Entropy Alloys
A crystal fractional graph neural network fuses local graph attention on 16-atom environments with global composition fractions to predict high-entropy alloy energies at RMSE levels comparable to first-principles calc...
-
Robustness of Spatio-temporal Graph Neural Networks for Fault Location in Partially Observable Distribution Grids
Measured-only STGNNs (RGATv2, RGSAGE) achieve up to 11 F1 points higher and 6x faster training than RNN baselines for fault location on the IEEE 123-bus feeder under partial observability.
-
Multi-Perspective Evidence Synthesis and Reasoning for Unsupervised Multimodal Entity Linking
MSR-MEL synthesizes instance-centric, group-level, lexical, and statistical evidence with LLMs and asymmetric teacher-student GNNs to outperform prior unsupervised methods on multimodal entity linking benchmarks.
-
AROMA: Augmented Reasoning Over a Multimodal Architecture for Virtual Cell Genetic Perturbation Modeling
AROMA combines text, graph topology, and protein sequences with augmented reasoning and two-stage optimization to deliver more accurate and interpretable predictions of genetic perturbation effects in virtual cells, o...
-
Inductive Subgraphs as Shortcuts: Causal Disentanglement for Heterophilic Graph Learning
Inductive subgraphs serve as shortcuts in heterophilic graphs, and CD-GNN disentangles spurious from causal subgraphs by blocking non-causal paths to improve robustness and accuracy.
-
TabEmb: Joint Semantic-Structure Embedding for Table Annotation
TabEmb decouples LLM-based semantic column embeddings from graph-based structural modeling to produce joint representations that improve table annotation tasks.
-
How Embeddings Shape Graph Neural Networks: Classical vs Quantum-Oriented Node Representations
Quantum-oriented embeddings deliver consistent gains on structure-driven graph datasets while classical baselines perform adequately on attribute-limited social graphs, under identical training pipelines across five T...
Reference graph
Works this paper leans on
- [1] TensorFlow: software available from tensorflow.org. URL https://www.tensorflow.org/. James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993–2001, 2016.
- [2] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
- [3] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
- [4] Misha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017.
- [5] Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.
- [6] Paolo Frasconi, Marco Gori, and Alessandro Sperduti. A general framework for adaptive processing of data structures. IEEE Transactions on Neural Networks, 9(5):768–786, 1998.
- [8] URL http://arxiv.org/abs/1611.02344. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
- [9] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
- [10] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- [11] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
- [12] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. arXiv preprint arXiv:1611.08402, 2016.
- [13] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp. 2014–2023, 2016.
- [14] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710. ACM, 2014.
- [15] Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017.
- [16] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
- [17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
- [19] URL http://arxiv.org/abs/1410.3916. Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40–48, 2016.