Fast Graph Representation Learning with PyTorch Geometric
21 Pith papers cite this work. Polarity classification is still indexing.
abstract
We introduce PyTorch Geometric, a library for deep learning on irregularly structured input data such as graphs, point clouds and manifolds, built upon PyTorch. In addition to general graph data structures and processing methods, it contains a variety of recently published methods from the domains of relational learning and 3D data processing. PyTorch Geometric achieves high data throughput by leveraging sparse GPU acceleration, by providing dedicated CUDA kernels and by introducing efficient mini-batch handling for input examples of different size. In this work, we present the library in detail and perform a comprehensive comparative study of the implemented methods in homogeneous evaluation scenarios.
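The efficient mini-batch handling the abstract mentions works by merging several graphs of different sizes into one block-diagonal "super-graph", so a single sparse message-passing pass processes the whole batch. A minimal sketch of that idea (the helper `batch_graphs` is made up for illustration; it is not PyTorch Geometric's actual code):

```python
def batch_graphs(graphs):
    """Each graph is (num_nodes, edge_list); edge_list holds (src, dst) pairs.

    Returns the merged edge list plus a `batch` vector mapping every node of
    the super-graph back to the graph it came from.
    """
    edges, batch, offset = [], [], 0
    for graph_id, (num_nodes, edge_list) in enumerate(graphs):
        # Shift node indices so each graph occupies a disjoint index range.
        edges.extend((src + offset, dst + offset) for src, dst in edge_list)
        batch.extend([graph_id] * num_nodes)
        offset += num_nodes
    return edges, batch

# A triangle (3 nodes) and a single edge (2 nodes) become one 5-node graph.
g1 = (3, [(0, 1), (1, 2), (2, 0)])
g2 = (2, [(0, 1)])
edges, batch = batch_graphs([g1, g2])
print(edges)   # [(0, 1), (1, 2), (2, 0), (3, 4)]
print(batch)   # [0, 0, 0, 1, 1]
```

Because the merged graph is disconnected across batch elements, no messages leak between examples, and the `batch` vector is all that pooling layers need to aggregate per-graph outputs.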
hub tools
citation-role summary
citation-polarity summary
fields
cs.LG 9 · cs.CL 2 · astro-ph.HE 1 · cond-mat.mtrl-sci 1 · cs.AI 1 · cs.CV 1 · cs.DC 1 · cs.SI 1 · physics.comp-ph 1 · q-bio.BM 1
years
2026: 21
verdicts
UNVERDICTED: 21
roles
background: 1
polarities
unclear: 1
citing papers explorer
-
ATLAS: Efficient Out-of-Core Inference for Billion-Scale Graph Neural Networks
ATLAS achieves 12-30x faster out-of-core full-graph GNN inference on graphs with up to 4B edges by adopting broadcast-based layer-wise execution with graph reordering, minimum-pending-message eviction, and a GPU-accelerated tiered memory-disk hierarchy.
-
PiGGO: Physics-Guided Learnable Graph Kalman Filters for Virtual Sensing of Nonlinear Dynamic Structures under Uncertainty
PiGGO integrates a learned graph neural ODE as the continuous-time dynamics model within an extended Kalman filter to enable online virtual sensing and uncertainty-aware state estimation for nonlinear dynamic systems with unknown model form and sparse sensing.
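The pattern this summary names, an extended Kalman filter whose predict step runs a learned dynamics model, can be sketched generically. The toy scalar code below is only an illustration of that pattern; the dynamics function stands in for a learned model and is not PiGGO's graph neural ODE:

```python
def ekf_step(x, P, z, f, f_prime, h, Q, R):
    """One scalar EKF predict/update cycle.

    f would be the learned dynamics model, f_prime its derivative at x;
    h is the (linear) measurement coefficient; Q, R are noise variances.
    """
    # Predict: push the mean through the dynamics, the variance through its Jacobian.
    x_pred = f(x)
    F = f_prime(x)
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    S = h * P_pred * h + R
    K = P_pred * h / S
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1.0 - K * h) * P_pred
    return x_new, P_new

# Toy linear dynamics standing in for a learned model: the measurement z=1.0
# pulls the estimate up while the posterior variance shrinks.
x, P = ekf_step(x=0.0, P=1.0, z=1.0,
                f=lambda x: 0.9 * x, f_prime=lambda x: 0.9,
                h=1.0, Q=0.1, R=0.1)
```

Replacing `f` with a neural network is what makes the filter applicable when the model form is unknown; the covariance propagation then needs the network's Jacobian in place of `f_prime`.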
-
HopRank: Self-Supervised LLM Preference-Tuning on Graphs for Few-Shot Node Classification
HopRank is a self-supervised LLM-tuning method that turns node classification into link prediction via hierarchical hop-based preference sampling, matching supervised GNN performance with zero labeled data on text-attributed graphs.
-
Hypergraph Neural Diffusion: A PDE-Inspired Framework for Hypergraph Message Passing
HND models hypergraph feature propagation as an anisotropic diffusion process governed by a continuous-time PDE, discretized into stable neural layers with energy dissipation and boundedness guarantees.
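The discretized-PDE idea can be illustrated on an ordinary graph (HND works on hypergraphs with anisotropic coefficients; this sketch only shows the plain isotropic case): one explicit-Euler step of the graph heat equation, which stays stable for step sizes `tau < 1 / max_degree`, a standard sufficient condition for explicit diffusion on graphs.

```python
def diffusion_step(x, edges, tau):
    """One step of x <- x - tau * L x with L = D - A, x a per-node scalar."""
    lap = [0.0] * len(x)
    for u, v in edges:
        lap[u] += x[u] - x[v]   # (D - A) x, accumulated edge by edge
        lap[v] += x[v] - x[u]
    return [xi - tau * li for xi, li in zip(x, lap)]

# Two connected nodes relax toward their common mean.
print(diffusion_step([1.0, 0.0], [(0, 1)], 0.25))  # [0.75, 0.25]
```

Stacking such steps, with learned per-edge coefficients in place of the uniform `tau`, is the general recipe for turning a diffusion PDE into neural layers whose energy is non-increasing.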
-
High-dimensional inference for the $\gamma$-ray sky with differentiable programming
A differentiable forward model and likelihood enable probabilistic inference over many spatial morphologies for the Galactic Center gamma-ray excess using variational methods on GPUs.
-
Towards Near-Real-Time Telemetry-Aware Routing with Neural Routing Algorithms
LOGGIA is a delay-aware graph neural routing algorithm using pre-training and RL that outperforms shortest-path and other neural methods in realistic network simulations.
-
SAGE: A Self-Evolving Agentic Graph-Memory Engine for Structure-Aware Associative Memory
SAGE is a self-evolving agentic graph-memory engine that dynamically constructs and refines structured memory graphs via writer-reader feedback, yielding performance gains on multi-hop QA, open-domain retrieval, and long-term agent benchmarks.
-
Invariant-Based Diagnostics for Graph Benchmarks
Graph invariants serve as expressive, task-agnostic baselines that characterize structural heterogeneity and match trained models across 26 datasets, indicating that expressivity is not the primary driver of performance.
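The flavor of such baselines can be shown with a few cheap, task-agnostic invariants (an illustrative trio, not the paper's actual feature set): density, mean degree, and triangle count, computed directly from the edge list and usable as a feature vector for a graph.

```python
from itertools import combinations

def graph_invariants(num_nodes, edges):
    adj = {v: set() for v in range(num_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = len(edges)
    density = 2 * m / (num_nodes * (num_nodes - 1)) if num_nodes > 1 else 0.0
    mean_degree = 2 * m / num_nodes if num_nodes else 0.0
    # Count triangles by checking every node triple (fine for small graphs).
    triangles = sum(
        1 for a, b, c in combinations(range(num_nodes), 3)
        if b in adj[a] and c in adj[a] and c in adj[b]
    )
    return {"density": density, "mean_degree": mean_degree, "triangles": triangles}

# A triangle graph: density 1.0, mean degree 2.0, one triangle.
print(graph_invariants(3, [(0, 1), (1, 2), (2, 0)]))
```

Feeding such invariants to a linear model gives a structure-only baseline: if it matches a trained GNN on a benchmark, that benchmark is not probing expressivity.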
-
Mochi: Aligning Pre-training and Inference for Efficient Graph Foundation Models via Meta-Learning
Mochi aligns pre-training with inference via meta-learning for efficient graph foundation models, matching or exceeding prior models on 25 datasets with 8-27x less training time.
-
TACENR: Task-Agnostic Contrastive Explanations for Node Representations
TACENR introduces a contrastive-learning method that identifies the most influential attribute, proximity, and structural features in node representations in a task-agnostic manner.
-
LogosKG: Hardware-Optimized Scalable and Interpretable Knowledge Graph Retrieval
LogosKG delivers a novel hardware-aligned system for efficient multi-hop retrieval on billion-edge knowledge graphs without sacrificing fidelity, demonstrated via biomedical KG-LLM applications.
-
A Structure-Preserving Graph Neural Solver for Parametric Hyperbolic Conservation Laws
A structure-preserving GNN solver for parametric hyperbolic conservation laws achieves superior long-horizon stability and orders-of-magnitude speedups over high-resolution simulations on supersonic flow benchmarks.
-
PUFFIN: Protein Unit Discovery with Functional Supervision
PUFFIN discovers protein units by jointly learning structural partitioning of residue graphs and functional supervision via a graph neural network with structure-aware pooling.
-
TOPCELL: Topology Optimization of Standard Cell via LLMs
TOPCELL reformulates standard cell topology optimization as an LLM generative task with GRPO fine-tuning, outperforming base models and matching exhaustive solvers with 85.91x speedup in 2nm/7nm industrial flows.
-
Disorder-induced chirality in superconductor-ferromagnet heterostructures revealed by neutron scattering and multiscale modeling
Chemical disorder plus compositional gradients in FePd films produce finite Dzyaloshinskii-Moriya interactions that stabilize chiral magnetic modulations with mixed Bloch-Néel character.
-
Astro Generative Network: A Variational Framework for Controlled Node Insertion in Incomplete Complex Networks
AGN is a variational framework for inserting plausible new nodes into incomplete networks by latent sampling and similarity attachment, shown on synthetic data to keep clustering and modularity changes modest compared to a baseline that allows new-new edges.
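Similarity attachment in general (a generic scheme sketched here, not AGN's actual procedure) connects a sampled feature vector for the new node to the k existing nodes it is most similar to, e.g. by cosine similarity:

```python
from math import sqrt

def attach_new_node(node_features, new_vec, k):
    """Return indices of the k existing nodes most similar to new_vec."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sqrt(sum(x * x for x in a))
        nb = sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    ranked = sorted(range(len(node_features)),
                    key=lambda i: cos(node_features[i], new_vec),
                    reverse=True)
    return ranked[:k]

feats = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(attach_new_node(feats, [1.0, 0.1], k=2))  # [0, 2]
```

Restricting attachment to existing nodes (no new-new edges) is what keeps global statistics like clustering and modularity close to the original network.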
-
Compositional Quantum Heuristics for Max-Clique Detection
Compositional quantum circuits with symmetry-induced invariant losses produce trainable equivariant quantum GNNs that generalize on max-clique problems and improve hybrid recursive search accuracy and scalability.
-
From Spherical to Gaussian: A Comparative Analysis of Point Cloud Cropping Strategies in Large-Scale 3D Environments
Gaussian and linear cropping strategies for large point clouds improve 3D neural network performance over spherical crops, especially in outdoor scenes, and achieve new state-of-the-art results.
-
A Universal Space of Brain Dynamics for Unveiling Cognitive Transitions and Individual Differences
UBD creates a universal space for brain dynamics that predicts fMRI signals with Pearson's r greater than 0.9 across eight states and 963 subjects, revealing mechanisms of cognitive transitions and individual differences.
-
Robustness of Spatio-temporal Graph Neural Networks for Fault Location in Partially Observable Distribution Grids
Measured-only STGNNs (RGATv2, RGSAGE) achieve up to 11 points higher F1 and 6x faster training than RNN baselines for fault location on the IEEE 123-bus feeder under partial observability.
-
On Improving Graph Neural Networks for QSAR by Pre-training on Extended-Connectivity Fingerprints
Pre-training GNNs on ECFP prediction produces statistically significant QSAR gains on five of six Biogen benchmarks with OOD splits, but underperforms on heterogeneous datasets and complex endpoints like binding affinity.