Graph Rewiring in GNNs to Mitigate Over-Squashing and Over-Smoothing: A Survey
Graph Neural Networks (GNNs) are powerful models for learning from graph-structured data, yet their effectiveness is often limited by two critical challenges: over-squashing, where information from distant nodes is excessively compressed, and over-smoothing, where repeated propagation makes node representations indistinguishable. Both phenomena stem from the interaction between message passing and the input topology, ultimately degrading information flow and limiting the performance of GNNs. In this survey, we examine graph rewiring techniques, a class of methods designed to modify the graph topology to enhance information propagation in GNNs. We provide a comprehensive review of state-of-the-art rewiring approaches, delving into their theoretical underpinnings, practical implementations, and performance trade-offs.
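To make the rewiring idea concrete, here is a minimal illustrative sketch (not a method from any specific paper surveyed): a hypothetical `rewire_add_shortcuts` helper that relieves a structural bottleneck by adding shortcut edges between the most distant node pairs, shortening the paths messages must traverse. Many actual rewiring methods instead use criteria such as edge curvature or the spectral gap; this example only demonstrates the general shape of a topology-modifying preprocessing step.

```python
# Illustrative rewiring sketch (assumed helper, not from the survey):
# add shortcut edges between distant node pairs to reduce the number of
# hops a message must travel, mitigating over-squashing at bottlenecks.
import networkx as nx

def rewire_add_shortcuts(G, num_edges=1):
    """Return a copy of G with `num_edges` shortcut edges added,
    each connecting a currently most-distant pair of nodes."""
    H = G.copy()
    for _ in range(num_edges):
        # All-pairs shortest-path lengths on the current (rewired) graph.
        lengths = dict(nx.all_pairs_shortest_path_length(H))
        u, v = max(
            ((a, b) for a in H for b in H if a < b),
            key=lambda p: lengths[p[0]].get(p[1], 0),
        )
        H.add_edge(u, v)
    return H

# A barbell graph is a classic bottleneck example: two cliques joined
# by a long path, so clique-to-clique messages are heavily squashed.
G = nx.barbell_graph(5, 6)
H = rewire_add_shortcuts(G, num_edges=2)
print(nx.diameter(G), nx.diameter(H))  # diameter shrinks after rewiring
```

Recomputing all-pairs distances each iteration is quadratic per added edge, which is acceptable for a preprocessing sketch but is exactly the kind of cost that practical rewiring methods trade off against their theoretical guarantees.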
Forward citations
Cited by 2 Pith papers
- From Model to Data (M2D): Shifting Complexity from GNNs to Graphs for Transparent Graph Learning
M2D distillation augments input graphs with model-derived features and structure, letting simple student GNNs match teacher performance while exposing mechanisms such as attention and fairness directly in the data.
- Ollivier-Ricci Curvature of Riemannian Manifolds and Directed Graphs with Applications to Graph Neural Networks
Ollivier-Ricci curvature is extended from manifolds and undirected graphs to directed graphs with applications to graph neural networks.