pith. machine review for the scientific record.

arxiv: 1905.09550 · v2 · submitted 2019-05-23 · 📊 stat.ML · cs.IT · cs.LG · math.IT · math.SP

Recognition: unknown

Revisiting Graph Neural Networks: All We Have is Low-Pass Filters

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.IT · cs.LG · math.IT · math.SP
keywords graph neural networks · feature learning · classification · data · low-pass
Original abstract

Graph neural networks have become one of the most important techniques to solve machine learning problems on graph-structured data. Recent work on vertex classification proposed deep and distributed learning models to achieve high performance and scalability. However, we find that the feature vectors of benchmark datasets are already quite informative for the classification task, and the graph structure only provides a means to denoise the data. In this paper, we develop a theoretical framework based on graph signal processing for analyzing graph neural networks. Our results indicate that graph neural networks only perform low-pass filtering on feature vectors and do not have the non-linear manifold learning property. We further investigate their resilience to feature noise and propose some insights on GCN-based graph neural network design.
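The abstract's central claim, that GCN-style propagation amounts to low-pass filtering of the feature signal, can be illustrated numerically. The sketch below (not code from the paper) builds the standard augmented normalized adjacency matrix on a toy path graph and shows that its eigenvalues lie in (-1, 1], so repeated propagation damps high-frequency (oscillatory) signal components while nearly preserving smooth ones:

```python
import numpy as np

# Toy illustration: the GCN propagation matrix
# A_hat = D~^{-1/2} (A + I) D~^{-1/2} acts as a low-pass graph filter.

# 4-node path graph adjacency
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_tilde = A + np.eye(4)                    # add self-loops
d = A_tilde.sum(axis=1)                    # augmented degrees
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # augmented normalized adjacency

eigvals = np.linalg.eigvalsh(A_hat)
print(eigvals)  # all lie in (-1, 1]; the largest is exactly 1

# A smooth (constant) signal survives repeated propagation almost
# unchanged; an alternating high-frequency signal decays geometrically.
smooth = np.ones(4)
rough = np.array([1.0, -1.0, 1.0, -1.0])
for _ in range(10):
    smooth = A_hat @ smooth
    rough = A_hat @ rough
print(np.linalg.norm(smooth))  # still close to its initial norm of 2.0
print(np.linalg.norm(rough))   # far below its initial norm of 2.0
```

This is exactly the "denoising" role the abstract attributes to the graph structure: stacking propagation steps suppresses components of the features that vary sharply across edges.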

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Learning Posterior Predictive Distributions for Node Classification from Synthetic Graph Priors

    cs.LG 2026-04 unverdicted novelty 7.0

    NodePFN pre-trains on synthetic graphs with controllable homophily and causal feature-label models to achieve 71.27 average accuracy on 23 node classification benchmarks without graph-specific training.

  2. Rethinking Generalization in Graph Neural Networks: A Structural Complexity Perspective

    cs.LG 2026-05 unverdicted novelty 5.0

    GNN generalization depends explicitly on graph structural complexity measured by effective edges, with a new regularization method shown to balance underfitting and overfitting.