Spectral Networks and Locally Connected Networks on Graphs
Abstract
Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.
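The spectral construction the abstract mentions can be sketched as follows: a "convolution" on a graph is a filter that acts diagonally in the eigenbasis of the graph Laplacian (the graph Fourier basis). This is a minimal illustrative NumPy sketch, not the paper's implementation — all function and variable names are hypothetical, and the paper additionally parameterizes the spectral multipliers smoothly so that the number of learned parameters stays independent of the input size.

```python
import numpy as np

def graph_laplacian(adj):
    """Unnormalized graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

def spectral_conv(x, eigvecs, theta):
    """Filter a graph signal x with spectral multipliers theta
    (one coefficient per Laplacian eigenvalue)."""
    x_hat = eigvecs.T @ x            # graph Fourier transform
    x_hat_filtered = theta * x_hat   # pointwise product = graph convolution
    return eigvecs @ x_hat_filtered  # inverse graph Fourier transform

# Toy example: a 4-node cycle graph
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
L = graph_laplacian(adj)
eigvals, eigvecs = np.linalg.eigh(L)  # eigh: L is symmetric

x = np.array([1.0, 0.0, 0.0, 0.0])   # delta signal on node 0
theta = np.ones_like(eigvals)        # all-ones multipliers = identity filter
y = spectral_conv(x, eigvecs, theta)
print(np.allclose(y, x))             # identity filter reproduces the signal
```

With `theta` set to all ones the filter is the identity, which is a quick sanity check that the forward and inverse transforms are consistent; learning `theta` (or a smooth low-dimensional parameterization of it) recovers a trainable convolutional layer on the graph.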
Forward citations
Cited by 5 Pith papers
- FlexVector: A SpMM Vector Processor with Flexible VRF for GCNs on Varying-Sparsity Graphs
  FlexVector achieves a 3.78x speedup and 40.5% lower energy for GCN inference on five real-world datasets by using flexible VRFs and graph preprocessing to match varying-sparsity graphs.
- ASPIRE: Make Spectral Graph Collaborative Filtering Great Again via Adaptive Filter Learning
  ASPIRE learns adaptive graph filters via bi-level optimization to overcome the low-frequency explosion bias in spectral collaborative filtering, achieving strong performance and stability.
- Uniform Inductive Spatio-Temporal Kriging
  UniSTOK improves inductive spatio-temporal kriging under incomplete observations via reliability-guided signal regulation and residual bias calibration.
- Make Your LVLM KV Cache More Lightweight
  LightKV compresses the vision-token KV cache in LVLMs to 55% of its size via prompt-guided cross-modality aggregation, halving cache memory, cutting compute by 40%, and maintaining performance on benchmarks.
- Efficient and Scalable Granular-ball Graph Coarsening Method for Large-scale Graph Node Classification
  A multi-granularity granular-ball coarsening algorithm reduces large graphs in linear time for faster GCN training on node classification, with experiments reporting performance superior to prior methods.