What Do GNNs Actually Learn? Towards Understanding Their Representations
3 Pith papers cite this work.
Fields: cs.LG
Year: 2026
Verdict: unverdicted
Representative citing papers: 3
Citing papers
- Rank Is Not Capacity: Spectral Occupancy for Latent Graph Models
  Spectra defines and controls effective capacity in graph embeddings via the Shannon effective rank of a trace-normalized kernel spectrum, making capacity a post-fit property rather than a pre-training hyperparameter.
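The abstract's capacity measure can be illustrated with the standard Shannon effective rank: trace-normalize the eigenvalues of a PSD kernel matrix into a probability distribution, take its entropy, and exponentiate. This is a minimal sketch of that definition, not Spectra's actual implementation; the function name and numerical guards are my own.

```python
import numpy as np

def effective_rank(K):
    """Shannon effective rank of a symmetric PSD kernel matrix K:
    exp of the entropy of the trace-normalized eigenvalue spectrum."""
    eigvals = np.linalg.eigvalsh(K)
    eigvals = np.clip(eigvals, 0.0, None)   # guard tiny negative round-off
    p = eigvals / eigvals.sum()             # trace-normalize: p sums to 1
    p = p[p > 0]                            # drop zero modes (0 log 0 = 0)
    entropy = -(p * np.log(p)).sum()
    return float(np.exp(entropy))
```

The value interpolates between 1 (all spectral mass on one eigenvector, e.g. a rank-one kernel) and n (a perfectly flat spectrum such as the identity), which is what makes it usable as a post-fit occupancy measure rather than a fixed dimension hyperparameter.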
- Aitchison Embeddings for Learning Compositional Graph Representations
  Graph nodes are embedded as simplex compositions via ILR coordinates to yield intrinsically interpretable representations that preserve Aitchison geometry and enable subcompositional analysis.
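ILR coordinates are a standard construction from compositional data analysis: center the log of a composition (the CLR transform), then project onto an orthonormal contrast basis of the zero-sum hyperplane. A minimal sketch with a Helmert-style basis follows; the paper's actual basis choice may differ, and the function name is illustrative.

```python
import numpy as np

def ilr(x):
    """Isometric log-ratio coordinates of a composition x
    (strictly positive entries summing to 1), using a
    Helmert-style orthonormal contrast basis."""
    x = np.asarray(x, dtype=float)
    D = x.size
    clr = np.log(x) - np.log(x).mean()   # centered log-ratio (zero-sum)
    # Orthonormal basis of the zero-sum hyperplane: row i has
    # i+1 ones followed by -(i+1), then zeros, normalized to unit length.
    V = np.zeros((D - 1, D))
    for i in range(D - 1):
        V[i, : i + 1] = 1.0
        V[i, i + 1] = -(i + 1)
        V[i] /= np.linalg.norm(V[i])
    return V @ clr                       # D-1 ILR coordinates
```

Because the basis is orthonormal, Euclidean distances between ILR vectors equal Aitchison distances between the underlying compositions, which is the isometry property the abstract relies on.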
- TACENR: Task-Agnostic Contrastive Explanations for Node Representations
  TACENR introduces a contrastive-learning method that identifies the most influential attribute, proximity, and structural features in node representations in a task-agnostic manner.