pith. machine review for the scientific record.

Aitchison Embeddings for Learning Compositional Graph Representations

1 Pith paper cites this work. Polarity classification is still indexing.

abstract

Representation learning is central to graph machine learning, powering tasks such as link prediction and node classification. However, most graph embeddings are hard to interpret, offering limited insight into how learned features relate to graph structure. Many networks naturally admit a role-mixture view, where nodes are best described as mixtures over latent archetypal factors. Motivated by this structure, we propose a compositional graph embedding framework grounded in Aitchison geometry, the canonical geometry for comparing mixtures. Nodes are represented as simplex-valued compositions and embedded via isometric log-ratio (ILR) coordinates, which preserve Aitchison distances while enabling unconstrained optimization in Euclidean space. This yields intrinsically interpretable embeddings whose geometry reflects relative trade-offs among archetypes and supports coherent behavior under component restriction; we consider both fixed and learnable ILR bases. Across node classification and link prediction, our method achieves competitive performance with strong baselines while providing explainability by construction rather than post-hoc. Finally, subcompositional coherence enables principled component restriction: removing and renormalizing subsets preserves a well-defined geometry, which we exploit via subcompositional dimensionality removal to probe how archetype groups influence representations and predictions.
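The core construction in the abstract — representing each node as a point on the simplex and mapping it to unconstrained Euclidean coordinates via the isometric log-ratio (ILR) transform — can be sketched in a few lines. This is a minimal illustration of the fixed-basis case using a standard Helmert-style balance basis; the function names and basis choice are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def ilr_basis(d):
    """Helmert-style orthonormal basis spanning the zero-sum hyperplane.

    Returns a (d, d-1) matrix V with orthonormal, zero-sum columns,
    so ilr(x) = clr(x) @ V has d-1 unconstrained coordinates.
    """
    v = np.zeros((d, d - 1))
    for j in range(d - 1):
        v[: j + 1, j] = 1.0 / (j + 1)   # equal weight on the first j+1 parts
        v[j + 1, j] = -1.0              # balanced against part j+2
        v[:, j] *= np.sqrt((j + 1) / (j + 2))  # normalize to unit length
    return v

def ilr(x, basis=None):
    """ILR coordinates of a composition (or batch of compositions)."""
    x = np.asarray(x, dtype=float)
    if basis is None:
        basis = ilr_basis(x.shape[-1])
    # Centered log-ratio: log parts minus their mean (lies in the zero-sum plane).
    clr = np.log(x) - np.log(x).mean(axis=-1, keepdims=True)
    return clr @ basis
```

Because the basis is orthonormal on the zero-sum hyperplane, Euclidean distances between ILR coordinates equal Aitchison distances between the underlying compositions — which is exactly what lets gradient-based training proceed in ordinary Euclidean space while preserving the simplex geometry. A learnable basis, as the abstract mentions, would parameterize `V` subject to the same orthonormality and zero-sum constraints.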

fields

cs.LG 1

years

2026 1

verdicts

UNVERDICTED 1

representative citing papers

Rank Is Not Capacity: Spectral Occupancy for Latent Graph Models

cs.LG · 2026-05-11 · unverdicted · novelty 7.0

Spectra defines and controls effective capacity in graph embeddings via the Shannon effective rank of a trace-normalized kernel spectrum, making capacity a post-fit property rather than a pre-training hyperparameter.
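The quantity the citing paper's summary describes — the Shannon effective rank of a trace-normalized kernel spectrum — has a compact standard form: normalize the eigenvalues to sum to one, then exponentiate their Shannon entropy. A minimal sketch under that standard definition (the function name is illustrative, not taken from the cited work):

```python
import numpy as np

def effective_rank(K):
    """Shannon effective rank of a PSD kernel matrix.

    Trace-normalizes the spectrum into a probability vector p,
    then returns exp(H(p)); ranges from 1 (rank-one energy)
    up to n (perfectly flat spectrum).
    """
    eig = np.linalg.eigvalsh(K)
    eig = np.clip(eig, 0.0, None)        # guard tiny negative round-off
    p = eig / eig.sum()                  # trace normalization
    p = p[p > 0]                         # 0 * log(0) := 0
    return float(np.exp(-(p * np.log(p)).sum()))
```

For the identity kernel on n points the spectrum is flat and the effective rank is exactly n; for a rank-one kernel it is 1, regardless of the matrix's nominal rank. This is what makes it a post-fit, continuous measure of capacity rather than a pre-training hyperparameter.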

citing papers explorer

Showing 1 of 1 citing paper.

  • Rank Is Not Capacity: Spectral Occupancy for Latent Graph Models — cs.LG · 2026-05-11 · unverdicted · ref 42