2 Pith papers cite this work.
Citing papers explorer
- From Mechanistic to Compositional Interpretability: defines explanations as commuting syntactic-semantic mapping pairs grounded in compositionality and minimum description length, with compressive refinement and a parsimony theorem guaranteeing concise, human-aligned decompositions.
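The core idea of a "commuting syntactic-semantic mapping pair" can be illustrated with a toy sketch: interpreting a syntactic composition should give the same result as semantically composing the interpretations of the parts. This is only an illustrative homomorphism check under invented names (`Lit`, `Add`, `interp`), not the paper's formal construction.

```python
# Toy sketch of a commuting syntactic-semantic pair (names invented here,
# not taken from the paper): interpretation of a composite term equals the
# semantic composition of the interpretations, i.e. the square commutes.
from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

def interp(expr):
    """Semantic map: syntax trees -> integers."""
    if isinstance(expr, Lit):
        return expr.value
    return interp(expr.left) + interp(expr.right)

def commutes(a, b):
    """Check commutation on one pair of subterms:
    interp(syntactic composition) == semantic composition of interps."""
    return interp(Add(a, b)) == interp(a) + interp(b)

print(commutes(Lit(2), Add(Lit(3), Lit(4))))  # prints True
```

Because `interp` is defined compositionally, the check holds for every pair of subterms; a non-compositional interpretation (one that inspected the whole tree at once) could fail it.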
- Scalable Gaussian process inference via neural feature maps: neural feature maps create expressive kernels that enable fast, scalable, and consistent exact Gaussian process inference for regression and classification.
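Why a feature-map kernel makes exact GP inference scalable can be sketched in a few lines: with a finite feature map phi, the kernel k(x, x') = phi(x) . phi(x') makes the exact GP posterior computable in weight space at O(n m^2) cost for m features, instead of O(n^3) in function space. The sketch below uses a fixed random ReLU map as a stand-in for a learned neural feature map; it is an assumption-laden illustration, not the cited paper's method.

```python
# Hedged sketch: exact GP regression under k(x, x') = phi(x) . phi(x'),
# computed via the weight-space view (Bayesian linear regression on features).
# phi here is a fixed random ReLU map standing in for a trained neural net.
import numpy as np

rng = np.random.default_rng(0)

m = 50                       # number of features
W = rng.normal(size=(1, m))  # random first-layer weights (illustrative)
b = rng.normal(size=m)

def phi(X):
    """Feature map: ReLU(X W + b), shape (n, m)."""
    return np.maximum(X @ W + b, 0.0)

def gp_posterior_mean(X, y, X_star, noise=0.1):
    """Exact GP posterior mean for the feature-map kernel, computed in
    weight space: phi(X*) (Phi^T Phi + s^2 I)^-1 Phi^T y."""
    P = phi(X)
    A = P.T @ P + noise**2 * np.eye(m)   # m x m system, cheap when m << n
    w = np.linalg.solve(A, P.T @ y)
    return phi(X_star) @ w

X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=40)
print(gp_posterior_mean(X, y, np.array([[0.0]])))
```

By the push-through identity (Phi^T Phi + s^2 I)^-1 Phi^T = Phi^T (Phi Phi^T + s^2 I)^-1, this weight-space mean agrees exactly with the usual function-space GP mean k_*(K + s^2 I)^-1 y, which is what "exact" inference means here.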