Interpretable Machine Learning for Spatial Science: A Lie-Algebraic Kernel for Rotationally Anisotropic Gaussian Processes
A Lie-algebraic kernel reparameterizes 3D rotationally anisotropic Gaussian processes with explicit principal length-scales and SO(3) orientations, matching the flexibility of a full SPD metric while improving interpretability over axis-aligned ARD.
11 Pith papers cite this work.
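Below is a minimal sketch of the kernel construction the summary describes, under assumed notation (not the paper's code): the SPD metric is Sigma = R diag(l^2) R^T, with the rotation R obtained from an so(3) vector via Rodrigues' formula, so the per-axis length-scales and the orientation stay explicit, interpretable parameters.

```python
# Minimal sketch (assumed form, not the paper's implementation) of a
# rotationally anisotropic RBF kernel with a Lie-algebraic parameterization:
# Sigma = R @ diag(l**2) @ R.T, where R = exp(hat(omega)) for omega in so(3).
import numpy as np

def hat(omega):
    """Map an so(3) vector to its 3x3 skew-symmetric matrix."""
    wx, wy, wz = omega
    return np.array([[0., -wz,  wy],
                     [wz,  0., -wx],
                     [-wy, wx,  0.]])

def rotation(omega):
    """Rodrigues' formula: matrix exponential of a skew-symmetric matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(omega) / theta
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def aniso_rbf(X1, X2, lengthscales, omega):
    """RBF kernel with metric Sigma = R diag(l^2) R^T (hypothetical names)."""
    R = rotation(np.asarray(omega, float))
    Sigma_inv = R @ np.diag(1.0 / np.asarray(lengthscales, float) ** 2) @ R.T
    d = X1[:, None, :] - X2[None, :, :]            # pairwise differences
    sq = np.einsum('ijk,kl,ijl->ij', d, Sigma_inv, d)
    return np.exp(-0.5 * sq)

X = np.random.default_rng(0).normal(size=(5, 3))
K = aniso_rbf(X, X, lengthscales=[1.0, 0.5, 2.0], omega=[0.0, 0.0, np.pi / 6])
print(K.shape, np.allclose(K, K.T))                # (5, 5) True
```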
citing papers explorer
-
ProtoSSL: Interpretable Prototype Learning from Unlabeled Time-Series Data
ProtoSSL discovers generalizable prototypes from unlabeled time-series via self-supervision and assigns them to new tasks for interpretable predictions, outperforming supervised baselines in low-data regimes on ECG datasets.
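A hedged stand-in for the recipe (generic nearest-prototype classification on synthetic embeddings, not ProtoSSL's training procedure; all names and data are illustrative):

```python
# Sketch: cluster self-supervised embeddings into prototypes, label each
# prototype from a few labeled examples, then predict by nearest prototype.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
Z_unlabeled = rng.normal(size=(500, 16))           # stand-in for SSL embeddings
protos = KMeans(n_clusters=8, n_init=10, random_state=0).fit(Z_unlabeled)

def assign(Z):
    """Index of the nearest prototype for each embedding."""
    d = np.linalg.norm(Z[:, None, :] - protos.cluster_centers_[None], axis=-1)
    return d.argmin(axis=1)

# label prototypes by majority vote from a small labeled set (low-data regime)
Z_few, y_few = rng.normal(size=(20, 16)), rng.integers(0, 2, size=20)
proto_label = np.full(8, -1)                       # -1 marks unlabeled prototypes
for p in range(8):
    hits = y_few[assign(Z_few) == p]
    if hits.size:
        proto_label[p] = np.bincount(hits).argmax()

Z_test = rng.normal(size=(5, 16))
print(proto_label[assign(Z_test)])                 # each prediction names a prototype
```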
-
Measuring Faithfulness in Chain-of-Thought Reasoning
Chain-of-Thought reasoning in LLMs is often unfaithful: models' reliance on their stated reasoning varies by task and decreases as models grow larger.
-
Correcting Influence: Unboxing LLM Outputs with Orthogonal Latent Spaces
A latent mediation framework with sparse autoencoders enables non-additive token-level influence attribution in LLMs by learning orthogonal features and back-propagating attributions.
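A rough sketch of the ingredients under an assumed architecture (not the paper's code): a sparse autoencoder whose decoder columns are pushed toward orthogonality, with token-level influence scores obtained by back-propagating a latent feature to the token activations.

```python
# Sketch: sparse autoencoder with an orthogonality penalty, then gradient-based
# token attribution through one learned latent feature. All sizes are invented.
import torch

d_model, d_latent, n_tokens = 32, 64, 6
enc = torch.nn.Linear(d_model, d_latent)
dec = torch.nn.Linear(d_latent, d_model, bias=False)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

acts = torch.randn(256, d_model)                   # stand-in residual-stream activations
for _ in range(200):
    z = torch.relu(enc(acts))
    recon = dec(z)
    gram = dec.weight.T @ dec.weight               # penalize off-diagonal overlap
    ortho = ((gram - torch.diag(torch.diagonal(gram))) ** 2).mean()
    loss = ((recon - acts) ** 2).mean() + 1e-3 * z.abs().mean() + 1e-2 * ortho
    opt.zero_grad(); loss.backward(); opt.step()

# attribution: gradient of one latent feature w.r.t. per-token activations
tokens = torch.randn(n_tokens, d_model, requires_grad=True)
feature = torch.relu(enc(tokens))[:, 0].sum()      # feature 0 across the sequence
feature.backward()
print(tokens.grad.norm(dim=-1))                    # per-token influence scores
```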
-
How Researchers Navigate Accountability, Transparency, and Trust When Using AI Tools in Early-Stage Research: A Think-Aloud Study
A think-aloud study reveals that AI tools in early research misrepresent uncertainty, obscure provenance, and create fragile trust, leading researchers to develop compensatory strategies to preserve scholarly judgment.
-
Interpretable Quantile Regression by Optimal Decision Trees
A novel algorithm learns sets of optimal quantile regression trees to predict full conditional distributions interpretably and efficiently.
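The paper's trees are learned to optimality; as a hedged illustration of the objective only, the sketch below fits greedy tree-based quantile models with the same pinball loss (a stand-in, not the proposed algorithm):

```python
# Sketch: a set of quantile models approximates the full conditional
# distribution; the pinball loss is the criterion optimal quantile trees minimize.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def pinball(y, q_pred, tau):
    """Quantile (pinball) loss for the tau-th conditional quantile."""
    r = y - q_pred
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3 + 0.2 * np.abs(X[:, 0]))

for tau in (0.1, 0.5, 0.9):                        # a set of quantiles per input
    m = GradientBoostingRegressor(loss='quantile', alpha=tau,
                                  max_depth=2, random_state=0).fit(X, y)
    print(tau, round(pinball(y, m.predict(X), tau), 3))
```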
-
From Large Language Model Predicates to Logic Tensor Networks: Neurosymbolic Offer Validation in Regulated Procurement
A neurosymbolic pipeline extracts predicates from offer texts with an LLM and validates them via Logic Tensor Networks, delivering performance comparable to standard models while adding built-in interpretability, evaluated on a real procurement corpus.
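A toy illustration of the validation idea (generic fuzzy-logic operators in the Logic Tensor Networks spirit, not the paper's pipeline; predicate names and truth values are invented):

```python
# Sketch: predicate truth values in [0, 1], as an LLM might assign them, are
# combined with a product t-norm and a fuzzy implication into a rule score.
def t_and(a, b):
    """Product t-norm for fuzzy conjunction."""
    return a * b

def implies(a, b):
    """Reichenbach fuzzy implication."""
    return 1.0 - a + a * b

# hypothetical truth values for predicates extracted from one offer text
has_certification = 0.92
meets_deadline = 0.75
is_compliant = 0.60

# rule: certification AND deadline -> compliant
satisfaction = implies(t_and(has_certification, meets_deadline), is_compliant)
print(f"rule satisfaction: {satisfaction:.2f}")    # interpretable per-rule score
```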
-
Robust and Explainable Divide-and-Conquer Learning for Intrusion Detection
A divide-and-conquer method decomposes network intrusion detection into focused subtasks, allowing lightweight models to achieve up to 43.3% higher local accuracy at up to 257x smaller model size while improving robustness and explainability.
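A hedged sketch of the decomposition idea on synthetic data (not the paper's models or datasets): one lightweight specialist per attack type, with alerts aggregated by a simple gate.

```python
# Sketch: split multi-class detection into per-attack binary subtasks, each
# handled by a small model; an alert names the responsible specialist.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 4, size=1000)                  # 0 = benign, 1..3 = attack types

specialists = {}
for attack in (1, 2, 3):                           # one tiny model per subtask
    mask = (y == 0) | (y == attack)
    specialists[attack] = LogisticRegression(max_iter=200).fit(
        X[mask], (y[mask] == attack).astype(int))

def detect(x):
    """Aggregate specialist scores; return the alerting subtask or 0 (benign)."""
    scores = {a: m.predict_proba(x.reshape(1, -1))[0, 1]
              for a, m in specialists.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.5 else 0

print(detect(X[0]), detect(X[1]))
```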
-
LLMs Should Not Yet Be Credited with Decision Explanation
LLMs support decision prediction and rationale generation, but there is not yet evidence that they perform genuine decision explanation; stricter standards are needed to avoid over-crediting them.
-
Human Agency, Causality, and the Human Computer Interface in High-Stakes Artificial Intelligence
The paper proposes a Causal-Agency Framework to restore human causal control at AI interfaces by combining causal models, uncertainty quantification, and human-centered evaluation.
-
Toward Aristotelian Medical Representations: Backpropagation-Free Layer-wise Analysis for Interpretable Generalized Metric Learning on MedMNIST
A-ROM delivers competitive MedMNIST performance via pretrained ViT metric spaces, a concept dictionary, and kNN without backpropagation or fine-tuning, framed as interpretable few-shot learning under the Platonic Representation Hypothesis.
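A minimal sketch of the backpropagation-free recipe (random vectors stand in for frozen ViT embeddings; not A-ROM's concept dictionary):

```python
# Sketch: freeze a pretrained encoder, embed images, classify with kNN in the
# resulting metric space; no gradients or fine-tuning are involved.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(50, 384))           # stand-in for frozen ViT embeddings
train_labels = rng.integers(0, 5, size=50)         # few-shot: ~10 examples per class

knn = KNeighborsClassifier(n_neighbors=3, metric='cosine').fit(train_feats, train_labels)
test_feats = rng.normal(size=(4, 384))
print(knn.predict(test_feats))                     # predictions trace to nearest exemplars
```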