pith. machine review for the scientific record.

arxiv: 1802.07814 · v2 · submitted 2018-02-21 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation

Authors on Pith no claims yet
classification 💻 cs.LG · cs.AI · stat.ML
keywords model · feature · features · given · information · interpretation · learning · method
original abstract

We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.
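The abstract's selection criterion, picking the features that carry the most mutual information about the response, can be illustrated with a toy plug-in estimator. This is only a sketch of the criterion itself, not the paper's learned selector or its variational approximation; the dataset and the `mutual_information` helper are hypothetical constructions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal((n, 3))
# Toy "model to be explained": its response depends only on feature 0.
y = (X[:, 0] > 0).astype(int)

def mutual_information(feature, labels, bins=10):
    """Plug-in estimate of I(feature; labels) from a 2-D histogram (in nats)."""
    joint, _, _ = np.histogram2d(feature, labels, bins=(bins, 2))
    p_xy = joint / joint.sum()            # joint distribution p(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True) # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True) # marginal p(y)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# Scoring each feature by its mutual information with the response
# recovers the informative one.
scores = [mutual_information(X[:, j], y) for j in range(3)]
best = int(np.argmax(scores))  # feature 0
```

The paper replaces this exhaustive per-feature scoring with a trained selector network and a variational lower bound on the mutual information, which is what makes the approach tractable for subsets of features and instance-wise explanations.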

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A New Technique for AI Explainability using Feature Association Map

    cs.LG 2026-05 unverdicted novelty 4.0

    FAMeX introduces a graph-theoretic Feature Association Map to explain feature importance in AI classification models and outperforms PFI and SHAP on eight benchmarks.

  2. A New Technique for AI Explainability using Feature Association Map

    cs.LG 2026-05 unverdicted novelty 4.0

    FAMeX creates a graph of feature associations to explain AI classification decisions and outperforms SHAP and permutation feature importance on eight benchmark datasets.

  3. A New Technique for AI Explainability using Feature Association Map

    cs.LG 2026-05 unverdicted novelty 3.0

    FAMeX is a graph-based XAI algorithm claimed to outperform PFI and SHAP at gauging feature importance for classification.