pith. machine review for the scientific record.

arxiv: 1706.09773 · v4 · submitted 2017-06-29 · 💻 cs.LG · cs.CY · stat.ML

Recognition: unknown

Interpretability via Model Extraction

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CY · stat.ML
keywords model · learning · complex · extraction · machine · approach · interpretable · models
original abstract

The ability to interpret machine learning models has become increasingly important now that machine learning is used to inform consequential decisions. We propose an approach called model extraction for interpreting complex, black-box models. Our approach approximates the complex model using a much more interpretable model; as long as the approximation quality is good, statistical properties of the complex model are reflected in the interpretable model. We show how model extraction can be used to understand and debug random forests and neural nets trained on several datasets from the UCI Machine Learning Repository, as well as control policies learned for several classical reinforcement learning problems.
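To make the idea concrete, below is a minimal sketch of the surrogate-model pattern the abstract describes: fit a shallow decision tree to the predictions of a black-box random forest, then measure fidelity (agreement with the black box) on held-out data. The dataset, model choices, and depth limit here are illustrative assumptions, not the paper's exact extraction procedure.

```python
# Minimal model-extraction sketch: approximate a black-box model with an
# interpretable surrogate. Dataset and hyperparameters are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The complex, black-box model to be interpreted.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Extraction step: train a small tree to mimic the black box's predicted
# labels rather than the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out
# data. High fidelity is the premise under which the surrogate's structure
# reflects the black box's behavior.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate))
```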

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Debunking Grad-ECLIP: A Comprehensive Study on Its Incorrectness and Fundamental Principles for Model Interpretation

    cs.CV · 2026-05 · unverdicted · novelty 4.0

    Grad-ECLIP is presented as an equivalent but flawed variant of attention-based interpretation, and two principles are proposed to ensure that model explanations reflect the original model.

  2. Explainable Human Activity Recognition: A Unified Review of Concepts and Mechanisms

    cs.LG · 2026-04 · unverdicted · novelty 4.0

    The paper delivers a mechanism-centric taxonomy and unified perspective on explainable human activity recognition methods across sensing modalities.