pith. machine review for the scientific record.

arxiv: 1703.03717 · v2 · submitted 2017-03-10 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · stat.ML
keywords: models, explanations, right, training, when, conditions, datasets, decision
Original abstract

Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test. Recent work on explanations for black-box models has produced tools (e.g. LIME) to show the implicit rules behind predictions, which can help us identify when models are right for the wrong reasons. However, these methods do not scale to explaining entire datasets and cannot correct the problems they reveal. We introduce a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary. We apply these penalties both based on expert annotation and in an unsupervised fashion that encourages diverse models with qualitatively different decision boundaries for the same classification problem. On multiple datasets, we show our approach generates faithful explanations and models that generalize much better when conditions differ between training and test.
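The core idea in the abstract — penalizing input gradients on features an expert has marked as irrelevant — can be illustrated on a toy logistic-regression problem. The sketch below is an assumption-laden simplification, not the paper's implementation: it uses a hypothetical two-feature dataset where feature 1 is a spurious duplicate of the causal feature 0, an analytic input gradient (for logistic regression the gradient of the summed log-probabilities is `(1 - 2p) * w`), and finite-difference parameter gradients in place of backpropagation.

```python
import numpy as np

# Toy data (hypothetical): feature 0 is the causal signal; feature 1 is a
# spurious duplicate that agrees with the label in training.
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=200)
X = np.stack([s, s], axis=1)          # both columns identical
y = (s > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rrr_loss(params, A, lam):
    """Cross-entropy + a "right reasons"-style penalty on masked input
    gradients. A is the expert annotation mask (1 = feature should NOT
    be relied on). For logistic regression, d/dx of sum_k log p_k is
    analytic: (1 - 2p) * w."""
    w, b = params[:2], params[2]
    p = sigmoid(X @ w + b)
    eps = 1e-9
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    input_grad = (1.0 - 2.0 * p)[:, None] * w[None, :]
    penalty = lam * np.mean(np.sum(A * input_grad**2, axis=1))
    return ce + penalty

def num_grad(f, params, h=1e-5):
    # Central finite differences stand in for autodiff in this sketch.
    g = np.zeros_like(params)
    for i in range(len(params)):
        up, dn = params.copy(), params.copy()
        up[i] += h
        dn[i] -= h
        g[i] = (f(up) - f(dn)) / (2 * h)
    return g

def train(A, lam, lr=0.1, steps=300):
    params = np.zeros(3)              # w0, w1, bias
    for _ in range(steps):
        params -= lr * num_grad(lambda p: rrr_loss(p, A, lam), params)
    return params

plain = train(A=np.array([0.0, 0.0]), lam=0.0)   # unconstrained baseline
rrr = train(A=np.array([0.0, 1.0]), lam=1.0)     # "don't use feature 1"
```

Under these assumptions, unconstrained training splits weight evenly across the two identical columns, while the penalized model shifts its reliance onto the unmasked feature — the behavior the abstract describes for expert-annotated penalties.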

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Mitigating Shortcut Learning via Feature Disentanglement in Medical Imaging: A Benchmark Study

    cs.CV 2026-02 unverdicted novelty 6.0

    Benchmark shows that combining data rebalancing with feature disentanglement mitigates shortcut learning more effectively than rebalancing alone in medical imaging models.

  2. Shortcut Mitigation via Spurious-Positive Samples

    cs.LG 2026-05 unverdicted novelty 5.0

    A method uses spurious-positive samples to identify and regularize neurons that rely on spurious features, improving model robustness without extra annotations or balanced data.