pith. machine review for the scientific record.

arxiv: 1902.08649 · v3 · submitted 2019-02-22 · 💻 cs.CL · cs.AI · cs.LG

Recognition: unknown

Saliency Learning: Teaching the Model Where to Pay Attention

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.AI · cs.LG
keywords model · explanation · models · predictions · deep · however · learning · reliability
original abstract

Deep learning has emerged as a compelling solution to many NLP tasks with remarkable performance. However, due to their opacity, such models are hard to interpret and trust. Recent work on explaining deep models has introduced approaches that provide insights into a model's behaviour and predictions, which are helpful for assessing the reliability of those predictions. However, such methods do not improve the model's reliability itself. In this paper, we aim to teach the model to make the right prediction for the right reason by providing explanation training and ensuring the alignment of the model's explanation with the ground-truth explanation. Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results than traditionally trained models.
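The abstract does not spell out the training objective, but the idea of aligning a model's explanation with a ground-truth explanation can be made concrete. Below is a minimal PyTorch sketch of one common form of such an objective: a hinge penalty on input-gradient saliency over annotated tokens. The interface (model.embed, model.classify, rationale_mask, lam) is assumed for illustration and is not taken from the paper.

import torch
import torch.nn.functional as F

def saliency_aligned_loss(model, token_ids, labels, rationale_mask, lam=1.0):
    # Hypothetical interface: model.embed maps token ids to embeddings and
    # model.classify maps embeddings to class logits (assumed names).
    embeds = model.embed(token_ids)
    logits = model.classify(embeds)
    task_loss = F.cross_entropy(logits, labels)

    # Input-gradient saliency: gradient of the task loss w.r.t. the
    # embeddings, reduced to one score per token. create_graph=True keeps
    # the penalty differentiable so it can itself be trained through.
    grads = torch.autograd.grad(task_loss, embeds, create_graph=True)[0]
    saliency = grads.sum(dim=-1)  # shape (batch, seq_len)

    # Hinge penalty: tokens the ground-truth explanation marks as relevant
    # (rationale_mask == 1) should not receive negative saliency.
    penalty = (F.relu(-saliency) * rationale_mask).sum()
    penalty = penalty / rationale_mask.sum().clamp(min=1.0)

    return task_loss + lam * penalty

Used in place of a plain cross-entropy during training, lam trades off task accuracy against agreement with the annotated rationale.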

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. SaliencyDecor: Enhancing Neural Network Interpretability through Feature Decorrelation

    cs.CV · 2026-04 · unverdicted · novelty 5.0

    Enforcing feature decorrelation during training produces sharper saliency maps and higher accuracy on image classification benchmarks.
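
The entry above gives no implementation details; as a generic illustration only, a feature-decorrelation penalty is usually implemented as the squared off-diagonal mass of the batch covariance of the features. The sketch below (PyTorch, with an assumed (batch, dim) feature tensor) shows that standard form and should not be read as the cited paper's exact method.

import torch

def decorrelation_penalty(features: torch.Tensor) -> torch.Tensor:
    # Generic sketch: penalize off-diagonal entries of the batch covariance
    # so that feature dimensions become decorrelated during training.
    z = features - features.mean(dim=0, keepdim=True)  # center each dimension
    cov = (z.T @ z) / max(features.shape[0] - 1, 1)    # (dim, dim) covariance
    off_diag = cov - torch.diag(torch.diag(cov))       # keep only off-diagonal
    return (off_diag ** 2).sum()

Added to the task loss with a small weight, this term pushes feature dimensions toward independence, which is one route to the sharper saliency maps the summary describes.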