pith. machine review for the scientific record

arxiv: 1905.04172 · v1 · submitted 2019-05-10 · 📊 stat.ML · cs.CV · cs.LG

Recognition: unknown

On the Connection Between Adversarial Robustness and Saliency Map Interpretability

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.CV · cs.LG
keywords adversarial · models · saliency · alignment · connection · networks · neural · trained
original abstract

Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behavior by considering the alignment between the input image and its saliency map. We hypothesize that as the distance to the decision boundary grows, so does the alignment. This connection is strictly true in the case of linear models. We confirm these theoretical findings with experiments based on models trained with a local Lipschitz regularization and identify where the non-linear nature of neural networks weakens the relation.
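To make the alignment quantity concrete, here is a minimal PyTorch sketch. It assumes the saliency map is the input gradient of the target-class logit and measures alignment as the absolute cosine similarity between the flattened image and that gradient; the paper's exact normalization may differ, and `saliency_alignment` is a hypothetical helper for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

def saliency_alignment(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Alignment between an input image and its gradient saliency map.

    Saliency here is the gradient of the target-class logit w.r.t. the
    input; alignment is the absolute cosine similarity between the
    flattened image and that gradient (an assumed metric -- the paper's
    normalization may differ).
    """
    x = x.clone().detach().requires_grad_(True)
    logit = model(x)[0, target_class]
    logit.backward()
    g = x.grad.flatten()
    v = x.detach().flatten()
    # |<x, g>| / (||x|| * ||g||): near 1 when the saliency map points
    # along the image itself, i.e. looks visually "interpretable".
    return (v @ g).abs() / (v.norm() * g.norm() + 1e-12)

# Linear sanity check: for f(x) = Wx the saliency map of class c is the
# fixed row W[c], so alignment depends only on the angle between x and
# W[c] -- the regime in which the claimed relation holds exactly.
model = nn.Linear(784, 10, bias=False)
x = torch.randn(1, 784)
print(float(saliency_alignment(model, x, target_class=3)))
```

In a linear model the saliency map is a constant vector per class, which is why the connection between distance to the decision boundary and alignment is exact there; in deep networks the gradient varies with the input, which is where the abstract expects the relation to weaken.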

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Representation learning from OCT images

cs.CV · 2026-05 · unverdicted · novelty 3.0

    A structured survey of representation learning methods for retinal OCT image analysis, covering supervised, self-supervised, generative, multimodal, and foundation model approaches along with datasets and open problems.