pith. machine review for the scientific record.

arxiv: 1802.05666 · v2 · submitted 2018-02-15 · 💻 cs.LG · cs.CR · stat.ML

Recognition: unknown

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks

Authors on Pith no claims yet
classification 💻 cs.LG · cs.CR · stat.ML
keywords adversarial · models · risk · attacks · defenses · develop · evaluating · objective
read the original abstract

This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate 'adversarial risk' as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as 'obscurity to an adversary,' and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
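The abstract's key empirical move is repurposing gradient-free optimization as an adversarial attack, so that "obscured" models which merely break gradient-based surrogates still fail. A minimal sketch of that idea, using a hypothetical toy linear classifier and plain random search inside an L-infinity ball (names, model, and budget are illustrative assumptions, not the paper's actual attack):

```python
import numpy as np

# Hypothetical toy setup: a 2-class linear classifier on 2-D inputs.
rng = np.random.default_rng(0)
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def predict(x):
    """Predicted class label."""
    return int(np.argmax(W @ x))

def margin_loss(x, y):
    """Wrong-class logit minus true-class logit; > 0 means misclassified."""
    logits = W @ x
    return float(logits[1 - y] - logits[y])

def random_search_attack(x, y, eps=1.5, steps=200, rng=rng):
    """Gradient-free attack: sample candidates uniformly in the
    L-inf ball of radius eps around x and keep the best one.
    Uses no gradients, so gradient masking offers no protection."""
    best, best_loss = x.copy(), margin_loss(x, y)
    for _ in range(steps):
        cand = x + rng.uniform(-eps, eps, size=x.shape)  # stays in the ball
        loss = margin_loss(cand, y)
        if loss > best_loss:
            best, best_loss = cand, loss
    return best
```

Random search is the crudest member of the family; the same loop structure accommodates stronger gradient-free optimizers (e.g. SPSA or evolutionary strategies) by changing only how candidates are proposed.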

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. NeuroTrace: Inference Provenance-Based Detection of Adversarial Examples

    cs.CR 2026-04 unverdicted novelty 5.0

The NeuroTrace framework builds heterogeneous graphs of inference provenance to detect adversarial examples in DNNs, showing strong transferable performance across attack families in the vision and malware domains.