pith. machine review for the scientific record.

arXiv: 1707.08945 · v5 · submitted 2017-07-27 · cs.CR · cs.LG

Recognition: unknown

Robust Physical-World Attacks on Deep Learning Models

Authors on Pith: no claims yet
classification: cs.CR · cs.LG
keywords: adversarial, physical, examples, robust, perturbations, sign, attack, conditions
Abstract

Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.
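At its core, the attack the abstract describes optimizes a masked sticker perturbation so that it pushes the classifier toward a chosen target label across many photos of the same sign taken under varied physical conditions. The sketch below illustrates that idea in PyTorch; it is a minimal approximation, not the authors' released code. The classifier `f`, the image batch `xs`, the sticker mask `mask`, the target label, and the regularization weight are all hypothetical placeholders, the expectation over physical conditions is approximated by averaging the loss over the batch, and the paper's printability term is omitted.

```python
# Hedged sketch of an RP2-style masked perturbation attack (PyTorch).
# All names below (f, xs, mask, y_target, lam) are illustrative
# placeholders, not identifiers from the paper or its codebase.
import torch
import torch.nn.functional as F

def rp2_attack(f, xs, mask, y_target, lam=1e-3, steps=500, lr=0.1):
    """Optimize a sticker perturbation, confined to `mask`, that drives
    classifier `f` toward `y_target` across all captured views `xs`."""
    # One shared perturbation, applied identically to every photo.
    delta = torch.zeros_like(xs[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    y = torch.full((xs.shape[0],), y_target, dtype=torch.long)
    for _ in range(steps):
        # Averaging the targeted loss over photos taken under different
        # distances, angles, and lighting stands in for the paper's
        # expectation over physical conditions.
        adv = torch.clamp(xs + mask * delta, 0.0, 1.0)
        loss = F.cross_entropy(f(adv), y) + lam * (mask * delta).norm(p=2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()
```

In the paper's two-stage methodology, a perturbation like this would then be printed as physical stickers and evaluated first on stationary lab images and then on video frames captured from a moving vehicle.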

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

    cs.CR 2017-12 unverdicted novelty 7.0

Injecting around 50 poisoned samples with a stealthy trigger creates backdoors in deep learning models, achieving over 90% attack success under a weak threat model that requires no knowledge of the model or training data.

  2. AVISE: Framework for Evaluating the Security of AI Systems

    cs.CR 2026-04 unverdicted novelty 6.0

    AVISE provides a new framework and automated SET that identifies jailbreak vulnerabilities in language models with 92% accuracy, finding all nine tested models vulnerable to an augmented Red Queen attack.

  3. Street-Legal Physical-World Adversarial Rim for License Plates

    cs.CV 2026-04 conditional novelty 6.0

    SPAR is a street-legal physical rim that cuts modern ALPR accuracy by 60% and reaches 18% targeted impersonation while costing under $100 and requiring no plate modification.

  4. Memory Efficient Full-gradient Attacks (MEFA) Framework for Adversarial Defense Evaluations

    cs.LG 2026-05 unverdicted novelty 5.0

    MEFA enables exact full-gradient white-box attacks on iterative stochastic purification defenses like diffusion and Langevin EBMs by trading recomputation for lower memory, revealing vulnerabilities missed by approxim...