7 Pith papers cite this work.
2026: 7 representative citing papers
Citing papers
-
TRAP: Tail-aware Ranking Attack for World-Model Planning
TRAP is a tail-aware ranking attack that plants a backdoor in world models: on a trigger input, the model reorders a few critical imagined trajectories and redirects planning, while preserving normal behavior on clean inputs.
-
Transferable Physical-World Adversarial Patches Against Object Detection in Autonomous Driving
AdvAD produces physical-world adversarial patches with improved transferability to unseen object detectors through multi-model optimization, adaptive balancing, and robustness to physical variations.
-
Transferable Physical-World Adversarial Patches Against Pedestrian Detection Models
TriPatch generates transferable physical adversarial patches via a multi-stage triplet loss, appearance consistency, and data augmentation, achieving higher attack success rates on pedestrian detectors than prior methods.
-
Street-Legal Physical-World Adversarial Rim for License Plates
SPAR is a street-legal physical rim that reduces modern ALPR accuracy by 60% and achieves 18% targeted impersonation, while costing under $100 and requiring no modification of the plate itself.
-
Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis
Adversarial patches transfer across three VLM architectures in autonomous driving scenarios with 73-91% success rates and affect 65-79% of critical decision frames even without target-specific optimization.
-
RACF: A Resilient Autonomous Car Framework with Object Distance Correction
RACF corrects inconsistent depth camera distance estimates in autonomous vehicles using LiDAR and kinematic redundancy, achieving up to 35% RMSE reduction and better braking in tests on a Quanser QCar 2 platform.
-
Physical Adversarial Attacks on AI Surveillance Systems: Detection, Tracking, and Visible-Infrared Evasion
The paper organizes the existing physical adversarial attack literature into a surveillance-oriented taxonomy emphasizing temporal persistence, multi-modal sensing, carrier realism, and system-level objectives, concluding that robustness requires system-level evaluation over time and across sensors.