pith. machine review for the scientific record.

arxiv: 2604.04349 · v1 · submitted 2026-04-06 · 💻 cs.RO · cs.LG

Recognition: 2 theorem links


Adversarial Robustness Analysis of Cloud-Assisted Autonomous Driving Systems

Amr S. El-Wakeel, Maher Al Islam

Authors on Pith · no claims yet

Pith reviewed 2026-05-10 20:21 UTC · model grok-4.3

classification 💻 cs.RO cs.LG
keywords adversarial robustness · cloud-assisted autonomous driving · IoV testbed · perception attacks · network impairments · YOLOv8 · PGD attacks · safety-critical systems

The pith

Adversarial attacks on cloud perception and network delays in vehicle-cloud links jointly destabilize autonomous driving control.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper builds a hardware-in-the-loop testbed for the Internet of Vehicles to examine how adversarial attacks on cloud perception models and impairments in the vehicle-cloud link affect autonomous driving safety. It applies white-box attacks such as projected gradient descent to a YOLOv8 object detector running on the cloud, while also introducing delays and packet losses in the communication link. The results show sharp drops in detection accuracy and subsequent instability in vehicle control, including delayed responses and rule breaches. This matters because autonomous vehicles are moving toward cloud assistance for heavy computation, making these combined vulnerabilities a practical concern for safe deployment.

Core claim

The authors present a hardware-in-the-loop IoV testbed that evaluates the combined impact of white-box adversarial attacks on a cloud-deployed YOLOv8 detector using FGSM and PGD, together with induced network delays and packet losses in the vehicle-cloud communication loop. They report that PGD at epsilon = 0.04 reduces detection precision and recall from the clean baseline of 0.73 and 0.68 to 0.22 and 0.15, and that delays of 150-250 ms (roughly 3-4 lost frames) together with packet loss rates of 0.5-5% destabilize closed-loop control, leading to delayed actuation and rule violations.
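The PGD attack whose epsilon = 0.04 figure anchors the claim admits a compact sketch. This is a framework-free illustration rather than the authors' YOLOv8 pipeline: `grad_fn` stands in for backpropagation through the detector's loss, and the step size and step count are hypothetical, with only the epsilon budget taken from the paper.

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon=0.04, alpha=0.01, steps=10):
    """Projected Gradient Descent in the L-infinity ball of radius epsilon.

    grad_fn(x_adv) must return the gradient of the attacker's loss with
    respect to the input; in the paper this would come from backprop
    through YOLOv8, here it is supplied analytically so the sketch stays
    framework-free.
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))    # ascent step
        x_adv = x + np.clip(x_adv - x, -epsilon, epsilon)  # project into the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                   # keep valid pixel range
    return x_adv
```

FGSM is the degenerate case: a single step with alpha = epsilon and no iteration.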

What carries the argument

The hardware-in-the-loop IoV testbed integrating real-time perception, control, and communication to evaluate cross-layer vulnerabilities from adversarial manipulation of perception models and network adversaries.
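Why delayed feedback destabilizes a loop that is otherwise stable, the cross-layer effect the testbed is built to measure, can be seen in a toy scalar system. All constants here are hypothetical; the testbed's vehicle dynamics are far richer than this sketch.

```python
def settle_error(delay_steps, steps=200, kp=0.5):
    """Toy closed loop: a proportional controller drives state x toward 0,
    but each correction acts on an observation `delay_steps` ticks old.
    Returns the largest deviation seen over the final 20 ticks."""
    history = [1.0]
    for t in range(steps):
        stale = history[max(0, t - delay_steps)]   # delayed feedback
        history.append(history[-1] - kp * stale)   # actuate on stale state
    return max(abs(v) for v in history[-20:])
```

With instantaneous feedback this loop converges geometrically; at a three-tick delay the same gain oscillates with growing amplitude, a discrete analogue of the delayed actuation the paper reports.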

Load-bearing premise

The hardware-in-the-loop IoV testbed and chosen attack parameters accurately represent realistic threats and conditions in deployed cloud-assisted autonomous driving systems.

What would settle it

An experiment showing that real-world cloud-assisted vehicles maintain safe control even under PGD perturbations at epsilon=0.04 combined with 150-250 ms network delays and 5% packet loss would falsify the claim that these jointly undermine safety.

Figures

Figures reproduced from arXiv: 2604.04349 by Amr S. El-Wakeel, Maher Al Islam.

Figure 1. Architecture of the cloud-assisted autonomous driving testbed, illustrating the interaction between the vehicle, cloud …

Figure 2. Wireshark capture showing TCP communication.

Figure 3. The clean image serves as the baseline, followed by FGSM and PGD attacks with increasing perturbation magnitudes.

Figure 5. Confusion matrices for Clean, FGSM and PGD scenarios.

Figure 6. Vehicle trajectories under varying network delays …
Original abstract

Autonomous vehicles increasingly rely on deep learning-based perception and control, which impose substantial computational demands. Cloud-assisted architectures offload these functions to remote servers, enabling enhanced perception and coordinated decision-making through the Internet of Vehicles (IoV). However, this paradigm introduces cross-layer vulnerabilities, where adversarial manipulation of perception models and network impairments in the vehicle-cloud link can jointly undermine safety-critical autonomy. This paper presents a hardware-in-the-loop IoV testbed that integrates real-time perception, control, and communication to evaluate such vulnerabilities in cloud-assisted autonomous driving. A YOLOv8-based object detector deployed on the cloud is subjected to white-box adversarial attacks using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), while network adversaries induce delay and packet loss in the vehicle-cloud loop. Results show that adversarial perturbations significantly degrade perception performance, with PGD reducing detection precision and recall from 0.73 and 0.68 in the clean baseline to 0.22 and 0.15 at epsilon = 0.04. Network delays of 150-250 ms, corresponding to transient losses of approximately 3-4 frames, and packet loss rates of 0.5-5% further destabilize closed-loop control, leading to delayed actuation and rule violations. These findings highlight the need for cross-layer resilience in cloud-assisted autonomous driving systems.
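For reference, the two attacks named in the abstract take their standard forms from the cited literature (Goodfellow et al.; Madry et al.); epsilon bounds the perturbation and alpha is the PGD step size:

```latex
% FGSM: one signed-gradient step of size epsilon
x_{\mathrm{adv}} = x + \epsilon \,\operatorname{sign}\big(\nabla_x \mathcal{L}(\theta, x, y)\big)

% PGD: iterate steps of size alpha, projecting back into the
% L-infinity ball of radius epsilon around the clean input x
x^{(t+1)} = \Pi_{\|x' - x\|_\infty \le \epsilon}\Big(x^{(t)} + \alpha \,\operatorname{sign}\big(\nabla_x \mathcal{L}(\theta, x^{(t)}, y)\big)\Big)
```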

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper presents a hardware-in-the-loop IoV testbed integrating real-time perception (cloud YOLOv8 detector), control, and communication to evaluate cross-layer vulnerabilities in cloud-assisted autonomous driving. It applies white-box FGSM and PGD adversarial attacks to the perception model and introduces network adversaries inducing 150-250 ms delays and 0.5-5% packet loss in the vehicle-cloud loop. Key results include PGD reducing detection precision/recall from 0.73/0.68 (clean) to 0.22/0.15 at epsilon=0.04, with network impairments causing delayed actuation and rule violations in closed-loop control. The work concludes that these findings underscore the need for cross-layer resilience in such systems.

Significance. If the reported degradations are substantiated with rigorous statistical validation and the attack parameters are shown to align with feasible real-world threat models, the work would be significant as one of the first hardware-in-the-loop demonstrations of joint perception and network attacks on cloud-assisted AVs. The testbed itself is a constructive contribution that moves beyond pure simulation. It provides concrete evidence that even modest adversarial perturbations and network impairments can compromise safety-critical autonomy, which could inform standards and defenses for IoV architectures. The empirical focus on closed-loop effects is a strength.

major comments (2)
  1. [Abstract] Abstract: The headline quantitative claims (PGD dropping precision from 0.73 to 0.22 and recall from 0.68 to 0.15 at epsilon=0.04, plus 150-250 ms delays causing rule violations) are presented without any mention of trial counts, variance, standard deviations, statistical tests, or how the clean baseline was selected and averaged. These details are load-bearing for the central empirical claim that the attacks produce significant, reproducible degradation.
  2. [Evaluation] Evaluation / Threat Model section: The experimental setup relies on white-box access to the YOLOv8 weights and precise injection of specific delay/packet-loss values inside the closed loop, yet provides no analysis or justification showing how these conditions map to realistic attack surfaces in deployed cloud-assisted systems (e.g., sensor compromise, V2X link attacks, or model extraction). This assumption directly supports the claim of practical vulnerabilities and requires explicit discussion or additional experiments.
minor comments (1)
  1. [Abstract] Abstract: The statement that 150-250 ms delays correspond to 'transient losses of approximately 3-4 frames' would be clearer if the assumed camera frame rate and exact calculation were stated explicitly.
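The minor comment is easy to illustrate: the delay-to-frames conversion depends entirely on an assumed camera frame rate, which the abstract never states. The fps values below are hypothetical choices consistent with the paper's "3-4 frames" figure.

```python
def frames_lost(delay_ms: float, fps: float) -> int:
    """Whole frames elapsed during a network delay at a given frame rate.
    The paper does not state its frame rate; callers must supply one."""
    return int(delay_ms / 1000.0 * fps)
```

At 20 fps a 150 ms delay spans 3 frames, and at 16 fps a 250 ms delay spans 4, matching the stated range; at, say, 30 fps the same delays would instead cost 4-7 frames, which is why the assumed rate belongs in the text.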

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the thoughtful and constructive review. The comments highlight important aspects of statistical rigor and threat model justification that will improve the manuscript. We address each major comment below, indicating the planned revisions.

Point-by-point responses
  1. Referee: [Abstract] Abstract: The headline quantitative claims (PGD dropping precision from 0.73 to 0.22 and recall from 0.68 to 0.15 at epsilon=0.04, plus 150-250 ms delays causing rule violations) are presented without any mention of trial counts, variance, standard deviations, statistical tests, or how the clean baseline was selected and averaged. These details are load-bearing for the central empirical claim that the attacks produce significant, reproducible degradation.

    Authors: We agree that the abstract would benefit from explicit statistical context to support the central claims. The evaluation section reports results from repeated experimental trials under each condition, with the clean baseline computed as the mean performance across the same trial set without perturbations. In the revised manuscript, we will update the abstract to include the number of trials, observed variance or standard deviations, and any statistical tests used to establish significance of the reported degradations. This will make the headline results self-contained while preserving their quantitative accuracy. revision: yes

  2. Referee: [Evaluation] Evaluation / Threat Model section: The experimental setup relies on white-box access to the YOLOv8 weights and precise injection of specific delay/packet-loss values inside the closed loop, yet provides no analysis or justification showing how these conditions map to realistic attack surfaces in deployed cloud-assisted systems (e.g., sensor compromise, V2X link attacks, or model extraction). This assumption directly supports the claim of practical vulnerabilities and requires explicit discussion or additional experiments.

    Authors: The testbed is designed as a controlled hardware-in-the-loop platform to quantify the joint impact of perception attacks and network impairments on closed-loop autonomy, rather than to simulate a specific deployed attack. White-box access is employed to establish an upper bound on vulnerability, which is standard practice in adversarial robustness analysis. The selected delay and packet-loss ranges reflect documented IoV communication characteristics under congestion or interference. We will expand the Threat Model section in the revision to explicitly discuss feasible real-world vectors (such as cloud service compromise, V2X man-in-the-middle attacks, or model extraction) that could approximate the modeled conditions. This added discussion will clarify the mapping without requiring new experiments for the current contribution. revision: yes
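The delay and packet-loss ranges discussed above are the kind typically injected with Linux traffic control. The paper does not say which tool its testbed uses; this is a sketch of how such impairments could be emulated, with the interface name hypothetical and root privileges assumed.

```shell
# Emulate a vehicle-cloud link with 200 ms delay and 2% packet loss on eth0.
# Values sit inside the paper's studied ranges (150-250 ms, 0.5-5%).
tc qdisc add dev eth0 root netem delay 200ms loss 2%

# Restore the interface when the trial ends.
tc qdisc del dev eth0 root
```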

Circularity Check

0 steps flagged

Empirical testbed measurements contain no derivation chain or self-referential reductions

Full rationale

The paper describes a hardware-in-the-loop IoV testbed that applies white-box FGSM/PGD attacks to a cloud-hosted YOLOv8 detector and injects network delays/packet loss, then directly measures resulting precision, recall, and closed-loop rule violations. No equations, fitted parameters, predictions derived from prior fits, or self-citations are used to obtain the reported numbers; all outcomes are raw experimental observations. The reader's circularity score of 0.0 is confirmed: the work is self-contained empirical evaluation without any load-bearing derivation that reduces to its own inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on empirical measurements from a custom testbed whose fidelity to real deployments is assumed rather than proven; no free parameters are fitted to produce the reported numbers, and no new entities are postulated.

axioms (2)
  • domain assumption YOLOv8 is a representative cloud-deployable object detector for autonomous driving perception.
    Used as the sole perception model subjected to attacks in the testbed.
  • domain assumption FGSM and PGD attacks with the stated epsilon values produce representative adversarial perturbations for this domain.
    Chosen without justification of why these specific attacks and strengths reflect real threats.

pith-pipeline@v0.9.0 · 5547 in / 1510 out tokens · 47503 ms · 2026-05-10T20:21:42.597024+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

20 extracted references · 2 canonical work pages · 2 internal anchors

  1. [1]

    Exploration of the traffic safety of battery electric vehicles: A case study of tesla vehicle-involved crashes in pennsylvania, usa,

    C. Liu, M. Su, Z. Ma, K. Long, and C. Lu, “Exploration of the traffic safety of battery electric vehicles: A case study of tesla vehicle-involved crashes in pennsylvania, usa,” Transportation Research Record, p. 03611981241283445, 2024

  2. [2]

    Anomaly detection against gps spoofing attacks on connected and autonomous vehicles using learning from demonstration,

    Z. Yang, J. Ying, J. Shen, Y. Feng, Q. A. Chen, Z. M. Mao, and H. X. Liu, “Anomaly detection against gps spoofing attacks on connected and autonomous vehicles using learning from demonstration,” IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 9, pp. 9462–9475, 2023

  3. [3]

    Road traffic injuries,

    World Health Organization, “Road traffic injuries,” 2024. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries

  4. [4]

    Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning,

    Q. Li, Z. Peng, L. Feng, Q. Zhang, Z. Xue, and B. Zhou, “Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 3461–3475, 2022

  5. [5]

    “real attackers don’t compute gradients”: bridging the gap between adversarial ml research and practice,

    G. Apruzzese, H. S. Anderson, S. Dambra, D. Freeman, F. Pierazzi, and K. Roundy, ““real attackers don’t compute gradients”: bridging the gap between adversarial ml research and practice,” in 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2023, pp. 339–364

  6. [6]

    Duckietown: an open, inexpensive and flexible platform for autonomy education and research,

    L. Paull, J. Tani, H. Ahn, J. Alonso-Mora, L. Carlone, M. Cap, Y. F. Chen, C. Choi, J. Dusek, Y. Fang et al., “Duckietown: an open, inexpensive and flexible platform for autonomy education and research,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 1497–1504

  7. [7]

    A survey on small-scale testbeds for connected and automated vehicles and robot swarms: A guide for creating a new testbed,

    A. Mokhtarian, J. Xu, P. Scheffe, M. Kloock, S. Schäfer, H. Bang, V.-A. Le, S. Ulhas, J. Betz, S. Wilson et al., “A survey on small-scale testbeds for connected and automated vehicles and robot swarms: A guide for creating a new testbed,” IEEE Robotics & Automation Magazine, 2024

  8. [8]

    Cloud-based connected vehicle control under time-varying delay: Stability analysis and controller synthesis,

    Q. Xu, X. Chang, J. Wang, C. Chen, M. Cai, J. Wang, K. Li, and D. Cao, “Cloud-based connected vehicle control under time-varying delay: Stability analysis and controller synthesis,” IEEE Transactions on Vehicular Technology, vol. 72, no. 11, pp. 14074–14086, 2023

  9. [9]

    Leveraging cloud computing to make autonomous vehicles safer,

    P. Schafhalter, S. Kalra, L. Xu, J. E. Gonzalez, and I. Stoica, “Leveraging cloud computing to make autonomous vehicles safer,” in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023, pp. 5559–5566

  10. [10]

    Distributed cloud model predictive control with delay compensation for heterogeneous vehicle platoons,

    J. Zhao, Y. Ma, L. Dai, Z. Sun, H. Chen, and Y. Xia, “Distributed cloud model predictive control with delay compensation for heterogeneous vehicle platoons,” IEEE Transactions on Vehicular Technology, 2025

  11. [11]

    Adversarial machine learning: A taxonomy and terminology of attacks and mitigations,

    A. Oprea and A. Vassilev, “Adversarial machine learning: A taxonomy and terminology of attacks and mitigations,” National Institute of Standards and Technology, Tech. Rep., 2023

  12. [12]

    Explaining and Harnessing Adversarial Examples

    I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014

  13. [13]

    Towards Deep Learning Models Resistant to Adversarial Attacks

    A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017

  14. [14]

    A survey on adversarial attacks and defenses for object detection and their applications in autonomous vehicles,

    A. Amirkhani, M. P. Karimi, and A. Banitalebi-Dehkordi, “A survey on adversarial attacks and defenses for object detection and their applications in autonomous vehicles,” The Visual Computer, vol. 39, no. 11, pp. 5293–5307, 2023

  15. [15]

    Ultralytics yolov8,

    G. Jocher, A. Chaurasia, and J. Qiu, “Ultralytics yolov8,” 2023. [Online]. Available: https://github.com/ultralytics/ultralytics

  16. [16]

    Advances in adversarial attacks and defenses in computer vision: A survey,

    N. Akhtar, A. Mian, N. Kardan, and M. Shah, “Advances in adversarial attacks and defenses in computer vision: A survey,” IEEE Access, vol. 9, pp. 155161–155196, 2021

  17. [17]

    Autonomous vehicles: Sophisticated attacks, safety issues, challenges, open topics, blockchain, and future directions,

    A. Giannaros, A. Karras, L. Theodorakopoulos, C. Karras, P. Kranias, N. Schizas, G. Kalogeratos, and D. Tsolis, “Autonomous vehicles: Sophisticated attacks, safety issues, challenges, open topics, blockchain, and future directions,” Journal of Cybersecurity and Privacy, vol. 3, no. 3, pp. 493–543, 2023

  18. [18]

    A survey on cyber-security of connected and autonomous vehicles (cavs),

    X. Sun, F. R. Yu, and P. Zhang, “A survey on cyber-security of connected and autonomous vehicles (cavs),” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 6240–6259, 2021

  19. [19]

    An investigation of cyber-attacks and security mechanisms for connected and autonomous vehicles,

    S. Gupta, C. Maple, and R. Passerone, “An investigation of cyber-attacks and security mechanisms for connected and autonomous vehicles,” IEEE Access, vol. 11, pp. 90641–90669, 2023

  20. [20]

    MITRE ATT&CK — attack.mitre.org,

    “MITRE ATT&CK — attack.mitre.org,” https://attack.mitre.org, [Accessed 21-10-2025]