pith. machine review for the scientific record.

arxiv: 2605.01301 · v1 · submitted 2026-05-02 · 💻 cs.CR


From Stealthy Data Fabrication to Unsafe Driving: Realistic Scenario Attacks on Collaborative Perception


Pith reviewed 2026-05-09 14:25 UTC · model grok-4.3

classification 💻 cs.CR
keywords collaborative perception · autonomous vehicles · data fabrication attack · stealthy attack · safety-critical behaviors · connected vehicles · perception security · pose manipulation

The pith

Subtle changes to object positions in shared vehicle perception data can trigger unsafe driving behaviors like sudden braking.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper shows that collaborative perception systems in connected autonomous vehicles can be attacked by making small, hard-to-detect adjustments to the reported locations of objects that are already present. These minor shifts travel through object tracking and motion prediction, producing large errors in how the vehicle plans its actions. The authors build an adaptive attack that runs in real time and matches changing traffic, achieving over 90 percent success at inducing perception errors and causing safety-critical responses such as hard braking in up to half the tested cases. The work matters because sharing sensor data is intended to improve safety, yet the same sharing can become a route for inducing dangerous vehicle decisions while staying under the radar of current checks.

Core claim

The authors introduce a stealthy data fabrication attack that manipulates the poses of existing objects in shared perception results while keeping changes small enough to avoid detection thresholds. These perturbations propagate through downstream components, including object tracking and trajectory prediction, producing significant errors in predicted vehicle behaviors and ultimately unsafe driving commands. They also develop an online scenario-aware framework that adjusts the attack strategy during operation to fit dynamic traffic. Tests on the OPV2V and V2X-Real datasets confirm the attack induces detection errors with over 90 percent success and triggers safety-critical behaviors, such as unnecessary hard braking, in up to 50 percent of scenarios.
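The stealth constraint can be sketched in a few lines (a hypothetical illustration, not the authors' implementation; the budget values `eps_xy` and `eps_yaw` and the `Pose`/`perturb_pose` names are invented for this example):

```python
# Hypothetical sketch of the attack's stealth constraint (not the paper's
# code): perturb an object's reported pose, but clamp each component to a
# budget so consistency checks against local sensing stay quiet.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float    # meters, map frame
    y: float
    yaw: float  # radians

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def perturb_pose(pose, dx, dy, dyaw, eps_xy=0.3, eps_yaw=0.05):
    """Apply an adversarial shift, bounded so each component stays below
    an assumed per-frame detection threshold."""
    return Pose(
        x=pose.x + clamp(dx, -eps_xy, eps_xy),
        y=pose.y + clamp(dy, -eps_xy, eps_xy),
        yaw=pose.yaw + clamp(dyaw, -eps_yaw, eps_yaw),
    )

# A requested 1.5 m shift is capped to the 0.3 m stealth budget.
p = perturb_pose(Pose(10.0, 4.0, 0.0), dx=1.5, dy=0.1, dyaw=0.2)
print(p)
```

The per-frame budget is what separates this attack from earlier fabrication work: each individual broadcast looks benign, and the damage comes from accumulation downstream.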

What carries the argument

Stealthy pose manipulation of existing objects in shared collaborative perception data, which exploits propagation through tracking and prediction modules.

If this is right

  • The attack reaches over 90 percent success in inducing detection errors on standard datasets.
  • It causes safety-critical actions such as unnecessary hard braking in up to 50 percent of scenarios.
  • It evades most existing state-of-the-art defenses.
  • A new mitigation that checks localized safety-critical regions reaches 80 percent detection on the small perturbations, compared with 11 percent for prior methods.
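The dilution argument behind the localized mitigation can be sketched numerically (a toy illustration, not the paper's PoseGuard implementation; the map size, window size, and perturbation magnitude are assumptions):

```python
import numpy as np

# Toy illustration of signal dilution (not the paper's PoseGuard code).
# A perturbation touching ~0.04% of a feature map is invisible to a
# global average but obvious to a localized, windowed statistic.
anomaly = np.zeros((256, 256))      # per-cell anomaly signal, benign = 0
anomaly[100:105, 100:105] = 3.0     # 25 of 65,536 cells perturbed

global_score = anomaly.mean()       # diluted across the whole map
local_score = (
    anomaly.reshape(32, 8, 32, 8)   # tile into 8x8 windows
    .mean(axis=(1, 3))              # per-window mean
    .max()                          # flag the worst window
)

print(f"global score: {global_score:.4f}")  # 0.0011, lost in benign noise
print(f"local score:  {local_score:.4f}")   # 0.7500, far above background
```

This is why a global trust weight per agent cannot separate a tiny perturbed region from normal contributions, while a check restricted to safety-critical windows can.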

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Security testing for collaborative perception should evaluate the full pipeline from perception through planning rather than isolated detection accuracy.
  • Defenses may need to prioritize monitoring in areas that directly affect immediate vehicle control decisions.
  • Similar subtle perturbation techniques could apply to other systems that fuse shared sensor data for real-time control.

Load-bearing premise

Small pose errors stay below all practical detection thresholds and are amplified by standard object tracking and trajectory prediction in real dynamic traffic.
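Why small pose errors get amplified is easy to see with a toy constant-velocity tracker (an illustrative sketch, not the paper's pipeline; the frame rate, drift size, and prediction horizon are assumed values):

```python
# Toy sketch of error amplification (not the paper's tracking/prediction
# stack): a tracker fitting a constant-velocity model reads a small,
# steadily drifting pose bias as real motion, and the prediction horizon
# multiplies it.

def fit_velocity(positions, dt):
    """Least-squares velocity estimate from a position history."""
    n = len(positions)
    t = [i * dt for i in range(n)]
    t_mean = sum(t) / n
    p_mean = sum(positions) / n
    num = sum((ti - t_mean) * (pi - p_mean) for ti, pi in zip(t, positions))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

dt = 0.1       # 10 Hz perception updates (assumed)
frames = 10    # one second of history
drift = 0.2    # attacker moves the reported pose 0.2 m further each frame

# The object is actually stationary; only the reported pose drifts.
reported = [i * drift for i in range(frames)]

v_est = fit_velocity(reported, dt)      # phantom velocity from the drift
predicted = reported[-1] + v_est * 3.0  # 3 s prediction horizon

print(f"largest per-frame shift: {drift} m")
print(f"estimated velocity: {v_est:.2f} m/s")
print(f"predicted displacement at +3 s: {predicted:.2f} m")
```

A 0.2 m per-frame shift, individually small, becomes a 2 m/s phantom velocity and roughly 7.8 m of predicted displacement three seconds out, which is the kind of deviation a planner reacts to with hard braking.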

What would settle it

A physical experiment with real vehicles where the attack is applied and the resulting perception outputs are checked against improved anomaly detectors or logged vehicle control commands to see whether safety-critical maneuvers occur.

Figures

Figures reproduced from arXiv: 2605.01301 by Qingzhao Zhang, Runting Zhang, Z. Morley Mao.

Figure 1. Illustration of the perturb-to-move-in attack, where small per-frame shifts accumulate to induce unsafe behavior.
Figure 2. Defending small perturbation (AttFusion & …).
Figure 3. A blind-spot stealthy attack [30] (image adapted). Perception errors occur > 20 m from the victim.
Figure 4. Overview of the proposed attack PosePert and mitigation PoseGuard.
Figure 5. Effect of β on attack success (AttFusion/OPV2V [55]). IoU (0.31→0.62) and confidence (0.73→0.95) grow with the β scaling, while excessive β (2.5→3.0) can harm.
Figure 6. Signal dilution in global vs. local anomaly detection.
Figure 7. IoU distributions before (blue) and after …
Figure 10. Parameter sensitivity: attack success and defense vs. β and ε.
Figure 11. Case study.
Figure 13. Ablation on attack variants.
Figure 14. Scenario attack ADE and MinDist distributions.
Original abstract

Collaborative perception allows connected and autonomous vehicles (CAVs) to improve perception by sharing sensory data, but it also introduces security risks from manipulated inputs. Prior work shows that attackers can spoof or remove objects by fabricating shared data, yet the practicality of such attacks in real-world driving remains unclear. Existing attacks are often detectable or evaluated in manually constructed scenarios, leaving open whether they can induce safety-critical outcomes in dynamic environments. To bridge this gap, we present a stealthy, scenario-realistic data fabrication attack that induces unsafe driving behaviors through end-to-end system effects. Instead of creating large, easily detectable anomalies, our attack subtly manipulates the poses of existing objects in shared perception results, keeping perturbations below detection thresholds. These small errors are then propagated through downstream modules, including object tracking and trajectory prediction, leading to significant deviations in predicted behaviors and ultimately unsafe driving decisions. We further design an online, scenario-aware attack framework that adapts to dynamic traffic conditions and optimizes attack strategies at runtime. Experiments on OPV2V and V2X-Real demonstrate that the attack achieves over 90% success in inducing detection errors and triggers safety-critical behaviors, such as unnecessary hard braking, in up to 50% of scenarios, while largely evading state-of-the-art defenses. We also propose a mitigation that focuses on detecting anomalies in localized, safety-critical regions, achieving an 80% detection rate on the small pose perturbation compared to 11% for the best existing methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that a stealthy attack on collaborative perception in CAVs can be achieved by subtly perturbing the poses of existing objects in shared perception results (kept below detection thresholds), with these small errors propagating through object tracking and trajectory prediction to induce safety-critical behaviors such as unnecessary hard braking. An online, scenario-aware attack framework is introduced that adapts to dynamic traffic; evaluations on OPV2V and V2X-Real datasets report >90% success in inducing detection errors and up to 50% triggering of unsafe behaviors while evading SOTA defenses, plus a mitigation achieving 80% detection on small perturbations.

Significance. If the end-to-end propagation results hold under realistic conditions, the work would be significant for highlighting how small, scenario-realistic perturbations in collaborative perception can cascade to unsafe driving decisions, moving beyond isolated spoofing/removal attacks. The empirical evaluation on established datasets and the proposed localized-region mitigation provide practical value for CAV security research.

major comments (2)
  1. [Attack propagation description (Abstract and §3)] The central claim depends on small pose perturbations propagating through tracking and prediction to produce large trajectory deviations sufficient for safety-critical commands. However, no quantitative error-amplification analysis or ablation against standard smoothing mechanisms (e.g., Kalman filters with uncertainty handling) is provided, leaving open whether the reported >90% detection-error and 50% unsafe-behavior rates would hold when perturbations are attenuated by typical CAV pipelines.
  2. [Experiments and evaluation (Abstract and §5)] Quantitative results in the abstract and evaluation sections report >90% success and up to 50% unsafe-behavior triggering without error bars, exact perturbation magnitudes, full baseline comparisons, or controls for dynamic traffic variability; this weakens assessment of whether the attack reliably exceeds detection thresholds across the tested datasets.
minor comments (2)
  1. [Mitigation section] The mitigation strategy is described at a high level; adding implementation details or pseudocode for the anomaly detection in safety-critical regions would improve reproducibility.
  2. [Preliminaries] Notation for pose perturbation magnitude and success-rate metrics could be standardized earlier to aid readability.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive comments on our manuscript. We are pleased that the referee recognizes the potential significance of demonstrating how small perturbations in collaborative perception can lead to safety-critical outcomes in CAVs. Below, we provide point-by-point responses to the major comments and outline the revisions we will make to address them.

Point-by-point responses
  1. Referee: [Attack propagation description (Abstract and §3)] The central claim depends on small pose perturbations propagating through tracking and prediction to produce large trajectory deviations sufficient for safety-critical commands. However, no quantitative error-amplification analysis or ablation against standard smoothing mechanisms (e.g., Kalman filters with uncertainty handling) is provided, leaving open whether the reported >90% detection-error and 50% unsafe-behavior rates would hold when perturbations are attenuated by typical CAV pipelines.

    Authors: We agree that an explicit quantitative analysis of how errors amplify through the tracking and prediction modules would provide stronger support for our claims. Our current evaluation demonstrates the end-to-end impact on real-world datasets (OPV2V and V2X-Real), where the perturbations lead to the reported detection errors and unsafe behaviors despite the presence of standard pipeline components. However, to address this concern directly, we will include in the revised manuscript a quantitative error-amplification study, including an ablation analysis that compares results with and without smoothing mechanisms such as Kalman filters. This will clarify the conditions under which small perturbations can still propagate to large effects. revision: yes

  2. Referee: [Experiments and evaluation (Abstract and §5)] Quantitative results in the abstract and evaluation sections report >90% success and up to 50% unsafe-behavior triggering without error bars, exact perturbation magnitudes, full baseline comparisons, or controls for dynamic traffic variability; this weakens assessment of whether the attack reliably exceeds detection thresholds across the tested datasets.

    Authors: We acknowledge the need for more rigorous statistical reporting and additional details in the experimental section. The results are derived from extensive testing across multiple scenarios in the datasets, but we will enhance the revised manuscript by adding error bars (e.g., standard deviations over multiple runs), specifying the exact perturbation magnitudes used in the attacks, providing more comprehensive baseline comparisons, and including additional controls and analysis for dynamic traffic variability such as different traffic densities and vehicle speeds. These additions will better substantiate the reliability of the attack success rates. revision: yes

Circularity Check

0 steps flagged

No circularity: purely empirical attack design and evaluation

Full rationale

The paper describes an attack that subtly perturbs object poses in shared perception data, then evaluates end-to-end effects on tracking, prediction, and planning modules using external datasets (OPV2V, V2X-Real) and existing defenses. No equations, derivations, fitted parameters renamed as predictions, or self-citation chains appear in the provided text. The propagation claim is presented as an observed outcome of experiments rather than a mathematical reduction to the attack inputs. The central results (success rates, evasion) are measured against independent benchmarks, satisfying the criteria for a self-contained empirical study.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

Abstract provides limited technical detail; attack success rests on unstated assumptions about standard CAV module behavior and detection thresholds.

free parameters (1)
  • pose perturbation magnitude
    Small values chosen to stay below detection while still propagating to unsafe outcomes; no specific fitted values given.
axioms (1)
  • domain assumption Small pose errors in shared perception propagate through tracking and prediction to alter driving decisions
    Central to the attack effectiveness claim; invoked when describing end-to-end system effects.

pith-pipeline@v0.9.0 · 5568 in / 1286 out tokens · 27895 ms · 2026-05-09T14:25:19.503185+00:00 · methodology


Reference graph

Works this paper leans on

64 extracted references · 9 canonical work pages · 1 internal anchor

  1. [1]

    https://www.3gpp.org/specifications-technologies/releases/ release-14, 2017

    3GPP Release 14. https://www.3gpp.org/specifications-technologies/releases/ release-14, 2017

  2. [2]

    https://www.qualcomm.com/news/releases/2017/09/ qualcomm-announces-groundbreaking-cellular-v2x-solution-support- automotive, 2017

    Qualcomm C-V2X. https://www.qualcomm.com/news/releases/2017/09/ qualcomm-announces-groundbreaking-cellular-v2x-solution-support- automotive, 2017

  3. [3]

    https://carrier.huawei.com/en/products/wireless-network-v3/ Components/c-v2x, 2019

    Huawei C-V2X. https://carrier.huawei.com/en/products/wireless-network-v3/ Components/c-v2x, 2019

  4. [4]

    https://www.infineon.com/dgdl/Infineon-ISPN-Use- Case-Savari-Securing-V2X+communications-ABR-v01_00-EN.pdf?fileId= 5546d462689a790c0168e1c1f5e35221, 2019

    Infineon C-V2X. https://www.infineon.com/dgdl/Infineon-ISPN-Use- Case-Savari-Securing-V2X+communications-ABR-v01_00-EN.pdf?fileId= 5546d462689a790c0168e1c1f5e35221, 2019

  5. [5]

    https://carla.org/, 2021

    Carla: Open-source simulator for autonomous driving research. https://carla.org/, 2021

  6. [6]

    https://github.com/ Autoware-AI, 2022

    Autoware: Open-source software for self-driving vehicles. https://github.com/ Autoware-AI, 2022

  7. [7]

    http://apollo.auto, 2022

    Baidu Apollo. http://apollo.auto, 2022

  8. [8]

    https://www.bosch-mobility-solutions.com/en/solutions/ connectivity/v2x-connectivity-solutions-cv/, 2022

    Bosch C-V2X. https://www.bosch-mobility-solutions.com/en/solutions/ connectivity/v2x-connectivity-solutions-cv/, 2022

  9. [9]

    https://www.eclipse.org/sumo/, 2022

    SUMO: Simulation of Urban Mobility. https://www.eclipse.org/sumo/, 2022

  10. [10]

    https://aecc.org/, 2023

    Automotive Edge Computing Consortium. https://aecc.org/, 2023

  11. [11]

    https: //www.sae.org/standards/content/j3224_202208/, 2023

    J3224_202208: V2x sensor-sharing for cooperative and automated driving. https: //www.sae.org/standards/content/j3224_202208/, 2023

  12. [12]

    A survey on deep-learning-based lidar 3d object detection for autonomous driving.Sensors, 22(24):9577, 2022

    Simegnew Yihunie Alaba and John E Ball. A survey on deep-learning-based lidar 3d object detection for autonomous driving.Sensors, 22(24):9577, 2022

  13. [13]

    Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks

    Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, and Bo Li. Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In2021 IEEE Symposium on Security and Privacy (SP), pages 176–194. IEEE, 2021

  14. [14]

    Bringing different views together: A hybrid cooperative perception framework for connected autonomous vehicles.IEEE Network, pages 1–1, 2025

    Dominic Carrillo, Michael Nutt, Maarten Meijer, Junaid Khan, Song Fu, and Qing Yang. Bringing different views together: A hybrid cooperative perception framework for connected autonomous vehicles.IEEE Network, pages 1–1, 2025

  15. [15]

    A cooperative perception environment for traffic operations and control

    Hanlin Chen, Brian Liu, Xumiao Zhang, Feng Qian, Z Morley Mao, and Yiheng Feng. A cooperative perception environment for traffic operations and control. arXiv preprint arXiv:2208.02792, 2022

  16. [16]

    F-cooper: Feature based cooperative perception for autonomous vehicle edge computing system using 3d point clouds

    Qi Chen, Xu Ma, Sihai Tang, Jingda Guo, Qing Yang, and Song Fu. F-cooper: Feature based cooperative perception for autonomous vehicle edge computing system using 3d point clouds. InProceedings of the 4th ACM/IEEE Symposium on Edge Computing, pages 88–100, 2019

  17. [17]

    Cooper: Cooperative perception for connected autonomous vehicles based on 3d point clouds

    Qi Chen, Sihai Tang, Qing Yang, and Song Fu. Cooper: Cooperative perception for connected autonomous vehicles based on 3d point clouds. In2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), pages 514–524. IEEE, 2019

  18. [18]

    Coopernaut: End- to-end driving with cooperative perception for networked vehicles

    Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, and Yuke Zhu. Coopernaut: End- to-end driving with cooperative perception for networked vehicles. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17252–17262, 2022

  19. [19]

    Security analysis of {Camera-LiDAR} fusion against {Black-Box} attacks on autonomous vehicles

    R Spencer Hallyburton, Yupei Liu, Yulong Cao, Z Morley Mao, and Miroslav Pajic. Security analysis of {Camera-LiDAR} fusion against {Black-Box} attacks on autonomous vehicles. In31st USENIX Security Symposium (USENIX Security 22), pages 1903–1920, 2022

  20. [20]

    Security-aware sensor fusion with mate: the multi-agent trust estimator

    R Spencer Hallyburton and Miroslav Pajic. Security-aware sensor fusion with mate: the multi-agent trust estimator. InProceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security, pages 2009–2023, 2025

  21. [21]

    Cp-guard+: A new paradigm for malicious agent detection and defense in collaborative perception.arXiv preprint arXiv:2502.07807, 2025

    Senkang Hu, Yihang Tao, Zihan Fang, Guowen Xu, Yiqin Deng, Sam Kwong, and Yuguang Fang. Cp-guard+: A new paradigm for malicious agent detection and defense in collaborative perception.arXiv preprint arXiv:2502.07807, 2025

  22. [22]

    Pla-lidar: Physical laser attacks against lidar-based 3d object detection in autonomous vehicle

    Zizhi Jin, Ji Xiaoyu, Yushi Cheng, Bo Yang, Chen Yan, and Wenyuan Xu. Pla-lidar: Physical laser attacks against lidar-based 3d object detection in autonomous vehicle. In2023 IEEE Symposium on Security and Privacy (SP), pages 710–727. IEEE Computer Society, 2022

  23. [23]

    Joint 3d proposal generation and object detection from view aggregation.IROS, 2018

    Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Waslander. Joint 3d proposal generation and object detection from view aggregation.IROS, 2018

  24. [24]

    Carspeak: a content-centric network for autonomous driving.ACM SIG- COMM Computer Communication Review, 42(4):259–270, 2012

    Swarun Kumar, Lixin Shi, Nabeel Ahmed, Stephanie Gil, Dina Katabi, and Daniela Rus. Carspeak: a content-centric network for autonomous driving.ACM SIG- COMM Computer Communication Review, 42(4):259–270, 2012

  25. [25]

    Pointpillars: Fast encoders for object detection from point clouds

    Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12697–12705, 2019

  26. [26]

    Grip++: Enhanced graph-based interaction-aware trajectory prediction for autonomous driving.arXiv preprint arXiv:1907.07792, 2019

    Xin Li, Xiaowen Ying, and Mooi Choo Chuah. Grip++: Enhanced graph-based interaction-aware trajectory prediction for autonomous driving.arXiv preprint arXiv:1907.07792, 2019

  27. [27]

    Among us: Adversarially robust collaborative perception by consensus

    Yiming Li, Qi Fang, Jiamu Bai, Siheng Chen, Felix Juefei-Xu, and Chen Feng. Among us: Adversarially robust collaborative perception by consensus. InPro- ceedings of the IEEE/CVF International Conference on Computer Vision, pages 186–195, 2023

  28. [28]

    Fooling lidar per- ception via adversarial trajectory perturbation

    Yiming Li, Congcong Wen, Felix Juefei-Xu, and Chen Feng. Fooling lidar per- ception via adversarial trajectory perturbation. InProceedings of the IEEE/CVF International Conference on Computer Vision, pages 7898–7907, 2021

  29. [29]

    Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems.IEEE Signal Processing Magazine, 37(4):50–61, 2020

    You Li and Javier Ibanez-Guzman. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems.IEEE Signal Processing Magazine, 37(4):50–61, 2020

  30. [30]

    Pretend benign: A stealthy adversarial attack by exploiting vulner- abilities in cooperative perception

    Hongwei Lin, Dongyu Pan, Qiming Xia, Hai Wu, Cheng Wang, Siqi Shen, and Chenglu Wen. Pretend benign: A stealthy adversarial attack by exploiting vulner- abilities in cooperative perception. InProceedings of the IEEE/CVF International Conference on Computer Vision, pages 19947–19956, 2025

  31. [31]

    Fusioneye: Perception sharing for connected vehicles and its bandwidth- accuracy trade-offs

    Hansi Liu, Pengfei Ren, Shubham Jain, Mohannad Murad, Marco Gruteser, and Fan Bai. Fusioneye: Perception sharing for connected vehicles and its bandwidth- accuracy trade-offs. In2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), pages 1–9. IEEE, 2019

  32. [32]

    A first {Physical-World} trajectory prediction attack via {LiDAR- induced} deceptions in autonomous driving

    Yang Lou, Yi Zhu, Qun Song, Rui Tan, Chunming Qiao, Wei-Bin Lee, and Jian- ping Wang. A first {Physical-World} trajectory prediction attack via {LiDAR- induced} deceptions in autonomous driving. In33rd USENIX Security Symposium (USENIX Security 24), pages 6291–6308, 2024

  33. [33]

    Controlloc: Physical-world hijacking attack on visual perception in autonomous driving.arXivpreprintarXiv:2406.05810, 2024

    Chen Ma, Ningfei Wang, Zhengyu Zhao, Qian Wang, Qi Alfred Chen, and Chao Shen. Controlloc: Physical-world hijacking attack on visual perception in au- tonomous driving.arXiv preprint arXiv:2406.05810, 2024

  34. [34]

    Martínez and et al

    J. Martínez and et al. Safety validation of connected autonomous driving systems in urban intersections using the sunrise safety assurance framework.MDPI Sensors, 8(3):55, 2026. Safety validation of ADS with connected perception in real-world scenarios

  35. [35]

    Physical hijacking attacks against object trackers

    Raymond Muller, Yanmao Man, Z Berkay Celik, Ming Li, and Ryan Gerdes. Physical hijacking attacks against object trackers. InProceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, pages 2309–2322, 2022

  36. [36]

    Support for the i-street (implementing solutions from transportation research and evaluation of emerging technologies) testbed

    Florida Department of Transportation. Support for the i-street (implementing solutions from transportation research and evaluation of emerging technologies) testbed. Technical report, U.S. Department of Transportation / ROSA P, 2024. Real-world Living Lab for V2X and CAV infrastructure deployment. Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Qi...

  37. [37]

    Deep learning frontiers in 3d object detection: A comprehensive review for autonomous driving

    Ambati Pravallika, Mohammad Farukh Hashmi, and Aditya Gupta. Deep learning frontiers in 3d object detection: A comprehensive review for autonomous driving. IEEE Access, 12:173936–173980, 2024

  38. [38]

    Autocast: Scalable infrastructure-less cooperative perception for distributed collaborative driving.arXiv preprint arXiv:2112.14947, 2021

    Hang Qiu, Pohan Huang, Namo Asavisanu, Xiaochen Liu, Konstantinos Psou- nis, and Ramesh Govindan. Autocast: Scalable infrastructure-less cooperative perception for distributed collaborative driving.arXiv preprint arXiv:2112.14947, 2021

  39. [39]

    A comparative assessment of c-its technologies: Global c-v2x trials and deployments

    Qualcomm and iMOVE CRC. A comparative assessment of c-its technologies: Global c-v2x trials and deployments. Technical report, iMOVE Australia, 2023. Covers Qualcomm’s trials with Ford and AT&T

  40. [40]

    Tra- jectron++: Dynamically-feasible trajectory forecasting with heterogeneous data

    Tim Salzmann, Boris Ivanovic, Punarjay Chakravarty, and Marco Pavone. Tra- jectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. InEuropean conference on computer vision, pages 683–700. Springer, 2020

  41. [41]

    Drift with devil: Security of multi-sensor fusion based localization in high-level autonomous driving under gps spoofing

    Junjie Shen, Jun Yeon Won, Zeyuan Chen, and Qi Alfred Chen. Drift with devil: Security of multi-sensor fusion based localization in high-level autonomous driving under gps spoofing. InProceedings of the 29th USENIX Conference on Security Symposium, pages 931–948, 2020

  42. [42]

    Pointrcnn: 3d object proposal generation and detection from point cloud

    Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 770–779, 2019

  43. [43]

    Vips: real-time perception fusion for infrastructure- assisted autonomous driving

    Shuyao Shi, Jiahe Cui, Zhehao Jiang, Zhenyu Yan, Guoliang Xing, Jianwei Niu, and Zhenchao Ouyang. Vips: real-time perception fusion for infrastructure- assisted autonomous driving. InProceedings of the 28th Annual International Conference on Mobile Computing And Networking, pages 133–146, 2022

  44. [44]

    An efficient and robust object-level cooperative perception framework for connected and automated driving.arXiv preprint arXiv:2210.06289, 2022

    Zhiying Song, Fuxi Wen, Hailiang Zhang, and Jun Li. An efficient and robust object-level cooperative perception framework for connected and automated driving.arXiv preprint arXiv:2210.06289, 2022

  45. [45]

    Real-time heterogeneous collaborative perception in edge-enabled vehicular environments

    Samuel Thornton, Nithin Santhanam, Rajeev Chhajer, and Sujit Dey. Real-time heterogeneous collaborative perception in edge-enabled vehicular environments. IEEE Open Journal of Vehicular Technology, 6:471–486, 2025

  46. [46]

    Physically realizable adversarial examples for lidar object detection

    James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Urtasun. Physically realizable adversarial examples for lidar object detection. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13716–13725, 2020

  47. [47]

    Adversarial attacks on multi-agent communication

    James Tu, Tsunhsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye Ren, and Raquel Urtasun. Adversarial attacks on multi-agent communication. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7768–7777, 2021

  48. [48]

    A systematic literature review on vehicular collaborative perception—a computer vision perspective.IEEE Transactions on Intelligent Transportation Systems, 2025

    Lei Wan, Jianxin Zhao, Andreas Wiedholz, Manuel Bied, Mateus Martinez de Lu- cena, Abhishek Dinkar Jagtap, Andreas Festag, Antônio Augusto Fröhlich, Han- nan Ejaz Keen, and Alexey Vinel. A systematic literature review on vehicular collaborative perception—a computer vision perspective.IEEE Transactions on Intelligent Transportation Systems, 2025

  49. [49]

    From threat to trust: Exploiting attention mechanisms for attacks and defenses in cooperative perception

    Chenyi Wang, Raymond Muller, Ruoyu Song, Jean-Philippe Monteuuis, Jonathan Petit, Yanmao Man, Ryan Gerdes, Z Berkay Celik, and Ming Li. From threat to trust: Exploiting attention mechanisms for attacks and defenses in cooperative perception. In34th USENIX Security Symposium (USENIX Security 25), pages 7387–7406, 2025

  50. [50]

    V2vnet: Vehicle-to-vehicle communication for joint perception and prediction

    Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, and Raquel Urtasun. V2vnet: Vehicle-to-vehicle communication for joint perception and prediction. InEuropean Conference on Computer Vision, pages 605–621. Springer, 2020

  51. [51]

    Xinshuo Weng, Jianren Wang, David Held, and Kris Kitani. Ab3dmot: A baseline for 3d multi-object tracking and new evaluation metrics. arXiv preprint arXiv:2008.08063, 2020.

  52. [52]

    Hao Xiang, Zhaoliang Zheng, Xin Xia, Runsheng Xu, Letian Gao, Zewei Zhou, Xu Han, Xinkai Ji, Mingxi Li, Zonglin Meng, et al. V2x-real: a large-scale dataset for vehicle-to-everything cooperative perception. In European Conference on Computer Vision, pages 455–470. Springer, 2024.

  53. [53]

    Runsheng Xu, Zhengzhong Tu, Hao Xiang, Wei Shao, Bolei Zhou, and Jiaqi Ma. Cobevt: Cooperative bird’s eye view semantic segmentation with sparse transformers. arXiv preprint arXiv:2207.02202, 2022.

  54. [54]

    Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, and Jiaqi Ma. V2x-vit: Vehicle-to-everything cooperative perception with vision transformer. In European Conference on Computer Vision, pages 107–124. Springer, 2022.

  55. [55]

    Runsheng Xu, Hao Xiang, Xin Xia, Xu Han, Jinlong Li, and Jiaqi Ma. Opv2v: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication. In 2022 International Conference on Robotics and Automation (ICRA), pages 2583–2589. IEEE, 2022.

  56. [56]

    Melih Yazgan, Qiyuan Wu, Iramm Hamdard, Shiqi Li, and J. Marius Zoellner. Slimcomm: Doppler-guided sparse queries for bandwidth-efficient cooperative 3-D perception. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2025.

  57. [57]

    Yunshuang Yuan, Hao Cheng, and Monika Sester. Keypoints-based deep feature fusion for cooperative vehicle detection of autonomous driving. IEEE Robotics and Automation Letters, 7(2):3054–3061, 2022.

  58. [58]

    Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, and Z Morley Mao. On adversarial robustness of trajectory prediction for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15159–15168, 2022.

  59. [59]

    Qingzhao Zhang, Shuowei Jin, Ruiyang Zhu, Jiachen Sun, Xumiao Zhang, Qi Alfred Chen, and Z Morley Mao. On data fabrication in collaborative vehicular perception: Attacks and countermeasures. In 33rd USENIX Security Symposium (USENIX Security 24), pages 6309–6326, 2024.

  60. [60]

    Qingzhao Zhang, Xumiao Zhang, Ruiyang Zhu, Fan Bai, Mohammad Naserian, and Z Morley Mao. Robust real-time multi-vehicle collaboration on asynchronous sensors. In Proceedings of the 29th Annual International Conference on Mobile Computing and Networking, pages 1–15, 2023.

  61. [61]

    Xumiao Zhang, Anlan Zhang, Jiachen Sun, Xiao Zhu, Y Ethan Guo, Feng Qian, and Z Morley Mao. Emp: edge-assisted multi-vehicle perception. In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, pages 545–558, 2021.

  62. [62]

    Yangheng Zhao, Zhen Xiang, Sheng Yin, Xianghe Pang, Yanfeng Wang, and Siheng Chen. Made: Malicious agent detection for robust multi-agent collaborative perception. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 13817–13823. IEEE, 2024.

  63. [63]

    Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3d: A modern library for 3d data processing. arXiv preprint arXiv:1801.09847, 2018.

  64. [64]

    Yi Zhu, Chenglin Miao, Tianhang Zheng, Foad Hajiaghajani, Lu Su, and Chunming Qiao. Can we use arbitrary objects to attack lidar perception in autonomous driving? In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pages 1945–1960, 2021.

A Open Science

    The artifact will be online at https://anonymous.4open.science/r/...