pith. machine review for the scientific record.

arxiv: 2604.20621 · v1 · submitted 2026-04-22 · 💻 cs.CR

Recognition: unknown

SoK: The Next Frontier in AV Security: Systematizing Perception Attacks and the Emerging Threat of Multi-Sensor Fusion


Pith reviewed 2026-05-10 00:40 UTC · model grok-4.3

classification 💻 cs.CR
keywords autonomous vehicles · perception attacks · multi-sensor fusion · sensor spoofing · AV security · adversarial attacks · cross-modal threats

The pith

As autonomous vehicles fuse data from multiple sensors for robustness, attackers can exploit that same fusion to create undetectable failures.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This SoK paper reviews 48 peer-reviewed studies on perception attacks against autonomous vehicles and tracks how threats have moved from isolated attacks on single sensors to coordinated cross-modal exploits that target the fusion step itself. The authors build a taxonomy of 20 attack vectors grouped by sensor type, attack stage, medium, and perception module, which reveals recurring patterns in how fusion logic can be misled by inconsistencies across modalities. They point out that current research under-tests real-world conditions and rarely designs defenses that check consistency between sensors. A simulation of combined infrared and lidar spoofing is used to show one concrete case where fusion creates a vulnerability single-sensor attacks do not expose.

Core claim

The paper systematizes 48 studies into a unified taxonomy of 20 attack vectors organized by sensor type, attack stage, medium, and perception module. This reveals a shift from single-sensor exploits to complex cross-modal threats that compromise multi-sensor fusion. Key gaps identified include limited real-world testing, short-term evaluation bias, and the absence of defenses that account for inter-sensor consistency. The authors illustrate one gap with a proof-of-concept simulation that combines infrared and lidar spoofing to fool the fused perception pipeline.

What carries the argument

A unified taxonomy of 20 attack vectors that organizes threats across sensor type, attack stage, medium, and perception module to expose underexplored fusion-level and cross-sensor dependencies.
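
As a rough illustration of how this four-axis organization can surface coverage gaps, here is a minimal sketch in Python. The enum values, example entries, and queries are hypothetical stand-ins chosen for illustration, not the paper's actual 20 vectors or its table schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical axis values for illustration only; the paper defines
# 20 concrete attack vectors across these four dimensions.
class Sensor(Enum):
    CAMERA = "camera"
    LIDAR = "lidar"
    RADAR = "radar"

class Stage(Enum):
    SIGNAL_INJECTION = "signal injection"   # physical-layer spoofing
    MODEL_LEVEL = "model-level"             # adversarial examples

class Medium(Enum):
    OPTICAL = "optical"
    RF = "rf"
    PHYSICAL_OBJECT = "physical object"

class Module(Enum):
    DETECTION = "object detection"
    LOCALIZATION = "localization"
    FUSION = "multi-sensor fusion"

@dataclass(frozen=True)
class AttackVector:
    name: str
    sensors: frozenset   # cross-modal attacks touch more than one sensor
    stage: Stage
    medium: Medium
    module: Module

# Two invented entries, not rows from the paper's taxonomy.
vectors = [
    AttackVector("lidar laser spoofing", frozenset({Sensor.LIDAR}),
                 Stage.SIGNAL_INJECTION, Medium.OPTICAL, Module.DETECTION),
    AttackVector("ir+lidar cross-modal spoof",
                 frozenset({Sensor.CAMERA, Sensor.LIDAR}),
                 Stage.SIGNAL_INJECTION, Medium.OPTICAL, Module.FUSION),
]

# The systematization payoff: grouping along the axes makes sparse
# cells visible, e.g. fusion-module and multi-sensor attacks are rare
# relative to single-sensor detection attacks.
fusion_level = [v for v in vectors if v.module is Module.FUSION]
cross_modal = [v for v in vectors if len(v.sensors) > 1]
print(f"{len(fusion_level)} fusion-level, {len(cross_modal)} cross-modal")
```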

If this is right

  • Defenses must verify consistency across sensors rather than securing each sensor in isolation.
  • Evaluation of attacks and defenses should move from short-term simulations to longer-term real-world deployments.
  • Fusion algorithms need built-in checks for cross-modal inconsistencies that current designs largely omit (a minimal sketch follows this list).
  • Future attack research will likely focus on exploiting the redundancy that multi-sensor systems introduce for safety.
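
To make the third bullet concrete, here is a minimal sketch, in Python, of the kind of inter-sensor consistency check the paper argues current fusion designs largely omit. The detection format and the 1.5 m tolerance are assumptions for illustration, not values from the paper.

```python
import math

def consistent(det_a, det_b, max_gap_m=1.5):
    """Check whether two modalities agree about one object.

    det_a, det_b: (x, y) position estimates in the vehicle frame,
    or None if that modality reports nothing there. max_gap_m is an
    assumed spatial tolerance, not a parameter from the paper.
    """
    if det_a is None or det_b is None:
        # One modality reports an object the other cannot corroborate:
        # exactly the cross-modal disagreement worth flagging.
        return det_a is None and det_b is None
    return math.dist(det_a, det_b) <= max_gap_m

# A single-modality ghost (e.g. a projection only the camera sees)
# trips the check, while a benign object seen by both passes.
assert not consistent((10.0, 0.2), None)
assert consistent((10.0, 0.2), (10.3, 0.1))
```

A check like this raises the bar for single-sensor spoofing, but, as the proof-of-concept discussion below illustrates, an attacker who injects mutually consistent ghosts into both channels still passes it.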

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same redundancy-exploitation pattern could appear in other multi-modal systems such as robotic manipulation or drone navigation.
  • Standardizing fusion methods might reduce some attack surfaces but could also make remaining weaknesses more predictable if the standard itself is not stress-tested against coordinated inputs.
  • Safety regulations for autonomous vehicles could require explicit testing against fusion-targeted attacks rather than only single-sensor threats.

Load-bearing premise

The 48 selected studies represent the full range of perception attacks in the field, and the infrared-lidar spoofing simulation accurately reflects vulnerabilities in real multi-sensor fusion systems used in deployed vehicles.

What would settle it

A controlled test on a production autonomous vehicle in which simultaneous infrared and lidar spoofing produces a perception error that independent single-sensor attacks do not trigger.

Figures

Figures reproduced from arXiv: 2604.20621 by Raiful Hasan, Shahriar Rahman Khan, Tariqul Islam.

Figure 1. Overview of the Autonomous Vehicle (AV) system pipeline and a taxonomy of adversarial attack methods explored.
Figure 2. Taxonomic visualization of AV perception attacks. Flows represent documented causal pathways (Target AV …).
Figure 3. A representative result of a successful attack.
Figure 4. PRISMA-style flow diagram of the paper selection.
Original abstract

Autonomous vehicles (AVs) increasingly rely on multi-sensor perception pipelines that combine data from cameras, lidar, radar, and other modalities to interpret the environment. This SoK systematizes 48 peer-reviewed studies on perception-layer attacks against AVs, tracking the field's evolution from single-sensor exploits to complex cross-modal threats that compromise multi-sensor fusion (MSF). We develop a unified taxonomy of 20 attack vectors organized by sensor type, attack stage, medium, and perception module, revealing patterns that expose underexplored vulnerabilities in fusion logic and cross-sensor dependencies. Our analysis identifies key research gaps, including limited real-world testing, short-term evaluation bias, and the absence of defenses that account for inter-sensor consistency. To illustrate one such gap, we validate a fusion-level vulnerability through a proof-of-concept simulation combining infrared and lidar spoofing. The findings highlight a fundamental shift in AV security: as systems fuse more sensors for robustness, attackers exploit the very redundancy meant to ensure safety. We conclude with directions for fusion-aware defense design and a research agenda for trustworthy perception in autonomous systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 3 minor

Summary. This SoK paper reviews 48 peer-reviewed studies on perception-layer attacks against autonomous vehicles, tracking the shift from single-sensor exploits to cross-modal threats targeting multi-sensor fusion (MSF). It introduces a unified taxonomy of 20 attack vectors organized by sensor type, attack stage, medium, and perception module, identifies gaps including limited real-world testing and absence of inter-sensor consistency defenses, and presents a proof-of-concept simulation of combined infrared and lidar spoofing to illustrate a fusion-level vulnerability. The work concludes that redundancy in sensor fusion creates new attack surfaces and outlines directions for fusion-aware defenses.

Significance. If the taxonomy accurately reflects the literature and the PoC demonstrates a representative vulnerability in deployed fusion pipelines, the paper would provide a timely systematization that shifts focus from isolated sensor attacks to the security of fusion logic itself. This could usefully inform both researchers and practitioners on underexplored cross-sensor dependencies and help prioritize defenses that preserve the safety benefits of redundancy.

major comments (1)
  1. [Proof-of-Concept Simulation] The proof-of-concept simulation section: the manuscript uses the simulation to validate a fusion-level vulnerability and support the central claim that attackers exploit redundancy in multi-sensor fusion. However, it does not specify the fusion algorithm (e.g., whether outlier rejection, temporal filtering, or learned consistency checks typical of real AV pipelines such as those in Apollo or Autoware are included), the sensor noise models, or the exact decision rule that is compromised. Without these details it remains possible that the demonstrated failure occurs only under a naive fusion rule that deployed systems would reject, weakening the load-bearing illustration of the redundancy-exploitation thesis.
minor comments (3)
  1. [Taxonomy and Literature Review] The description of the 48-study selection process and the derivation of the 20 attack vectors could be expanded with explicit inclusion criteria and inter-rater reliability measures to strengthen the systematization.
  2. [Abstract] The abstract refers to 'short-term evaluation bias' without defining the time horizons used to classify evaluations as short-term versus long-term in the AV security literature.
  3. [Figures] Figure captions for the taxonomy diagram and simulation results should explicitly state the source data or parameters so readers can reproduce the attack vectors and PoC outcomes.

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our SoK paper. We address the single major comment point by point below.

Point-by-point responses
  1. Referee: [Proof-of-Concept Simulation] The proof-of-concept simulation section: the manuscript uses the simulation to validate a fusion-level vulnerability and support the central claim that attackers exploit redundancy in multi-sensor fusion. However, it does not specify the fusion algorithm (e.g., whether outlier rejection, temporal filtering, or learned consistency checks typical of real AV pipelines such as those in Apollo or Autoware are included), the sensor noise models, or the exact decision rule that is compromised. Without these details it remains possible that the demonstrated failure occurs only under a naive fusion rule that deployed systems would reject, weakening the load-bearing illustration of the redundancy-exploitation thesis.

    Authors: We agree that the simulation description requires additional specificity to better support the central claim. In the revised manuscript we will add an expanded subsection detailing the fusion algorithm (a basic early-fusion pipeline with weighted averaging and simple outlier rejection based on spatial consistency thresholds), the sensor noise models (additive zero-mean Gaussian noise with variances drawn from publicly reported lidar and infrared sensor characterizations), and the exact decision rule (a consistency check that declares an object present only if detections align within a fixed distance threshold across modalities). We will also explicitly state that the PoC is a minimal illustrative case intended to demonstrate how cross-modal spoofing can evade basic redundancy mechanisms, rather than a faithful replica of any production pipeline such as Apollo or Autoware. This clarification will be accompanied by a short discussion of how more sophisticated temporal filtering or learned consistency checks could raise the bar for attackers while still leaving residual cross-sensor attack surfaces. revision: yes
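
Reading the rebuttal's description literally, the decision rule amounts to something like the sketch below. The concrete numbers (fusion weights, the 1.0 m alignment threshold) are placeholders chosen here, not the authors' parameters, and the Gaussian noise models are omitted; the point is only to show why coordinated infrared-plus-lidar spoofing passes a spatial-consistency check that rejects single-sensor spoofing.

```python
import math

ALIGN_THRESHOLD_M = 1.0  # assumed fixed alignment threshold

def fuse(lidar_det, ir_det, w_lidar=0.6, w_ir=0.4):
    """Early fusion in the style the rebuttal describes: declare an
    object present only if the per-modality detections align within a
    fixed distance, then output their weighted average. Weights and
    threshold are placeholder values."""
    if lidar_det is None or ir_det is None:
        return None  # uncorroborated detection is rejected
    if math.dist(lidar_det, ir_det) > ALIGN_THRESHOLD_M:
        return None  # spatial-consistency outlier rejection
    return (w_lidar * lidar_det[0] + w_ir * ir_det[0],
            w_lidar * lidar_det[1] + w_ir * ir_det[1])

# A lidar-only ghost point is rejected by the redundancy check...
assert fuse((12.0, 0.0), None) is None

# ...but ghosts injected into BOTH channels, placed within the
# alignment threshold, pass the check, and fusion reports a phantom
# obstacle at the averaged position.
phantom = fuse((12.0, 0.0), (12.4, 0.3))
assert phantom is not None
print("fused phantom at", phantom)
```

Whether temporal filtering or learned consistency checks in production pipelines would reject the same coordinated input is exactly the question the referee's major comment leaves open.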

Circularity Check

0 steps flagged

No significant circularity: SoK review with illustrative PoC

full rationale

This is a systematization of knowledge paper that reviews 48 peer-reviewed studies to build a taxonomy of 20 attack vectors and identify gaps in multi-sensor fusion security. The central claim, that attackers can exploit fusion redundancy, is presented as an observed pattern from the literature, not a mathematical derivation. The proof-of-concept simulation is explicitly described as an illustration of one identified gap rather than a result derived from or fitted to the review itself. No equations, parameter fitting, self-definitional constructs, or load-bearing self-citations appear in the provided text. The work is grounded in external benchmarks (the cited studies) and does not reduce any claim to its inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

This is a systematization of knowledge paper that aggregates existing research rather than introducing new free parameters or invented entities. It relies on the assumption that the reviewed studies are representative.

axioms (1)
  • domain assumption: The 48 peer-reviewed studies selected for review are representative of the field of perception attacks against AVs.
    The SoK is built upon reviewing and taxonomizing these studies to track evolution and identify patterns.

pith-pipeline@v0.9.0 · 5501 in / 1236 out tokens · 58194 ms · 2026-05-10T00:40:46.134766+00:00 · methodology

