pith. machine review for the scientific record.

arxiv: 2604.03325 · v1 · submitted 2026-04-02 · 💻 cs.CV · cs.AI · cs.RO

Recognition: 2 Lean theorem links

Safety-Aligned 3D Object Detection: Single-Vehicle, Cooperative, and End-to-End Perspectives

Alois Knoll, Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 21:30 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI · cs.RO
keywords: safety-aligned 3D detection · autonomous vehicles · cooperative perception · end-to-end driving · collision reduction · NDS-USC · EC-IoU · perception optimization

The pith

Safety-aware metrics for 3D object detection in autonomous vehicles reduce collision rates by nearly 30 percent when integrated into end-to-end planning.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how to make 3D object detection safety-focused for connected autonomous vehicles by treating high-impact errors differently from routine ones. It applies this lens to single-vehicle models across architectures, to cooperative vehicle-infrastructure systems, and to end-to-end perception-to-planning stacks. Standard metrics such as mAP and NDS often fail to reflect safety gains, whereas the proposed NDS-USC evaluation and EC-IoU training loss improve detection of objects that matter for avoiding crashes. The strongest result appears when EC-IoU is inserted into SparseDrive, cutting simulated collisions by nearly 30 percent. Readers should care because the approach directly ties perception training to system-level safety outcomes rather than average accuracy.

Core claim

The authors show that safety-oriented evaluation via NDS-USC and optimization via EC-IoU improve safety-critical detection performance in single-vehicle settings, demonstrate the safety advantages of cooperative models over vehicle-only baselines, and achieve a nearly 30 percent collision-rate reduction by incorporating EC-IoU into the SparseDrive end-to-end framework.

What carries the argument

EC-IoU, a safety-aware variant of the intersection-over-union loss that weights errors according to their potential collision impact, together with the NDS-USC metric that extends the nuScenes Detection Score to emphasize safety-critical mistakes.
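The exact EC-IoU formulation is defined in the authors' earlier IROS paper [6] and is not restated on this page. As a rough illustration of the general idea, weighting an IoU-style loss by how much a given error could matter for collisions, the Python sketch below scales a plain axis-aligned BEV IoU loss by proximity to the ego vehicle. The distance-based weight, the `alpha` parameter, and the axis-aligned box geometry are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def bev_iou(box_a, box_b):
    """Axis-aligned IoU between two BEV boxes given as (cx, cy, w, l)."""
    half_a = np.array(box_a[2:]) / 2.0
    half_b = np.array(box_b[2:]) / 2.0
    lo = np.maximum(np.array(box_a[:2]) - half_a, np.array(box_b[:2]) - half_b)
    hi = np.minimum(np.array(box_a[:2]) + half_a, np.array(box_b[:2]) + half_b)
    inter = np.prod(np.clip(hi - lo, 0.0, None))  # overlap area, 0 if disjoint
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def safety_weighted_iou_loss(pred, gt, ego_xy=(0.0, 0.0), alpha=2.0):
    """Illustrative safety-weighted IoU loss: w * (1 - IoU), where the weight
    w grows as the ground-truth box gets closer to the ego vehicle.
    The weighting scheme and alpha are assumptions, not the EC-IoU of [6]."""
    dist = float(np.hypot(gt[0] - ego_xy[0], gt[1] - ego_xy[1]))
    w = 1.0 + alpha / (1.0 + dist)
    return w * (1.0 - bev_iou(pred, gt))

# Two detections with the same geometric error: the nearby one costs more.
near_gt, near_pred = (2.0, 0.0, 2.0, 4.0), (2.5, 0.0, 2.0, 4.0)
far_gt,  far_pred  = (40.0, 0.0, 2.0, 4.0), (40.5, 0.0, 2.0, 4.0)
print(safety_weighted_iou_loss(near_pred, near_gt))  # larger loss
print(safety_weighted_iou_loss(far_pred, far_gt))    # smaller loss
```

The point the sketch makes is the asymmetry itself: a half-meter localization error on a box two meters ahead of the ego vehicle is penalized more than the identical error forty meters away, which is the behavior a standard IoU loss cannot express.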

If this is right

  • Gains on standard mAP and NDS benchmarks do not reliably improve safety-oriented NDS-USC scores.
  • Cooperative vehicle-infrastructure detection models deliver higher safety impact than single-vehicle models alone.
  • Safety-aware fine-tuning with EC-IoU raises performance on the subset of detections that affect collision risk.
  • Direct insertion of EC-IoU into an end-to-end perception-to-planning pipeline lowers system-level collision rates.
  • Safety-aligned perception evaluation supplies a practical route to higher CAV safety across modular, cooperative, and end-to-end designs.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same safety-loss approach could be tested on other perception modules such as lane or traffic-light detection to check for similar collision reductions.
  • Industry benchmarks may shift toward requiring NDS-USC-style reporting once the collision-rate link is confirmed in more diverse conditions.
  • Extending the method to multi-agent cooperative networks might address remaining edge cases that single-vehicle or simple V2I setups still miss.

Load-bearing premise

Improvements measured by NDS-USC and EC-IoU in simulation or benchmarks will reduce actual collisions in real-world driving without further physical validation.

What would settle it

A controlled on-road test in which vehicles equipped with the EC-IoU-trained detectors show no reduction in observed collision or near-miss rates compared with baseline models would disprove the claimed safety translation.
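A back-of-the-envelope version of this test makes the bar concrete. Under a Poisson model for collisions, the standard exact comparison conditions on the total count: with c_t collisions over m_t miles for the treated fleet and c_b over m_b miles for the baseline, c_t given (c_t + c_b) is binomial with p = m_t / (m_t + m_b) under equal rates. The sketch below uses scipy and entirely hypothetical counts; it mainly illustrates the sample-size problem raised by Kalra and Paddock [43].

```python
from scipy.stats import binomtest

def collision_rate_pvalue(c_treat, miles_treat, c_base, miles_base):
    """Exact conditional test of H0: equal collision rates (Poisson model).
    Under H0, c_treat | (c_treat + c_base) ~ Binomial(n, p) with
    p = miles_treat / (miles_treat + miles_base)."""
    n = c_treat + c_base
    p0 = miles_treat / (miles_treat + miles_base)
    return binomtest(c_treat, n, p0, alternative="less").pvalue

# Hypothetical numbers: a true ~30% reduction observed over 1M miles per fleet.
p = collision_rate_pvalue(c_treat=14, miles_treat=1e6, c_base=20, miles_base=1e6)
print(f"p = {p:.2f}")  # ~0.2: far from significant at this exposure
```

Even with a genuine 30 percent reduction, a million miles per fleet leaves the test badly underpowered, which is why near-miss rates (far more frequent events) would likely carry most of the statistical weight in such a study.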

Figures

Figures reproduced from arXiv:2604.03325 by Alois Knoll, Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen.

Figure 1: Asymmetry in the safety consequences of perception errors: The red …
Figure 2: USC applies perspective-view (PV) and bird’s-eye-view (BEV) …
Figure 3: IoU and EC-IoU evaluation (with different …
Figure 4: Benchmarking results of representative 3D object detectors on …
Figure 5: Qualitative comparison of PETR fine-tuned with the standard loss (red) …
Figure 6: TUMTraf cooperative perception benchmark [8].
Figure 7: Top: vehicle-only lidar-based model. Bottom: cooperative lidar-based model. The cooperative model reduces false positives, but slightly overestimates the lead vehicle’s longitudinal distance from the ego perspective. In the bird’s-eye view, the roadside lidar point cloud is shown in red, while the ego vehicle appears on the right in white and is moving leftward. We also exclude empty cases when averaging …
Figure 8: SparseDrive architecture [10], featuring sparse query-based perception …
Figure 9: Qualitative comparison of baseline and EC-IoU-optimized SparseDrive …
read the original abstract

Perception plays a central role in connected and autonomous vehicles (CAVs), underpinning not only conventional modular driving stacks, but also cooperative perception systems and recent end-to-end driving models. While deep learning has greatly improved perception performance, its statistical nature makes perfect predictions difficult to attain. Meanwhile, standard training objectives and evaluation benchmarks treat all perception errors equally, even though only a subset is safety-critical. In this paper, we investigate safety-aligned evaluation and optimization for 3D object detection that explicitly characterize high-impact errors. Building on our previously proposed safety-oriented metric, NDS-USC, and safety-aware loss function, EC-IoU, we make three contributions. First, we present an expanded study of single-vehicle 3D object detection models across diverse neural network architectures and sensing modalities, showing that gains under standard metrics such as mAP and NDS may not translate to safety-oriented criteria represented by NDS-USC. With EC-IoU, we reaffirm the benefit of safety-aware fine-tuning for improving safety-critical detection performance. Second, we conduct an ego-centric, safety-oriented evaluation of AV-infrastructure cooperative object detection models, underscoring its superiority over vehicle-only models and demonstrating a safety impact analysis that illustrates the potential contribution of cooperative models to "Vision Zero." Third, we integrate EC-IoU into SparseDrive and show that safety-aware perception hardening can reduce collision rate by nearly 30% and improve system-level safety directly in an end-to-end perception-to-planning framework. Overall, our results indicate that safety-aligned perception evaluation and optimization offer a practical path toward enhancing CAV safety across single-vehicle, cooperative, and end-to-end autonomy settings.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper investigates safety-aligned evaluation and optimization for 3D object detection in CAVs. Building on the prior NDS-USC metric and EC-IoU loss, it presents an expanded study of single-vehicle detectors across architectures and modalities showing that mAP/NDS gains may not translate to safety criteria; conducts an ego-centric safety evaluation of cooperative models, highlighting their superiority over vehicle-only baselines and their Vision Zero potential; and integrates EC-IoU into SparseDrive to achieve a nearly 30% collision-rate reduction in an end-to-end perception-to-planning loop. Overall it argues that safety-focused perception hardening offers a practical path to improved CAV safety across single-vehicle, cooperative, and end-to-end settings.

Significance. If the central results hold, the work provides concrete evidence that prioritizing safety-critical errors via tailored metrics and losses can directly improve system-level safety metrics such as collision rate. The multi-perspective analysis (single-vehicle, cooperative, end-to-end) and the demonstration of downstream planning benefits are notable strengths. The approach reuses previously introduced metrics on new settings without introducing new free parameters, which supports reproducibility.

major comments (2)
  1. [§5.3] End-to-end integration: The claim of a nearly 30% collision-rate reduction when integrating EC-IoU into SparseDrive is shown only inside a simulator. The manuscript provides no physical validation, edge-case testing, or analysis of sim-to-real transfer for the specific detection errors penalized by EC-IoU, which is load-bearing for the system-level safety conclusion.
  2. [§4–§5] Experimental protocol: Results report consistent directional improvements under NDS-USC and EC-IoU but omit error bars, full baseline implementation details, and complete hyper-parameter settings for the SparseDrive integration, making independent verification of the 30% figure difficult.
minor comments (2)
  1. [§2] Notation for NDS-USC and EC-IoU is introduced in the abstract and §2 but would benefit from a compact reminder table when first used in the cooperative and end-to-end sections.
  2. [§4] Figure captions for the cooperative perception results could explicitly state the number of infrastructure sensors and the exact scenario distribution used.

Simulated Authors' Rebuttal

2 responses · 1 unresolved

We appreciate the referee's detailed feedback on our manuscript. We have carefully considered each comment and provide point-by-point responses below. Where appropriate, we will revise the manuscript to address the concerns raised.

read point-by-point responses
  1. Referee: [§5.3] End-to-end integration: The claim of a nearly 30% collision-rate reduction when integrating EC-IoU into SparseDrive is shown only inside a simulator. The manuscript provides no physical validation, edge-case testing, or analysis of sim-to-real transfer for the specific detection errors penalized by EC-IoU, which is load-bearing for the system-level safety conclusion.

    Authors: We acknowledge that our end-to-end experiments are performed in a simulated environment, as is common in the field for safety-critical autonomous driving research. Physical validation on real vehicles would require extensive resources and safety protocols that are outside the scope of this academic study. However, we will add a dedicated discussion section on the limitations of simulation, including potential sim-to-real transfer challenges for the errors penalized by EC-IoU, and suggest directions for future real-world validation. We believe the simulator results still provide valuable evidence for the benefits of safety-aligned perception. revision: partial

  2. Referee: [§4–§5] Experimental protocol: Results report consistent directional improvements under NDS-USC and EC-IoU but omit error bars, full baseline implementation details, and complete hyper-parameter settings for the SparseDrive integration, making independent verification of the 30% figure difficult.

    Authors: We agree that including error bars, detailed implementation information, and hyper-parameter settings is essential for reproducibility. In the revised version, we will report standard deviations or error bars from multiple experimental runs where applicable, provide full details on baseline implementations, and include complete hyper-parameter configurations for the SparseDrive integration in the supplementary material or main text. revision: yes
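For the error-bar point specifically, a minimal sketch of the reporting format the referee asks for might look like the following. The per-seed collision rates are invented purely to show the shape of the output; the ~30% figure they produce is a placeholder, not a reproduction of the paper's result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-seed collision rates (%) from repeated simulator runs.
baseline = np.array([0.30, 0.27, 0.33, 0.29, 0.31])
ec_iou   = np.array([0.21, 0.23, 0.19, 0.22, 0.20])

def bootstrap_ci(x, n_boot=10_000, q=(2.5, 97.5)):
    """Percentile bootstrap CI for the mean of per-seed results."""
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return np.percentile(x[idx].mean(axis=1), q)

for name, runs in [("baseline", baseline), ("EC-IoU", ec_iou)]:
    lo, hi = bootstrap_ci(runs)
    print(f"{name}: {runs.mean():.3f} ± {runs.std(ddof=1):.3f} "
          f"(95% CI [{lo:.3f}, {hi:.3f}])")

reduction = 1 - ec_iou.mean() / baseline.mean()
print(f"relative reduction: {reduction:.0%}")
```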

standing simulated objections (unresolved)
  • Physical validation and sim-to-real transfer analysis for the end-to-end integration results

Circularity Check

0 steps flagged

Minor reliance on prior self-introduced metrics applied to new settings; no derivation reduces to fitted inputs or self-citation loops

full rationale

The paper's core contributions consist of empirical evaluations: applying the previously defined NDS-USC metric and EC-IoU loss (cited from the authors' earlier work) across single-vehicle, cooperative, and end-to-end settings, with the key result being an observed ~30% collision-rate drop when EC-IoU is integrated into SparseDrive inside a simulator. No equation or claim reduces by construction to its own inputs; the safety gains are measured outcomes rather than tautological predictions, and the self-citations serve only to reference the metric definitions rather than to justify the new experimental findings. This qualifies as a normal, low-level self-citation without circularity in the derivation chain.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review yields no explicit free parameters, axioms, or invented entities; the work rests on prior definitions of NDS-USC and EC-IoU whose internal assumptions are not restated here.

pith-pipeline@v0.9.0 · 5623 in / 1044 out tokens · 30277 ms · 2026-05-13T21:30:24.016929+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

43 extracted references · 43 canonical work pages · 1 internal anchor

  1. [1]

    Autonomous Mobility Everywhere,

    Pony.ai, “Autonomous Mobility Everywhere,” https://pony.ai/, 2025, [Online; accessed 31-May-2025]

  2. [2]

    The World’s Most Experienced Driver,

    Waymo, “The World’s Most Experienced Driver,” https://waymo.com/, 2025, [Online; accessed 31-May-2025]

  3. [3]

    Standing general order on crash reporting for automated driving systems,

    National Highway Traffic Safety Administration (NHTSA), “Standing general order on crash reporting for automated driving systems,” https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting, 2025, [Online; accessed 31-May-2025]

  4. [4]

    Robust physical-world attacks on deep learning visual classification,

    I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, F. Tramer, and D. Song, “Robust physical-world attacks on deep learning visual classification,” inIEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018

  5. [5]

    USC: Uncompromising spatial constraints for safety-oriented 3D object detectors in autonomous driving,

    B. H.-C. Liao, C.-H. Cheng, H. Esen, and A. Knoll, “USC: Uncompromising spatial constraints for safety-oriented 3D object detectors in autonomous driving,” in IEEE Intelligent Transportation Systems Conference (ITSC), 2024

  6. [6]

    EC-IoU: Orienting safety for object detectors via ego-centric intersection-over-union,

    ——, “EC-IoU: Orienting safety for object detectors via ego-centric intersection-over-union,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024

  7. [7]

    nuScenes: A multimodal dataset for autonomous driving,

    H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, “nuScenes: A multimodal dataset for autonomous driving,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

  8. [8]

    TUMTraf V2X cooperative perception dataset,

    W. Zimmer, G. A. Wardana, S. Sritharan, X. Zhou, R. Song, and A. C. Knoll, “TUMTraf V2X cooperative perception dataset,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024

  9. [9]

    Vision Zero—Implementing a policy for traffic safety,

    R. Johansson, “Vision Zero—Implementing a policy for traffic safety,” Safety Science, vol. 47, no. 6, pp. 826–831, 2009

  10. [10]

    SparseDrive: End-to-end autonomous driving via sparse scene representation,

    W. Sun, X. Lin, Y. Shi, C. Zhang, H. Wu, and S. Zheng, “SparseDrive: End-to-end autonomous driving via sparse scene representation,” in IEEE International Conference on Robotics and Automation (ICRA), 2025

  11. [11]

    Are we ready for autonomous driving? The KITTI vision benchmark suite,

    A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012

  12. [12]

    Learning to evaluate perception models using planner-centric metrics,

    J. Philion, A. Kar, and S. Fidler, “Learning to evaluate perception models using planner-centric metrics,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

  13. [13]

    Revisiting 3D object detection from an egocentric perspective,

    B. Deng, C. R. Qi, M. Najibi, T. Funkhouser, Y. Zhou, and D. Anguelov, “Revisiting 3D object detection from an egocentric perspective,” in Advances in Neural Information Processing Systems (NeurIPS), 2021

  14. [14]

    SHARD: Safety and human performance analysis for requirements in detection,

    K. T. Mori and S. Peters, “SHARD: Safety and human performance analysis for requirements in detection,” IEEE Transactions on Intelligent Vehicles, vol. 9, no. 1, pp. 3010–3021, 2024

  15. [15]

    UnitBox: An advanced object detection network,

    J. Yu, Y. Jiang, Z. Wang, Z. Cao, and T. Huang, “UnitBox: An advanced object detection network,” in ACM International Conference on Multimedia (MM), 2016

  16. [16]

    Generalized Intersection over Union: A metric and a loss for bounding box regression,

    H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, “Generalized Intersection over Union: A metric and a loss for bounding box regression,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019

  17. [17]

    Focal and efficient IoU loss for accurate bounding box regression,

    Y.-F. Zhang, W. Ren, Z. Zhang, Z. Jia, L. Wang, and T. Tan, “Focal and efficient IoU loss for accurate bounding box regression,” Neurocomputing, vol. 506, no. C, pp. 146–157, 2022

  18. [18]

    Alpha-IoU: A family of power intersection over union losses for bounding box regression,

    J. He, S. Erfani, X. Ma, J. Bailey, Y. Chi, and X.-S. Hua, “Alpha-IoU: A family of power intersection over union losses for bounding box regression,” in Advances in Neural Information Processing Systems (NeurIPS), 2021

  19. [19]

    Probabilistic and geometric depth: Detecting objects in perspective,

    T. Wang, X. Zhu, J. Pang, and D. Lin, “Probabilistic and geometric depth: Detecting objects in perspective,” in Conference on Robot Learning (CoRL), 2021

  20. [20]

    Safety-aware hardening of 3D object detection neural network systems,

    C.-H. Cheng, “Safety-aware hardening of 3D object detection neural network systems,” in Computer Safety, Reliability, and Security (SafeComp), 2020

  21. [21]

    A safety-adapted loss for pedestrian detection in automated driving,

    M. Lyssenko, P. Pimplikar, M. Bieshaar, F. Nozarian, and R. Triebel, “A safety-adapted loss for pedestrian detection in automated driving,” in IEEE International Conference on Robotics and Automation (ICRA), 2024

  22. [22]

    Vehicle-to-everything cooperative perception for autonomous driving,

    T. Huang, J. Liu, X. Zhou, D. C. Nguyen, M. R. Azghadi, Y. Xia, Q.-L. Han, and S. Sun, “Vehicle-to-everything cooperative perception for autonomous driving,” Proceedings of the IEEE, 2025

  23. [23]

    OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication,

    R. Xu, H. Xiang, X. Xia, X. Han, J. Li, and J. Ma, “OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2022, pp. 2583–2589

  24. [24]

    V2X-ViT: Vehicle-to-everything cooperative perception with vision transformer,

    R. Xu, H. Xiang, Z. Tu, X. Xia, M.-H. Yang, and J. Ma, “V2X-ViT: Vehicle-to-everything cooperative perception with vision transformer,” in Computer Vision – ECCV 2022, ser. Lecture Notes in Computer Science. Springer, 2022, pp. 107–124

  25. [25]

    DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3D object detection,

    H. Yu, Y. Luo, M. Shu, Y. Huo, Z. Yang, Y. Shi, Z. Guo, H. Li, X. Hu, J. Yuan, and Z. Nie, “DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3D object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 21361–21370

  26. [26]

    ALVINN: An autonomous land vehicle in a neural network,

    D. A. Pomerleau, “ALVINN: An autonomous land vehicle in a neural network,” in Advances in Neural Information Processing Systems (NeurIPS), 1988

  27. [27]

    End to End Learning for Self-Driving Cars

    M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016

  28. [28]

    Planning-oriented autonomous driving,

    Y. Hu, J. Yang, L. Chen, K. Li, C. Sima, X. Zhu, S. Chai, S. Du, T. Lin, W. Wang, L. Lu, X. Jia, Q. Liu, J. Dai, Y. Qiao, and H. Li, “Planning-oriented autonomous driving,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

  29. [29]

    A survey on vision-language-action models for autonomous driving,

    S. Jiang, Z. Huang, K. Qian, Z. Luo, T. Zhu, Y. Zhong, Y. Tang, M. Kong, Y. Wang, S. Jiao, H. Ye, Z. Sheng, X. Zhao, T. Wen, Z. Fu, S. Chen, K. Jiang, D. Yang, S. Choi, and L. Sun, “A survey on vision-language-action models for autonomous driving,” in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2025, pp. 4524–4536

  30. [30]

    FCOS3D: Fully convolutional one-stage monocular 3D object detection,

    T. Wang, X. Zhu, J. Pang, and D. Lin, “FCOS3D: Fully convolutional one-stage monocular 3D object detection,” in IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021

  31. [31]

    DETR3D: 3D object detection from multi-view images via 3D-to-2D queries,

    Y. Wang, V. Guizilini, T. Zhang, Y. Wang, H. Zhao, and J. Solomon, “DETR3D: 3D object detection from multi-view images via 3D-to-2D queries,” in Conference on Robot Learning (CoRL), 2021

  32. [32]

    PETR: Position embedding transformation for multi-view 3D object detection,

    Y. Liu, T. Wang, X. Zhang, and J. Sun, “PETR: Position embedding transformation for multi-view 3D object detection,” in European Conference on Computer Vision (ECCV), 2022

  33. [33]

    PointPillars: Fast encoders for object detection from point clouds,

    A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, “PointPillars: Fast encoders for object detection from point clouds,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019

  34. [34]

    SSN: Shape signature networks for multi-class object detection from point clouds,

    X. Zhu, Y. Ma, T. Wang, Y. Xu, J. Shi, and D. Lin, “SSN: Shape signature networks for multi-class object detection from point clouds,” in European Conference on Computer Vision (ECCV), 2020

  35. [35]

    Center-based 3D object detection and tracking,

    T. Yin, X. Zhou, and P. Krähenbühl, “Center-based 3D object detection and tracking,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021

  36. [36]

    BEVFusion: Multi-task multi-sensor fusion with unified bird’s-eye view representation,

    Z. Liu, H. Tang, A. Amini, X. Yang, H. Mao, D. Rus, and S. Han, “BEVFusion: Multi-task multi-sensor fusion with unified bird’s-eye view representation,” in IEEE International Conference on Robotics and Automation (ICRA), 2023

  37. [37]

    MMDetection3D: OpenMMLab next-generation platform for general 3D object detection,

    MMDetection3D Contributors, “MMDetection3D: OpenMMLab next-generation platform for general 3D object detection,” https://github.com/open-mmlab/mmdetection3d, 2020

  38. [38]

    Signalized intersection informational guide,

    Federal Highway Administration, “Signalized intersection informational guide,” U.S. Department of Transportation, Washington, DC, Tech. Rep. FHWA-SA-13-027, 2013

  39. [39]

    Automatische Dauerzählstellen auf Autobahnen und Bundesstraßen [Automatic continuous traffic-count stations on motorways and federal roads],

    Bundesanstalt für Straßenwesen (BASt), “Automatische Dauerzählstellen auf Autobahnen und Bundesstraßen,” https://www.bast.de/DE/Fachthemen/Verkehrstechnik/Dauerzaehlstellen/dauerzaehlstellen_node.html, 2025, [Online; accessed 17-June-2025]

  40. [40]

    Do autonomous vehicles outperform latest-generation human-driven vehicles? A comparison to Waymo’s auto liability insurance claims at 25.3 M miles

    L. Di Lillo, T. Gode, X. Zhou, J. Scanlon, R. Chen, and T. Victor, “Do autonomous vehicles outperform latest-generation human-driven vehicles? A comparison to Waymo’s auto liability insurance claims at 25.3 M miles,” 2024

  41. [41]

    Introduction to Linear Regression Analysis,

    D. C. Montgomery, E. A. Peck, and G. G. Vining, Introduction to Linear Regression Analysis, 6th ed. Hoboken, NJ: Wiley, 2021

  42. [42]

    Benchmarking robustness of 3D object detection to common corruptions in autonomous driving,

    Y. Dong, C. Kang, J. Zhang, Z. Zhu, Y. Wang, X. Yang, H. Su, X. Wei, and J. Zhu, “Benchmarking robustness of 3D object detection to common corruptions in autonomous driving,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

  43. [43]

    Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?

    N. Kalra and S. M. Paddock, “Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?” Transportation Research Part A: Policy and Practice, vol. 94, pp. 182–193, 2016