From Stealthy Data Fabrication to Unsafe Driving: Realistic Scenario Attacks on Collaborative Perception
Pith reviewed 2026-05-09 14:25 UTC · model grok-4.3
The pith
Subtle changes to object positions in shared vehicle perception data can trigger unsafe driving behaviors like sudden braking.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors introduce a stealthy data fabrication attack that manipulates the poses of existing objects in shared perception results while keeping the changes small enough to stay below detection thresholds. These perturbations propagate through downstream components, including object tracking and trajectory prediction, producing significant errors in predicted vehicle behaviors and ultimately unsafe driving commands. The authors also develop an online, scenario-aware framework that adjusts the attack strategy at runtime to fit dynamic traffic. Tests on the OPV2V and V2X-Real datasets show the attack induces detection errors with over 90 percent success and triggers safety-critical behaviors, such as unnecessary hard braking, in up to 50 percent of scenarios.
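The stealth constraint in this claim — perturb object poses but stay below a detection threshold — amounts to a projection step on the perturbation. A minimal sketch, where `max_shift` and `max_yaw` are hypothetical bounds rather than values from the paper:

```python
import math

def clip_pose_perturbation(dx, dy, dyaw, max_shift=0.5, max_yaw=math.radians(5)):
    """Project a candidate pose perturbation back inside a stealth budget:
    translation capped at max_shift meters, heading change at max_yaw radians.
    Both bounds are illustrative, not taken from the paper."""
    shift = math.hypot(dx, dy)
    if shift > max_shift:
        scale = max_shift / shift  # rescale translation onto the budget
        dx, dy = dx * scale, dy * scale
    dyaw = max(-max_yaw, min(max_yaw, dyaw))  # clamp heading change
    return dx, dy, dyaw
```

An attacker would apply a projection like this after each optimization step, so every fabricated pose stays within whatever residual a consistency check tolerates.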
What carries the argument
Stealthy pose manipulation of existing objects in shared collaborative perception data, which exploits propagation through tracking and prediction modules.
If this is right
- The attack reaches over 90 percent success in inducing detection errors on standard datasets.
- It causes safety-critical actions such as unnecessary hard braking in up to 50 percent of scenarios.
- It evades most existing state-of-the-art defenses.
- A new mitigation that checks localized safety-critical regions reaches 80 percent detection on the small perturbations, compared with 11 percent for prior methods.
Where Pith is reading between the lines
- Security testing for collaborative perception should evaluate the full pipeline from perception through planning rather than isolated detection accuracy.
- Defenses may need to prioritize monitoring in areas that directly affect immediate vehicle control decisions.
- Similar subtle perturbation techniques could apply to other systems that fuse shared sensor data for real-time control.
Load-bearing premise
Small pose errors stay below all practical detection thresholds and are amplified by standard object tracking and trajectory prediction in real dynamic traffic.
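A back-of-envelope model shows why such amplification is plausible: a tracker that differences consecutive positions turns a one-frame pose error into a velocity error, which a constant-velocity predictor then extrapolates. The frame interval, horizon, and smoothing window below are illustrative assumptions, not numbers from the paper:

```python
def forecast_error(eps, dt=0.1, horizon=3.0, smooth_window=1):
    """Toy propagation model: eps is a one-frame position error (meters),
    dt the frame interval (seconds), horizon the prediction lookahead, and
    smooth_window the number of frames over which velocity is averaged
    (a stand-in for a tracker's smoothing). Returns the resulting error
    in the forecast position."""
    v_err = eps / (dt * smooth_window)   # error in the velocity estimate
    return eps + v_err * horizon         # direct offset + extrapolated drift

# With no smoothing, a 0.5 m nudge becomes a 15.5 m forecast error at 3 s;
# averaging velocity over 10 frames attenuates it to 2.0 m.
```

The same toy model also exposes the premise's weak point: heavier smoothing attenuates the amplification, so the claim depends on how much smoothing deployed pipelines actually apply.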
What would settle it
A physical experiment with real vehicles where the attack is applied and the resulting perception outputs are checked against improved anomaly detectors or logged vehicle control commands to see whether safety-critical maneuvers occur.
Original abstract
Collaborative perception allows connected and autonomous vehicles (CAVs) to improve perception by sharing sensory data, but it also introduces security risks from manipulated inputs. Prior work shows that attackers can spoof or remove objects by fabricating shared data, yet the practicality of such attacks in real-world driving remains unclear. Existing attacks are often detectable or evaluated in manually constructed scenarios, leaving open whether they can induce safety-critical outcomes in dynamic environments. To bridge this gap, we present a stealthy, scenario-realistic data fabrication attack that induces unsafe driving behaviors through end-to-end system effects. Instead of creating large, easily detectable anomalies, our attack subtly manipulates the poses of existing objects in shared perception results, keeping perturbations below detection thresholds. These small errors are then propagated through downstream modules, including object tracking and trajectory prediction, leading to significant deviations in predicted behaviors and ultimately unsafe driving decisions. We further design an online, scenario-aware attack framework that adapts to dynamic traffic conditions and optimizes attack strategies at runtime. Experiments on OPV2V and V2X-Real demonstrate that the attack achieves over 90% success in inducing detection errors and triggers safety-critical behaviors, such as unnecessary hard braking, in up to 50% of scenarios, while largely evading state-of-the-art defenses. We also propose a mitigation that focuses on detecting anomalies in localized, safety-critical regions, achieving an 80% detection rate on the small pose perturbation compared to 11% for the best existing methods.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that a stealthy attack on collaborative perception in CAVs can be achieved by subtly perturbing the poses of existing objects in shared perception results (kept below detection thresholds), with these small errors propagating through object tracking and trajectory prediction to induce safety-critical behaviors such as unnecessary hard braking. An online, scenario-aware attack framework is introduced that adapts to dynamic traffic; evaluations on OPV2V and V2X-Real datasets report >90% success in inducing detection errors and up to 50% triggering of unsafe behaviors while evading SOTA defenses, plus a mitigation achieving 80% detection on small perturbations.
Significance. If the end-to-end propagation results hold under realistic conditions, the work would be significant for highlighting how small, scenario-realistic perturbations in collaborative perception can cascade to unsafe driving decisions, moving beyond isolated spoofing/removal attacks. The empirical evaluation on established datasets and the proposed localized-region mitigation provide practical value for CAV security research.
major comments (2)
- [Attack propagation description (Abstract and §3)] The central claim depends on small pose perturbations propagating through tracking and prediction to produce large trajectory deviations sufficient for safety-critical commands. However, no quantitative error-amplification analysis or ablation against standard smoothing mechanisms (e.g., Kalman filters with uncertainty handling) is provided, leaving open whether the reported >90% detection-error and 50% unsafe-behavior rates would hold when perturbations are attenuated by typical CAV pipelines.
- [Experiments and evaluation (Abstract and §5)] Quantitative results in the abstract and evaluation sections report >90% success and up to 50% unsafe-behavior triggering without error bars, exact perturbation magnitudes, full baseline comparisons, or controls for dynamic traffic variability; this weakens assessment of whether the attack reliably exceeds detection thresholds across the tested datasets.
minor comments (2)
- [Mitigation section] The mitigation strategy is described at a high level; adding implementation pseudocode or pseudocode for the anomaly detection in safety-critical regions would improve reproducibility.
- [Preliminaries] Notation for pose perturbation magnitude and success-rate metrics could be standardized earlier to aid readability.
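In the spirit of the pseudocode request above, a hypothetical sketch of a localized safety-critical-region check: compare each shared object inside a radius of the ego vehicle against a locally verified pose and flag large residuals. The field names, radius, and residual threshold are all assumptions, not the paper's algorithm:

```python
import math

def flag_anomalies(objects, ego_xy=(0.0, 0.0), radius=20.0, max_residual=0.3):
    """Return ids of shared objects within `radius` meters of the ego whose
    reported pose disagrees with the ego's own local estimate by more than
    `max_residual` meters. Objects outside the region are skipped, focusing
    scrutiny where errors affect immediate control decisions."""
    flagged = []
    for obj in objects:
        if math.hypot(obj["x"] - ego_xy[0], obj["y"] - ego_xy[1]) > radius:
            continue  # outside the safety-critical region
        residual = math.hypot(obj["x"] - obj["local_x"],
                              obj["y"] - obj["local_y"])
        if residual > max_residual:
            flagged.append(obj["id"])
    return flagged
```

Restricting the check to a small region is what makes a tight residual threshold affordable: tolerating 0.3 m everywhere would drown in false positives, but near the ego it targets exactly the perturbations the attack relies on.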
Simulated Author's Rebuttal
We thank the referee for the detailed and constructive comments on our manuscript. We are pleased that the referee recognizes the potential significance of demonstrating how small perturbations in collaborative perception can lead to safety-critical outcomes in CAVs. Below, we provide point-by-point responses to the major comments and outline the revisions we will make to address them.
Point-by-point responses
Referee: [Attack propagation description (Abstract and §3)] The central claim depends on small pose perturbations propagating through tracking and prediction to produce large trajectory deviations sufficient for safety-critical commands. However, no quantitative error-amplification analysis or ablation against standard smoothing mechanisms (e.g., Kalman filters with uncertainty handling) is provided, leaving open whether the reported >90% detection-error and 50% unsafe-behavior rates would hold when perturbations are attenuated by typical CAV pipelines.
Authors: We agree that an explicit quantitative analysis of how errors amplify through the tracking and prediction modules would provide stronger support for our claims. Our current evaluation demonstrates the end-to-end impact on real-world datasets (OPV2V and V2X-Real), where the perturbations lead to the reported detection errors and unsafe behaviors despite the presence of standard pipeline components. However, to address this concern directly, we will include in the revised manuscript a quantitative error-amplification study, including an ablation analysis that compares results with and without smoothing mechanisms such as Kalman filters. This will clarify the conditions under which small perturbations can still propagate to large effects. revision: yes
Referee: [Experiments and evaluation (Abstract and §5)] Quantitative results in the abstract and evaluation sections report >90% success and up to 50% unsafe-behavior triggering without error bars, exact perturbation magnitudes, full baseline comparisons, or controls for dynamic traffic variability; this weakens assessment of whether the attack reliably exceeds detection thresholds across the tested datasets.
Authors: We acknowledge the need for more rigorous statistical reporting and additional details in the experimental section. The results are derived from extensive testing across multiple scenarios in the datasets, but we will enhance the revised manuscript by adding error bars (e.g., standard deviations over multiple runs), specifying the exact perturbation magnitudes used in the attacks, providing more comprehensive baseline comparisons, and including additional controls and analysis for dynamic traffic variability such as different traffic densities and vehicle speeds. These additions will better substantiate the reliability of the attack success rates. revision: yes
Circularity Check
No circularity: purely empirical attack design and evaluation
full rationale
The paper describes an attack that subtly perturbs object poses in shared perception data, then evaluates end-to-end effects on tracking, prediction, and planning modules using external datasets (OPV2V, V2X-Real) and existing defenses. No equations, derivations, fitted parameters renamed as predictions, or self-citation chains appear in the provided text. The propagation claim is presented as an observed outcome of experiments rather than a mathematical reduction to the attack inputs. The central results (success rates, evasion) are measured against independent benchmarks, satisfying the criteria for a self-contained empirical study.
Axiom & Free-Parameter Ledger
free parameters (1)
- pose perturbation magnitude
axioms (1)
- domain assumption: Small pose errors in shared perception propagate through tracking and prediction to alter driving decisions