Pith · machine review for the scientific record

arXiv: 2605.02900 · v1 · submitted 2026-03-28 · 💻 cs.CR · cs.AI · cs.CV · cs.RO

Recognition: 1 theorem link

· Lean Theorem

Safety in Embodied AI: A Survey of Risks, Attacks, and Defenses

Bo Li, Cong Wang, Hanxun Huang, James Bailey, Jianping Wang, Jiayu Li, Jingjing Chen, Jun Sun, Ming Wen, Qi Zhang, Sarah Erfani, Tao Gui, Tiehua Zhang, Wei-Ying Ma, Xiang Zheng, Xiao Li, Xingjun Ma, Xin Wang, Xinyu Xia, Xipeng Qiu, Xuanjing Huang, Xun Gong, Ye Sun, Yifeng Gao, Yige Li, Yi Liu, Yixin Cao, Yixu Wang, Yu-Gang Jiang, Yunhan Zhao, Yutao Wu, Zhineng Chen, Zhipeng Wei, Zuxuan Wu

Authors on Pith no claims yet

Pith reviewed 2026-05-14 22:17 UTC · model grok-4.3

classification 💻 cs.CR · cs.AI · cs.CV · cs.RO
keywords embodied AI · safety risks · adversarial attacks · jailbreak attacks · defense mechanisms · multi-level taxonomy · human-robot interaction · robust planning

The pith

Embodied AI requires safety measures that span the full pipeline from perception and planning to physical action and human interaction.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This survey establishes a structured overview of safety issues in embodied AI by reviewing attacks and defenses throughout the agent's pipeline. Embodied systems integrate perception, cognition, planning, and action in physical environments where, unlike purely digital AI, failures can cause direct physical harm. The review covers adversarial, backdoor, jailbreak, and hardware attacks together with defenses such as detection, safe training, and robust inference. It connects these findings to advances in vision-language models and identifies specific overlooked problems, including fragile multimodal fusion and unstable planning under attack.

Core claim

The paper claims that safety research in embodied AI can be systematically organized through a multi-level taxonomy covering the full pipeline from perception and cognition to planning, action, interaction, and agentic systems. Synthesizing insights from over 400 papers on risks and defenses, the taxonomy also surfaces open challenges: the fragility of multimodal perception fusion, the instability of planning under jailbreak attacks, and the trustworthiness of human-agent interaction in open scenarios.

What carries the argument

A multi-level taxonomy that organizes attacks, defenses, and risks across perception, cognition, planning, action, interaction, and agentic systems while linking them to multimodal foundation models.
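The shape of such a taxonomy can be made concrete with a small sketch. The six level names come from the abstract; the attack and defense labels placed under each level are hypothetical examples for illustration only, not the paper's actual assignments:

```python
# Illustrative sketch of a multi-level embodied-AI safety taxonomy as a nested
# mapping. Level names are taken from the survey's abstract; the entries under
# each level are hypothetical placements, not the paper's actual assignments.
TAXONOMY = {
    "perception": {"attacks": ["adversarial patches", "sensor spoofing"],
                   "defenses": ["attack detection", "robust fusion"]},
    "cognition": {"attacks": ["backdoor triggers"],
                  "defenses": ["safe training"]},
    "planning": {"attacks": ["jailbreak prompts"],
                 "defenses": ["robust inference"]},
    "action": {"attacks": ["command hijacking"],
               "defenses": ["runtime constraint enforcement"]},
    "interaction": {"attacks": ["deceptive human input"],
                    "defenses": ["risk-aware interaction"]},
    "agentic systems": {"attacks": ["cascading multi-agent failures"],
                        "defenses": ["multi-agent monitoring"]},
}

def levels_affected(attack: str) -> list[str]:
    """Return every pipeline level whose attack list mentions `attack`."""
    return [level for level, cats in TAXONOMY.items()
            if attack in cats["attacks"]]
```

One payoff of organizing the field this way is that cross-cutting questions ("which pipeline stages does this attack class touch?") become simple lookups over the structure.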

Load-bearing premise

The assumption that the selection and synthesis of over 400 papers together with the proposed taxonomy accurately captures the current state of the field without major omissions.

What would settle it

A follow-up survey that identifies substantial embodied AI safety risks, attack types, or defense categories omitted from this taxonomy would show the framework does not fully unify the field.

read the original abstract

Embodied Artificial Intelligence (Embodied AI) integrates perception, cognition, planning, and interaction into agents that operate in open-world, safety-critical environments. As these systems gain autonomy and enter domains such as transportation, healthcare, and industrial or assistive robotics, ensuring their safety becomes both technically challenging and socially indispensable. Unlike digital AI systems, embodied agents must act under uncertain sensing, incomplete knowledge, and dynamic human-robot interactions, where failures can directly lead to physical harm. This survey provides a comprehensive and structured review of safety research in embodied AI, examining attacks and defenses across the full embodied pipeline, from perception and cognition to planning, action and interaction, and agentic system. We introduce a multi-level taxonomy that unifies fragmented lines of work and connects embodied-specific safety findings with broader advances in vision, language, and multimodal foundation models. Our review synthesizes insights from over 400 papers spanning adversarial, backdoor, jailbreak, and hardware-level attacks; attack detection, safe training and robust inference; and risk-aware human-agent interaction. This analysis reveals several overlooked challenges, including the fragility of multimodal perception fusion, the instability of planning under jailbreak attacks, and the trustworthiness of human-agent interaction in open-ended scenarios. By organizing the field into a coherent framework and identifying critical research gaps, this survey provides a roadmap for building embodied agents that are not only capable and autonomous but also safe, robust, and reliable in real-world deployment.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript is a survey reviewing safety research in Embodied AI. It examines risks, attacks (adversarial, backdoor, jailbreak, hardware-level), and defenses across the full embodied pipeline from perception and cognition through planning, action, interaction, and agentic systems. The central contributions are a multi-level taxonomy unifying fragmented research lines and the identification of overlooked challenges such as multimodal fusion fragility and planning instability under attacks, synthesized from over 400 papers.

Significance. If the paper selection is systematic and the taxonomy is well-justified, the survey would provide a valuable organizing framework that connects embodied AI safety findings to advances in vision, language, and multimodal models. This could serve as a useful roadmap for researchers working on safe deployment in domains like robotics and autonomous systems.

major comments (2)
  1. [Introduction / Literature Review] The claim of synthesizing 'over 400 papers' is load-bearing for the comprehensiveness assertion, yet the manuscript lacks an explicit methods subsection detailing search strategy, databases, keywords, time range, and inclusion criteria. Without this, it is impossible to assess selection bias or reproducibility of the review process.
  2. [Taxonomy Definition Section] The multi-level taxonomy is presented as unifying previously fragmented lines, but the paper does not include a direct comparison table or discussion against prior taxonomies from related surveys on AI safety or robotics safety. This weakens the novelty claim for the taxonomy structure.
minor comments (2)
  1. [Abstract] Clarify the exact count of papers reviewed and the publication years covered to allow readers to gauge recency of the synthesis.
  2. [Challenges and Future Directions] The listed challenges (e.g., multimodal perception fusion fragility) would benefit from one or two concrete paper citations or brief examples in the main text to make the gaps more specific and actionable.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback and the recommendation for minor revision. The comments help strengthen the methodological transparency and positioning of our contributions. We address each major point below and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Introduction / Literature Review] The claim of synthesizing 'over 400 papers' is load-bearing for the comprehensiveness assertion, yet the manuscript lacks an explicit methods subsection detailing search strategy, databases, keywords, time range, and inclusion criteria. Without this, it is impossible to assess selection bias or reproducibility of the review process.

    Authors: We agree that an explicit methods subsection would improve reproducibility and allow better assessment of selection bias. In the revised version, we will insert a dedicated 'Review Methodology' subsection early in the Introduction. It will specify the databases (Google Scholar, arXiv, IEEE Xplore, ACM Digital Library), search keywords (combinations of 'embodied AI', 'robotics safety', 'adversarial attacks on perception', 'jailbreak planning', etc.), time range (2015–2024), and inclusion criteria (peer-reviewed papers plus high-impact preprints directly addressing embodied safety risks, attacks, or defenses). This addition will make the synthesis of over 400 papers fully transparent. revision: yes

  2. Referee: [Taxonomy Definition Section] The multi-level taxonomy is presented as unifying previously fragmented lines, but the paper does not include a direct comparison table or discussion against prior taxonomies from related surveys on AI safety or robotics safety. This weakens the novelty claim for the taxonomy structure.

    Authors: We acknowledge the value of explicit comparison. While our multi-level taxonomy is novel in its integration of the full embodied pipeline (perception through agentic systems) and its linkage to multimodal foundation-model advances, we will add a comparison table in the Taxonomy section. The table will contrast our structure against representative prior taxonomies from AI safety and robotics safety surveys, with accompanying text that highlights embodied-specific dimensions (e.g., physical-action consequences and real-time human interaction) not covered elsewhere. This will clarify the incremental contribution without altering the core taxonomy. revision: yes
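The inclusion criteria promised in the first response could be sketched as a simple filter. The keyword list and 2015–2024 time range mirror the rebuttal's description; the record schema (field names such as `peer_reviewed` and `high_impact_preprint`) is an assumption made for illustration:

```python
# Minimal sketch of the review-methodology filter promised in the rebuttal.
# The keywords and year range mirror the rebuttal's text; the record fields
# (peer_reviewed, high_impact_preprint, etc.) are an assumed schema.
KEYWORDS = {"embodied ai", "robotics safety",
            "adversarial attacks on perception", "jailbreak planning"}
YEAR_RANGE = (2015, 2024)

def include(record: dict) -> bool:
    """Apply the rebuttal's stated inclusion criteria to one candidate paper."""
    in_range = YEAR_RANGE[0] <= record["year"] <= YEAR_RANGE[1]
    on_topic = any(k in record["title"].lower() for k in KEYWORDS)
    vetted = record["peer_reviewed"] or record["high_impact_preprint"]
    return in_range and on_topic and vetted
```

Even a sketch at this level of detail makes the selection process auditable: a reader can ask which of the filter's three conjuncts excluded any given paper.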

Circularity Check

0 steps flagged

No significant circularity in this literature survey

full rationale

This manuscript is a literature survey that synthesizes findings from over 400 prior papers without presenting any original derivations, quantitative predictions, fitted parameters, or first-principles results. The multi-level taxonomy is an organizational framework constructed from the reviewed literature rather than a self-defined or fitted construct, and all claims rest on external citations to independent work. No load-bearing self-citation chains, ansatzes, or reductions of predictions to inputs by construction are present, making the paper self-contained as a review.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

As a survey the paper introduces no new free parameters or invented entities. Its claims rest on the domain assumption that the reviewed literature is representative.

axioms (1)
  • domain assumption The collection of over 400 papers is representative of embodied AI safety research without major selection bias
    The survey's ability to reveal overlooked challenges depends on the completeness and balance of the literature sample.

pith-pipeline@v0.9.0 · 5690 in / 1153 out tokens · 33789 ms · 2026-05-14T22:17:29.123090+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

300 extracted references · 300 canonical work pages · 10 internal anchors

  1. [1]

    Securing vision-based autonomous systems: A comprehensive taxonomy. Artificial Intelligence (AIJ), 2025

    Mohamed Abdelfattah et al. Securing vision-based autonomous systems: A comprehensive taxonomy. Artificial Intelligence (AIJ), 2025

  2. [2]

    Practical hidden voice attacks against speech and speaker recognition systems

    Hadi Abdullah, Washington Garcia, Christian Peeters, Patrick Traynor, Kevin RB Butler, and Joseph Wilson. Practical hidden voice attacks against speech and speaker recognition systems. In NDSS, 2019

  3. [3]

    Safe llm-controlled robots with formal guarantees via reachability analysis. arXiv preprint arXiv:2503.03911, 2025

    Abulikemu Abuduweili, Rahul Shrestha, Yue Hu, and Changliu Tian. Safe llm-controlled robots with formal guarantees via reachability analysis. arXiv preprint arXiv:2503.03911, 2025

  4. [4]

    Vision-only robot navigation in a neural radiance world. IEEE Robotics and Automation Letters (RA-L), 2021

    Michal Adamkiewicz, Timothy Chen, Adam Caccavale, Rachel Gardner, Preston Culbertson, Jeannette Bohg, and Mac Schwager. Vision-only robot navigation in a neural radiance world. IEEE Robotics and Automation Letters (RA-L), 2021

  5. [5]

    Cascading failures in agentic ai

    Adversa AI. Cascading failures in agentic ai. Adversa AI Research Blog, 2025

  6. [6]

    Do as i can, not as i say: Grounding language in robotic affordances

    Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. In CoRL, 2022

  7. [7]

    Generating safe and efficient task plans for robot agents with large language models

    Michael Ahn et al. Generating safe and efficient task plans for robot agents with large language models. In ICRA, 2024

  8. [8]

    Distributionally adaptive meta reinforcement learning

    Anurag Ajay, Abhishek Gupta, Dibya Ghosh, Sergey Levine, and Pulkit Agrawal. Distributionally adaptive meta reinforcement learning. In NeurIPS, 2022

  9. [9]

    The adolescence of technology

    Dario Amodei. The adolescence of technology. https://www.darioamodei.com/essay/the-adolescence-of-technology, 2025

  10. [10]

    Chips-message robust authentication (Chimera) for GPS civilian signals

    Jon M Anderson, Katherine L Carroll, Nathan P DeVilbiss, James T Gillis, Joanna C Hinks, Brady W O’Hanlon, Joseph J Rushanan, Logan Scott, and Renee A Yazdi. Chips-message robust authentication (Chimera) for GPS civilian signals. In GNSS+, 2017

  11. [11]

    Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments

    Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR, 2017

  12. [12]

    Moral anchor system: A predictive framework for AI value alignment and drift prevention. arXiv preprint arXiv:2510.04073, 2024

    Anonymous. Moral anchor system: A predictive framework for AI value alignment and drift prevention. arXiv preprint arXiv:2510.04073, 2024

  13. [13]

    Towards resistant and resilient ai. OpenReview Preprint, 2024

    Anonymous. Towards resistant and resilient ai. OpenReview Preprint, 2024

  14. [14]

    A-MEM: Agentic memory for LLM agents

    Anonymous. A-MEM: Agentic memory for LLM agents. In NeurIPS, 2025

  15. [15]

    Safe continual reinforcement learning methods for nonstationary environments. arXiv preprint arXiv:2601.05152, 2025

    Anonymous. Safe continual reinforcement learning methods for nonstationary environments. arXiv preprint arXiv:2601.05152, 2025

  16. [16]

    Self-improving embodied foundation models. https://self-improving-efms.github.io/, 2025

    Anonymous. Self-improving embodied foundation models. https://self-improving-efms.github.io/, 2025

  17. [17]

    Backdoors in DRL: Four environments focusing on in-distribution triggers, 2025

    Chace Ashcraft, Ted Staley, Josh Carney, Cameron Hickert, Kiran Karra, and Nathan Drenkow. Backdoors in DRL: Four environments focusing on in-distribution triggers, 2025

  18. [18]

    Multi-robot coordination with adversarial perception

    Rayan Bahrami and Hamidreza Jafarnejadsani. Multi-robot coordination with adversarial perception. In ICUAS, 2025

  19. [19]

    Multi-robot coordination with adversarial perception. arXiv preprint arXiv:2504.09047, 2025

    Rayan Bahrami and Hamidreza Jafarnejadsani. Multi-robot coordination with adversarial perception. arXiv preprint arXiv:2504.09047, 2025

  20. [20]

    RAT: Adversarial attacks on deep reinforcement agents for targeted behaviors

    Fengshuo Bai, Runze Liu, Yali Du, Ying Wen, and Yaodong Yang. RAT: Adversarial attacks on deep reinforcement agents for targeted behaviors. In AAAI, 2025

  21. [21]

    Universal closed-box adversarial attack for trajectory representation via controlling high-dimensional iterative constraints. IEEE Internet of Things Journal (IoT-J), 2025

    Guangyao Bai, Jie Li, Yucheng Shi, Lei Shi, Yufei Gao, Chenguang Fan, and Guanxi Chen. Universal closed-box adversarial attack for trajectory representation via controlling high-dimensional iterative constraints. IEEE Internet of Things Journal (IoT-J), 2025

  22. [22]

    Badnaver: Exploring jailbreak attacks on vision-and-language navigation. arXiv preprint arXiv:2505.12443, 2025

    Zijing Bai, Yuanlin Guo, Bingqian Chen, Teng Wang, Jing Zhang, and Feng Zheng. Badnaver: Exploring jailbreak attacks on vision-and-language navigation. arXiv preprint arXiv:2505.12443, 2025

  23. [23]

    CleanCLIP: Mitigating data poisoning attacks in multimodal contrastive learning

    Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, and Kai-Wei Chang. CleanCLIP: Mitigating data poisoning attacks in multimodal contrastive learning. In ICCV, 2023

  24. [24]

    The safety challenge of world models for embodied ai agents: A review. arXiv preprint arXiv:2510.05865, 2025

    Lorenzo Baraldi, Zifan Zeng, Chongzhe Zhang, Aradhana Nayak, Hongbo Zhu, Feng Liu, Qunli Zhang, Peng Wang, Shiming Liu, Zheng Hu, et al. The safety challenge of world models for embodied ai agents: A review. arXiv preprint arXiv:2510.05865, 2025

  25. [25]

    On minimizing adversarial counterfactual error in adversarial reinforcement learning

    Roman Belaire, Arunesh Sinha, and Pradeep Varakantham. On minimizing adversarial counterfactual error in adversarial reinforcement learning. In ICLR, 2024

  26. [26]

    Regret-based defense in adversarial reinforcement learning

    Roman Belaire, Pradeep Varakantham, Thanh Nguyen, and David Lo. Regret-based defense in adversarial reinforcement learning. In AAMAS, 2024

  27. [27]

    Optimizing human-robot handovers: The impact of adaptive transport methods

    Giovanni Belmonte et al. Optimizing human-robot handovers: The impact of adaptive transport methods. Robotics, 2023

  28. [28]

    International ai safety report 2025: Second key update — technical safeguards and risk management

    Yoshua Bengio et al. International ai safety report 2025: Second key update — technical safeguards and risk management. arXiv preprint arXiv:2511.19863, 2025

  29. [29]

    Hello me, meet the real me: Audio deepfake attacks on voice assistants. arXiv preprint arXiv:2302.10328, 2023

    Domna Bilika, Nikoletta Michopoulou, Efthimios Alepis, and Constantinos Patsakis. Hello me, meet the real me: Audio deepfake attacks on voice assistants. arXiv preprint arXiv:2302.10328, 2023

  30. [30]

    Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. π0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024

  31. [31]

    Securing the lane: Defences against patch attacks on autonomous vehicle’s lane detection

    Romana Blazevic, Alexander Toch, Omar Veledar, and Georg Macher. Securing the lane: Defences against patch attacks on autonomous vehicle’s lane detection. In EuroS&PW, 2025

  32. [32]

    The emergence of adversarial communication in multi-agent reinforcement learning

    Jan Blumenkamp and Amanda Prorok. The emergence of adversarial communication in multi-agent reinforcement learning. In CoRL, 2020

  33. [33]

    RT-2: Vision-language-action models transfer web knowledge to robotic control

    Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, et al. RT-2: Vision-language-action models transfer web knowledge to robotic control. In CoRL, 2023

  34. [34]

    Plug in the safety chip: Enforcing constraints for LLM-driven robot agents. Brown University Technical Report, 2024

    Brown University H2R Lab. Plug in the safety chip: Enforcing constraints for LLM-driven robot agents. Brown University Technical Report, 2024

  35. [35]

    Stochastic model predictive control with a safety guarantee for automated driving

    Jan Brüdigam, Dirk Wollherr, Marion Leibold, and Martin Buss. Stochastic model predictive control with a safety guarantee for automated driving. In IV, 2020

  36. [36]

    Semantically safe robot manipulation: From semantic scene understanding to motion safeguards

    Lukas Brunke, Yanni Zhang, Adrian Röfer, and Angela P. Schoellig. Semantically safe robot manipulation: From semantic scene understanding to motion safeguards. IEEE Robotics and Automation Letters (RA-L), 2024

  37. [37]

    Chai: Command hijacking against embodied ai. arXiv preprint arXiv:2510.00181, 2024

    Luis Burbano, Diego Ortiz, and Qi Sun. Chai: Command hijacking against embodied ai. arXiv preprint arXiv:2510.00181, 2024

  38. [38]

    Diffusion models-based purification for common corruptions on robust 3d object detection. Sensors, 2024

    Mumuxin Cai, Xupeng Wang, Ferdous Sohel, and Hang Lei. Diffusion models-based purification for common corruptions on robust 3d object detection. Sensors, 2024

  39. [39]

    Summit: A simulator for urban driving in massive mixed traffic

    Panpan Cai, Yiyuan Lee, Yuanfu Luo, and David Hsu. Summit: A simulator for urban driving in massive mixed traffic. In ICRA, 2019

  40. [40]

    BadVLA: Towards backdoor attacks on vision-language-action models via objective-decoupled optimization

    Yitao Cai et al. BadVLA: Towards backdoor attacks on vision-language-action models via objective-decoupled optimization. OpenReview, 2024

  41. [41]

    Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

    Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, and Bo Li. Adversarial objects against lidar-based autonomous driving systems. arXiv preprint arXiv:1907.05418, 2019

  42. [42]

    Invisible for both camera and lidar

    Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, and Bo Li. Invisible for both camera and lidar. In S&P, 2021

  43. [43]

    Advdo: Realistic adversarial attacks for trajectory prediction

    Yulong Cao, Chaowei Xiao, Anima Anandkumar, Danfei Xu, and Marco Pavone. Advdo: Realistic adversarial attacks for trajectory prediction. In ECCV, 2022

  44. [44]

    Hidden voice commands

    Nicholas Carlini, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David Wagner, and Wenchao Zhou. Hidden voice commands. In USENIX Security, 2016

  45. [45]

    HEAL: An empirical study on hallucinations in embodied agents driven by large language models

    Tathagata Chakraborty, Utsab Ghosh, Xiaoyu Zhang, Faysal F. Niloy, Yushun Dong, Jundong Li, Amit K. Roy-Chowdhury, and Chaoming Song. HEAL: An empirical study on hallucinations in embodied agents driven by large language models. In EMNLP, 2025

  46. [46]

    Adversarial attacks on monocular pose estimation

    Hemang Chawla, Arnav Varma, Elahe Arani, and Bahram Zonooz. Adversarial attacks on monocular pose estimation. In IROS, 2022

  47. [47]

    Adversary is on the road: Attacks on visual slam with robust perturbations on point clouds

    Baodong Chen, Wei Wang, Pascal Sikorski, and Ting Zhu. Adversary is on the road: Attacks on visual slam with robust perturbations on point clouds. In USENIX Security, 2024

  48. [48]

    Safety interventions against adversarial patches in an open-source driver assistance system

    Cheng Chen, Grant Xiao, Daehyun Lee, Lishan Yang, Evgenia Smirni, H. Alemzadeh, and Xugui Zhou. Safety interventions against adversarial patches in an open-source driver assistance system. In DSN, 2025

  49. [49]

    AgentSpec: Customizable runtime enforcement for safe and reliable LLM agents

    Haoyu Chen et al. AgentSpec: Customizable runtime enforcement for safe and reliable LLM agents. In ICSE, 2026

  50. [50]

    Lidattack: Robust black-box attack on lidar-based object detection

    Jinyin Chen, Danxin Liao, Yunjie Yan, Sheng Xiang, and Haibin Zheng. Lidattack: Robust black-box attack on lidar-based object detection. In ITSC, 2024

  51. [51]

    SafeMind: Benchmarking and mitigating safety risks in embodied LLM agents. arXiv preprint arXiv:2509.25885, 2025

    Ruolin Chen, Yinqian Sun, Jihang Wang, Mingyang Lv, Qian Zhang, and Yi Zeng. SafeMind: Benchmarking and mitigating safety risks in embodied LLM agents. arXiv preprint arXiv:2509.25885, 2025

  52. [52]

    Shapeshifter: Robust physical adversarial attack on faster r-cnn object detector

    Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng Chau. Shapeshifter: Robust physical adversarial attack on faster r-cnn object detector. In ECML PKDD, 2018

  53. [53]

    Metamorph: Injecting inaudible commands into over-the-air voice controlled systems

    Tao Chen, Longfei Shangguan, Zhenjiang Li, and Kyle Jamieson. Metamorph: Injecting inaudible commands into over-the-air voice controlled systems. In NDSS, 2020

  54. [54]

    Catnips: Collision avoidance through neural implicit probabilistic scenes. IEEE Transactions on Robotics (T-RO), 2023

    Timothy Chen, Preston Culbertson, and Mac Schwager. Catnips: Collision avoidance through neural implicit probabilistic scenes. IEEE Transactions on Robotics (T-RO), 2023

  55. [55]

    Splat-nav: Safe real-time robot navigation in gaussian splatting maps. IEEE Transactions on Robotics (T-RO), 2024

    Timothy Chen, Ola Shorinwa, Joseph Bruno, Aiden Swann, Javier Yu, Weijia Zeng, Keiko Nagami, Philip Dames, and Mac Schwager. Splat-nav: Safe real-time robot navigation in gaussian splatting maps. IEEE Transactions on Robotics (T-RO), 2024

  56. [56]

    Safer-splat: A control barrier function for safe navigation with online gaussian splatting maps

    Timothy Chen, Aiden Swann, Javier Yu, Ola Shorinwa, Riku Murai, Monroe Kennedy III, and Mac Schwager. Safer-splat: A control barrier function for safe navigation with online gaussian splatting maps. In ICRA, 2024

  57. [57]

    Metawave: Attacking mmWave sensing with meta-material-enhanced tags

    Xingyu Chen, Zhengxiong Li, Biacheng Chen, Yi Zhu, Chris Xiaoxuan Lu, Zhengyu Peng, Feng Lin, Wenyao Xu, Kui Ren, and Chunming Qiao. Metawave: Attacking mmWave sensing with meta-material-enhanced tags. In NDSS, 2023

  58. [58]

    Multi-object hallucination in vision language models

    Xuweiyi Chen, Ziqiao Jiang, Xuejun Liu, Yueqing Xu, and Derek Hoiem. Multi-object hallucination in vision language models. In NeurIPS, 2024

  59. [59]

    MARNet: Backdoor attacks against cooperative multi-agent reinforcement learning. IEEE Transactions on Dependable and Secure Computing (TDSC), 2023

    Yanjiao Chen, Zhicong Zheng, and Xueluan Gong. MARNet: Backdoor attacks against cooperative multi-agent reinforcement learning. IEEE Transactions on Dependable and Secure Computing (TDSC), 2023

  60. [60]

    Diffusion policy attacker: Crafting adversarial attacks for diffusion-based policies

    Yipu Chen, Haotian Xue, and Yongxin Chen. Diffusion policy attacker: Crafting adversarial attacks for diffusion-based policies. In NeurIPS, 2024

  61. [61]

    Revisiting adversarial perception attacks and defense methods on autonomous driving systems

    Yuxin Chen et al. Revisiting adversarial perception attacks and defense methods on autonomous driving systems. In DSN-W, 2025

  62. [62]

    Devil’s whisper: A general approach for physical adversarial attacks against commercial black-box speech recognition devices

    Yuxuan Chen, Xuejing Yuan, Jiangshan Zhang, Yue Zhao, Shengzhi Zhang, Kai Chen, and XiaoFeng Wang. Devil’s whisper: A general approach for physical adversarial attacks against commercial black-box speech recognition devices. In USENIX Security, 2020

  63. [63]

    Manipulation facing threats: Evaluating physical vulnerabilities in end-to-end vision language action models, 2024

    Hao Cheng, Erjia Xiao, Chengyuan Yu, Zhao Yao, Jiahang Cao, Qiang Zhang, Jiaxu Wang, Mengshu Sun, Kaidi Xu, Jindong Gu, and Renjing Xu. Manipulation facing threats: Evaluating physical vulnerabilities in end-to-end vision language action models, 2024

  64. [64]

    Attacking autonomous driving agents with adversarial machine learning. arXiv preprint arXiv:2511.14876, 2025

    Jun Cheng et al. Attacking autonomous driving agents with adversarial machine learning. arXiv preprint arXiv:2511.14876, 2025

  65. [65]

    Universal adversarial attack against 3d object tracking

    Riran Cheng, Nan Sang, Yinyuan Zhou, and Xupeng Wang. Universal adversarial attack against 3d object tracking. In HPCC, 2021

  66. [66]

    Black-box explainability-guided adversarial attack for 3d object tracking. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2025

    Riran Cheng, Xupeng Wang, Ferdous Sohel, and Hang Lei. Black-box explainability-guided adversarial attack for 3d object tracking. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2025

  67. [67]

    Physical attack on monocular depth estimation with optimal adversarial patches

    Zhiyuan Cheng, James Liang, Hongjun Choi, Guanhong Tao, Zhiwen Cao, Dongfang Liu, and Xiangyu Zhang. Physical attack on monocular depth estimation with optimal adversarial patches. In ECCV, 2022

  68. [68]

    Adopt: Lidar spoofing attack detection based on point-level temporal consistency

    Minkyoung Cho, Yulong Cao, Zixiang Zhou, and Z Morley Mao. Adopt: Lidar spoofing attack detection based on point-level temporal consistency. In BMVC, 2023

  69. [69]

    Sentinet: Detecting localized universal attacks against deep learning systems

    Edward Chou, Florian Tramer, and Giancarlo Pellegrino. Sentinet: Detecting localized universal attacks against deep learning systems. In SPW, 2018

  70. [70]

    Dynamic multi-robot task allocation under uncertainty and temporal constraints

    Shushman Choudhury, Jayesh K. Gupta, Mykel J. Kochenderfer, Dorsa Sadigh, and Jeannette Bohg. Dynamic multi-robot task allocation under uncertainty and temporal constraints. Autonomous Robots, 2020

  71. [71]

    Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2021

    Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2021

  72. [72]

    Unveiling the stealthy threat: Analyzing slow drift gps spoofing attacks for autonomous vehicles in urban environments and enabling the resilience. arXiv preprint arXiv:2401.01394, 2024

    Sagar Dasgupta, Abdullah Ahmed, Mizanur Rahman, and Thejesh N Bandi. Unveiling the stealthy threat: Analyzing slow drift gps spoofing attacks for autonomous vehicles in urban environments and enabling the resilience. arXiv preprint arXiv:2401.01394, 2024

  73. [73]

    Navsim: Data-driven non-reactive autonomous vehicle simulation and benchmarking

    Daniel Dauner, Marcel Hallgarten, Tianyu Li, Xinshuo Weng, Zhiyu Huang, Zetong Yang, Hongyang Li, Igor Gilitschenski, Boris Ivanovic, Marco Pavone, et al. Navsim: Data-driven non-reactive autonomous vehicle simulation and benchmarking. In NeurIPS, 2024

  74. [74]

    Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents

    Christian Schroeder de Witt. Open challenges in multi-agent security: Towards secure systems of interacting AI agents. arXiv preprint arXiv:2505.02077, 2025

  75. [75]

    Ai agents under threat: A survey of key security challenges and future pathways. ACM Computing Surveys (CSUR), 2024

    Zehang Deng, Yongjian Guo, Changzhou Han, Wanlun Ma, Junwu Xiong, Sheng Wen, and Yang Xiang. Ai agents under threat: A survey of key security challenges and future pathways. ACM Computing Surveys (CSUR), 2024

  76. [76]

    A novel human intention prediction approach based on fuzzy rules through wearable sensing in human–robot handover. Robotics, 2023

    Chaozheng Ding, Ying Liu, and Jing Zhao. A novel human intention prediction approach based on fuzzy rules through wearable sensing in human–robot handover. Robotics, 2023

  77. [77]

    Learning to collide: An adaptive safety-critical scenarios generating method

    Wenhao Ding, Baiming Chen, Minjun Xu, and Ding Zhao. Learning to collide: An adaptive safety-critical scenarios generating method. In IROS, 2020

  78. [78]

    Defending backdoor attacks on vision transformer via patch processing

    Khoa D. Doan, Yingjie Lao, Peng Yang, and Ping Li. Defending backdoor attacks on vision transformer via patch processing. In AAAI, 2022

  79. [79]

    Viewfool: Evaluating the robustness of visual recognition to adversarial viewpoints

    Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, and Jun Zhu. Viewfool: Evaluating the robustness of visual recognition to adversarial viewpoints. In NeurIPS, 2022

  80. [80]

    Carla: An open urban driving simulator

    Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In CoRL, 2017

Showing first 80 references.