pith. machine review for the scientific record.

arxiv: 2510.17687 · v2 · submitted 2025-10-20 · 💻 cs.CR · cs.AI

Recognition: unknown

CrossGuard: Safeguarding MLLMs against Joint-Modal Implicit Malicious Attacks

Authors on Pith: no claims yet
classification: 💻 cs.CR · cs.AI
keywords: implicit · attacks · crossguard · explicit · mllms · threats · across · existing
0 comments
read the original abstract

Multimodal Large Language Models (MLLMs) achieve strong reasoning and perception capabilities but are increasingly vulnerable to jailbreak attacks. While existing work focuses on explicit attacks, where malicious content resides in a single modality, recent studies reveal implicit attacks, in which benign text and image inputs jointly express unsafe intent. Such joint-modal threats are difficult to detect and remain underexplored, largely due to the scarcity of high-quality implicit data. We propose ImpForge, an automated red-teaming pipeline that leverages reinforcement learning with tailored reward modules to generate diverse implicit samples across 14 domains. Building on this dataset, we further develop CrossGuard, an intent-aware safeguard providing robust and comprehensive defense against both explicit and implicit threats. Extensive experiments across safe and unsafe benchmarks, implicit and explicit attacks, and multiple out-of-domain settings demonstrate that CrossGuard significantly outperforms existing defenses, including advanced MLLMs and guardrails, achieving stronger security while maintaining high utility. This offers a balanced and practical solution for enhancing MLLM robustness against real-world multimodal threats. Our code is released: https://github.com/ZhangXu0963/CrossGuard.
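The abstract describes an intent-aware safeguard that evaluates text and image inputs jointly, rather than filtering each modality in isolation. The sketch below is a minimal, hypothetical illustration of that joint-modal idea only; it is not the authors' CrossGuard code (see the linked repository), and the classifier, function names, and threshold are assumptions for illustration.

```python
# Hypothetical sketch of a joint-modal intent check (not the authors' CrossGuard
# implementation; see https://github.com/ZhangXu0963/CrossGuard for their code).
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuardVerdict:
    unsafe: bool
    reason: str

def joint_intent_guard(
    text: str,
    image_description: str,
    intent_classifier: Callable[[str], float],
    threshold: float = 0.5,
) -> GuardVerdict:
    """Score the combined intent of a text prompt and an image description.

    Unlike single-modality filters, the classifier sees both inputs together,
    so inputs that look benign in isolation but jointly express unsafe intent
    can still be flagged. `intent_classifier` is assumed to return the
    probability that the combined request is malicious (e.g., a guard model
    or a fine-tuned MLLM head).
    """
    combined = f"TEXT: {text}\nIMAGE: {image_description}"
    p_unsafe = intent_classifier(combined)
    if p_unsafe > threshold:
        return GuardVerdict(True, f"joint intent flagged (p={p_unsafe:.2f})")
    return GuardVerdict(False, f"no unsafe joint intent detected (p={p_unsafe:.2f})")

# Toy usage with a stand-in classifier (a real guard would use a trained model).
if __name__ == "__main__":
    fake_classifier = lambda s: 0.9 if "bypass" in s.lower() else 0.1
    print(joint_intent_guard("How do I bypass this?", "a photo of a door lock", fake_classifier))
```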

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A Systematic Investigation of The RL-Jailbreaker in LLMs

cs.LG · 2026-05 · unverdicted · novelty 5.0

    Dense rewards and extended episode lengths in the RL jailbreaking framework are the primary drivers of successful attacks on LLMs.