CaMeL protects LLM agents from prompt injection by separating the trusted control flow from untrusted data and enforcing capability policies on tool calls; on AgentDojo it achieves 77% task success with provable security, versus 84% for the undefended agent.
1 Pith paper cites this work (polarity classification is still indexing). Fields: cs.CR · Year: 2025 · Verdict: CONDITIONAL.
Defeating Prompt Injections by Design
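The mechanism summarized above, tracking the provenance of values and checking a capability policy before each tool call, can be sketched as a minimal toy. All names here (`Tainted`, `POLICY`, `call_tool`) are hypothetical illustrations, not CaMeL's actual API:

```python
# Toy sketch of capability enforcement on tool calls (hypothetical names,
# not CaMeL's real interface): every value carries its provenance, derived
# values inherit the union of their inputs' sources, and a policy check
# runs before each tool call so injected data cannot reach sensitive tools.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    value: object
    sources: frozenset  # provenance labels, e.g. {"user"} or {"email:attacker"}

def combine(a: Tainted, b: Tainted) -> Tainted:
    # A value derived from two inputs inherits both provenance sets.
    return Tainted((a.value, b.value), a.sources | b.sources)

# Policy: which provenance labels may flow into each tool's arguments.
POLICY = {
    "send_email": frozenset({"user"}),
    "read_calendar": frozenset({"user", "email:unknown"}),
}

class PolicyViolation(Exception):
    pass

def call_tool(name: str, arg: Tainted) -> str:
    allowed = POLICY.get(name, frozenset())
    if not arg.sources <= allowed:
        raise PolicyViolation(f"{name}: disallowed sources {sorted(arg.sources - allowed)}")
    return f"{name} called with {arg.value!r}"

user_req = Tainted("meeting notes", frozenset({"user"}))
injected = Tainted("attacker@evil.test", frozenset({"email:attacker"}))

print(call_tool("send_email", user_req))  # user-only data: allowed
try:
    # Mixing in email-derived data taints the argument; the call is blocked.
    call_tool("send_email", combine(user_req, injected))
except PolicyViolation as e:
    print("blocked:", e)
```

The key design point mirrored here is that the check is enforced by the interpreter at the tool boundary, independent of what the LLM outputs, which is what makes the guarantee hold even when the model itself is fooled by an injection.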