pith. machine review for the scientific record. sign in

arxiv: 2509.25926 · v2 · submitted 2025-09-30 · 💻 cs.CR · cs.LG

Recognition: unknown

Preventing Prompt Injection with Type-Directed Privilege Separation

Authors on Pith no claims yet
classification 💻 cs.CR cs.LG
keywords injectionpromptdatadefensesattackslanguagemethodprevent
0
0 comments X
read the original abstract

Modern language models have enabled the development of agentic systems that achieve strong performance on reasoning-intensive tasks. Unfortunately, this has come with a security cost; these systems are vulnerable to prompt injection, a specialized attack where an adversary subverts the intended functionality of an agent by supplying an injected task of their own. Previous approaches address this challenge with detectors and fine-tuning defenses but are vulnerable to adaptive attacks. Other methods propose system-level defenses that guarantee security, but these are often based on techniques that prevent inter-component communication and thus are constrained in problem coverage. To this end, we introduce type-directed privilege separation, a new technique that expands the set of tasks that can be protected with system-level defenses. Our method works by converting untrusted data to a curated set of data types; unlike raw strings, each data type is limited in scope and content, eliminating the possibility for prompt injection. We evaluate our method across several case studies and find that designs using our principles can systematically prevent prompt injection attacks while featuring strong, non-trivial utility. Our approach is intuitive to understand and compatible with any language model.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. An AI Agent Execution Environment to Safeguard User Data

    cs.CR 2026-04 unverdicted novelty 6.0

    GAAP guarantees confidentiality of private user data for AI agents by enforcing user-specified permissions deterministically through persistent information flow tracking, without trusting the agent or requiring attack...