pith. machine review for the scientific record.

ACE: A Security Architecture for LLM-Integrated App Systems

6 Pith papers cite this work. Polarity classification is still indexing.


fields: cs.CR (6)

years: 2026 (6)

verdicts: unverdicted (6)

representative citing papers

ARGUS: Defending LLM Agents Against Context-Aware Prompt Injection

cs.CR · 2026-05-05 · unverdicted · novelty 6.0

ARGUS defends LLM agents from context-aware prompt injections by tracking information provenance and verifying decisions against trustworthy evidence, reducing attack success to 3.8% while retaining 87.5% task utility.

An AI Agent Execution Environment to Safeguard User Data

cs.CR · 2026-04-21 · unverdicted · novelty 6.0

GAAP guarantees confidentiality of private user data for AI agents by enforcing user-specified permissions deterministically through persistent information flow tracking, without trusting the agent or requiring attack-free models.

Parallax: Why AI Agents That Think Must Never Act

cs.CR · 2026-04-14 · unverdicted · novelty 6.0

Parallax enforces structural separation between AI thinking and acting via independent multi-tier validation, information flow control, and state rollback, blocking 98.9% of 280 adversarial attacks with zero false positives even when the reasoning system is fully compromised.

ClawLess: A Security Model of AI Agents

cs.CR · 2026-04-07 · unverdicted · novelty 5.0

ClawLess introduces a formal fine-grained security model for AI agents with runtime-adaptive policies enforced via a user-space kernel and BPF syscall interception.

citing papers explorer

Showing 6 of 6 citing papers.

  • ClawGuard: Out-of-Band Detection of LLM Agent Workflow Hijacking via EM Side Channel cs.CR · 2026-05-07 · unverdicted · none · ref 26

    ClawGuard detects LLM agent workflow hijacking by capturing and classifying electromagnetic emanations from hardware with 0.9945 AUC, 100% true-positive rate, and 1.16% false-positive rate on a 7.82 TB RF dataset.

  • ARGUS: Defending LLM Agents Against Context-Aware Prompt Injection cs.CR · 2026-05-05 · unverdicted · none · ref 20

    ARGUS defends LLM agents from context-aware prompt injections by tracking information provenance and verifying decisions against trustworthy evidence, reducing attack success to 3.8% while retaining 87.5% task utility.

  • An AI Agent Execution Environment to Safeguard User Data cs.CR · 2026-04-21 · unverdicted · none · ref 32

    GAAP guarantees confidentiality of private user data for AI agents by enforcing user-specified permissions deterministically through persistent information flow tracking, without trusting the agent or requiring attack-free models.

  • Parallax: Why AI Agents That Think Must Never Act cs.CR · 2026-04-14 · unverdicted · none · ref 50

    Parallax enforces structural separation between AI thinking and acting via independent multi-tier validation, information flow control, and state rollback, blocking 98.9% of 280 adversarial attacks with zero false positives even when the reasoning system is fully compromised.

  • Engineering Robustness into Personal Agents with the AI Workflow Store cs.CR · 2026-05-11 · unverdicted · none · ref 30 · 2 links

    AI agents should shift from on-the-fly plan synthesis to invoking pre-engineered, tested, and reusable workflows stored in an AI Workflow Store to gain reliability and security.

  • ClawLess: A Security Model of AI Agents cs.CR · 2026-04-07 · unverdicted · none · ref 14

    ClawLess introduces a formal fine-grained security model for AI agents with runtime-adaptive policies enforced via a user-space kernel and BPF syscall interception.