pith. machine review for the scientific record.

arxiv: 1806.03287 · v2 · submitted 2018-06-08 · 📊 stat.ML · cs.CR · cs.LG

Recognition: unknown

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

Dan Boneh, Florian Tramèr

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.CR · cs.LG
keywords: untrusted, execution, computations, DNNs, Slalom, trusted, verifiable, delegates
Original abstract

As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6x to 20x increases in throughput for verifiable inference, and 4x to 11x for verifiable and private inference.
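The "efficient outsourcing scheme for matrix multiplication" the abstract builds on is Freivalds' probabilistic check: the enclave verifies an untrusted product A·B = C in O(n²) time instead of recomputing it in O(n³). A minimal sketch in plain Python (the function name and trial count are illustrative, not from the paper):

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C without recomputing it.

    Each trial draws a random 0/1 vector r and compares A(B r) with C r,
    which costs only three matrix-vector products. An incorrect C passes
    all trials with probability at most 2**-trials.
    """
    n = len(B[0])
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute B r, then A (B r): two O(n^2) matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(len(B))]
        ABr = [sum(A[i][k] * Br[k] for k in range(len(Br)))
               for i in range(len(A))]
        # Compute C r and compare.
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(len(C))]
        if ABr != Cr:
            return False  # the untrusted result is definitely wrong
    return True  # correct with probability >= 1 - 2**-trials

# Usage: a correct product passes; a tampered entry is caught w.h.p.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[20, 22], [43, 50]]
```

Slalom combines this check with precomputed blinding factors so the GPU also learns nothing about the layer inputs; the sketch above covers only the verifiability half.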

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 6 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Toward Web 4.0: Bidirectional Trust between AI Agents and Blockchain

    cs.CR 2026-05 accept novelty 7.0

    The paper delivers a systematization of knowledge on AI agent-blockchain interactions via a bidirectional trust framework, an Agent-Blockchain Interaction Model, a five-dimensional evaluation lens, and nine identified...

  2. PragLocker: Protecting Agent Intellectual Property in Untrusted Deployments via Non-Portable Prompts

    cs.CR 2026-05 unverdicted novelty 7.0

    PragLocker protects agent prompts as IP by building non-portable obfuscated versions that function only on the intended LLM through code-symbol semantic anchoring followed by target-model feedback noise injection.

  3. Agentic Witnessing: Pragmatic and Scalable TEE-Enabled Privacy-Preserving Auditing

    cs.CR 2026-04 unverdicted novelty 7.0

    Agentic Witnessing enables privacy-preserving auditing of semantic properties in private data by running an LLM auditor in a TEE that answers binary queries and produces cryptographic transcripts of its reasoning.

  4. Intelligence Delivery Network: Toward an Internet Architecture for the AI Age

    cs.NI 2026-05 unverdicted novelty 5.0

    IDN proposes treating AI intelligence as deliverable network services positioned dynamically across distributed compute environments to improve efficiency, latency, and privacy.

  5. When Agents Handle Secrets: A Survey of Confidential Computing for Agentic AI

    cs.CR 2026-05 unverdicted novelty 5.0

    A survey providing a taxonomy of TEE platforms, an agent-centric threat model, and open challenges for applying confidential computing to secure agentic AI systems.

  6. When Agents Handle Secrets: A Survey of Confidential Computing for Agentic AI

    cs.CR 2026-05 unverdicted novelty 4.0

    A structured survey of confidential computing for agentic AI that catalogs TEE platforms, agent-specific threats, transferable defenses, and remaining gaps in end-to-end frameworks.