SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents
Mahoney, Kurt Keutzer, and Amir Gholami

SkCC compiles LLM skills via SkIR to achieve portability across agent frameworks, reduce adaptation effort from O(m×n) to O(m+n), and enforce security, with reported gains in task success rates and token efficiency.

Citing papers

-
Slipstream: Trajectory-Grounded Compaction Validation for Long-Horizon Agents
Slipstream uses asynchronous compaction with trajectory-grounded judge validation to improve long-horizon agent accuracy by up to 8.8 percentage points and reduce latency by up to 39.7%.
-
PlanCompiler: A Deterministic Compilation Architecture for Structured Multi-Step LLM Pipelines
PlanCompiler uses a typed node registry, static validation, and deterministic compilation to reach 278/300 successes on structured LLM pipeline benchmarks, outperforming GPT-4.1 and Claude Sonnet baselines at lower cost.
-
SGLang: Efficient Execution of Structured Language Model Programs
SGLang is a new system that speeds up structured LLM programs by up to 6.4x using RadixAttention for KV cache reuse and compressed finite state machines for output decoding.
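To make the O(m×n) → O(m+n) claim in the SkCC summary concrete, here is a toy sketch of the idea: instead of one hand-written adapter per (skill, framework) pair, each skill lowers once to a shared intermediate representation and each framework consumes that IR. All names below (`SkillIR`, `lower_skill`, `emit_for_framework`) are illustrative stand-ins, not SkCC's real API.

```python
from dataclasses import dataclass

@dataclass
class SkillIR:
    """Stand-in for SkIR: a framework-neutral skill description."""
    name: str
    inputs: list
    steps: list

def lower_skill(name, inputs, steps):
    """One frontend per skill: skill -> IR (m of these)."""
    return SkillIR(name, inputs, steps)

def emit_for_framework(ir: SkillIR, framework: str):
    """One backend per framework: IR -> framework config (n of these)."""
    return {"framework": framework, "tool": ir.name, "args": ir.inputs}

skills = [lower_skill("web_search", ["query"], ["fetch", "rank"]),
          lower_skill("summarize", ["text"], ["chunk", "reduce"])]
frameworks = ["langchain", "autogen", "crewai"]

# m frontends + n backends cover all m*n combinations:
deployments = [emit_for_framework(ir, fw) for ir in skills for fw in frameworks]
print(len(deployments))  # 2 skills x 3 frameworks = 6 deployments from 2+3 components
```

The adaptation effort is the 2 + 3 = 5 components written by hand; the 6 deployments fall out of composition.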
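The Slipstream summary pairs compaction with a judge that checks the compacted history against the trajectory it summarizes. A minimal sketch of that validation loop, with a trivial substring check standing in for an LLM judge and the asynchronous scheduling omitted (all names here are assumptions, not Slipstream's code):

```python
def compact(trajectory):
    """Toy compactor: keep only tool results, drop reasoning steps."""
    return [step for step in trajectory if step.startswith("RESULT:")]

def judge_grounded(summary, trajectory, required_prefix="RESULT:"):
    """Trajectory-grounded check: every required fact must survive compaction."""
    required = [s for s in trajectory if s.startswith(required_prefix)]
    return all(fact in summary for fact in required)

trajectory = ["THOUGHT: need the file list", "RESULT: 3 files found",
              "THOUGHT: open the first", "RESULT: config.yaml parsed"]

candidate = compact(trajectory)
# Accept the compacted history only if the judge can ground it; otherwise
# fall back to the full trajectory rather than risk losing facts.
history = candidate if judge_grounded(candidate, trajectory) else trajectory
print(len(history))  # 2: compaction accepted, reasoning steps dropped
```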
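The PlanCompiler summary names a typed node registry with static validation. A toy version of that idea, where each node declares input/output types and a plan is rejected before execution if adjacent nodes disagree (the registry contents and function names are illustrative assumptions, not PlanCompiler's real API):

```python
# Each registered node declares the type it consumes and the type it produces.
REGISTRY = {
    "load_text": {"in": None,   "out": "str"},
    "split":     {"in": "str",  "out": "list"},
    "count":     {"in": "list", "out": "int"},
}

def validate(plan):
    """Static check: every node exists, and its input type matches the
    previous node's output type. Runs before anything executes."""
    prev_out = None
    for name in plan:
        node = REGISTRY.get(name)
        if node is None:
            raise ValueError(f"unknown node: {name}")
        if node["in"] != prev_out:
            raise TypeError(f"{name} expects {node['in']}, got {prev_out}")
        prev_out = node["out"]
    return True

print(validate(["load_text", "split", "count"]))  # True
try:
    validate(["load_text", "count"])              # 'count' needs a list, not a str
except TypeError as e:
    print("rejected:", e)
```

Catching the type mismatch at compile time, rather than mid-run, is what makes the resulting pipeline deterministic to execute.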
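The KV cache reuse behind RadixAttention, as described in the SGLang summary, can be illustrated with a trie over token sequences: a new request reuses the cache for its longest matching prefix and only recomputes the tail. This is a sketch of the idea only; SGLang's actual implementation manages KV-cache pages in a radix tree with eviction.

```python
class PrefixCache:
    """Toy prefix index: maps cached token sequences into a trie."""

    def __init__(self):
        self.root = {}

    def insert(self, tokens):
        node = self.root
        for t in tokens:
            node = node.setdefault(t, {})

    def longest_prefix(self, tokens):
        """Number of leading tokens whose KV cache can be reused."""
        node, n = self.root, 0
        for t in tokens:
            if t not in node:
                break
            node, n = node[t], n + 1
        return n

cache = PrefixCache()
cache.insert(["sys", "you", "are", "helpful", "Q1"])
# A second request sharing the system prompt recomputes only its tail.
print(cache.longest_prefix(["sys", "you", "are", "helpful", "Q2"]))  # 4
```

Programs with heavily shared prefixes (few-shot prompts, multi-turn chats, tree search) reuse most of their cache, which is where the reported speedups come from.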