pith. machine review for the scientific record.

arxiv: 2601.07667 · v2 · submitted 2026-01-12 · 💻 cs.CL · cs.AI · cs.LG

Recognition: unknown

Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference

Rei Taniguchi, Yuyang Dong, Makoto Onizuka, Chuan Xiao

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.AI · cs.LG
keywords: token · cache · tasks · inference · layer-wise · pruning · reduction · selection
Abstract

Due to the prevalence of large language models (LLMs), key-value (KV) cache reduction for LLM inference has received considerable attention. Among the numerous approaches proposed in recent years, layer-wise token pruning, which selects a subset of tokens at particular layers to retain in the KV cache and prunes the others, is one of the most popular schemes. These methods primarily adopt a set of pre-defined layers at which tokens are selected. Such a design is inflexible: accuracy varies significantly across tasks and deteriorates on harder tasks such as KV retrieval. In this paper, we propose ASL, a training-free method that adaptively chooses the selection layer for KV cache reduction by exploiting the variance of token ranks ordered by attention score. The proposed method balances performance across different tasks while meeting the user-specified KV budget. ASL operates during the prefilling stage and can be jointly used with existing KV cache reduction methods such as SnapKV to optimize the decoding stage. In evaluations on the InfiniteBench, RULER, and NIAH benchmarks, we show that ASL, equipped with one-shot token selection, adaptively trades inference speed for accuracy, outperforming state-of-the-art layer-wise token pruning methods on difficult tasks.
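The abstract gives enough detail for a rough sketch of the core mechanism. Below is a minimal, hypothetical Python illustration, not the authors' released code: it assumes access to per-layer attention weights during prefilling, and `token_ranks`, `choose_selection_layer`, `select_tokens`, and `var_threshold` are all illustrative names and knobs. The rank-stability criterion is one plausible reading of "variance of token ranks ordered by attention score".

```python
import torch

def token_ranks(attn: torch.Tensor) -> torch.Tensor:
    # attn: [num_heads, q_len, kv_len] attention weights at one layer.
    # Rank each KV token by its head- and query-averaged attention score
    # (rank 0 = most attended).
    scores = attn.mean(dim=(0, 1))              # [kv_len]
    order = scores.argsort(descending=True)     # token ids, best first
    ranks = torch.empty_like(order)
    ranks[order] = torch.arange(order.numel())
    return ranks

def choose_selection_layer(attn_per_layer, var_threshold=0.05):
    # Pick the first layer at which the token ranking has stabilized:
    # the mean squared change in normalized rank versus the previous
    # layer drops below var_threshold (a hypothetical knob).
    prev = None
    for layer, attn in enumerate(attn_per_layer):
        ranks = token_ranks(attn)
        if prev is not None:
            n = ranks.numel()
            drift = (((ranks - prev).float() / n) ** 2).mean().item()
            if drift < var_threshold:
                return layer
        prev = ranks
    return len(attn_per_layer) - 1              # fall back to the last layer

def select_tokens(attn: torch.Tensor, kv_budget: int) -> torch.Tensor:
    # One-shot token selection at the chosen layer: keep the kv_budget
    # most-attended tokens in the KV cache and prune the rest.
    scores = attn.mean(dim=(0, 1))
    return scores.topk(min(kv_budget, scores.numel())).indices

# Usage sketch with dummy prefill attention maps (24 layers, 8 heads,
# 16 queries over a 128-token context):
attns = [torch.rand(8, 16, 128).softmax(-1) for _ in range(24)]
layer = choose_selection_layer(attns)
keep = select_tokens(attns[layer], kv_budget=32)
```

In this reading, harder inputs, whose attention-induced orderings stabilize later, automatically push the selection layer deeper, trading prefill speed for accuracy while the kv_budget contract is still met.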

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. When Does Value-Aware KV Eviction Help? A Fixed-Contract Diagnostic for Non-Monotone Cache Compression

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    A fixed-contract probe shows value-aware KV eviction recovers needed evidence in 72.6% of accuracy-improving cases on LongBench but only 32.4% otherwise, suggesting an ordering: recover evidence, rank value, then prese...

  2. When Less Latent Leads to Better Relay: Information-Preserving Compression for Latent Multi-Agent LLM Collaboration

    cs.LG · 2026-04 · unverdicted · novelty 6.0

    Orthogonal Backfill compression for latent KV caches in multi-agent LLMs reduces communication by 79.8-89.4% while achieving comparable or superior performance to full relay on 7 of 9 benchmarks.