pith. machine review for the scientific record.

arxiv: 2510.13999 · v3 · submitted 2025-10-15 · 💻 cs.LG · cs.AI

Recognition: unknown

REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.AI
keywords: pruning · expert · compression · experts · merging · reap · activation · benchmarks
Abstract

Sparsely-activated Mixture-of-Experts (SMoE) models offer efficient pre-training and low latency, but their large parameter counts create significant memory overhead, motivating research into expert compression. Contrary to recent findings favouring expert merging on discriminative benchmarks, we find that expert pruning is a superior strategy for generative tasks. We demonstrate that existing merging techniques introduce an irreducible error due to the loss of fine-grained routing control over experts. Leveraging this insight, we propose Router-weighted Expert Activation Pruning (REAP), a novel pruning criterion that considers both router gate-values and expert activation norms to minimize the reconstruction error bound. Across a diverse set of SMoE models ranging from 20B to 1T parameters, REAP consistently outperforms merging and other pruning methods on generative benchmarks, especially at 50% compression. Notably, our method achieves near-lossless compression on code generation tasks with Qwen3-Coder-480B and Kimi-K2, even after pruning 50% of experts.
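As a concrete reading of the criterion, here is a minimal sketch assuming REAP's saliency is the mean, over calibration tokens routed to an expert, of the router gate-value times the expert's output norm; the exact weighting and the reconstruction-error bound are defined in the paper, and the tensor layout below is my own.

```python
import torch

def reap_saliency(gates: torch.Tensor, expert_outputs: torch.Tensor) -> torch.Tensor:
    """Router-weighted expert-activation saliency over a calibration set.

    gates:          (tokens, experts) router gate-values, zero wherever an
                    expert is outside a token's active top-k set.
    expert_outputs: (tokens, experts, d_model) expert outputs, zeroed for
                    inactive experts.
    Returns an (experts,) score vector; low-saliency experts are pruned first.
    """
    norms = expert_outputs.norm(dim=-1)            # ||f_j(x)||_2 per token, expert
    weighted = gates * norms                       # g_j(x) * ||f_j(x)||_2
    routed = (gates > 0).sum(dim=0).clamp(min=1)   # tokens routed to each expert
    return weighted.sum(dim=0) / routed

def experts_to_prune(saliency: torch.Tensor, ratio: float = 0.5) -> torch.Tensor:
    """Indices of the lowest-saliency experts at the given compression ratio."""
    k = int(ratio * saliency.numel())
    return torch.argsort(saliency)[:k]
```

At the 50% setting reported above, experts_to_prune would drop half of each layer's experts in one shot; routing then proceeds over the surviving experts, with no retraining implied by the abstract.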

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. HodgeCover: Higher-Order Topological Coverage Drives Compression of Sparse Mixture-of-Experts

    cs.LG · 2026-05 · unverdicted · novelty 8.0

    HodgeCover isolates the harmonic kernel of a simplicial Laplacian on an expert 2-complex to identify irreducible merge cycles and selects experts for aggressive compression, matching or exceeding baselines on open-wei... A generic harmonic-kernel sketch appears after this list.

  2. Star Elastic: Many-in-One Reasoning LLMs with Efficient Budget Control

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Star Elastic trains N nested submodels in a single post-training job on a parent reasoning LLM, supporting elastic budget control that matches or exceeds independent baselines while cutting training compute by up to 360x.

  3. Model Compression with Exact Budget Constraints via Riemannian Manifolds

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    The budget constraint in discrete model compression defines a Riemannian manifold, allowing exact-constraint first-order optimization via Riemannian Constrained Optimization (RCO) without extra hyperparameters. A generic exact-constraint sketch appears after this list.

  4. SlimQwen: Exploring the Pruning and Distillation in Large MoE Model Pre-training

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    Pruning pretrained MoE models outperforms training from scratch, different compression methods converge after continued pretraining, and combining KD with language modeling loss plus progressive schedules yields a com...
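On the HodgeCover entry (item 1): extracting the harmonic kernel of an order-1 Hodge Laplacian is standard computational topology, so that step can be sketched generically. The boundary matrices and their meaning on an "expert 2-complex" are the paper's; the code below is only a textbook construction.

```python
import numpy as np
from scipy.linalg import null_space

def harmonic_kernel(B1: np.ndarray, B2: np.ndarray) -> np.ndarray:
    """Harmonic space of the order-1 Hodge Laplacian L1 = B1^T B1 + B2 B2^T.

    B1: (nodes, edges) boundary matrix of the 2-complex.
    B2: (edges, triangles) boundary matrix.
    Columns of the result span ker(L1): edge flows that are both curl- and
    divergence-free, i.e. cycles not bounding any triangle -- the
    "irreducible" cycles in the blurb's sense.
    """
    L1 = B1.T @ B1 + B2 @ B2.T   # order-1 Hodge Laplacian
    return null_space(L1)        # orthonormal basis of the harmonic space
```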
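On the RCO entry (item 3): the property the blurb emphasizes is that every iterate satisfies the budget constraint exactly. A textbook illustration of that property is Riemannian gradient descent on a norm-budget sphere; this is a generic sketch, not the paper's algorithm or its manifold.

```python
import numpy as np

def riemannian_gd(grad_f, x0: np.ndarray, budget: float,
                  lr: float = 0.1, steps: int = 100) -> np.ndarray:
    """First-order descent on the sphere { x : ||x||_2 = budget }.

    Unlike penalty methods, the constraint is never violated: gradients are
    projected onto the tangent space at x, and each step is retracted back
    onto the sphere.
    """
    x = budget * x0 / np.linalg.norm(x0)       # start on the manifold
    for _ in range(steps):
        g = grad_f(x)
        g_tan = g - (g @ x) / budget**2 * x    # tangent-space projection
        x = x - lr * g_tan
        x = budget * x / np.linalg.norm(x)     # retraction onto the sphere
    return x
```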
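And on the SlimQwen entry (item 4): a common way to combine knowledge distillation with the language-modeling loss is a convex blend of a softened KL term and token-level cross-entropy. The alpha and tau below are illustrative, not the paper's values, and a "progressive schedule" in the blurb's sense would anneal alpha over training.

```python
import torch.nn.functional as F

def kd_lm_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=2.0):
    """alpha * soft-target KL (teacher vs. student, softened by tau)
    + (1 - alpha) * standard language-modeling cross-entropy."""
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau**2                                   # Hinton-style tau^2 rescaling
    lm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * kd + (1.0 - alpha) * lm
```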