Recognition: 2 theorem links · Lean theorem
DECO: Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices
Pith reviewed 2026-05-13 07:25 UTC · model grok-4.3
The pith
DECO sparse MoE matches dense Transformer performance while activating only 20% of experts.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
DECO achieves performance comparable to dense Transformers of the same total parameter count by activating only 20% of its experts through differentiable ReLU-based routing enhanced by learnable expert-wise scaling and the NormSiLU activation function, which stabilizes the routed-expert activation ratio and increases intrinsic sparsity. The architecture also benefits from using non-gated MLP experts.
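To anchor these terms, a hedged sketch of plausible forms (the specific normalization and the gated-expert baseline are illustrative assumptions, not formulas taken from the paper):

```latex
% Plausible reading of NormSiLU: normalize the pre-activation, then apply SiLU.
% RMS normalization is an assumption; the paper only states that inputs are
% normalized before the SiLU operator.
\mathrm{NormSiLU}(x) \;=\; \mathrm{SiLU}(\hat{x}) \;=\; \hat{x}\,\sigma(\hat{x}),
\qquad \hat{x} \;=\; \frac{x}{\mathrm{RMS}(x)}

% Generic gated vs. non-gated expert MLPs (standard formulations, not DECO-specific):
\text{gated:}\quad W_2\,\bigl(\mathrm{SiLU}(W_1 x)\odot W_3 x\bigr)
\qquad
\text{non-gated:}\quad W_2\,\phi(W_1 x)
```

Here $\phi$ is the expert activation; dropping the gating matrix $W_3$ is the kind of simplification the paper points to when it reports an empirical advantage for non-gated experts.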
What carries the argument
ReLU-based routing with learnable expert-wise scaling that adaptively balances routed and shared experts, together with the NormSiLU activation function for stable sparsity.
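A minimal sketch of how such a layer might be wired, assuming one shared expert and non-gated per-expert MLPs; every class name, shape, and the placement of NormSiLU is an illustrative guess, not the released implementation:

```python
# Illustrative sketch (assumed, not the paper's code): ReLU-routed MoE layer
# with learnable per-expert scaling and an always-on shared expert.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLURoutedMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Learnable expert-wise scaling: one scalar per routed expert,
        # balancing routed experts against the shared expert.
        self.expert_scale = nn.Parameter(torch.ones(n_experts))
        # Non-gated MLP experts: a plain up/down projection per expert.
        self.experts_up = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * 0.02)
        self.experts_down = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * 0.02)
        self.shared = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))

    def norm_silu(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed form of NormSiLU: RMS-normalize, then apply SiLU.
        x_hat = x / x.pow(2).mean(dim=-1, keepdim=True).clamp_min(1e-6).sqrt()
        return F.silu(x_hat)

    def forward(self, x: torch.Tensor):
        # ReLU routing: gates are exactly zero for inactive experts, so routing
        # stays differentiable and sparsity is learned rather than fixed top-k.
        gates = F.relu(self.router(x)) * self.expert_scale          # (batch, n_experts)
        hidden = self.norm_silu(torch.einsum("bd,edf->bef", x, self.experts_up))
        routed = torch.einsum("bef,efd->bed", hidden, self.experts_down)
        out = (gates.unsqueeze(-1) * routed).sum(dim=1) + self.shared(x)
        activation_ratio = (gates > 0).float().mean().item()        # fraction of active experts
        return out, activation_ratio

# Toy forward pass.
layer = ReLURoutedMoE(d_model=64, d_ff=128, n_experts=10)
y, ratio = layer(torch.randn(4, 64))
print(y.shape, f"active experts ≈ {ratio:.2f}")
```

The design point is that routing through ReLU instead of a top-k softmax lets the number of active experts be learned (and pushed toward the reported 20%) while remaining differentiable end to end.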
Load-bearing premise
The learned expert-wise scaling and NormSiLU will keep producing stable sparsity and performance matching dense models when model size, data distribution, or hardware change substantially.
What would settle it
Observe whether a DECO model trained at larger scale or on shifted data distributions maintains performance parity with its dense counterpart and keeps the 20% activation ratio stable.
Original abstract
While Mixture-of-Experts (MoE) scales model capacity without proportionally increasing computation, its massive total parameter footprint creates significant storage and memory-access bottlenecks, which hinder efficient end-side deployment that simultaneously requires high performance, low computational cost, and small storage overhead. To achieve these properties, we present DECO, a sparse MoE architecture designed to match the performance of dense Transformers under identical total parameter budgets and training tokens. DECO utilizes the differentiable and flexible ReLU-based routing enhanced by learnable expert-wise scaling, which adaptively balances the contributions of routed and shared experts. Furthermore, we introduce NormSiLU, an activation function that normalizes inputs prior to SiLU operators, producing a more stable trend of routed-expert activation ratio and a higher intrinsic sparsity level. We also identify an empirical advantage in using non-gated MLP experts with ReLU-based routing, indicating the possibility of MoE architecture simplification. Experiments demonstrate that DECO, activating only 20% of experts, matches dense performance and outperforms established MoE baselines. Our specialized acceleration kernel delivers a 3.00$\times$ speedup on real hardware compared with dense inference. Codes and checkpoints are all available at https://github.com/thunlp/DECO.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces DECO, a sparse Mixture-of-Experts architecture for end-side devices that matches dense Transformer performance under identical total parameter budgets and training tokens. It employs ReLU-based routing augmented by learnable expert-wise scaling factors to balance routed and shared experts, introduces the NormSiLU activation (input normalization before SiLU) to stabilize the routed-expert activation ratio at roughly 20%, and uses non-gated MLP experts. Experiments report that DECO outperforms established MoE baselines while a custom kernel achieves a 3.00× inference speedup on real hardware; code and checkpoints are released.
Significance. If the empirical performance match holds under the reported constraints, the result is significant for memory-constrained deployment of high-capacity models, as it decouples total parameters from active computation and storage overhead without requiring post-training compression. The open release of code, checkpoints, and hardware measurements strengthens reproducibility and enables direct verification of the claimed speedup.
major comments (2)
- [Experiments] Experiments section (and associated tables/figures): the central claim that DECO matches dense performance while activating only 20% of experts is presented without reported standard deviations, number of random seeds, or statistical significance tests for the accuracy comparisons. This makes it impossible to assess whether the match is robust or within variance of the dense baseline.
- [Method] Section describing NormSiLU and expert-wise scaling: the stabilization of the routed-expert activation ratio is asserted empirically, yet no scaling curves, ablations on expert count, or analysis under distribution shift are provided. The interaction between input normalization in NormSiLU and growing expert capacity therefore remains untested, which is load-bearing for the claim that the 20% sparsity level and dense-comparable accuracy will persist beyond the evaluated end-side model sizes.
minor comments (2)
- [Method] The abstract and method sections use “non-gated MLP experts” without an explicit equation or diagram contrasting them to standard gated experts; a short comparison equation would clarify the claimed simplification.
- [Experiments] Figure captions and table footnotes should explicitly state the exact model sizes, dataset, and token budget used for each dense vs. DECO comparison to allow immediate replication.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help improve the clarity and rigor of our work. We address each major comment point by point below and will revise the manuscript to incorporate the suggested additions.
Point-by-point responses
- Referee: [Experiments] Experiments section (and associated tables/figures): the central claim that DECO matches dense performance while activating only 20% of experts is presented without reported standard deviations, number of random seeds, or statistical significance tests for the accuracy comparisons. This makes it impossible to assess whether the match is robust or within variance of the dense baseline.
Authors: We agree that reporting variability and statistical tests is essential for robust evaluation. In the revised manuscript, we will explicitly state the number of random seeds (we used 3 seeds for all main experiments) and include standard deviations alongside mean accuracy values in Tables 1-3 and Figure 2. We will also add pairwise t-test results (with p-values) comparing DECO to the dense baseline to confirm that observed differences fall within expected variance. revision: yes
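A minimal sketch of the promised reporting, assuming the 3 seeds stated above; the accuracy values are hypothetical placeholders, not results from the paper:

```python
# Illustrative only: mean ± std over seeds and a paired t-test between
# DECO and the dense baseline. Accuracy values are placeholders.
import numpy as np
from scipy import stats

deco_acc  = np.array([71.2, 70.8, 71.5])   # hypothetical accuracies, 3 seeds
dense_acc = np.array([71.0, 71.3, 70.9])   # hypothetical dense baseline, same seeds

print(f"DECO : {deco_acc.mean():.2f} ± {deco_acc.std(ddof=1):.2f}")
print(f"Dense: {dense_acc.mean():.2f} ± {dense_acc.std(ddof=1):.2f}")

# Paired t-test across seeds; a large p-value is consistent with comparable
# performance, though it does not by itself establish equivalence.
res = stats.ttest_rel(deco_acc, dense_acc)
print(f"paired t = {res.statistic:.3f}, p = {res.pvalue:.3f}")
```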
- Referee: [Method] Section describing NormSiLU and expert-wise scaling: the stabilization of the routed-expert activation ratio is asserted empirically, yet no scaling curves, ablations on expert count, or analysis under distribution shift are provided. The interaction between input normalization in NormSiLU and growing expert capacity therefore remains untested, which is load-bearing for the claim that the 20% sparsity level and dense-comparable accuracy will persist beyond the evaluated end-side model sizes.
Authors: We acknowledge that additional analysis would better support the generalizability of NormSiLU. In the revision, we will add a new figure with scaling curves of the routed-expert activation ratio versus expert count (from 4 to 32 experts) and model size. We will also include an ablation table on expert count and a short analysis of activation ratio stability under distribution shift using a held-out validation set from a different domain. revision: yes
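A sketch of how the promised activation-ratio curve could be computed; the random logits below stand in for gate values that would come from trained checkpoints at each expert count:

```python
# Illustrative harness: per-token routed-expert activation ratio for several
# expert counts. Real gate values from trained checkpoints would replace the
# random stand-ins used here.
import torch
import torch.nn.functional as F

def activation_ratio(gate_logits: torch.Tensor) -> float:
    """Fraction of (token, expert) pairs with a strictly positive ReLU gate."""
    gates = F.relu(gate_logits)
    return (gates > 0).float().mean().item()

for n_experts in (4, 8, 16, 32):
    logits = torch.randn(1024, n_experts) - 0.8   # stand-in for router outputs
    print(f"{n_experts:>2} experts: activation ratio ≈ {activation_ratio(logits):.2f}")
```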
Circularity Check
No circularity; empirical results independent of inputs
Full rationale
The paper introduces DECO via ReLU routing with learnable expert-wise scaling and the NormSiLU activation, then validates the 20% activation claim through end-to-end training and hardware benchmarks on fixed model sizes and token budgets. No equations reduce a prediction to a fitted parameter by construction, no load-bearing premise rests on self-citation, and no ansatz or uniqueness result is smuggled in. The architecture choices and performance match are presented as design decisions confirmed experimentally rather than tautologically derived from the same quantities.
Axiom & Free-Parameter Ledger
free parameters (1)
- expert-wise scaling factors
axioms (1)
- domain assumption: standard dense Transformer pre-training assumptions hold for the MoE variant
invented entities (1)
- NormSiLU (no independent evidence)
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel (tag: unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "NormSiLU, an activation function that normalizes inputs prior to SiLU operators, producing a more stable trend of routed-expert activation ratio and a higher intrinsic sparsity level"
- IndisputableMonolith/Foundation/BranchSelection.lean, theorem branch_selection (tag: unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "ReLU-based routing enhanced by learnable expert-wise scaling, which adaptively balances the contributions of routed and shared experts"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.