MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance. arXiv preprint arXiv:2505.03804.
2 Pith papers cite this work; polarity classification is still indexing. Both citing papers are from 2026 and are currently unverdicted.
Citing papers
- Amortized-Precision Quantization for Early-Exit Vision Transformers
  Amortized-Precision Quantization (APQ) and the MAQEE bi-level framework jointly optimize bit-widths and exit thresholds for early-exit ViTs, cutting BOPs by up to 95% while maintaining accuracy across vision tasks (a worked BOPs example follows this list).
- GSQ: Highly-Accurate Low-Precision Scalar Quantization for LLMs via Gumbel-Softmax Sampling
  GSQ applies a Gumbel-Softmax relaxation to learn discrete grid assignments in scalar quantization, closing most of the accuracy gap to vector methods like QTIP on Llama-3.1 models at 2-3 bits while using only symmetric scalar grids (a Gumbel-Softmax sketch follows this list).
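To make the "up to 95% BOPs" figure concrete, here is a minimal Python sketch of the standard BOPs proxy (MACs x weight bits x activation bits) combined with an expected cost under early exits. The per-block MAC counts, bit-widths, and exit probabilities below are made up for illustration and are not taken from the APQ/MAQEE paper.

```python
def layer_bops(macs: float, w_bits: int, a_bits: int) -> float:
    """Common BOPs proxy for a layer: MACs x weight bits x activation bits."""
    return macs * w_bits * a_bits

def expected_bops(block_macs, block_bits, exit_probs):
    """Expected BOPs of an early-exit model: a sample leaving at exit k only pays
    for blocks 0..k. block_bits[i] = (w_bits, a_bits) for block i; exit_probs[k]
    is the assumed fraction of samples exiting after block k."""
    total = 0.0
    for k, p_exit in enumerate(exit_probs):
        cost_to_k = sum(layer_bops(m, wb, ab)
                        for m, (wb, ab) in zip(block_macs[:k + 1], block_bits[:k + 1]))
        total += p_exit * cost_to_k
    return total

# Toy example: 4 transformer blocks, ~100M MACs each (hypothetical numbers).
block_macs = [100e6] * 4
fp32_bits  = [(32, 32)] * 4
mixed_bits = [(4, 4), (4, 8), (8, 8), (8, 8)]   # hypothetical per-block bit-widths
exit_probs = [0.3, 0.3, 0.2, 0.2]               # hypothetical exit distribution

baseline = expected_bops(block_macs, fp32_bits, [0, 0, 0, 1.0])  # full precision, no early exit
apq_like = expected_bops(block_macs, mixed_bits, exit_probs)
print(f"BOPs reduction: {1 - apq_like / baseline:.1%}")
```

With these toy numbers the combination of low bit-widths and early exits already yields a reduction above 95%, which is the regime the paper's headline figure refers to.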
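And a minimal sketch of the general Gumbel-Softmax idea behind GSQ: relax each weight's choice of scalar grid point into a categorical distribution and train it against a reconstruction loss. The uniform symmetric grid, the plain per-tensor MSE objective, and all hyperparameters here are assumptions for illustration, not GSQ's actual procedure.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_scalar_quant(w, n_bits=2, tau=1.0, steps=200, lr=1e-2):
    """Toy sketch: learn per-weight assignments to a symmetric scalar grid
    with a Gumbel-Softmax relaxation (objective and grid are assumptions)."""
    n_levels = 2 ** n_bits
    scale = w.abs().max()
    grid = torch.linspace(-1.0, 1.0, n_levels) * scale        # symmetric scalar grid, shape (L,)

    # One categorical distribution (logits over grid points) per weight.
    logits = torch.zeros(w.numel(), n_levels, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)

    for _ in range(steps):
        # Differentiable one-hot samples; hard=True uses the straight-through estimator.
        sample = F.gumbel_softmax(logits, tau=tau, hard=True)  # (N, L)
        w_hat = (sample * grid).sum(dim=-1)                    # (N,)
        loss = F.mse_loss(w_hat, w.flatten())
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Final hard assignment: pick the most likely grid point for each weight.
    idx = logits.argmax(dim=-1)
    return grid[idx].reshape(w.shape)

# Usage on a random weight tensor
w = torch.randn(64, 64)
w_q = gumbel_softmax_scalar_quant(w, n_bits=2)
```

The key property is that the assignment stays discrete at inference time (each weight snaps to one grid point) while gradients flow through the relaxed samples during optimization.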