Accelerating Large Language Model Decoding with Speculative Sampling
Pith reviewed 2026-05-11 07:23 UTC · model grok-4.3
The pith
Speculative sampling generates multiple tokens per transformer call by drafting sequences from a smaller model and verifying them with rejection sampling that matches the target distribution exactly.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Speculative sampling enables the generation of multiple tokens from each call to the target transformer by drafting continuations with a smaller model and accepting or rejecting them via a modified rejection sampling procedure that matches the target distribution exactly within numerical precision. When benchmarked on Chinchilla, this yields a 2-2.5x speedup in distributed setups.
What carries the argument
The speculative sampling procedure, which interleaves parallel scoring of short draft sequences with a rejection-sampling rule that preserves the exact token probabilities of the target model.
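To make that interleaving concrete, here is a minimal sketch of the procedure in Python, with deterministic toy bigram tables standing in for the draft and target transformers; the table construction, the lookahead K, and the helper names (random_table, speculative_step) are illustrative assumptions, not the paper's implementation.
```python
import numpy as np

rng = np.random.default_rng(0)
V = 8   # toy vocabulary size
K = 4   # draft tokens proposed per target-model call (illustrative)

def random_table():
    """Toy deterministic 'model': bigram table mapping prev token -> next-token distribution."""
    t = rng.random((V, V)) + 0.05
    return t / t.sum(axis=1, keepdims=True)

target_table = random_table()   # p(. | prev): the large model whose distribution must be preserved
draft_table = random_table()    # q(. | prev): the cheap draft model

def p(prev): return target_table[prev]
def q(prev): return draft_table[prev]

def speculative_step(prev):
    """One target-model call: draft K tokens, verify them, return the tokens emitted."""
    # 1. Draft K tokens autoregressively with the cheap model.
    drafted, cur = [], prev
    for _ in range(K):
        x = int(rng.choice(V, p=q(cur)))
        drafted.append(x)
        cur = x
    # 2. Score the drafted positions with the target model; in the real algorithm
    #    this is a single batched forward pass, here it is just table lookups.
    emitted, cur = [], prev
    for x in drafted:
        if rng.random() < min(1.0, p(cur)[x] / q(cur)[x]):
            emitted.append(x)            # accept with probability min(1, p/q)
            cur = x
        else:
            # Reject: resample from the residual max(0, p - q), renormalized.
            resid = np.maximum(p(cur) - q(cur), 0.0)
            emitted.append(int(rng.choice(V, p=resid / resid.sum())))
            return emitted
    # 3. All K drafts accepted: a bonus token comes free from the target's scores.
    emitted.append(int(rng.choice(V, p=p(cur))))
    return emitted

tokens = [0]
while len(tokens) < 30:
    tokens.extend(speculative_step(tokens[-1]))
print(tokens[:30])
```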
If this is right
- Decoding throughput on existing hardware increases by a factor of two to two-and-a-half for large models.
- No changes to model weights or architecture are required to obtain the speedup.
- The output distribution remains identical to standard autoregressive sampling, so downstream applications see no quality change.
- The technique applies directly in distributed serving setups without additional synchronization.
Where Pith is reading between the lines
- Pairing each large model with a lightweight draft model could become a standard deployment pattern for latency-sensitive services.
- The same parallel-verification idea might apply to other autoregressive generation tasks such as image or audio synthesis once suitable draft models exist.
- If the draft model can be made even cheaper, the effective cost per generated token could fall further without retraining the target model.
Load-bearing premise
Scoring several short continuations drafted by the smaller model in parallel with the target model takes about as long as sampling a single token from the target model.
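A rough illustration of why this premise is load-bearing: if a combined draft-and-verify step emits n tokens on average and costs roughly one target forward pass plus K cheap draft steps, the wall-clock speedup is about n divided by that relative step cost. All numbers below are assumed for illustration, not taken from the paper.
```python
# Back-of-the-envelope speedup under the latency premise (illustrative numbers only).
T_target = 1.0         # cost of one target-model forward pass (normalized to 1)
T_draft = 0.05         # assumed cost of one draft-model step, ~20x cheaper
K = 4                  # draft tokens proposed per target call
tokens_per_step = 3.2  # assumed mean tokens emitted per step (accepted drafts + resample/bonus)

# Premise: scoring all K drafts in parallel costs about one target forward pass.
step_cost = K * T_draft + T_target
speedup = tokens_per_step * T_target / step_cost
print(f"approximate speedup: {speedup:.2f}x")  # ~2.67x under these assumed numbers
```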
What would settle it
A timing measurement on the 70B model showing that the observed wall-clock speedup drops below 1.5x, or a statistical test showing that token distributions produced by speculative sampling differ from those of standard sampling from the target model.
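The distributional half of that test could be run, for instance, as a chi-square two-sample comparison of next-token counts at a fixed prompt; the sketch below assumes placeholder samplers and synthetic draws rather than real model outputs.
```python
import numpy as np
from scipy.stats import chi2_contingency

def compare_samplers(sample_standard, sample_speculative, vocab_size):
    """Chi-square two-sample test on next-token counts at a fixed prompt.

    sample_standard / sample_speculative: 1-D integer arrays of token ids drawn by
    ordinary autoregressive sampling and by speculative sampling respectively.
    A small p-value would indicate the two decoders disagree in distribution.
    """
    counts_a = np.bincount(sample_standard, minlength=vocab_size)
    counts_b = np.bincount(sample_speculative, minlength=vocab_size)
    keep = (counts_a + counts_b) > 0                  # drop never-seen tokens
    table = np.vstack([counts_a[keep], counts_b[keep]])
    chi2, p_value, _, _ = chi2_contingency(table)
    return chi2, p_value

# Toy usage with synthetic draws from the same distribution (placeholder for real decoders).
rng = np.random.default_rng(1)
dist = rng.dirichlet(np.ones(50))
a = rng.choice(50, size=20_000, p=dist)
b = rng.choice(50, size=20_000, p=dist)
print(compare_samplers(a, b, 50))  # large p-value expected: identical distributions
```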
read the original abstract
We present speculative sampling, an algorithm for accelerating transformer decoding by enabling the generation of multiple tokens from each transformer call. Our algorithm relies on the observation that the latency of parallel scoring of short continuations, generated by a faster but less powerful draft model, is comparable to that of sampling a single token from the larger target model. This is combined with a novel modified rejection sampling scheme which preserves the distribution of the target model within hardware numerics. We benchmark speculative sampling with Chinchilla, a 70 billion parameter language model, achieving a 2-2.5x decoding speedup in a distributed setup, without compromising the sample quality or making modifications to the model itself.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces speculative sampling, an algorithm to accelerate transformer decoding by generating multiple tokens per target model call. A smaller draft model proposes short candidate continuations, which are scored in parallel by the target model; a modified rejection sampling step then accepts or rejects tokens to ensure the output distribution exactly matches the target model's distribution (within hardware numerics). The authors report 2-2.5x decoding speedups on the 70B Chinchilla model in a distributed setup, with no model modifications and no degradation in sample quality.
Significance. If the central algorithmic claim holds, the work is significant for large-model inference: it delivers practical speedups on a real 70B model while exactly preserving the target distribution and requiring no retraining or architectural changes. The construction is self-contained with no free parameters, and the direct empirical validation on Chinchilla tests the key latency assumption (parallel draft scoring comparable to one target forward pass), making the technique immediately deployable. This combination of theoretical correctness and measured wall-clock gains on a production-scale model is a clear strength.
major comments (1)
- [§3] §3 (Algorithm): The modified rejection sampling procedure is asserted to preserve the target distribution exactly. A concise derivation or proof sketch showing that the per-token acceptance probabilities (especially when the draft proposes a sequence of length >1) yield the correct marginals under the target would allow independent verification of edge cases such as zero-probability proposals or numerical underflow.
minor comments (3)
- [§4] The experimental section would benefit from an explicit statement of the draft-model architecture and size relative to Chinchilla-70B, as well as the precise hardware configuration used for the distributed timing measurements.
- [§4] Figure 2 (speedup curves) lacks error bars or multiple-run statistics; adding these would clarify the stability of the reported 2-2.5x factor across different prompt lengths.
- A short related-work paragraph contrasting the method with prior speculative decoding or speculative execution techniques would help readers situate the novelty of the rejection-sampling modification.
Simulated Author's Rebuttal
We thank the referee for their positive evaluation and constructive feedback on the algorithmic section. The request for a proof sketch is well-taken; we provide a concise derivation below and will incorporate it into the revised manuscript to support independent verification of the distribution-preserving property.
read point-by-point responses
-
Referee: [§3] §3 (Algorithm): The modified rejection sampling procedure is asserted to preserve the target distribution exactly. A concise derivation or proof sketch showing that the per-token acceptance probabilities (especially when the draft proposes a sequence of length >1) yield the correct marginals under the target would allow independent verification of edge cases such as zero-probability proposals or numerical underflow.
Authors: We agree an explicit sketch aids verification and will add the following to §3 of the revised manuscript. Let q be the draft distribution and p the target. A proposed token x ~ q is accepted with probability min(1, p(x)/q(x)), so the probability of proposing and accepting x is q(x)·min(1, p(x)/q(x)) = min(p(x), q(x)). Upon rejection we resample from the residual r(y) ∝ max(0, p(y) - q(y)); since Σ_y max(0, p(y) - q(y)) = 1 - Σ_y min(p(y), q(y)), the residual's normalizing constant equals the overall rejection probability, so the unconditional probability of emitting z through the resampling branch is exactly max(0, p(z) - q(z)). The total output probability for any z is therefore min(p(z), q(z)) + max(0, p(z) - q(z)) = p(z). This identity holds conditionally at each position given an accepted prefix, so sequential application over a draft sequence of length K > 1 yields the exact target marginal at every step. For zero-probability proposals: tokens with q(x) = 0 are never proposed by the draft (softmax support is full in practice); we add a small epsilon in code to avoid division issues. Numerical underflow is handled via log-space ratios with clamping to [0,1], preserving the distribution up to floating-point precision as stated in the paper. revision: yes
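As a sanity check on the identity sketched above, the following short numerical experiment builds arbitrary p and q over a toy vocabulary and verifies, exactly rather than by simulation, that the accept-or-resample rule reproduces p and that the residual's normalizing constant equals the overall rejection probability; the distributions and vocabulary size are arbitrary assumptions, not anything from the paper.
```python
import numpy as np

rng = np.random.default_rng(0)
V = 10

# Arbitrary target (p) and draft (q) distributions over a toy vocabulary.
p = rng.dirichlet(np.ones(V))
q = rng.dirichlet(np.ones(V))

accept = np.minimum(1.0, p / q)          # acceptance probability given proposal x
p_accept_x = q * accept                  # = min(p, q): probability of emitting x by acceptance
reject_mass = 1.0 - p_accept_x.sum()     # overall rejection probability
residual = np.maximum(p - q, 0.0)
residual_dist = residual / residual.sum()  # resampling distribution after a rejection

# Exact output distribution of the accept-or-resample rule.
output = p_accept_x + reject_mass * residual_dist

assert np.allclose(residual.sum(), reject_mass)  # normalizer = rejection probability
assert np.allclose(output, p)                    # the rule reproduces the target exactly
print("max deviation from target:", np.abs(output - p).max())
```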
Circularity Check
No significant circularity in algorithmic construction
full rationale
The paper introduces speculative sampling as an algorithmic method combining a draft model for generating candidate tokens with a modified rejection sampling procedure to preserve the exact target distribution. Correctness follows directly from the rejection sampling analysis (which is a standard technique and not derived from the paper's own fitted values or self-referential equations). The central benchmark result is an empirical measurement of speedup under the stated latency assumption, not a 'prediction' that reduces to inputs by construction. No self-citations are used to justify uniqueness theorems, no ansatzes are smuggled via prior work, and no parameters are fitted then relabeled as predictions. The derivation chain is self-contained and externally verifiable via the rejection sampler's properties.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Latency of parallel scoring of short continuations from a faster draft model is comparable to single-token sampling from the target model.
Forward citations
Cited by 60 Pith papers
-
The Coupling Tax: How Shared Token Budgets Undermine Visible Chain-of-Thought Under Fixed Output Limits
Shared token budgets between visible chain-of-thought and answers create a coupling tax that makes non-thinking competitive on math benchmarks, with a truncation decomposition predicting the crossover and split budget...
-
SlimSpec: Low-Rank Draft LM-Head for Accelerated Speculative Decoding
SlimSpec replaces the standard LM-head in draft models with a low-rank version to deliver 4-5x faster speculative decoding while preserving full vocabulary and competitive acceptance rates.
-
Test-Time Speculation
Test-Time Speculation adapts draft models online via target-model verifications to sustain high acceptance lengths during long LLM generations.
-
Future Validity is the Missing Statistic: From Impossibility to $\Phi$-Estimation for Grammar-Faithful Speculative Decoding
Speculative decoding under local grammar masking samples from the projected distribution μ^proj instead of the grammar-conditional μ*, and the future-validity function Φ corrects it via a Doob transform to achieve exa...
-
Selective Rollout: Mid-Trajectory Termination for Multi-Sample Agent RL
A one-parameter early-termination gate based on mean pairwise prefix edit distance reduces wall-clock time by 10.7% and raises held-out success by 2.5 pp in GRPO on ALFWorld by cutting zero-advantage batch dilution.
-
UniVer: A Unified Perspective for Multi-step and Multi-draft Speculative Decoding
UniVer frames tree-based speculative decoding as conditional optimal transport, proving it is lossless with optimal acceptance rates and delivering 4.2-8.5% longer accepted sequences than standard rejection sampling.
-
Component-Aware Self-Speculative Decoding in Hybrid Language Models
Component-aware self-speculative decoding achieves high acceptance rates in parallel hybrid models like Falcon-H1 but fails in sequential ones like Qwen3.5, with the gap tied to how components are integrated.
-
An Empirical Study of Speculative Decoding on Software Engineering Tasks
Speculative decoding accelerates LLM inference on SE tasks without accuracy loss, with model-based methods suiting code generation and model-free methods suiting repository-level repair and editing.
-
FASER: Fine-Grained Phase Management for Speculative Decoding in Dynamic LLM Serving
FASER delivers up to 53% higher throughput and 1.92x lower latency in dynamic LLM serving by adjusting speculative lengths per request, early pruning of rejects, and overlapping draft/verification phases via frontiers.
-
Copy-as-Decode: Grammar-Constrained Parallel Prefill for LLM Editing
Copy-as-Decode recasts LLM editing as grammar-constrained decoding over copy and generate primitives, delivering closed-form upper-bound speedups of 13x pooled on editing benchmarks via parallel prefill without any training.
-
WISV: Wireless-Informed Semantic Verification for Distributed Speculative Decoding in Device-Edge LLM Inference
WISV uses a channel-aware semantic acceptance policy on hidden representations to boost accepted sequence length by up to 60.8% and cut interaction rounds by 37.3% in distributed speculative decoding, with under 1% ac...
-
Speculative Decoding for Autoregressive Video Generation
A training-free speculative decoding method for block-based autoregressive video diffusion uses a quality router on worst-frame ImageReward scores to accept drafter proposals, achieving up to 2.09x speedup at 95.7% qu...
-
From Tokens to Steps: Verification-Aware Speculative Decoding for Efficient Multi-Step Reasoning
SpecGuard adds step-level verification to speculative decoding via attention grounding and log-probability scores, yielding 3.6% higher accuracy and 11% lower latency on reasoning benchmarks.
-
MARS: Enabling Autoregressive Models Multi-Token Generation
MARS fine-tunes autoregressive models to predict multiple tokens per step via continued training on instruction data, achieving 1.5-1.7x throughput while matching baseline accuracy and supporting real-time speed adjustment.
-
Cactus: Accelerating Auto-Regressive Decoding with Constrained Acceptance Speculative Sampling
Cactus uses constrained optimization to guarantee bounded divergence from the verifier LLM distribution during speculative sampling, raising acceptance rates without the distortion seen in typical acceptance sampling.
-
Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs
o1-like models overthink easy tasks; self-training reduces compute use without accuracy loss on GSM8K, MATH500, GPQA, and AIME.
-
Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation
Scaled vanilla autoregressive models based on Llama achieve 2.18 FID on ImageNet 256x256 image generation, beating popular diffusion models without visual inductive biases.
-
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
Medusa augments LLMs with multiple decoding heads and tree-based attention to predict and verify several tokens in parallel, yielding 2.2-3.6x inference speedup via two fine-tuning regimes.
-
N-vium: Mixture-of-Exits Transformer for Accelerated Exact Generation
N-vium achieves 57.9% wall-clock speedup over matched standard transformers at no perplexity cost by mixing exact predictions from multiple model depths.
-
CATS: Cascaded Adaptive Tree Speculation for Memory-Limited LLM Inference Acceleration
CATS achieves up to 5.08x wall-clock speedup for LLM generation on edge devices via memory-matched cascaded tree speculation, outperforming prior methods by 1.45x with no quality loss.
-
DECO: Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices
DECO sparse MoE matches dense Transformer performance at 20% expert activation with a 3x hardware inference speedup.
-
DECO: Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices
DECO matches dense model performance at 20% expert activation via ReLU-based routing with learnable scaling and the NormSiLU activation, plus a 3x real-hardware speedup.
-
Breaking the Reward Barrier: Accelerating Tree-of-Thought Reasoning via Speculative Exploration
SPEX accelerates Tree-of-Thought LLM reasoning 1.2-3x via speculative path selection, dynamic budget allocation across queries, and adaptive early termination, with up to 4.1x when combined with token speculative decoding.
-
Attention Drift: What Autoregressive Speculative Decoding Models Learn
Drafter models in speculative decoding suffer progressive attention drift caused by monotonically growing hidden-state magnitudes along the residual path; post-norm plus per-state RMSNorm reduces this drift and improv...
-
Edit-Based Refinement for Parallel Masked Diffusion Language Models
ME-DLM augments parallel masked diffusion models with edit-distance-supervised refinements to raise quality on coding and math benchmarks while using far fewer diffusion steps.
-
PARD-2: Target-Aligned Parallel Draft Model for Dual-Mode Speculative Decoding
PARD-2 uses Confidence-Adaptive Token optimization to align draft model training with acceptance length in speculative decoding, enabling dual-mode operation and up to 6.94x lossless speedup on Llama3.1-8B.
-
Rethinking Dense Sequential Chains: Reasoning Language Models Can Extract Answers from Sparse, Order-Shuffling Chain-of-Thoughts
Reasoning language models extract answers from sparse, order-shuffled chain-of-thought traces with little accuracy loss.
-
SpecBlock: Block-Iterative Speculative Decoding with Dynamic Tree Drafting
SpecBlock achieves 8-19% higher speedup than EAGLE-3 in LLM speculative decoding by using repeated block expansions with hidden-state inheritance, a dynamic rank head, and a valid-prefix training mask.
-
CASCADE: Context-Aware Relaxation for Speculative Image Decoding
CASCADE formalizes semantic interchangeability and convergence in target model representations to enable context-aware acceptance relaxation in tree-based speculative decoding, delivering up to 3.6x speedup on text-to...
-
Policy-Guided Stepwise Model Routing for Cost-Effective Reasoning
A small RL-trained policy for stepwise model routing between LLM sizes improves the accuracy-cost tradeoff on math benchmarks over handcrafted strategies and matches large process reward model methods.
-
SpecKV: Adaptive Speculative Decoding with Compression-Aware Gamma Selection
SpecKV uses a small MLP trained on draft model confidence and entropy to dynamically choose the optimal speculation length gamma, achieving 56% better performance than fixed gamma=4 across various tasks and compressio...
-
SPECTRE: Hybrid Ordinary-Parallel Speculative Serving for Resource-Efficient LLM Inference
SPECTRE delivers up to 2.28x speedup on large-model LLM inference by turning idle tail-model services into remote speculative drafters using hybrid parallel decoding and priority scheduling.
-
SPECTRE: Hybrid Ordinary-Parallel Speculative Serving for Resource-Efficient LLM Inference
SPECTRE achieves up to 2.28x speedup for large-model LLM serving by running speculative draft generation and target verification in parallel using idle tail-model services.
-
When Less is Enough: Efficient Inference via Collaborative Reasoning
A large model generates a compact reasoning signal that a small model uses to solve tasks, reducing the large model's output tokens by up to 60% on benchmarks like AIME and GPQA.
-
Position-Aware Drafting for Inference Acceleration in LLM-Based Generative List-Wise Recommendation
PAD-Rec augments standard draft models with item-position and step-position embeddings plus learnable gates, delivering up to 3.1x wall-clock speedup and 5% average gain over strong speculative-decoding baselines on f...
-
Select to Think: Unlocking SLM Potential with Local Sufficiency
Small language models can achieve near large-model reasoning performance by learning to re-rank their own top-K token predictions after distilling selection from the large model.
-
Unifying Sparse Attention with Hierarchical Memory for Scalable Long-Context LLM Serving
SPIN co-designs sparse attention with hierarchical memory to achieve 1.66-5.66x higher throughput, 7-9x lower TTFT, and up to 58% lower TPOT than vLLM and original sparse implementations.
-
When Hidden States Drift: Can KV Caches Rescue Long-Range Speculative Decoding?
KV cache reuse improves long-range draft acceptance in speculative decoding but delivers only marginal end-to-end speedups due to drafter limitations.
-
When Hidden States Drift: Can KV Caches Rescue Long-Range Speculative Decoding?
KV cache reuse improves long-range draft acceptance rates in speculative decoding but delivers only marginal end-to-end speedups because shallow drafters cannot accurately estimate target queries and receive sparse gr...
-
MCAP: Deployment-Time Layer Profiling for Memory-Constrained LLM Inference
MCAP uses load-time Monte Carlo profiling to estimate layer importance, enabling dynamic quantization (W4A8 vs W4A16) and memory tiering (GPU/RAM/SSD) that delivers 1.5-1.8x higher decode throughput than llama-cpp Q4_...
-
DiP-SD: Distributed Pipelined Speculative Decoding for Efficient LLM Inference at the Edge
DiP-SD jointly optimizes batch count, user-to-batch assignment, and per-user draft lengths to deliver up to 17.89x throughput over autoregressive decoding and 1.93x over greedy batching in a device-edge Qwen deployment.
-
Accelerating Speculative Decoding with Block Diffusion Draft Trees
DDTree builds a draft tree from a block diffusion drafter using a best-first heap on its output probabilities and verifies the tree in one target-model pass via an ancestor-only attention mask, increasing average acce...
-
Topology-Aware Reasoning over Incomplete Knowledge Graph with Graph-Based Soft Prompting
A GNN-encoded subgraph soft prompting method lets LLMs perform topology-aware reasoning over incomplete KGs for KBQA, reaching SOTA on three of four benchmarks via a two-stage LLM pipeline.
-
SMART: When is it Actually Worth Expanding a Speculative Tree?
SMART uses marginal benefit-cost analysis to dynamically build efficient speculative trees, achieving 15-20% additional speedup in LLM and MLLM inference.
-
DualDiffusion: A Speculative Decoding Strategy for Masked Diffusion Models
DualDiffusion combines a lightweight drafter using approximations with a full verifier to reduce generation steps in masked diffusion models while keeping accuracy on MMLU and GSM8K.
-
Open-Loop Planning, Closed-Loop Verification: Speculative Verification for VLA
SV-VLA uses infrequent heavy VLA planning of action chunks plus a lightweight closed-loop verifier to achieve both efficiency and robustness in dynamic robot control.
-
SnapKV: LLM Knows What You are Looking for Before Generation
SnapKV selects clustered important KV positions per attention head from an observation window at the prompt end, yielding 3.6x faster generation and 8.2x better memory efficiency on 16K-token inputs with comparable pe...
-
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
Uptraining multi-head transformer checkpoints to grouped-query attention models achieves near multi-head quality at multi-query inference speeds using 5% additional compute.
-
GELATO: Generative Entropy- and Lyapunov-based Adaptive Token Offloading for Device-Edge Speculative LLM Inference
GELATO combines drift-plus-penalty Lyapunov control with generative entropy early exiting to adaptively offload tokens in device-edge speculative decoding, delivering higher throughput and lower energy use than prior ...
-
NoisyCoconut: Counterfactual Consensus via Latent Space Reasoning
Injecting noise into LLM latent trajectories creates diverse reasoning paths whose agreement acts as a confidence signal for selective abstention, cutting error rates from 40-70% to under 15% on math tasks.
-
EdgeFM: Efficient Edge Inference for Vision-Language Models
EdgeFM is an agent-driven framework that strips non-essential features from VLMs and packages reusable optimized kernels, achieving up to 1.49x speedup over TensorRT-Edge-LLM on NVIDIA Orin while enabling first end-to...
-
SpecFed: Accelerating Federated LLM Inference with Speculative Decoding and Compressed Transmission
SpecFed accelerates federated LLM inference via speculative decoding for parallel processing and top-K compression with server-side reconstruction, achieving high fidelity with reduced communication overhead.
-
Tandem: Riding Together with Large and Small Language Models for Efficient Reasoning
Tandem lets a large model supply compact strategic guidance to a small model for reasoning tasks, achieving similar or better performance at about 40 percent lower cost through adaptive early stopping.
-
Distributed Generative Inference of LLM at Internet Scales with Multi-Dimensional Communication Optimization
BloomBee is a distributed LLM inference system that achieves up to 1.76x higher throughput and 43.2% lower latency than prior decentralized systems by optimizing communication across multiple dimensions in low-bandwid...
-
Acceptance Dynamics Across Cognitive Domains in Speculative Decoding
Empirical measurements across four NLP domains show task type is a stronger predictor of speculative decoding acceptance than tree depth, with chat uniquely achieving expected accepted length over 1 token per step.
-
A-IO: Adaptive Inference Orchestration for Memory-Bound NPUs
A-IO adaptively orchestrates LLM inference on NPUs to address memory bottlenecks, model scaling paradoxes, and synchronization costs in speculative decoding.
-
ConfigSpec: Profiling-Based Configuration Selection for Distributed Edge--Cloud Speculative LLM Serving
ConfigSpec shows that optimal configurations for speculative LLM inference conflict across goodput (favoring smallest drafters at device-specific K=2-10), cost (favoring largest drafters at K=2), and energy (favoring ...
-
ComplianceNLP: Knowledge-Graph-Augmented RAG for Multi-Framework Regulatory Gap Detection
ComplianceNLP integrates knowledge-graph-augmented RAG, multi-task legal text extraction, and gap analysis to detect regulatory compliance gaps, reporting 87.7 F1 and real-world efficiency gains over GPT-4o baselines.
-
Efficient LLM-based Advertising via Model Compression and Parallel Verification
An Efficient Generative Targeting framework accelerates LLM inference in advertising via adaptive group quantization, layer-adaptive hierarchical sparsification, and prefix-tree parallel verification while accepting l...
-
Latency and Cost of Multi-Agent Intelligent Tutoring at Scale
Priority PayGo keeps multi-agent tutoring responses under 4 seconds even at 50 concurrent users, while costs stay below textbook prices per student.
Reference graph
Works this paper leans on
- [1]
ViViT: A Video Vision Transformer
A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lucic, and C. Schmid. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 6816–6826. IEEE Computer Society, 2021.
- [2]
- [4]
PaLM: Scaling Language Modeling with Pathways
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. arXiv preprint arXiv:2204.02311, 2022.
- [5]
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale
T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer. arXiv preprint arXiv:2208.07339, 2022.
- [6]
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. arXiv preprint arXiv:2010.11929, 2020.
- [7]
- [8]
Training Compute-Optimal Large Language Models
J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark, et al. arXiv preprint arXiv:2203.15556, 2022.
- [9]
TinyBERT: Distilling BERT for Natural Language Understanding
X. Jiao, Y. Yin, L. Shang, X. Jiang, X. Chen, L. Li, F. Wang, and Q. Liu. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163–4174, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.372.
- [10]
UnifiedQA: Crossing Format Boundaries with a Single QA System. CoRR, abs/2005.00700, 2020.
Y. Kim and A. M. Rush. Sequence-Level Knowledge Distillation. CoRR, abs/1606.07947, 2016.
- [11]
- [12]
The Depth-to-Width Interplay in Self-Attention
Y. Levine, N. Wies, O. Sharir, H. Bata, and A. Shashua. arXiv preprint arXiv:2006.12467, 2020.
- [13]
Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization
S. Narayan, S. B. Cohen, and M. Lapata. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium, October–November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1206.
- [14]
Efficiently Scaling Transformer Inference
R. Pope, S. Douglas, A. Chowdhery, J. Devlin, J. Bradbury, A. Levskaya, J. Heek, K. Xiao, S. Agrawal, and J. Dean. arXiv preprint arXiv:2211.05102, 2022.
- [15]
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, et al. arXiv preprint arXiv:2112.11446, 2021.
- [16]
DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter
V. Sanh, L. Debut, J. Chaumond, and T. Wolf. arXiv preprint arXiv:1910.01108, 2019.
- [18]
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro. arXiv preprint arXiv:1909.08053, 2019.
- [20]
Predictive Sampling with Forecasting Autoregressive Models
A. Wiggers and E. Hoogeboom. In H. D. III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 10260–10269. PMLR, July 2020. URL https://proceedings.mlr.press/v119/wiggers20a.html.
- [21]
ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
Z. Yao, R. Y. Aminabadi, M. Zhang, X. Wu, C. Li, and Y. He. arXiv preprint arXiv:2206.01861, 2022.
discussion (0)