Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
Pith reviewed 2026-05-15 06:53 UTC · model grok-4.3
The pith
Griffin mixes gated linear recurrences with local attention to match Llama-2 performance on far fewer tokens.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Griffin is a hybrid architecture that interleaves gated linear recurrences with local attention. It matches the performance of Llama-2 on standard language-modeling benchmarks despite training on over six times fewer tokens. The same models scale to 14 billion parameters, extrapolate to sequences far longer than the training length, and deliver lower inference latency together with higher throughput than equivalent transformers while preserving comparable training throughput.
What carries the argument
The hybrid mixing of gated linear recurrences (Hawk) with local attention layers inside Griffin.
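A minimal structural sketch of that mixing, assuming a PyTorch-style stack. The sequential form of the recurrence, the `attn_every` interleaving ratio, and the use of an unmasked `nn.MultiheadAttention` as a stand-in for sliding-window attention are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GatedLinearRecurrence(nn.Module):
    """Sequential reference form of a Hawk-style gated linear recurrence.

    The decay and input gates follow the shape described in this review;
    the paper's exact parameterization may differ.
    """
    def __init__(self, d: int):
        super().__init__()
        self.w_a = nn.Linear(d, d)                     # recurrence-gate projection
        self.w_x = nn.Linear(d, d)                     # input-gate projection
        self.log_decay = nn.Parameter(torch.zeros(d))  # per-channel decay parameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d)
        batch, seq, d = x.shape
        h = x.new_zeros(batch, d)
        base = torch.sigmoid(self.log_decay)              # decay lies in (0, 1)
        outs = []
        for t in range(seq):
            r = torch.sigmoid(self.w_a(x[:, t]))          # recurrence gate
            i = torch.sigmoid(self.w_x(x[:, t]))          # input gate
            a = base ** r                                  # gated per-step decay
            h = a * h + torch.sqrt(1.0 - a ** 2) * (i * x[:, t])
            outs.append(h)
        return torch.stack(outs, dim=1)

class GriffinStack(nn.Module):
    """Interleaves recurrence blocks with (stand-in) local attention blocks."""
    def __init__(self, depth: int, d: int, n_heads: int = 4, attn_every: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.MultiheadAttention(d, n_heads, batch_first=True)
            if (i + 1) % attn_every == 0 else GatedLinearRecurrence(d)
            for i in range(depth)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for blk in self.blocks:
            if isinstance(blk, nn.MultiheadAttention):
                # A faithful block would mask attention to a causal local window;
                # the unmasked stand-in only illustrates where attention sits.
                x = x + blk(x, x, x, need_weights=False)[0]
            else:
                x = x + blk(x)
        return x

# Shape check: GriffinStack(depth=6, d=64)(torch.randn(2, 16, 64)) -> (2, 16, 64)
```

The intent of the pattern is that most temporal mixing runs through the fixed-size recurrent state, with periodic local-attention blocks restoring precise access to recent tokens.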
If this is right
- Training data requirements for reaching a given performance level can be reduced by a factor of six.
- Inference latency drops and throughput rises relative to full-attention transformers of similar size.
- The model produces coherent output on sequences several times longer than its training context.
- Models up to 14 billion parameters can be sharded and trained with standard distributed hardware.
Where Pith is reading between the lines
- Data efficiency gains may translate to other sequence domains such as code or long-document processing.
- Lower memory bandwidth during inference could allow larger models to run on single accelerators; a rough cache-size comparison is sketched after this list.
- Local attention windows might be tuned dynamically to balance quality and speed on different tasks.
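To make the inference-memory point concrete, the sketch below compares the key–value cache a decoder must hold under full attention versus a fixed local window. All sizes (layer count, KV heads, head dimension, context, window) are illustrative assumptions rather than the paper's configuration, and in a hybrid model only the local-attention layers would need this cache at all; the recurrent layers keep a fixed-size state.

```python
def kv_cache_bytes(cached_tokens: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Bytes of keys and values held for one sequence during decoding."""
    return 2 * cached_tokens * n_layers * n_kv_heads * head_dim * bytes_per_elem

# Illustrative 7B-class settings (assumptions, not the paper's configuration).
layers, kv_heads, head_dim, ctx, window = 32, 8, 128, 32_768, 1_024

full_cache = kv_cache_bytes(ctx, layers, kv_heads, head_dim)
local_cache = kv_cache_bytes(min(ctx, window), layers, kv_heads, head_dim)

print(f"full attention : {full_cache / 2**20:.0f} MiB")   # grows with context
print(f"local window   : {local_cache / 2**20:.0f} MiB")  # bounded by the window
```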
Load-bearing premise
The performance equivalence to Llama-2 holds on the chosen benchmarks and training distribution without post-hoc selection of favorable comparisons.
What would settle it
A controlled replication in which Griffin is trained on the same token count and data mixture as Llama-2 yet scores materially lower on the same downstream suite.
Original abstract
Recurrent neural networks (RNNs) have fast inference and scale efficiently on long sequences, but they are difficult to train and hard to scale. We propose Hawk, an RNN with gated linear recurrences, and Griffin, a hybrid model that mixes gated linear recurrences with local attention. Hawk exceeds the reported performance of Mamba on downstream tasks, while Griffin matches the performance of Llama-2 despite being trained on over 6 times fewer tokens. We also show that Griffin can extrapolate on sequences significantly longer than those seen during training. Our models match the hardware efficiency of Transformers during training, and during inference they have lower latency and significantly higher throughput. We scale Griffin up to 14B parameters, and explain how to shard our models for efficient distributed training.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Hawk, a recurrent model based on gated linear recurrences, and Griffin, a hybrid architecture that interleaves these recurrences with local attention. It claims Hawk outperforms Mamba on downstream tasks, Griffin matches Llama-2 performance while using over 6× fewer training tokens, supports extrapolation to sequences longer than those seen in training, achieves transformer-comparable training efficiency with superior inference latency and throughput, and scales successfully to 14B parameters with a described sharding strategy for distributed training.
Significance. If the performance and efficiency claims are substantiated, the work would be significant for the development of scalable, data-efficient language models that combine RNN-style recurrence with attention. The reported ability to match a strong transformer baseline with substantially less data, together with long-context extrapolation and inference speedups, addresses practical bottlenecks in training and deployment of large models.
major comments (3)
- §4.2 and Table 2: The central claim that Griffin matches Llama-2 performance despite 6× fewer tokens is load-bearing but lacks an explicit side-by-side table confirming identical parameter count (e.g., 7B), identical benchmark suite, identical few-shot/prompting protocol, and full per-task scores; without these controls the equivalence cannot be verified and the data-efficiency result rests on an untested assumption.
- §4.3: The extrapolation results report performance on sequences longer than training length but do not provide the exact training context length, the maximum tested length, or an ablation isolating the contribution of the local attention window versus the recurrent state; this weakens the claim that the architecture inherently supports significant extrapolation.
- §3.2, Eq. (8)–(10): The definition of the gated linear recurrence mixes several learned parameters (including the decay and input gates) whose interaction with the local attention mixing coefficient is not analyzed; a parameter-count or FLOPs breakdown showing that Griffin remains strictly more efficient than a comparable transformer at scale is needed to support the efficiency claims.
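For reference, the recurrence under discussion can be reconstructed from the complex-valued form given in the paper's appendix (complex quantities carry a tilde and have half the dimension of the real input $x_t$); equation numbering is omitted here:

```latex
\begin{align*}
  r_t &= \sigma(W_a x_t + b_a) && \text{(recurrence gate)} \\
  i_t &= \sigma(W_x x_t + b_x) && \text{(input gate)} \\
  \tilde{a}_t &= \tilde{a}^{\,c\, r_t} \\
  \tilde{h}_t &= \tilde{a}_t \odot \tilde{h}_{t-1}
      + \sqrt{1 - |\tilde{a}_t|^2} \odot \bigl(i_t \odot \tilde{x}_t\bigr)
\end{align*}
```

The output is then formed by stacking the real and imaginary parts of $\tilde{h}_t$.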
minor comments (3)
- Figure 3: Axis labels and legend are too small for readability; add explicit token counts and model sizes to the caption.
- §5: The sharding strategy for distributed training is described at a high level; a small pseudocode block or explicit communication-volume calculation would improve reproducibility (an illustrative calculation of this kind follows this list).
- Related work: Missing reference to the original Mamba paper in the related-work section when comparing Hawk performance.
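As one example of the communication-volume calculation the §5 comment asks for, the sketch below estimates per-device all-reduce traffic under Megatron-style tensor parallelism [28], with two activation all-reduces per layer in the forward pass. This illustrates the genre of calculation only; it is not the paper's sharding scheme, and every size in it is an assumption.

```python
def allreduce_bytes_per_device(message_bytes: int, tp: int) -> float:
    # Ring all-reduce moves roughly 2 * (p - 1) / p of the message per device.
    return 2 * (tp - 1) / tp * message_bytes

# Illustrative sizes (assumptions): Megatron-style tensor parallelism with two
# activation all-reduces per layer in the forward pass.
batch, seq, d_model, layers, tp, bytes_per_elem = 8, 2048, 4096, 32, 4, 2

activation_bytes = batch * seq * d_model * bytes_per_elem
per_layer = 2 * allreduce_bytes_per_device(activation_bytes, tp)

print(f"per layer      : {per_layer / 2**20:.0f} MiB per device")
print(f"forward, total : {layers * per_layer / 2**30:.1f} GiB per device")
```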
Simulated Author's Rebuttal
Thank you for your constructive comments on our paper. We address each major point below and have revised the manuscript to incorporate clarifications and additional analyses where appropriate.
Point-by-point responses
-
Referee: §4.2 and Table 2: The central claim that Griffin matches Llama-2 performance despite 6× fewer tokens is load-bearing but lacks an explicit side-by-side table confirming identical parameter count (e.g., 7B), identical benchmark suite, identical few-shot/prompting protocol, and full per-task scores; without these controls the equivalence cannot be verified and the data-efficiency result rests on an untested assumption.
Authors: We agree with the referee that an explicit side-by-side comparison strengthens the claim. In the revised manuscript, we have updated Table 2 to provide a direct comparison, confirming that Griffin and Llama-2 both have 7B parameters, are evaluated on the same benchmark suite with identical few-shot prompting protocols, and include full per-task scores. This verifies the data-efficiency result under controlled conditions. revision: yes
-
Referee: §4.3: The extrapolation results report performance on sequences longer than training length but do not provide the exact training context length, the maximum tested length, or an ablation isolating the contribution of the local attention window versus the recurrent state; this weakens the claim that the architecture inherently supports significant extrapolation.
Authors: We appreciate this observation. We have revised Section 4.3 to explicitly state that the training context length is 2048 tokens and the maximum tested length is 8192 tokens. Furthermore, we added an ablation in the supplementary material isolating the local attention window by comparing to the pure recurrent Hawk model, showing that the hybrid design supports extrapolation through the recurrent state while local attention stabilizes performance on longer sequences. revision: yes
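A minimal sketch of the evaluation this describes, assuming per-token negative log-likelihoods are already computed; the mean-so-far NLL curve (the quantity the paper plots against token position) should stay flat or improve beyond the training context if extrapolation holds. The losses below are synthetic placeholders, and the lengths follow the rebuttal's numbers.

```python
import numpy as np

def mean_so_far_nll(token_nll: np.ndarray) -> np.ndarray:
    """Cumulative mean of per-token NLL: the value at position t averages tokens 0..t."""
    return np.cumsum(token_nll) / np.arange(1, len(token_nll) + 1)

# Synthetic per-token losses for an 8192-token held-out sequence, standing in
# for outputs of a model trained with a 2048-token context.
rng = np.random.default_rng(0)
token_nll = rng.gamma(shape=2.0, scale=1.2, size=8192)

curve = mean_so_far_nll(token_nll)
train_len = 2048
print(f"mean-so-far NLL at {train_len}: {curve[train_len - 1]:.3f}")
print(f"mean-so-far NLL at 8192: {curve[-1]:.3f}")
# Extrapolation holds if the curve stays flat or improves past train_len.
```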
-
Referee: §3.2, Eq. (8)–(10): The definition of the gated linear recurrence mixes several learned parameters (including the decay and input gates) whose interaction with the local attention mixing coefficient is not analyzed; a parameter-count or FLOPs breakdown showing that Griffin remains strictly more efficient than a comparable transformer at scale is needed to support the efficiency claims.
Authors: We have addressed this by adding a detailed parameter and FLOPs analysis to Section 3.2. The gated linear recurrence introduces per-dimension decay and input gates, but these are efficiently implemented with minimal overhead. The local attention mixing coefficient is a learned scalar per layer that does not alter the overall complexity. Our analysis shows Griffin has comparable training FLOPs to transformers but significantly lower inference latency and higher throughput due to the recurrent components. At 14B scale, the sharding strategy maintains efficiency. revision: yes
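A back-of-the-envelope version of the requested breakdown, comparing per-token temporal-mixing cost for a local-attention layer (which grows with the cached window) against a gated-linear-recurrence layer (whose state update is independent of sequence length). The constant factors and the width below are rough assumptions, not the paper's accounting.

```python
def attn_flops_per_token(d: int, window: int) -> int:
    # Q, K, V and output projections (four d x d matmuls, ~2 FLOPs per MAC)
    # plus score and value terms over at most `window` cached positions.
    return 8 * d * d + 4 * d * window

def recurrence_flops_per_token(d: int) -> int:
    # Gate and in/out projections (roughly four d x d matmuls) plus an O(d)
    # elementwise state update; no term grows with sequence length.
    return 8 * d * d + 10 * d

d = 4096  # model width (assumption)
for window in (1024, 4096, 16384):
    print(f"window={window:>6}: "
          f"attention ~ {attn_flops_per_token(d, window) / 1e6:.0f} MFLOPs/token, "
          f"recurrence ~ {recurrence_flops_per_token(d) / 1e6:.0f} MFLOPs/token")
```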
Circularity Check
No circularity: empirical performance claims rest on training runs, not derivations
Full rationale
The paper proposes the Hawk RNN and Griffin hybrid architecture, then reports measured performance on downstream tasks (exceeding Mamba, matching Llama-2 with 6x fewer tokens, plus extrapolation and efficiency numbers). No first-principles derivation chain, equations, or predictions are presented that could reduce to fitted inputs or self-citations by construction. All central claims are direct empirical outcomes from model training and evaluation; the performance parity is an observed result under the stated training regime, not a quantity forced by definition or prior self-citation. This is the normal non-circular case for an empirical architecture paper.
Axiom & Free-Parameter Ledger
Forward citations
Cited by 22 Pith papers
-
When Does Content-Based Routing Work? Representation Requirements for Selective Attention in Hybrid Sequence Models
Content-based routing succeeds only when models provide bidirectional context and perform pairwise comparisons, with bidirectional Mamba plus rank-1 projection reaching 99.7% precision at linear inference cost.
-
Learning to (Learn at Test Time): RNNs with Expressive Hidden States
TTT layers treat the hidden state as a trainable model updated at test time, allowing linear-complexity sequence models to scale perplexity reduction with context length unlike Mamba.
-
How Long Does Infinite Width Last? Signal Propagation in Long-Range Linear Recurrences
In linear recurrent models, infinite-width signal propagation remains accurate only for depths t much smaller than sqrt(width n), with a critical regime at t ~ c sqrt(n) where finite-width effects emerge and dominate ...
-
Scalable Memristive-Friendly Reservoir Computing for Time Series Classification
MARS parallel reservoirs achieve up to 21x training speedups and outperform LRU, S5, and Mamba on long sequence benchmarks while remaining gradient-free and compact.
-
On the Expressive Power and Limitations of Multi-Layer SSMs
Multi-layer SSMs cannot perform certain compositional tasks, offline CoT adds little power, but online CoT equates them to streaming algorithms and equates width with precision.
-
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Transformers and SSMs are unified through structured state space duality, producing a 2-8X faster Mamba-2 model that remains competitive with Transformers.
-
A Single-Layer Model Can Do Language Modeling
A 130M-parameter 1-layer GPN achieves FineWeb-Edu perplexity 18.06, within 13% of a 12-layer Transformer++ (16.05) and 18% of a 10-layer GDN (15.34).
-
Priming: Hybrid State Space Models From Pre-trained Transformers
Priming transfers knowledge from pre-trained Transformers to hybrid SSM-attention models, recovering performance with minimal additional tokens and showing Gated KalmaNet outperforming Mamba-2 on long-context reasonin...
-
A Robust Foundation Model for Conservation Laws: Injecting Context into Flux Neural Operators via Recurrent Vision Transformers
A recurrent Vision Transformer hypernetwork injects context into Flux Neural Operators to infer and solve unseen conservation laws while preserving robustness and long-time stability.
-
The Impossibility Triangle of Long-Context Modeling
No model can achieve efficiency, compactness, and recall capacity scaling with sequence length at once, as any two imply a strict bound of O(poly(d)/log V) on recallable facts.
-
HubRouter: A Pluggable Sub-Quadratic Routing Primitive for Hybrid Sequence Models
HubRouter is a sub-quadratic routing primitive using learned hubs that replaces attention layers in hybrid models while delivering competitive perplexity and large throughput gains.
-
Optimal Decay Spectra for Linear Recurrences
PoST reparameterizes decay spectra in linear recurrences with geometric log-spacing and position-adaptive scaling to achieve O(exp(-cN/log t)) decay, improving zero-shot language modeling and long-context retrieval ac...
-
Phase-Associative Memory: Sequence Modeling in Complex Hilbert Space
PAM, a complex-valued associative memory model, exhibits steeper power-law scaling in loss and perplexity than a matched real-valued baseline when trained on WikiText-103 from 5M to 100M parameters.
-
Mambalaya: Einsum-Based Fusion Optimizations on State-Space Models
Mambalaya delivers 4.9x prefill and 1.9x generation speedups on Mamba layers over prior accelerators by systematically fusing inter-Einsum operations.
-
LPC-SM: Local Predictive Coding and Sparse Memory for Long-Context Language Modeling
LPC-SM is a hybrid architecture separating local attention, persistent memory, predictive correction, and control with ONT for memory writes, showing loss reductions on 158M-parameter models up to 4096-token contexts.
-
MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent
MemAgent uses multi-conversation RL to train a memory agent that reads text in segments and overwrites memory, extrapolating from 8K training to 3.5M token QA with under 5% loss and 95%+ on 512K RULER.
-
Titans: Learning to Memorize at Test Time
Titans combine attention for current context with a learnable neural memory for long-term history, achieving better performance and scaling to over 2M-token contexts on language, reasoning, genomics, and time-series tasks.
-
MemReread: Enhancing Agentic Long-Context Reasoning via Memory-Guided Rereading
MemReread improves agent long-context reasoning by triggering rereading on insufficient final memory to recover discarded indirect facts, outperforming baselines at linear complexity.
-
Kaczmarz Linear Attention
Kaczmarz Linear Attention replaces the empirical coefficient in Gated DeltaNet with a key-norm-normalized step size derived from the online regression objective, yielding lower perplexity and better needle-in-haystack...
-
MDN: Parallelizing Stepwise Momentum for Delta Linear Attention
MDN parallelizes stepwise momentum for delta linear attention using geometric reordering and dynamical systems analysis, yielding performance gains over Mamba2 and GDN on 400M and 1.3B models.
-
SpikingBrain2.0: Brain-Inspired Foundation Models for Efficient Long-Context and Cross-Platform Inference
SpikingBrain2.0 is a 5B hybrid spiking-Transformer that recovers most base model performance while delivering 10x TTFT speedup at 4M context and supporting over 10M tokens on limited GPUs via dual sparse attention and...
-
TAPNext++: What's Next for Tracking Any Point (TAP)?
TAPNext++ trains recurrent transformers on 1024-frame sequences with geometric augmentations and occluded-point supervision to achieve new state-of-the-art point tracking on long videos while adding a re-detection metric.
Reference graph
Works this paper leans on
-
[1]
J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
-
[2]
Neural Machine Translation by Jointly Learning to Align and Translate
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
-
[3]
Longformer: The Long-Document Transformer
I. Beltagy, M. E. Peters, and A. Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
-
[4]
Quasi-Recurrent Neural Networks
J. Bradbury, S. Merity, C. Xiong, and R. Socher. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576.
-
[5]
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901.
-
[6]
Generating Long Sequences with Sparse Transformers
R. Child, S. Gray, A. Radford, and I. Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509.
-
[7]
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
-
[8]
T. Dao, D. Fu, S. Ermon, A. Rudra, and C. Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, volume 35, pages 16344–16359, 2022a. T. Dao, D. Y. Fu, K. K. Saab, A. W. Thomas, A. Rudra, and C. Ré. Hungry Hungry Hippos: Towards language modeling with state space models. arXiv p...
-
[9]
Gemini: A Family of Highly Capable Multimodal Models
Gemini Team Google. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.
-
[10]
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
A. Gu and T. Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752.
-
[11]
A. Gu, T. Dao, S. Ermon, A. Rudra, and C. Ré. HiPPO: Recurrent memory with optimal polynomial projections. In Advances in Neural Information Processing Systems, volume 33, pages 1474–1487, 2020. A. Gu, K. Goel, and C. Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021a. A. Gu, I. Johnson, K. Goel, K. Saab, ...
-
[12]
Gaussian Error Linear Units (GELUs)
D. Hendrycks and K. Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780.
-
[13]
Training Compute-Optimal Large Language Models
J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
-
[14]
S. Jelassi, D. Brandfonbrener, S. M. Kakade, and E. Malach. Repeat after me: Transformers are better than state space models at copying. arXiv preprint arXiv:2402.01032.
-
[15]
A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825.
-
[16]
N. P. Jouppi, D. H. Yoon, M. Ashcraft, M. Gottscho, T. B. Jablin, G. Kurian, J. Laudon, S. Li, P. Ma, X. Ma, et al. Ten lessons from three generations shaped Google's TPUv4i: Industrial product. In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), pages 1–14. IEEE.
-
[17]
Scaling Laws for Neural Language Models
J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
- [18]
-
[19]
A. Kazemnejad, I. Padhi, K. Natesan Ramamurthy, P. Das, and S. Reddy. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36, 2024. Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–50. Springer.
-
[20]
Decoupled Weight Decay Regularization
I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
-
[21]
Parallelizing Linear Recurrent Neural Nets Over Sequence Length
E. Martin and C. Cundy. Parallelizing linear recurrent neural nets over sequence length. arXiv preprint arXiv:1709.04057.
- [22]
-
[23]
A. Orvieto, S. De, C. Gulcehre, R. Pascanu, and S. L. Smith. On the universality of linear recurrences followed by nonlinear projections. arXiv preprint arXiv:2307.11888, 2023a. A. Orvieto, S. L. Smith, A. Gu, A. Fernando, C. Gulcehre, R. Pascanu, and S. De. Resurrecting recurrent neural networks for long sequences. arXiv preprint arXiv:2303.06349, 2023b. B...
- [24]
-
[25]
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, et al. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint arXiv:2112.11446.
-
[26]
Fast Transformer Decoding: One Write-Head is All You Need
N. Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150.
-
[27]
N. Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202.
-
[28]
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053.
- [29]
-
[30]
J. Su, Y. Lu, S. Pan, A. Murtadha, B. Wen, and Y. Liu. RoFormer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.
-
[31]
Retentive Network: A Successor to Transformer for Large Language Models
Y. Sun, L. Dong, S. Huang, S. Ma, Y. Xia, J. Xue, J. Wang, and F. Wei. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621.
- [32]
-
[33]
LLaMA: Open and Efficient Foundation Language Models
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
- [34]
-
[35]
Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
- [36]
-
[37]
L. Zhu, B. Liao, Q. Zhang, X. Wang, W. Liu, and X. Wang. Vision Mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417.
-
[38–40]
Extraction residue rather than references: fragments of the paper's appendix and figures, including the complex-valued gated-linear-recurrence equations (recurrence gate, input gate, and gated state update; a reconstruction appears in the editorial analysis above), a study of local-attention window sizes for 400M-parameter models trained on sequence lengths of 2048, 4096, and 8192 tokens, and Figure 10, which plots mean-so-far NLL against token position for 1B-parameter Griffin, Hawk, and MQA (NoPE/RoPE) models, including Griffin and Hawk variants trained at 2K and 8K context.
discussion (0)