pith. machine review for the scientific record.

Approximating two-layer feedforward networks for efficient transformers. arXiv preprint arXiv:2310.10837

2 Pith papers cite this work. Polarity classification is still in progress.

2 Pith papers citing it

fields: cs.CL (1) · cs.LG (1)

years: 2026 (1) · 2025 (1)

representative citing papers

Sparse Layers are Critical to Scaling Looped Language Models

cs.LG · 2026-05-09 · unverdicted · novelty 6.0

Looped MoE models scale better than standard transformers: different experts activate on each loop pass, recovering expressivity without extra parameters, and the looped structure also supports superior early exits.
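The summary above describes the architecture without showing it, so here is a minimal sketch of the idea, assuming a single weight-tied MoE block applied for several loop passes with top-1 routing recomputed on every pass. `LoopedMoEBlock` and all dimensions are hypothetical illustrations, not code from either paper.

```python
# A minimal sketch (not the cited papers' implementation) of a looped
# mixture-of-experts block: one weight-tied block is applied n_loops times,
# and top-1 routing is recomputed on each pass, so different experts can
# fire on different loop iterations without adding any parameters per pass.
import torch
import torch.nn as nn

class LoopedMoEBlock(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=4, n_loops=3):
        super().__init__()
        self.n_loops = n_loops
        self.router = nn.Linear(d_model, n_experts)  # shared across passes
        self.w_in = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * d_model**-0.5)
        self.w_out = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * d_ff**-0.5)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, tokens, d_model); the same weights are reused each pass.
        for _ in range(self.n_loops):
            h = self.norm(x)
            # Routing depends on the current hidden state, so the chosen
            # expert can change from one loop pass to the next.
            expert = self.router(h).argmax(dim=-1)    # (batch, tokens)
            w_in = self.w_in[expert]                  # (batch, tokens, d_model, d_ff)
            w_out = self.w_out[expert]                # (batch, tokens, d_ff, d_model)
            ff = torch.einsum('btd,btdf->btf', h, w_in).relu()
            x = x + torch.einsum('btf,btfd->btd', ff, w_out)  # residual update
        return x

x = torch.randn(2, 5, 64)
print(LoopedMoEBlock()(x).shape)  # torch.Size([2, 5, 64])
```

Because routing is re-evaluated inside the loop, a token can visit up to `n_experts * n_loops` distinct expert combinations while the parameter count stays that of a single block, which is the expressivity-without-extra-parameters claim in the summary.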

citing papers explorer

Showing 2 of 2 citing papers.