pith. machine review for the scientific record.

Ultrallada: Scaling the context length to 128k for diffusion large language models

1 Pith paper cites this work. Polarity classification is still indexing.

1 Pith paper citing it

citation-role summary

  • background: 1

citation-polarity summary

  (still indexing)

fields

  • cs.LG: 1

years

  • 2026: 1

verdicts

  • UNVERDICTED: 1

roles

  • background: 1

polarities

  • background: 1

representative citing papers

DMax: Aggressive Parallel Decoding for dLLMs

cs.LG · 2026-04-09 · unverdicted · novelty 5.0

DMax enables faster parallel decoding in diffusion language models by using on-policy training to recover from errors and soft embedding interpolations for iterative revision, boosting tokens per forward pass roughly 2-3x on benchmarks while preserving accuracy.
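The summary mentions "soft embedding interpolations for iterative revision" without detailing the mechanism. As a rough illustration only (DMax's actual method is not specified on this page), one common reading is that positions decoded in parallel feed a blend of the committed argmax embedding and the distribution's expected embedding back into the next pass, with low-confidence positions staying "softer" so they remain revisable. A minimal NumPy sketch under that assumption, with a toy embedding table:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 50, 8
E = rng.normal(size=(VOCAB, DIM))  # toy embedding table (illustrative only)

def soft_embed(logits, alpha=0.5):
    """Blend the hard argmax embedding with the distribution's expected
    embedding. Confident positions lean toward the hard token; uncertain
    ones keep more of the soft mixture so a later pass can revise them.
    `alpha` caps how hard any position is allowed to commit (assumed knob,
    not from the paper)."""
    p = np.exp(logits - logits.max(-1, keepdims=True))
    p /= p.sum(-1, keepdims=True)        # per-position softmax
    hard = E[p.argmax(-1)]               # committed token embeddings
    soft = p @ E                         # expected embedding over the vocab
    conf = p.max(-1, keepdims=True)      # per-position confidence
    w = alpha * conf
    return w * hard + (1 - w) * soft

logits = rng.normal(size=(4, VOCAB))     # 4 positions decoded in one pass
out = soft_embed(logits)
print(out.shape)  # (4, 8)
```

The interpolated embeddings replace hard token lookups as input to the next forward pass; nothing here reflects DMax's on-policy training component, which would additionally fine-tune the model to recover from early wrong commitments.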

citing papers explorer

Showing 1 of 1 citing paper.

  • DMax: Aggressive Parallel Decoding for dLLMs cs.LG · 2026-04-09 · unverdicted · none · ref 28
