NI Sampling accelerates discrete diffusion language models by up to 14.3× by training a neural indicator to select which tokens to sample at each step, using a trajectory-preserving objective.
arXiv preprint arXiv:2412.17153, 2024.
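To make the described mechanism concrete, here is a minimal sketch of indicator-guided unmasking, assuming the usual mask-predict style loop for discrete diffusion. `indicator_scores` is a toy left-to-right heuristic standing in for the paper's trained neural indicator, and the sampled tokens come from a random stand-in rather than a real denoiser; all names here are illustrative, not the paper's.

```python
import random

MASK = None
VOCAB = ["a", "b", "c", "d"]

def indicator_scores(seq):
    # Hypothetical stand-in for the trained neural indicator:
    # scores masked positions with a simple left-to-right bias.
    # The real method learns these scores with a
    # trajectory-preserving objective.
    return [-i if tok is MASK else float("-inf") for i, tok in enumerate(seq)]

def ni_sampling_step(seq, k, rng):
    """Unmask the k masked positions the indicator ranks highest."""
    scores = indicator_scores(seq)
    masked = [i for i, tok in enumerate(seq) if tok is MASK]
    chosen = sorted(masked, key=lambda i: scores[i], reverse=True)[:k]
    for i in chosen:
        seq[i] = rng.choice(VOCAB)  # stand-in for the denoiser's sample
    return seq

def ni_sample(length=8, steps=4, seed=0):
    """Run a few indicator-guided steps until the sequence is fully revealed."""
    rng = random.Random(seed)
    seq = [MASK] * length
    k = length // steps  # tokens revealed per step; fewer steps = more speedup
    for _ in range(steps):
        seq = ni_sampling_step(seq, k, rng)
    return seq

out = ni_sample()
```

The speedup claim corresponds to shrinking `steps` relative to sequence length: the indicator decides *which* positions can be revealed together without degrading the sampling trajectory.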
2 Pith papers cite this work. Polarity classification is still indexing.
Citing papers explorer
- NI Sampling: Accelerating Discrete Diffusion Sampling by Token Order Optimization
  NI Sampling accelerates discrete diffusion language models by up to 14.3× by training a neural indicator to select which tokens to sample at each step, using a trajectory-preserving objective.
- Dual-Rerank: Fusing Causality and Utility for Industrial Generative Reranking
  Dual-Rerank fuses autoregressive and non-autoregressive generative reranking via knowledge distillation and uses list-wise decoupled RL optimization to improve whole-page utility and cut latency in industrial video search.
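The distillation step in the Dual-Rerank description can be illustrated with a list-wise objective: the non-autoregressive student is trained to match the autoregressive teacher's distribution over items in a list. This is one common form of list-wise knowledge distillation (KL between softmaxed list scores), offered here as an assumption about the general shape of such a loss, not as the paper's actual objective.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def listwise_distill_loss(teacher_scores, student_scores):
    """KL(teacher || student) over list-wise item distributions.

    teacher_scores: per-item scores from an autoregressive reranker.
    student_scores: per-item scores from a non-autoregressive reranker.
    Minimizing this pushes the fast student toward the teacher's ranking.
    """
    p = softmax(teacher_scores)
    q = softmax(student_scores)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the two score lists induce the same distribution and grows as the student's ranking diverges, which is why distillation can transfer list-wise quality to the cheaper model that serves low-latency traffic.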