How well does generative recommendation generalize?
3 Pith papers cite this work (fields: cs.IR, within 3 years); the 3 representative citing papers from 2026 are listed below.
Citing papers
- Expressiveness Limits of Autoregressive Semantic ID Generation in Generative Recommendation. Autoregressive semantic ID generation creates tree-induced probability correlations that prevent generative recommenders from capturing simple patterns; Latte adds latent tokens to relax these correlations. (See the first sketch after this list.)
- Beyond Long Tail POIs: Transition-Centered Generalization for Human Mobility Prediction. RECAP improves next-POI prediction by reconstructing sparse transitions via multi-hop graph transitivity and user revisit signals, yielding gains on tail transitions across real datasets. (See the second sketch after this list.)
- On the Equivalence Between Auto-Regressive Next Token Prediction and Full-Item-Vocabulary Maximum Likelihood Estimation in Generative Recommendation -- A Short Note. Auto-regressive next-token prediction is strictly equivalent to full-vocabulary maximum likelihood estimation in generative recommendation under a bijective item-to-token-sequence mapping. (See the third sketch after this list.)
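The tree-induced correlation named in the first paper's summary can be made concrete with a toy example. Everything below is an assumption for illustration (a binary codebook, two-token IDs, and a deliberately limited decoder whose second-token head ignores the first token), not the paper's actual setup or Latte's mechanism; the sketch only shows how a prefix-tree factorization with tied conditionals forces item probabilities to co-vary in a way a flat softmax over items never would.

```python
import numpy as np

# Toy setup: four items with 2-token semantic IDs over a binary codebook.
# Item IDs: A=(0,0), B=(0,1), C=(1,0), D=(1,1).
# Autoregressive factorization: p(item) = p(t1) * p(t2 | t1).

# Assumed limitation for illustration: the second-token head ignores
# the first token, so p(t2 | t1=0) == p(t2 | t1=1).
p_t1 = np.array([0.6, 0.4])          # p(t1=0), p(t1=1)
p_t2_shared = np.array([0.7, 0.3])   # p(t2=0 | .), p(t2=1 | .)

p_items = np.outer(p_t1, p_t2_shared).ravel()   # order: A, B, C, D
print(p_items)                                  # [0.42 0.18 0.28 0.12]

# Tree-induced correlation: sibling ratios are tied across subtrees,
# so a target like p = [0.4, 0.2, 0.1, 0.3] (ratios 2.0 vs ~0.33) is
# unreachable no matter how p_t1 and p_t2_shared are chosen.
assert np.isclose(p_items[0] / p_items[1], p_items[2] / p_items[3])
```

Under the shared head, the sibling ratio p(A)/p(B) always equals p(C)/p(D), so even some simple target distributions over four items are unrepresentable; relaxing this kind of coupling is the role the summary attributes to Latte's latent tokens.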
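The second summary names multi-hop graph transitivity as the way RECAP fills in unseen transitions. Below is a minimal sketch of that generic idea, not RECAP's actual algorithm: the user-revisit component is omitted, and `lam` is a made-up blend weight.

```python
import numpy as np

# Toy next-POI transition counts; entry [i, j] counts observed i -> j moves.
counts = np.array([
    [0., 5., 0., 0.],
    [0., 0., 4., 0.],
    [0., 0., 0., 3.],
    [2., 0., 0., 0.],
])
T = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic 1-hop matrix

# Assumed densification in the spirit of multi-hop transitivity: blend
# 2-hop paths (i -> k -> j) into the 1-hop matrix so unseen tail
# transitions receive probability mass.
lam = 0.3
T_aug = (1 - lam) * T + lam * (T @ T)
# Rows stay stochastic: a convex blend of row-stochastic matrices.

print(T_aug.round(3))
# The unseen 0 -> 2 transition now has probability lam * T[0,1] * T[1,2].
```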
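The third note's equivalence is a chain-rule identity and can be checked numerically. The codebook and conditionals below are toy assumptions; the sketch verifies that, under a bijective item-to-token-sequence mapping, the summed next-token log-loss for an item equals the log-loss of the induced full-item-vocabulary distribution.

```python
import numpy as np

# Bijective map from 4 items onto all 2-token sequences over a binary codebook.
item_to_tokens = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

# Arbitrary (made-up) token conditionals; any valid choice works.
p_t1 = np.array([0.6, 0.4])
p_t2 = np.array([[0.7, 0.3],    # p(t2 | t1=0)
                 [0.2, 0.8]])   # p(t2 | t1=1)

# The chain rule induces a full-item distribution; bijectivity onto the
# leaf set makes it sum to exactly 1.
p_item = np.array([p_t1[a] * p_t2[a, b] for a, b in item_to_tokens.values()])
assert np.isclose(p_item.sum(), 1.0)

# Per-item loss: summed next-token NLL equals the full-item-vocabulary NLL.
for item, (a, b) in item_to_tokens.items():
    ntp_nll = -(np.log(p_t1[a]) + np.log(p_t2[a, b]))
    mle_nll = -np.log(p_item[item])
    assert np.isclose(ntp_nll, mle_nll)

print("AR next-token NLL == full-vocabulary MLE NLL for every item")
```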