pith. machine review for the scientific record.

arxiv: 2508.00570 · v3 · submitted 2025-08-01 · 💻 cs.IR

Recognition: unknown

SPRINT: Scalable and Predictive Intent Refinement for LLM-Enhanced Session-based Recommendation

Authors on Pith: no claims yet
classification 💻 cs.IR
keywords: intent · recommendation · sprint · while · context · inference · intents · llms
Abstract

Large language models (LLMs) have enhanced conventional recommendation models via user profiling, which generates representative textual profiles from users' historical interactions. However, their direct application to session-based recommendation (SBR) remains challenging due to severe session context scarcity and poor scalability. In this paper, we propose SPRINT, a scalable SBR framework that incorporates reliable and informative intents while ensuring high efficiency in both training and inference. SPRINT constrains LLM-based profiling with a global intent pool and validates inferred intents based on recommendation performance to mitigate noise and hallucinations under limited context. To ensure scalability, LLMs are selectively invoked only for uncertain sessions during training, while a lightweight intent predictor generalizes intent prediction to all sessions without LLM dependency at inference time. Experiments on real-world datasets show that SPRINT consistently outperforms state-of-the-art methods while providing more explainable recommendations.
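The abstract's core efficiency idea — call the LLM only for uncertain sessions during training, constrain its output to a global intent pool, and leave inference to a cheap predictor — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the uncertainty measure, intent pool contents, heuristic fallback, and all function names are assumptions.

```python
# Hypothetical sketch of SPRINT-style selective LLM invocation.
# All names, thresholds, and pool contents below are illustrative assumptions;
# the paper does not specify these details in the abstract.

INTENT_POOL = ["gift shopping", "price comparison", "replenishment"]  # assumed global intent pool

def session_uncertainty(session):
    """Toy stand-in for an intent-uncertainty score over a session.
    (A real system might use entropy of a base recommender's intent
    distribution; here, shorter sessions count as more uncertain.)"""
    return 1.0 / (1 + len(session))

def llm_infer_intent(session):
    """Stand-in for a constrained LLM call: the output is forced to be
    an element of the global intent pool, mitigating hallucinated intents."""
    return INTENT_POOL[hash(tuple(session)) % len(INTENT_POOL)]

def label_training_sessions(sessions, threshold=0.3):
    """Invoke the (expensive) LLM only for uncertain sessions; confident
    sessions get a cheap heuristic label. The resulting (session, intent)
    pairs would then train a lightweight intent predictor that handles
    *all* sessions at inference time, with no LLM dependency."""
    labeled = []
    for s in sessions:
        if session_uncertainty(s) > threshold:
            intent = llm_infer_intent(s)   # selective LLM invocation
        else:
            intent = INTENT_POOL[0]        # placeholder heuristic label
        labeled.append((s, intent))
    return labeled

# Example: a one-item session is "uncertain" and routed to the LLM;
# a five-item session is labeled by the cheap heuristic.
pairs = label_training_sessions([["itemA"], ["itemA", "itemB", "itemC", "itemD", "itemE"]])
```

The gate is the scalability lever: LLM cost during training scales with the number of uncertain sessions rather than with the full session count, and inference incurs no LLM calls at all.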

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Mixture of Sequence: Theme-Aware Mixture-of-Experts for Long-Sequence Recommendation

    cs.IR · 2026-03 · unverdicted · novelty 6.0

    MoS applies theme-aware routing to extract multi-scale theme-specific subsequences from noisy long user sequences, achieving state-of-the-art recommendation performance with fewer FLOPs than comparable MoE models.