pith. machine review for the scientific record. sign in

arxiv: 2604.03671 · v1 · submitted 2026-04-04 · 💻 cs.IR

Recognition: no theorem link

User Simulator-Guided Multi-Turn Preference Optimization for Reasoning LLM-based Conversational Recommendation

Authors on Pith no claims yet

Pith reviewed 2026-05-13 17:20 UTC · model grok-4.3

classification 💻 cs.IR
keywords conversational recommender systemsuser simulatormulti-turn preference optimizationreinforcement learningLLM-based recommendationsupervised fine-tuningpreference alignmentmulti-turn dialogue
0
0 comments X

The pith

SMTPO aligns simulator feedback via multi-task SFT then applies RL with fine-grained rewards to reduce error accumulation in multi-turn conversational recommendations

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Conversational recommender systems struggle with sparse dialogue histories and single-turn limits that prevent accurate modeling of complex user preferences. LLM-based simulators can generate multi-turn feedback to help, yet without true preference labels their outputs often drift, causing errors to build across interactions and hurting generalization. The paper introduces SMTPO to first apply multi-task supervised fine-tuning on the simulator so its feedback better reflects diverse user needs, then train the reasoning LLM recommender with initial SFT on preference patterns followed by reinforcement learning that uses detailed rewards to align outputs progressively with actual preferences. Experiments on public datasets show gains in recommendation quality and transferability.

Core claim

We propose SMTPO, a user simulator-guided multi-turn preference optimization framework. Multi-task SFT is applied to the simulator to align its generated feedback with true user preferences in the absence of explicit labels. The reasoning LLM recommender first learns preference reasoning and recommendation patterns through SFT, then uses reinforcement learning with fine-grained reward design to progressively correct biases and align with true preferences across multiple turns, yielding improved recommendation performance.

What carries the argument

SMTPO framework, which uses multi-task SFT to align simulator feedback without labels and then RL with fine-grained rewards to optimize the recommender's multi-turn policy

If this is right

  • Multi-turn dialogues become more effective at capturing complex and evolving user preferences.
  • Error accumulation from biased simulator feedback is reduced through progressive RL alignment.
  • The recommender achieves better accuracy and generalization on public datasets.
  • The approach demonstrates transferability across different conversational recommendation scenarios.
  • Preference reasoning patterns learned via SFT enable more stable policy optimization in later RL stages.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The two-stage structure (SFT then RL) may apply to other LLM agents that rely on synthetic interaction data for training.
  • Periodic fine-tuning of the simulator on fresh real-user traces could further limit long-term drift in deployed systems.
  • Similar simulator-plus-RL pipelines might reduce bias in non-recommendation conversational tasks such as tutoring or customer support.

Load-bearing premise

Multi-task SFT on the simulator sufficiently aligns its feedback with true user preferences without explicit labels, and subsequent RL can progressively correct any remaining bias accumulation across turns.

What would settle it

A direct comparison on held-out real user data showing that the multi-task SFT simulator produces no measurable improvement in feedback alignment or that the RL stage yields no gain in multi-turn recommendation metrics over single-turn baselines.

Figures

Figures reproduced from arXiv: 2604.03671 by Wei Wei, Xiangchen Pan, Xingyuan Xiang.

Figure 1
Figure 1. Figure 1: The previous method frozen recommender param [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Overview of SMTPO training and multi-turn interaction: (1) The multi-turn interaction process among the recom [PITH_FULL_IMAGE:figures/full_fig_p004_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: The performance impact of different rewards on [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Impact of multi-turn preference optimization on [PITH_FULL_IMAGE:figures/full_fig_p008_4.png] view at source ↗
read the original abstract

Conversational Recommender Systems (CRSs) leverage natural language interactions for personalized recommendation, yet information-scarce dialogue histories and single-turn recommendation paradigms may severely hinder accurate modeling of complex user preferences. To alleviate this issue, recent studies have introduced LLM-based user simulators, which generate natural language feedback and perform simulated multi-turn interactions to assist recommendation. Nevertheless, since simulators cannot access true user preference labels during inference, their feedback may deviate from actual user interests, causing errors to accumulate over multiple interactions and severely affecting the generalization of the recommender. Inspired by the multi-step reasoning capabilities of LLMs and the effectiveness of reinforcement learning in policy optimization, we propose SMTPO, a user simulator-guided multi-turn preference optimization conversational recommendation framework. To align simulator-generated feedback with true user preferences in the absence of explicit labels, we enhance feedback quality via multi-task supervised fine-tuning (SFT), enabling the simulator to better reflect users' complex and diverse needs. To address the challenge of biased feedback destabilizing multi-turn optimization, we first allow the reasoning LLM-based recommender to learn preference reasoning and recommendation patterns through SFT and then employ reinforcement learning with fine-grained reward design to progressively align with true user preferences, improving recommendation performance. Extensive experiments on public datasets demonstrate the effectiveness and transferability of our method.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The manuscript proposes SMTPO, a user simulator-guided multi-turn preference optimization framework for LLM-based conversational recommender systems. It addresses potential bias in simulator feedback by applying multi-task supervised fine-tuning (SFT) to the simulator to better align generated responses with true user preferences in the absence of explicit labels, then uses SFT on the recommender followed by reinforcement learning (RL) with a fine-grained reward design to optimize multi-turn preference reasoning and recommendation performance. The authors claim that experiments on public datasets demonstrate the effectiveness and transferability of the approach.

Significance. If the simulator alignment via multi-task SFT and the subsequent RL correction prove robust, the work could meaningfully advance conversational recommender systems by mitigating error accumulation across multi-turn interactions in information-scarce settings and by leveraging LLM reasoning for more stable preference optimization. The combination of simulator-guided RL with fine-grained rewards offers a concrete path for handling distribution shift between simulated and real user feedback.

major comments (3)
  1. Abstract: The central empirical claim that 'extensive experiments on public datasets demonstrate the effectiveness' is unsupported by any quantitative results, error bars, ablation studies, baseline comparisons, or details on how the fine-grained rewards are constructed, leaving the primary contribution without visible empirical grounding.
  2. Method description (simulator alignment): The multi-task SFT step for enhancing simulator feedback quality is described only at a high level; no auxiliary tasks, supervision signals, or held-out preference data are specified, which directly undermines the claim that this step aligns feedback with true preferences when no ground-truth labels are available at inference time.
  3. RL stage: The fine-grained reward design intended to 'progressively align with true user preferences' is not formalized (no equations or pseudocode), making it impossible to verify whether the reward can escape simulator-induced bias or whether it risks reinforcing misalignment across multi-turn rollouts.
minor comments (1)
  1. Abstract: The phrasing 'enhance feedback quality via multi-task supervised fine-tuning' is repeated without elaboration; a brief parenthetical on task construction would improve clarity.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments on our manuscript. We have prepared point-by-point responses below and will revise the paper to address the concerns raised, particularly by adding more explicit details and empirical highlights where the current presentation was insufficiently concrete.

read point-by-point responses
  1. Referee: Abstract: The central empirical claim that 'extensive experiments on public datasets demonstrate the effectiveness' is unsupported by any quantitative results, error bars, ablation studies, baseline comparisons, or details on how the fine-grained rewards are constructed, leaving the primary contribution without visible empirical grounding.

    Authors: We agree that the abstract would be strengthened by including concise quantitative highlights. In the revised manuscript we will add a brief summary of key results (e.g., relative gains in HR@10 and NDCG@10 over strong baselines, along with a high-level description of the fine-grained reward components). Full tables, ablations, error bars, and statistical significance tests already appear in Section 4; the abstract update will simply surface the main empirical outcomes without violating length constraints. revision: yes

  2. Referee: Method description (simulator alignment): The multi-task SFT step for enhancing simulator feedback quality is described only at a high level; no auxiliary tasks, supervision signals, or held-out preference data are specified, which directly undermines the claim that this step aligns feedback with true preferences when no ground-truth labels are available at inference time.

    Authors: Section 3.2 of the manuscript already specifies the auxiliary tasks (preference classification and conditional response generation) and the use of held-out preference data derived from training-set attribute masking. To eliminate any ambiguity we will expand this section with the explicit multi-task loss formulation, the precise supervision signals, and a data-construction diagram. This will make the alignment procedure fully reproducible even in the absence of ground-truth labels at inference. revision: yes

  3. Referee: RL stage: The fine-grained reward design intended to 'progressively align with true user preferences' is not formalized (no equations or pseudocode), making it impossible to verify whether the reward can escape simulator-induced bias or whether it risks reinforcing misalignment across multi-turn rollouts.

    Authors: The fine-grained reward is formalized in Equation (5) and the rollout procedure is given in Algorithm 1. The reward combines per-turn preference matching, cumulative recommendation accuracy, and an explicit bias-correction term. In the revision we will add a short derivation showing how the bias penalty discourages reinforcement of simulator misalignment and will include the hyper-parameter values used for weighting. This should allow readers to verify the mechanism directly. revision: yes

Circularity Check

0 steps flagged

No circularity: standard SFT+RL pipeline with no self-referential reductions

full rationale

The paper describes a two-stage process of multi-task SFT on the simulator followed by RL on the recommender using fine-grained rewards. No equations, parameter fits, or derivations are shown that reduce the claimed performance gains to quantities defined by the method's own inputs or fitted values. No load-bearing self-citations, uniqueness theorems, or ansatzes imported from prior author work appear in the provided text. The central claims rest on the empirical effectiveness of applying established LLM alignment techniques to the conversational recommendation setting, which remains externally falsifiable on public datasets and does not collapse into a closed definitional loop.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review provides no explicit free parameters, axioms, or invented entities; the method description relies on standard SFT and RL components without detailing any new fitted quantities or postulates.

pith-pipeline@v0.9.0 · 5530 in / 1028 out tokens · 34416 ms · 2026-05-13T17:20:24.820188+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

51 extracted references · 51 canonical work pages · 9 internal anchors

  1. [1]

    Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Floren- cia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report.arXiv preprint arXiv:2303.08774 (2023)

  2. [2]

    Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christo- pher Ré, and Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated sampling.arXiv preprint arXiv:2407.21787(2024)

  3. [3]

    Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al . 2024. A survey on evaluation of large language models.ACM transactions on intelligent systems and technology15, 3 (2024), 1–45

  4. [4]

    Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, and Jie Tang. 2019. Towards knowledge-based recommender dialog system.arXiv preprint arXiv:1908.05391(2019)

  5. [5]

    Huy Dao, Yang Deng, Dung D Le, and Lizi Liao. 2024. Broadening the view: Demonstration-augmented prompt learning for conversational recommendation. InProceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 785–795

  6. [6]

    Tri Dao. 2023. Flashattention-2: Faster attention with better parallelism and work partitioning.arXiv preprint arXiv:2307.08691(2023)

  7. [7]

    Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. 2021. Unified conversa- tional recommendation policy learning via graph-based reinforcement learning. InProceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1431–1441

  8. [8]

    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers). 4171–4186

  9. [9]

    Xiaofei Dong, Xueqiang Zhang, Weixin Bu, Dan Zhang, and Feng Cao. 2024. A survey of llm-based agents: Theories, technologies, applications and suggestions. In2024 3rd International Conference on Artificial Intelligence, Internet of Things and Cloud Computing Technology (AIoTC). IEEE, 407–413

  10. [10]

    Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models.arXiv e-prints(2024), arXiv–2407

  11. [11]

    Yi Fang, Wenjie Wang, Yang Zhang, Fengbin Zhu, Qifan Wang, Fuli Feng, and Xiangnan He. 2025. Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment.arXiv preprint arXiv:2502.02061 (2025)

  12. [12]

    Xueyang Feng, Jingsen Zhang, Jiakai Tang, Wei Li, Guohao Cai, Xu Chen, Quanyu Dai, Yue Zhu, and Zhenhua Dong. 2025. Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent.arXiv preprint arXiv:2506.14302(2025)

  13. [13]

    Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al . 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning.arXiv preprint arXiv:2501.12948(2025)

  14. [14]

    Shirley Anugrah Hayati, Dongyeop Kang, Qingxiaoyang Zhu, Weiyan Shi, and Zhou Yu. 2020. Inspired: Toward sociable recommendation dialog systems.arXiv preprint arXiv:2009.14306(2020)

  15. [15]

    Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. InProceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 639–648

  16. [16]

    Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian McAuley. 2023. Large language models as zero-shot conversational recommenders. InProceedings of the 32nd ACM international conference on information and knowledge management. 720–730

  17. [17]

    Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. Lora: Low-rank adaptation of large language models.ICLR1, 2 (2022), 3

  18. [18]

    Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, and Enhong Chen. 2024. Understanding the planning of LLM agents: A survey.arXiv preprint arXiv:2402.02716(2024)

  19. [19]

    Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min- Yen Kan, and Tat-Seng Chua. 2020. Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. InProceedings of the 13th international conference on web search and data mining. 304–312

  20. [20]

    Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua. 2020. Interactive path reasoning on graph for conver- sational recommendation. InProceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2073–2083

  21. [21]

    Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. Advances in neural information processing systems31 (2018)

  22. [22]

    Wendi Li, Wei Wei, Xiaoye Qu, Xian-Ling Mao, Ye Yuan, Wenfeng Xie, and Dangyang Chen. 2023. Trea: Tree-structure reasoning schema for conversational recommendation.arXiv preprint arXiv:2307.10543(2023)

  23. [23]

    Ying-Chun Lin, Jennifer Neville, Jack W Stokes, Longqi Yang, Tara Safavi, Mengt- ing Wan, Scott Counts, Siddharth Suri, Reid Andersen, Xiaofeng Xu, et al. 2024. Interpretable user satisfaction estimation for conversational systems with large language models.arXiv preprint arXiv:2403.12388(2024)

  24. [24]

    Zhangchi Qiu, Ye Tao, Shirui Pan, and Alan Wee-Chung Liew. 2024. Knowledge graphs and pretrained language models enhanced representation learning for conversational recommender systems.IEEE transactions on neural networks and learning systems(2024)

  25. [25]

    Wasswa Shafik. 2024. Introduction to chatgpt. InAdvanced applications of gener- ative AI and natural language processing models. IGI Global Scientific Publishing, 1–25

  26. [26]

    Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. 2025. Kimi k1. 5: Scaling reinforcement learning with llms.arXiv preprint arXiv:2501.12599(2025)

  27. [27]

    Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, and Ji-Rong Wen

  28. [28]

    Rethinking the evaluation for conversational recommendation in the era of large language models.arXiv preprint arXiv:2305.13112(2023)

  29. [29]

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models.arXiv preprint arXiv:2203.11171(2022)

  30. [30]

    Xiaolei Wang, Chunxuan Xia, Junyi Li, Fanzhe Meng, Lei Huang, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2025. Search-Based Interaction For Conver- sation Recommendation via Generative Reward Model Based Simulated User. InProceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval. 75–84

  31. [31]

    Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Towards unified conversational recommender systems via knowledge-enhanced prompt learning. InProceedings of the 28th ACM SIGKDD conference on knowledge discov- ery and data mining. 1929–1937

  32. [32]

    Yuling Wang, Changxin Tian, Binbin Hu, Yanhua Yu, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Liang Pang, and Xiao Wang. 2024. Can small language models be good reasoners for sequential recommendation?. InProceedings of the ACM Web Conference 2024. 3876–3887

  33. [33]

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models.Advances in neural information processing systems35 (2022), 24824–24837

  34. [34]

    Yibiao Wei, Jie Zou, Weikang Guo, Guoqing Wang, Xing Xu, and Yang Yang

  35. [35]

    InProceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval

    MSCRS: Multi-modal Semantic Graph Prompt Learning Framework for Conversational Recommender Systems. InProceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval. 42–52

  36. [36]

    Junda Wu, Cheng-Chun Chang, Tong Yu, Zhankui He, Jianing Wang, Yupeng Hou, and Julian McAuley. 2024. Coral: collaborative retrieval-augmented large language models improve long-tail recommendation. InProceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 3391–3401

  37. [37]

    Yunjia Xi, Weiwen Liu, Jianghao Lin, Bo Chen, Ruiming Tang, Weinan Zhang, and Yong Yu. 2024. Memocrs: Memory-enhanced sequential conversational recommender systems with large language models. InProceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2585– 2595

  38. [38]

    Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff, Defu Lian, and Jian-Yun Nie. 2024. C-pack: Packed resources for general chinese embeddings. InProceedings of the 47th international ACM SIGIR conference on research and development in information retrieval. 641–649

  39. [39]

    Zhouhang Xie, Junda Wu, Hyunsik Jeon, Zhankui He, Harald Steck, Rahul Jha, Dawen Liang, Nathan Kallus, and Julian McAuley. 2024. Neighborhood-based collaborative filtering for conversational recommendation. InProceedings of the 18th ACM Conference on Recommender Systems. 1045–1050. User Simulator-Guided Multi-Turn Preference Optimization for Reasoning LLM...

  40. [40]

    Kerui Xu, Jingxuan Yang, Jun Xu, Sheng Gao, Jun Guo, and Ji-Rong Wen. 2021. Adapting user preference to online feedback in multi-round conversational rec- ommendation. InProceedings of the 14th ACM international conference on web search and data mining. 364–372

  41. [41]

    Ting Yang and Li Chen. 2024. Unleashing the retrieval potential of large language models in conversational recommender systems. InProceedings of the 18th ACM Conference on Recommender Systems. 43–52

  42. [42]

    Se-eun Yoon, Zhankui He, Jessica Maria Echterhoff, and Julian McAuley. 2024. Evaluating large language models as generative user simulators for conversa- tional recommendation.arXiv preprint arXiv:2403.09738(2024)

  43. [43]

    Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Weinan Dai, Tiantian Fan, Gaohong Liu, Lingjun Liu, et al. 2025. Dapo: An open- source llm reinforcement learning system at scale.arXiv preprint arXiv:2503.14476 (2025)

  44. [44]

    Yu Yue, Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu, Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, et al. 2025. Vapo: Efficient and reliable reinforcement learning for advanced reasoning tasks.arXiv preprint arXiv:2504.05118(2025)

  45. [45]

    Xiaoyu Zhang, Xin Xin, Dongdong Li, Wenxuan Liu, Pengjie Ren, Zhumin Chen, Jun Ma, and Zhaochun Ren. 2023. Variational reasoning over incomplete knowl- edge graphs for conversational recommendation. InProceedings of the sixteenth ACM international conference on web search and data mining. 231–239

  46. [46]

    Keyu Zhao, Fengli Xu, and Yong Li. 2025. Reason-to-Recommend: Using Interaction-of-Thought Reasoning to Enhance LLM Recommendation.arXiv preprint arXiv:2506.05069(2025)

  47. [47]

    Bowen Zheng, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ming Chen, and Ji-Rong Wen. 2024. Adapting large language models by integrating collaborative semantics for recommendation. In2024 IEEE 40th International Conference on Data Engineering (ICDE). IEEE, 1435–1448

  48. [48]

    Yongsen Zheng, Ruilin Xu, Ziliang Chen, Guohua Wang, Mingjie Qian, Jinghui Qin, and Liang Lin. 2024. HyCoRec: Hypergraph-Enhanced Multi-Preference Learning for Alleviating Matthew Effect in Conversational Recommendation. InProceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2526–2537

  49. [49]

    Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, and Jingsong Yu. 2020. Improving conversational recommender systems via knowl- edge graph based semantic fusion. InProceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 1006–1014

  50. [50]

    Yaochen Zhu, Chao Wan, Harald Steck, Dawen Liang, Yesu Feng, Nathan Kallus, and Jundong Li. 2025. Collaborative Retrieval for Large Language Model-based Conversational Recommender Systems. InProceedings of the ACM on Web Con- ference 2025. 3323–3334

  51. [51]

    Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, and Jundong Li. 2024. Collab- orative large language model for recommender systems. InProceedings of the ACM Web Conference 2024. 3162–3172