DIAURec: Dual-Intent Space Representation Optimization for Recommendation
Pith reviewed 2026-05-10 16:54 UTC · model grok-4.3
The pith
DIAURec reconstructs user and item representations from dual intent spaces using collaborative and language signals to improve recommendation quality.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that unifying intent and language modeling, via reconstruction in prototype and distribution intent spaces followed by a dedicated representation optimization stage, yields user and item representations that capture latent preferences more comprehensively and deliver stronger recommendation performance. The optimization stage combines alignment and uniformity as primary objectives, coarse- and fine-grained matching for cross-space consistency, and intra-space plus interaction regularization to prevent collapse.
What carries the argument
Dual-intent space reconstruction that forms prototype and distribution spaces from collaborative and language signals, then optimizes them with alignment, uniformity, coarse/fine-grained matching, and regularization terms.
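The alignment and uniformity objectives named above have standard forms in the contrastive learning literature (Wang and Isola, 2020), computed on L2-normalized embeddings. A minimal sketch of those standard forms, not the paper's exact cross-space formulation:

```python
import numpy as np

def normalize(x):
    # project each embedding onto the unit hypersphere
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def alignment_loss(user_emb, item_emb, alpha=2):
    # mean distance between positive (user, interacted-item) pairs,
    # assumed paired row-wise; lower means higher user-item affinity
    u, i = normalize(user_emb), normalize(item_emb)
    return float((np.linalg.norm(u - i, axis=1) ** alpha).mean())

def uniformity_loss(emb, t=2):
    # log of the mean Gaussian potential over all distinct pairs;
    # more negative means embeddings spread more evenly (less collapse)
    x = normalize(emb)
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(x), k=1)
    return float(np.log(np.exp(-t * sq[iu]).mean()))
```

Identical user and item embeddings give zero alignment loss, while two antipodal points give the minimum uniformity value for that pair count; DIAURec's contribution lies in where these losses are applied (across prototype and distribution spaces), not in the losses themselves.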
If this is right
- Representations achieve greater consistency between collaborative and language-derived intent spaces.
- The model gains robustness against representation collapse in the reconstructed spaces.
- Recommendation quality improves consistently over fifteen baseline methods across three public datasets.
- Affinity between users and their interacted items increases in the learned feature space.
Where Pith is reading between the lines
- The dual-space reconstruction could be tested on sequential recommendation tasks where language signals carry temporal context.
- Similar regularization might stabilize training in other sparse-data settings such as session-based or cold-start recommendation.
- If alignment across spaces proves central, the framework could inform multimodal extensions that add visual or textual item content.
Load-bearing premise
That the specific combination of dual-space reconstruction, alignment and uniformity objectives, matching techniques, and regularization terms will produce representations that genuinely capture latent preferences better than existing methods without introducing dataset-specific artifacts or overfitting.
What would settle it
An ablation study on the same three datasets in which the intra-space and interaction regularization terms are removed and performance is compared directly to the full DIAURec model to check whether the claimed gains disappear.
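Such an ablation amounts to zeroing out individual terms of the combined objective and re-evaluating. A minimal harness for specifying the variants, with illustrative component names and weights that are not drawn from the paper:

```python
# Hypothetical loss-component weights; keys are illustrative stand-ins
# for DIAURec's alignment, uniformity, matching, and regularization terms.
def total_loss(parts, weights):
    # weighted sum of per-component loss values; absent keys weigh zero
    return sum(weights.get(k, 0.0) * v for k, v in parts.items())

ablations = {
    "full":            {"align": 1.0, "uniform": 0.5, "match": 0.2,
                        "intra_reg": 0.1, "inter_reg": 0.1},
    "no_regularizers": {"align": 1.0, "uniform": 0.5, "match": 0.2},
}

# made-up per-component loss values for one training batch
parts = {"align": 0.8, "uniform": 1.2, "match": 0.4,
         "intra_reg": 0.3, "inter_reg": 0.2}

for name, w in ablations.items():
    print(name, round(total_loss(parts, w), 3))
```

The decisive comparison is of course downstream recommendation metrics per variant, not the loss values themselves; the harness only pins down which terms each variant trains with.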
Original abstract
General recommender systems deliver personalized services by learning user and item representations, with the central challenge being how to capture latent user preferences. However, representations derived from sparse interactions often fail to comprehensively characterize user behaviors, thereby limiting recommendation effectiveness. Recent studies attempt to enhance user representations through sophisticated modeling strategies ($e.g.,$ intent or language modeling). Nevertheless, most works primarily concentrate on model interpretability instead of representation optimization. This imbalance has led to limited progress, as representation optimization is crucial for recommendation quality by promoting the affinity between users and their interacted items in the feature space, yet remains largely overlooked. To overcome these limitations, we propose DIAURec, a novel representation learning framework that unifies intent and language modeling for recommendation. DIAURec reconstructs representations based on the prototype and distribution intent spaces formed by collaborative and language signals. Furthermore, we design a comprehensive representation optimization strategy. Specifically, we adopts alignment and uniformity as the primary optimization objectives, and incorporates both coarse- and fine-grained matching to achieve effective alignment across different spaces, thereby enhancing representational consistency. Additionally, we further introduce intra-space and interaction regularization to enhance model robustness and prevent representation collapse in reconstructed space representation. Experiments on three public datasets against fifteen baseline methods show that DIAURec consistently outperforms state-of-the-art baselines, fully validating its effectiveness and superiority.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes DIAURec, a recommendation framework that unifies intent and language modeling by reconstructing user and item representations from prototype and distribution intent spaces derived from collaborative and language signals. It optimizes these via alignment and uniformity losses, coarse- and fine-grained matching across spaces, and intra-space plus interaction regularization to improve consistency and avoid collapse. Experiments on three public datasets against fifteen baselines report consistent outperformance, validating the approach's effectiveness.
Significance. If the empirical claims hold under rigorous controls, the work advances representation optimization in recommender systems by integrating dual intent spaces with contrastive objectives and regularizers. This could improve capture of latent preferences beyond standard intent or language modeling, with the alignment/uniformity strategy and collapse-prevention terms as potential strengths if they demonstrably outperform prior methods without dataset artifacts.
major comments (2)
- [§4] §4 (Experiments): The central claim of consistent superiority over 15 baselines on three datasets lacks reported details on data splits, hyperparameter tuning protocol, error bars, statistical significance tests (e.g., paired t-tests), or ablation studies isolating the dual-space reconstruction, alignment/uniformity losses, and regularizers; without these, the outperformance cannot be verified as robust rather than artifactual.
- [§3.3] §3.3 (Optimization objectives): Alignment and uniformity are standard contrastive losses; the manuscript must show via equations or ablations that the reported gains do not reduce to quantities controlled solely by fitted parameters in the prototype/distribution spaces, as this would undermine the claim that the dual-intent reconstruction plus regularizers genuinely enhance preference modeling.
minor comments (2)
- [Abstract] Abstract: grammatical error in 'we adopts alignment' should be 'we adopt alignment'.
- [§3] Notation: ensure consistent use of symbols for prototype vs. distribution spaces across sections to avoid reader confusion.
Simulated Author's Rebuttal
We thank the referee for the constructive comments. We address each major comment point-by-point below and will revise the manuscript to strengthen the experimental details and clarify the role of our optimization components.
Point-by-point responses
Referee: [§4] §4 (Experiments): The central claim of consistent superiority over 15 baselines on three datasets lacks reported details on data splits, hyperparameter tuning protocol, error bars, statistical significance tests (e.g., paired t-tests), or ablation studies isolating the dual-space reconstruction, alignment/uniformity losses, and regularizers; without these, the outperformance cannot be verified as robust rather than artifactual.
Authors: We agree that these details are necessary to verify robustness and reproducibility. In the revised manuscript, we will expand §4 to include explicit data split procedures (including ratios and whether random or chronological), the full hyperparameter tuning protocol with search ranges and validation criteria, results reported as mean ± standard deviation over multiple random seeds with error bars in tables and figures, paired t-test results for statistical significance against baselines, and comprehensive ablation studies that isolate the dual-intent space reconstruction, alignment/uniformity losses, coarse- and fine-grained matching, and intra-space/interaction regularizers. These additions will confirm that the gains are attributable to our framework rather than experimental artifacts. revision: yes
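The promised significance check can be run as a plain paired t-test over per-seed metric values. A stdlib-only sketch with made-up numbers standing in for, e.g., NDCG@20 of DIAURec versus one baseline across five seeds:

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired t-statistic and degrees of freedom (n - 1) for
    per-seed metric pairs from two models on the same splits."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of mean difference
    return mean(diffs) / se, n - 1

# illustrative numbers, not results from the paper
model = [0.231, 0.228, 0.235, 0.230, 0.233]
base  = [0.221, 0.219, 0.226, 0.220, 0.224]
t, df = paired_t(model, base)
```

The resulting t is compared against the t-distribution with df degrees of freedom to obtain the p-value; `scipy.stats.ttest_rel` gives both directly when SciPy is available.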
Referee: [§3.3] §3.3 (Optimization objectives): Alignment and uniformity are standard contrastive losses; the manuscript must show via equations or ablations that the reported gains do not reduce to quantities controlled solely by fitted parameters in the prototype/distribution spaces, as this would undermine the claim that the dual-intent reconstruction plus regularizers genuinely enhance preference modeling.
Authors: We acknowledge that alignment and uniformity are standard contrastive losses. Our key contribution is their integration with dual-intent (prototype and distribution) space reconstruction from collaborative and language signals, plus the coarse/fine-grained matching and regularization terms to prevent collapse and improve consistency. In the revision, we will add explicit equations in §3.3 showing the reconstruction process and the complete objective (including how losses operate across spaces). We will also include ablation experiments that fix the base spaces and losses while removing or ablating the matching and regularizers, demonstrating performance drops that cannot be recovered by parameter tuning alone. This will show the gains arise from the full dual-intent reconstruction and regularizers. revision: yes
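For concreteness, the standard single-space forms of the two primary objectives (Wang and Isola, 2020), which the promised §3.3 equations would extend with cross-space variants:

```latex
\mathcal{L}_{\text{align}}
  = \mathbb{E}_{(u,i)\sim p_{\text{pos}}}\!\left[\,\lVert f(u) - f(i) \rVert_2^{2}\,\right],
\qquad
\mathcal{L}_{\text{uniform}}
  = \log \mathbb{E}_{x,\,y \,\overset{\text{i.i.d.}}{\sim}\, p_{\text{data}}}\!\left[ e^{-2 \lVert f(x) - f(y) \rVert_2^{2}} \right],
```

where $f(\cdot)$ maps users and items to L2-normalized embeddings and $p_{\text{pos}}$ is the distribution of observed user-item interactions.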
Circularity Check
No significant circularity in derivation chain
Rationale
The paper's framework reconstructs user/item representations from dual prototype and distribution intent spaces (formed from collaborative and language signals) and optimizes them using alignment/uniformity losses drawn from standard contrastive learning literature, plus coarse/fine-grained matching and intra/inter-space regularizers. No equations or steps in the provided abstract reduce by construction to self-defined quantities, fitted parameters renamed as predictions, or load-bearing self-citations whose validity depends on the current work. The central claim is an empirical assertion of outperformance on three datasets versus 15 baselines; this is externally falsifiable and does not rely on a closed mathematical loop. The derivation is therefore self-contained against external benchmarks.