pith. machine review for the scientific record.

arxiv: 2604.02833 · v2 · submitted 2026-04-03 · 💻 cs.IR

Recognition: 2 theorem links · Lean Theorem

BIPCL: Bilateral Intent-Enhanced Sequential Recommendation via Embedding Perturbation Contrastive Learning

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 18:54 UTC · model grok-4.3

classification 💻 cs.IR
keywords sequential recommendation · contrastive learning · intent modeling · embedding perturbation · recommender systems · latent intents · user behavior modeling · collaborative filtering

The pith

BIPCL improves sequential recommendation by distilling collective intent semantics bilaterally and creating contrastive views through bounded embedding perturbations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes BIPCL to address challenges in modeling evolving user preferences from sequential interactions by capturing multiple latent intents more effectively. Existing approaches often suffer from information isolation across users and items and difficulty in building semantically consistent yet discriminative contrastive views. BIPCL introduces a bilateral intent-enhancement mechanism that distills shared intent prototypes from behaviorally similar entities on both user and item sides and integrates them into representations. It then applies bounded direction-aware perturbations to item embeddings to generate contrastive views without disrupting temporal dependencies, followed by multi-level alignment. Experiments on benchmarks show consistent gains over prior methods, with ablations confirming the role of each element.

Core claim

BIPCL is an end-to-end framework that explicitly integrates multi-intent signals into both item and sequence representations via a bilateral intent-enhancement mechanism, where shared intent prototypes capture collective intent semantics distilled from behaviorally similar entities. It constructs effective contrastive views by injecting bounded, direction-aware perturbations directly into structural item embeddings and enforces multi-level contrastive alignment across interaction- and intent-level representations, alleviating information isolation and improving robustness under sparse supervision.

What carries the argument

Bilateral intent-enhancement mechanism with shared intent prototypes on user and item sides, paired with embedding perturbation contrastive learning that injects bounded direction-aware changes into item embeddings to produce views for multi-level alignment.
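The perturbation-and-alignment machinery can be sketched generically. The paper's exact equations are not reproduced in this review, so the functions below (`perturb`, `info_nce`), the FGSM-style sign trick used to make the noise "direction-aware", and the parameters `eps` and `tau` are illustrative assumptions, not BIPCL's actual formulation.

```python
import numpy as np

def perturb(item_emb, eps=0.05, rng=None):
    """Bounded, direction-aware perturbation of item embeddings (sketch).

    'Direction-aware' is read here FGSM-style: noise magnitudes are random
    but signs follow the embedding itself, so each coordinate is pushed
    away from zero rather than flipped. The L2 norm of the change is
    capped at eps per item. Hypothetical reading, not the paper's formula.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = np.sign(item_emb) * np.abs(rng.normal(size=item_emb.shape))
    norms = np.linalg.norm(noise, axis=-1, keepdims=True)
    return item_emb + eps * noise / np.maximum(norms, 1e-12)

def info_nce(view_a, view_b, tau=0.2):
    """InfoNCE loss treating row i of each view as a positive pair."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = (a @ b.T) / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(probs)).mean())
```

Two perturbed copies of the same item-embedding matrix would serve as the contrastive views: because the perturbation norm is capped, each positive pair stays semantically close while other rows in the batch act as negatives.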

If this is right

  • Shared intent prototypes reduce information isolation by propagating collective semantics between user sequences and item representations.
  • Bounded perturbations preserve temporal and structural dependencies while generating sufficiently discriminative contrastive views.
  • Multi-level contrastive alignment strengthens representations at both the interaction level and the intent level.
  • The overall design yields higher recommendation accuracy than prior methods on standard sequential datasets.
  • Ablation results confirm that each added component contributes measurably to the observed gains.
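One hypothetical reading of how shared prototypes could propagate collective semantics: softly assign each user- or item-side representation to a small set of learned prototypes and blend the mixture back in. The function name, the mixing weight `alpha`, and the temperature `tau` are assumptions for illustration; the paper's actual distillation and fusion may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def intent_enhance(reps, prototypes, alpha=0.5, tau=0.1):
    """Blend representations with a soft mixture of shared intent
    prototypes (hypothetical sketch of the bilateral mechanism).

    reps:       (N, d) user-side or item-side representations
    prototypes: (K, d) shared intent prototypes
    """
    sim = reps @ prototypes.T          # (N, K) similarity to each prototype
    w = softmax(sim / tau, axis=1)     # soft intent assignment
    collective = w @ prototypes        # (N, d) distilled collective semantics
    return (1 - alpha) * reps + alpha * collective
```

With `alpha = 0` the representations pass through unchanged, which makes the collective-semantics contribution easy to ablate.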

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The bilateral prototype sharing could reduce cold-start degradation by supplying collective intent signals when individual histories are short.
  • The perturbation technique for view construction might transfer to other sequence models that rely on contrastive learning but must keep order intact.
  • If prototypes are learned from narrow behavioral clusters, performance could suffer on datasets with highly diverse intent patterns, pointing to a need for adaptive prototype selection.
  • Extending the multi-level alignment to include session-level or attribute-level contrasts could further refine the learned representations.

Load-bearing premise

Shared intent prototypes distilled from behaviorally similar entities capture useful collective semantics without introducing noise or spurious correlations that degrade representation quality.

What would settle it

Removing the bilateral intent-enhancement or the embedding perturbation component yields no gain or a drop in metrics such as HR@10 or NDCG@10 across multiple benchmark datasets compared with the full model.
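For concreteness, the metrics named above can be computed as follows under the standard single-held-out-item protocol; this is a generic implementation of HR@k and NDCG@k, not code from the paper.

```python
import numpy as np

def hr_ndcg_at_k(scores, target, k=10):
    """Hit Ratio@k and NDCG@k with one ground-truth item per user.

    scores: (num_users, num_items) predicted scores
    target: (num_users,) index of each user's held-out item
    """
    # rank of the target item among all candidates (0 = best)
    order = np.argsort(-scores, axis=1)
    ranks = np.array([np.where(order[u] == target[u])[0][0]
                      for u in range(scores.shape[0])])
    hit = ranks < k
    hr = hit.mean()
    # with a single relevant item, DCG reduces to 1/log2(rank + 2)
    ndcg = np.where(hit, 1.0 / np.log2(ranks + 2), 0.0).mean()
    return float(hr), float(ndcg)
```

An ablation "settles it" in the sense above if the full model's HR@10/NDCG@10 drop back to (or below) the ablated variant's values across datasets.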

Figures

Figures reproduced from arXiv: 2604.02833 by Shanfan Zhang, Yongyi Lin, Yuan Rao.

Figure 1: Collective Intent Integration in BIPCL. Sequences ending with the same item can reflect different underlying intents. Capturing intent patterns shared across sequences allows sequence representations to incorporate intent-level semantics and support consistent recommendations.
Figure 3: Performance comparison across user groups with different interaction levels (sparse, normal, and popular users).
Figure 4: Hyperparameter sensitivity of BIPCL. ΔNDCG@20 denotes the relative gain over the default configuration.
Figure 5: Density distributions of item intent embeddings.
Figure 6: Visualization of user interaction and recommenda…
Figure 7: CPU and GPU memory consumption of BIPCL and baseline methods on three large-scale datasets.
Figure 8: Hyperparameter sensitivity of BIPCL. Removing the intent enhancement module (w/o Intent) or replacing the sequence encoder with an average-pooled representation (w/o Pooling) substantially degrades performance. These results demonstrate the importance of explicit multi-intent modeling and long-term sequential dependency capture in BIPCL. In contrast, replacing the gated fusion mechanism with direct aggreg…
read the original abstract

Accurately modeling users' evolving preferences from sequential interactions remains a central challenge in recommender systems. Recent studies emphasize the importance of capturing multiple latent intents underlying user behaviors. However, existing methods often fail to effectively exploit collective intent signals shared across users and items, leading to information isolation and limited robustness. Meanwhile, current contrastive learning approaches struggle to construct views that are both semantically consistent and sufficiently discriminative. In this work, we propose BIPCL, an end-to-end Bilateral Intent-enhanced, Embedding Perturbation-based Contrastive Learning framework. BIPCL explicitly integrates multi-intent signals into both item and sequence representations via a bilateral intent-enhancement mechanism. Specifically, shared intent prototypes on the user and item sides capture collective intent semantics distilled from behaviorally similar entities, which are subsequently integrated into representation learning. This design alleviates information isolation and improves robustness under sparse supervision. To construct effective contrastive views without disrupting temporal or structural dependencies, BIPCL injects bounded, direction-aware perturbations directly into structural item embeddings. On this basis, BIPCL further enforces multi-level contrastive alignment across interaction- and intent-level representations. Extensive experiments on benchmark datasets demonstrate that BIPCL consistently outperforms state-of-the-art baselines, with ablation studies confirming the contribution of each component.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes BIPCL, a bilateral intent-enhanced sequential recommendation framework that distills shared intent prototypes from behaviorally similar users and items, integrates them bilaterally into sequence and item representations, and applies bounded direction-aware embedding perturbations to generate contrastive views for multi-level alignment. It claims this mitigates information isolation and improves robustness under sparse supervision, with extensive experiments on benchmark datasets showing consistent outperformance over state-of-the-art baselines and ablations confirming each component's contribution.

Significance. If the empirical results and prototype quality hold, the work could advance sequential recommendation by explicitly leveraging collective intent signals across entities while preserving temporal structure via perturbation-based views rather than augmentation that breaks dependencies. The bilateral design and multi-level contrastive loss address a recognized limitation in prior intent-aware and contrastive methods.

major comments (2)
  1. §3.2: The shared intent prototypes are distilled from behaviorally similar entities using implicit similarity (co-occurrence or embedding proximity) without explicit debiasing or external validation; in sparse sequential data this risks capturing spurious correlations or popularity bias that propagate into both user and item representations before bilateral enhancement and perturbation contrastive loss are applied, directly undermining the robustness claim.
  2. Experiments section: The central claim of consistent outperformance and component contributions rests on empirical results, yet the abstract and reported evidence provide no quantitative metrics, error bars, dataset statistics, statistical significance tests, or baseline implementation details; without these the load-bearing performance assertions cannot be assessed.
minor comments (2)
  1. Clarify the precise definition of behavioral similarity for prototype construction and the selection criterion for the number of prototypes, as these are free parameters whose sensitivity is not analyzed.
  2. Add a table or figure summarizing dataset characteristics (sparsity, sequence length distribution) to support the sparsity-robustness argument.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive feedback. We address each major comment below with clarifications and revisions to strengthen the manuscript.

read point-by-point responses
  1. Referee: §3.2: The shared intent prototypes are distilled from behaviorally similar entities using implicit similarity (co-occurrence or embedding proximity) without explicit debiasing or external validation; in sparse sequential data this risks capturing spurious correlations or popularity bias that propagate into both user and item representations before bilateral enhancement and perturbation contrastive loss are applied, directly undermining the robustness claim.

    Authors: We agree that implicit similarity-based prototype distillation can introduce spurious correlations or popularity bias, particularly in sparse data. The bilateral intent-enhancement and bounded direction-aware perturbations are intended to improve robustness by enforcing multi-level alignment that downweights inconsistent signals, as supported by our ablation results showing gains under sparse settings. In the revision, we have expanded §3.2 with a dedicated paragraph discussing this risk, added a simple popularity-stratified evaluation in the experiments, and clarified how the contrastive loss contributes to bias mitigation without claiming explicit debiasing. revision: partial

  2. Referee: Experiments section: The central claim of consistent outperformance and component contributions rests on empirical results, yet the abstract and reported evidence provide no quantitative metrics, error bars, dataset statistics, statistical significance tests, or baseline implementation details; without these the load-bearing performance assertions cannot be assessed.

    Authors: The full manuscript contains dataset statistics (Table 1), main results with relative improvements (Table 2), and ablation studies (Table 3 and Figure 4). However, we acknowledge the presentation lacked sufficient statistical rigor. In the revised version, we have added error bars from 5 random seeds, paired t-test p-values for all main comparisons, expanded baseline implementation details in the appendix, and included dataset sparsity statistics. The abstract remains a high-level summary without numbers, consistent with standard practice. revision: yes
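The rebuttal's promised paired t-tests over 5 seeds amount to the following computation; this is a standard dependency-free sketch, not the authors' code, and it returns only the t-statistic and degrees of freedom (the p-value would come from a t-table or `scipy.stats.ttest_rel`).

```python
import numpy as np

def paired_t(a, b):
    """Paired t-statistic for per-seed metric pairs, e.g. NDCG@10 of
    the full model vs. a baseline measured under the same 5 seeds.

    Returns (t, dof). Pairing by seed removes seed-level variance that
    an unpaired test would wrongly count against the comparison.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return float(t), n - 1
```

For five seeds, dof is 4, so |t| > 2.776 corresponds to p < 0.05 (two-sided).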

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper defines BIPCL as an empirical architecture combining bilateral intent prototypes, embedding perturbations, and multi-level contrastive losses, then validates it via experiments on standard benchmarks. No derivation chain, equation, or claim reduces by construction to its own inputs or to fitted parameters renamed as predictions. No self-citation is invoked as a uniqueness theorem or load-bearing premise. Validation rests on external data splits and baselines rather than on the model's own outputs.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 1 invented entity

The framework introduces new modeling components whose value is asserted empirically; multiple design choices function as free parameters and domain assumptions.

free parameters (2)
  • number of shared intent prototypes
    Determines granularity of collective intent signals; value chosen to balance expressiveness and overfitting risk.
  • perturbation bound and direction parameters
    Control magnitude and direction of embedding changes; fitted or tuned to maintain semantic consistency.
axioms (2)
  • domain assumption Behaviorally similar entities share meaningful collective intent semantics that can be distilled into prototypes
    Invoked to justify the bilateral intent-enhancement mechanism.
  • domain assumption Bounded perturbations preserve temporal and structural dependencies in sequences
    Required for the contrastive view construction to remain valid.
invented entities (1)
  • shared intent prototypes (no independent evidence)
    purpose: Capture and inject collective intent semantics into user and item representations
    New modeling construct introduced to alleviate information isolation

pith-pipeline@v0.9.0 · 5521 in / 1358 out tokens · 34516 ms · 2026-05-13T18:54:35.743115+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

67 extracted references · 67 canonical work pages

  1. [1]

    Ting Bai, Jian-Yun Nie, Wayne Xin Zhao, Yutao Zhu, Pan Du, and Ji-Rong Wen

  2. [2]

    InThe 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’18)

    An Attribute-aware Neural Attentive Model for Next Basket Recommenda- tion. InThe 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’18). Association for Computing Machinery, New York, NY, USA, 1201–1204. doi:10.1145/3209978.3210129

  3. [3]

    Yukuo Cen, Jianwei Zhang, Xu Zou, Chang Zhou, Hongxia Yang, and Jie Tang

  4. [4]

    InProceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’20)

    Controllable Multi-Interest Framework for Recommendation. InProceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’20). Association for Computing Machinery, New York, NY, USA, 2942–2951. doi:10.1145/3394486.3403344

  5. [5]

    Zheng Chai, Zhihong Chen, Chenliang Li, Rong Xiao, Houyi Li, Jiawei Wu, Jingxu Chen, and Haihong Tang. 2022. User-Aware Multi-Interest Learning for Candidate Matching in Recommenders. InProceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22). Association for Computing Machinery, New York,...

  6. [6]

    doi:10.1145/3477495.3532073

  7. [7]

    Gaode Chen, Yuezihan Jiang, Rui Huang, Kuo Cai, Yunze Luo, Ruina Sun, Qi Zhang, Han Li, and Kun Gai. 2024. Missing Interest Modeling with Lifelong User Behavior Data for Retrieval Recommendation. InProceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM ’24). Association for Computing Machinery, New York, NY, USA...

  8. [8]

    Gaode Chen, Xinghua Zhang, Yanyan Zhao, Cong Xue, and ji Xiang. 2021. Ex- ploring Periodicity and Interactivity in Multi-Interest Framework for Sequential Recommendation. InProceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI ’21). 1426–1433

  9. [9]

    Hong Chen, Yudong Chen, Xin Wang, Ruobing Xie, Rui Wang, Feng Xia, and Wenwu Zhu. 2021. Curriculum disentangled recommendation with noisy multi- feedback. InProceedings of the 35th International Conference on Neural Information Processing Systems (NIPS ’21). Curran Associates Inc

  10. [10]

    Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the 37th International Conference on Machine Learning (ICML ’20). JMLR.org

  11. [11]

    Xu Chen, Hongteng Xu, Yongfeng Zhang, Jiaxi Tang, Yixin Cao, Zheng Qin, and Hongyuan Zha. 2018. Sequential Recommendation with User Memory Networks. InProceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM ’18). Association for Computing Machinery, New York, NY, USA, 108–116. doi:10.1145/3159652.3159668

  12. [12]

    Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, and Caiming Xiong. 2022. Intent Contrastive Learning for Sequential Recommendation. InProceedings of the ACM Web Conference 2022 (WWW ’22). Association for Computing Machinery, New York, NY, USA, 2172–2182. doi:10.1145/3485447.3512090

  13. [13]

    Yu-Ting Cheng, Yu-Yen Ho, and Jyun-Yu Jiang. 2025. Collaborative Interest Mod- eling in Recommender Systems. InProceedings of the Nineteenth ACM Conference on Recommender Systems (RecSys ’25). Association for Computing Machinery, New York, NY, USA, 533–538. doi:10.1145/3705328.3748023

  14. [14]

    Yizhou Dang, Enneng Yang, Guibing Guo, Linying Jiang, Xingwei Wang, Xiaoxiao Xu, Qinghui Sun, and Hong Liu. 2024. TiCoSeRec: Augmenting Data to Uniform Sequences by Time Intervals for Effective Recommendation.IEEE Transactions on Knowledge and Data Engineering36, 6 (2024), 2686–2700. doi:10.1109/TKDE. 2023.3324312

  15. [15]

    Tim Donkers, Benedikt Loepp, and Júrgen Ziegler. 2017. Sequential User-based Recurrent Neural Network Recommendations. InProceedings of the Eleventh ACM Conference on Recommender Systems (RecSys ’17). Association for Computing Machinery, New York, NY, USA, 152–160. doi:10.1145/3109859.3109877

  16. [16]

    Yingpeng Du, Ziyan Wang, Zhu Sun, Yining Ma, Hongzhi Liu, and Jie Zhang

  17. [17]

    InProceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’24)

    Disentangled Multi-interest Representation Learning for Sequential Rec- ommendation. InProceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’24). Association for Computing Machinery, New York, NY, USA, 677–688. doi:10.1145/3637528.3671800

  18. [18]

    Hui Fang, Danning Zhang, Yiheng Shu, and Guibing Guo. 2020. Deep Learning for Sequential Recommendation: Algorithms, Influential Factors, and Evaluations. ACM Trans. Inf. Syst.39, 1 (2020), 1–42. doi:10.1145/3426723

  19. [19]

    Goodfellow, Jonathon Shlens, and Christian Szegedy

    Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. InInternational Conference on Learning Representations (ICLR ’15)

  20. [20]

    Ruining He and Julian McAuley. 2016. Fusing Similarity Models with Markov Chains for Sparse Sequential Recommendation. In2016 IEEE 16th International Conference on Data Mining (ICDM) (ICDM ’16). 191–200. doi:10.1109/ICDM.2016. 0030

  21. [21]

    Zhiyu He, Zhixin Ling, Jiayu Li, Zhiqiang Guo, Weizhi Ma, Xinchen Luo, Min Zhang, and Guorui Zhou. 2025. Short Video Segment-level User Dynamic In- terests Modeling in Personalized Recommendation. InProceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’25). Association for Computing Machine...

  22. [22]

    Jin Huang, Wayne Xin Zhao, Hongjian Dou, Ji-Rong Wen, and Edward Y. Chang

  23. [23]

    InThe 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’18)

    Improving Sequential Recommendation with Knowledge-Enhanced Mem- ory Networks. InThe 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’18). Association for Computing Machinery, New York, NY, USA, 505–514. doi:10.1145/3209978.3210017

  24. [24]

    Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (ACL ’15). Association...

  25. [25]

    Mengyuan Jing, Yanmin Zhu, Tianzi Zang, and Ke Wang. 2023. Contrastive Self-supervised Learning in Recommender Systems: A Survey.ACM Trans. Inf. Syst.42, 2 (2023), 1–39. doi:10.1145/3627158

  26. [26]

    Clark Mingxuan Ju, Leonardo Neves, Bhuvesh Kumar, Liam Collins, Tong Zhao, Yuwei Qiu, Qing Dou, Sohail Nizam, Sen Yang, and Neil Shah. 2025. Revisiting Self-attention for Cross-domain Sequential Recommendation. InProceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD ’25). Association for Computing Machinery, New Yor...

  27. [27]

    Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recom- mendation. In2018 IEEE International Conference on Data Mining (ICDM) (ICDM ’18). 197–206. doi:10.1109/ICDM.2018.00035

  28. [28]

    Feature level Deeper Self-Attention Network for Sequential Recommendation

  29. [29]

    and Xu, Jiajie and Wang, Deqing and Liu, Guanfeng and Zhou, Xiaofang

    Zhang, Tingting and Zhao, Pengpeng and Liu, Yanchi and Sheng, Victor S. and Xu, Jiajie and Wang, Deqing and Liu, Guanfeng and Zhou, Xiaofang. InProceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Conferences on Artificial Intelligence Organization, 4320–4326. doi:10.24963/ijcai.2019/600

  30. [30]

    Chao Li, Zhiyuan Liu, Mengmeng Wu, Yuchi Xu, Huan Zhao, Pipei Huang, Guoliang Kang, Qiwei Chen, Wei Li, and Dik Lun Lee. 2019. Multi-Interest Network with Dynamic Routing for Recommendation at Tmall. InProceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM ’19). Association for Computing Machinery, New York, NY,...

  31. [31]

    doi:10.1145/3357384.3357814

  32. [32]

    Jian Li, Jieming Zhu, Qiwei Bi, Guohao Cai, Lifeng Shang, Zhenhua Dong, Xin Jiang, and Qun Liu. 2022. MINER: Multi-Interest Matching Network for News Recommendation. InFindings of the Association for Computational Lin- guistics: ACL 2022. Association for Computational Linguistics, 343–352. https: //aclanthology.org/2022.findings-acl.29/

  33. [33]

    Nian Li, Xin Ban, Cheng Ling, Chen Gao, Lantao Hu, Peng Jiang, Kun Gai, Yong Li, and Qingmin Liao. 2024. Modeling User Fatigue for Sequential Recommendation. InProceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’24). Association for Computing Machinery, New York, NY, USA, 996–1005. doi:10...

  34. [34]

    Sitao Lin, Shuai Tang, Xiaofeng Zhang, Jianghong Ma, and Ziao Wang. 2026. CoDeR+: Interest-aware Counterfactual Reasoning for Sequential Recommenda- tion.ACM Trans. Inf. Syst.44, 2 (2026), 1–39. doi:10.1145/3778863

  35. [35]

    Xiaolin Lin, Weike Pan, and Zhong Ming. 2025. Towards Interest Drift-driven User Representation Learning in Sequential Recommendation. InProceedings of the 48th International ACM SIGIR Conference on Research and Development in Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Trovato et al. Information Retrieval (SIGIR ’25). Association for Computin...

  36. [36]

    Danyang Liu, Yuji Yang, Mengdi Zhang, Wei Wu, Xing Xie, and Guangzhong Sun. 2022. Knowledge Enhanced Multi-Interest Network for the Generation of Recommendation Candidates. InProceedings of the 31st ACM International Conference on Information & Knowledge Management (CIKM ’22). Association for Computing Machinery, New York, NY, USA, 3322–3331. doi:10.1145/...

  37. [37]

    Yaokun Liu, Xiaowang Zhang, Minghui Zou, and Zhiyong Feng. 2024. Attribute Simulation for Item Embedding Enhancement in Multi-interest Recommendation. InProceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM ’24). Association for Computing Machinery, New York, NY, USA, 482–491. doi:10.1145/3616855.3635841

  38. [38]

    Yue Liu, Shihao Zhu, Jun Xia, Yingwei Ma, Jian Ma, Xinwang Liu, Shengju Yu, Kejun Zhang, and Wenliang Zhong. 2024. End-to-end Learnable Clustering for Intent Learning in Recommendation. InAdvances in Neural Information Processing Systems (NIPS ’24). Curran Associates, Inc., 5913–5949. doi:10.52202/079017-0192

  39. [39]

    Yu, Julian McAuley, and Caiming Xiong

    Zhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian McAuley, and Caiming Xiong. 2021.Contrastive Self-supervised Sequential Recommendation with Robust Augmentation. arXiv:2108.06479 doi:10.48550/arXiv.2108.06479

  40. [40]

    Jianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, and Wenwu Zhu. 2019. Learning Disentangled Representations for Recommendation. InAdvances in Neural Information Processing Systems (NIPS ’19). Curran Associates, Inc., New York, NY, USA. https://proceedings.neurips.cc/paper_files/paper/2019/file/ a2186aa7c086b46ad4e8bf81e2a3a19b-Paper.pdf

  41. [41]

    Chang Meng, Ziqi Zhao, Wei Guo, Yingxue Zhang, Haolun Wu, Chen Gao, Dong Li, Xiu Li, and Ruiming Tang. 2023. Coarse-to-Fine Knowledge-Enhanced Multi- Interest Learning Framework for Multi-Behavior Recommendation.ACM Trans. Inf. Syst.42, 1 (2023), 1–27. doi:10.1145/3606369

  42. [42]

    Bo Pei, Yingzheng Zhu, Guangjin Wang, Huajuan Duan, Wenya Wu, Fuyong Xu, Yizhao Zhu, Peiyu Liu, and Ran Lu. 2025. Intent Contrastive Learning Based on Multi-view Augmentation for Sequential Recommendation. InProceedings of the 31st International Conference on Computational Linguistics (COLING ’25). Association for Computational Linguistics, 3300–3309. htt...

  43. [43]

    Xiuyuan Qin, Huanhuan Yuan, Pengpeng Zhao, Junhua Fang, Fuzhen Zhuang, Guanfeng Liu, Yanchi Liu, and Victor Sheng. 2023. Meta-optimized Contrastive Learning for Sequential Recommendation. InProceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23). Association for Computing Machinery, New Y...

  44. [44]

    Xiuyuan Qin, Huanhuan Yuan, Pengpeng Zhao, Guanfeng Liu, Fuzhen Zhuang, and Victor S. Sheng. 2024. Intent Contrastive Learning with Cross Subsequences for Sequential Recommendation. InProceedings of the 17th ACM International Con- ference on Web Search and Data Mining (WSDM ’24). Association for Computing Machinery, New York, NY, USA, 548–556. doi:10.1145...

  45. [45]

    Ruihong Qiu, Zi Huang, Hongzhi Yin, and Zijian Wang. 2022. Contrastive Learn- ing for Representation Degeneration Problem in Sequential Recommendation. InProceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (WSDM ’22). Association for Computing Machinery, New York, NY, USA, 813–823. doi:10.1145/3488560.3498433

  46. [46]

    Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factor- izing personalized Markov chains for next-basket recommendation. InProceedings of the 19th International Conference on World Wide Web (WWW ’10). Association for Computing Machinery, New York, NY, USA, 811–820. doi:10.1145/1772690. 1772773

  47. [47]

    Weiqi Shao, Xu Chen, Jiashu Zhao, Long Xia, Jingsen Zhang, and Dawei Yin. 2023. Sequential Recommendation with User Evolving Preference Decomposition. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region (SIGIR-AP ’23). Association for Computing Machinery, New York,...

  48. [48]

    Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang

  49. [49]

    InProceedings of the 28th ACM International Conference on Information and Knowledge Management(Beijing, China)(CIKM ’19)

    BERT4Rec: Sequential Recommendation with Bidirectional Encoder Repre- sentations from Transformer. InProceedings of the 28th ACM International Confer- ence on Information and Knowledge Management (CIKM ’19). Association for Com- puting Machinery, New York, NY, USA, 1441–1440. doi:10.1145/3357384.3357895

  50. [50]

    Yu Tian, Jianxin Chang, Yanan Niu, Yang Song, and Chenliang Li. 2022. When Multi-Level Meets Multi-Interest: A Multi-Grained Neural Model for Sequential Recommendation. InProceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22). Association for Computing Machinery, New York, NY, USA, 1632–...

  51. [51]

    Maolin Wang, Yutian Xiao, Binhao Wang, Sheng Zhang, Shanshan Ye, Wanyu Wang, Hongzhi Yin, Ruocheng Guo, and Zenglin Xu. 2025. FindRec: Stein-Guided Entropic Flow for Multi-Modal Sequential Recommendation. InProceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD ’25). Association for Computing Machinery, New York, NY,...

  52. [52]

    Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. InProceedings of the 37th International Conference on Machine Learning (ICML ’20). JMLR.org, 9929–9939

  53. [53]

    Wuhong Wang, Jianhui Ma, Yuren Zhang, Kai Zhang, Junzhe Jiang, Yihui Yang, Yacong Zhou, and Zheng Zhang. 2025. Intent Oriented Contrastive Learning for Sequential Recommendation.Proceedings of the AAAI Conference on Artificial Intelligence39, 12 (2025), 12748–12756. doi:10.1609/aaai.v39i12.33390

  54. [54]

    Haolun Wu, Ofer Meshi, Masrour Zoghi, Fernando Diaz, Xue Liu, Craig Boutilier, and Maryam Karimzadehgan. 2024. Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval. InAd- vances in Neural Information Processing Systems (NIPS ’24). Curran Associates, Inc., 52568–52594. doi:10.52202/079017-1666

  55. [55]

    Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021. Self-supervised Graph Learning for Recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’21). Association for Computing Machinery, New York, NY, USA, 726–735. doi:10.1145/3404835.3462862

  56. [56]

    Wenhao Wu, Xiaojie Li, Lin Wang, Jialiang Zhou, Di Wu, Qinye Xie, Qingheng Zhang, Yin Zhang, Shuguang Han, Fei Huang, and Jufeng Chen. 2025. IU4Rec: Interest Unit-Based Product Organization and Recommendation for E-Commerce Platform. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD ’25). Association for Com...

  57. [57]

    Zhibo Xiao, Luwei Yang, Wen Jiang, Yi Wei, Yi Hu, and Hao Wang. 2020. Deep Multi-Interest Network for Click-through Rate Prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM ’20). Association for Computing Machinery, New York, NY, USA, 2265–... doi:10.1145/3340531.3412092

  59. [59]

    Xu Xie, Fei Sun, Zhaoyang Liu, Shiwen Wu, Jinyang Gao, Jiandong Zhang, Bolin Ding, and Bin Cui. 2022. Contrastive Learning for Sequential Recommendation. In 2022 IEEE 38th International Conference on Data Engineering (ICDE) (ICDE ’22). IEEE, 1259–1273. doi:10.1109/ICDE53745.2022.00099

  60. [60]

    Yueqi Xie, Jingqi Gao, Peilin Zhou, Qichen Ye, Yining Hua, Jae Boum Kim, Fangzhao Wu, and Sunghun Kim. 2023. Rethinking Multi-Interest Learning for Candidate Matching in Recommender Systems. In Proceedings of the 17th ACM Conference on Recommender Systems (RecSys ’23). Association for Computing Machinery, New York, NY, USA, 283–293. doi:10.1145/3604915.3608766

  61. [61]

    Xiang Ying, Rui Ding, Yue Zhao, Mei Yu, and Mankun Zhao. 2025. DPT: Dynamic Preference Transfer for Cross-Domain Sequential Recommendation. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM ’25). Association for Computing Machinery, New York, NY, USA, 3909–3919. doi:10.1145/3746252.3761075

  62. [62]

    Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Lizhen Cui, and Quoc Viet Hung Nguyen. 2022. Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22). Association for Computing Machinery, New York, NY,...

  63. [63]

    Jun Yuan, Guohao Cai, and Zhenhua Dong. 2025. A Contextual-Aware Position Encoding for Sequential Recommendation. In Companion Proceedings of the ACM on Web Conference 2025 (WWW ’25). Association for Computing Machinery, New York, NY, USA, 577–585. doi:10.1145/3701716.3715206

  64. [64]

    Shengyu Zhang, Lingxiao Yang, Dong Yao, Yujie Lu, Fuli Feng, Zhou Zhao, Tat-Seng Chua, and Fei Wu. 2022. Re4: Learning to Re-contrast, Re-attend, Re-construct for Multi-interest Recommendation. In Proceedings of the ACM Web Conference 2022 (WWW ’22). Association for Computing Machinery, New York, NY, USA, 2216–2226. doi:10.1145/3485447.3512094

  65. [65]

    Yabin Zhang, Zhenlei Wang, Wenhui Yu, Lantao Hu, Peng Jiang, Kun Gai, and Xu Chen. 2024. Soft Contrastive Sequential Recommendation. ACM Trans. Inf. Syst. 42, 6 (2024), 1–28. doi:10.1145/3665325

  66. [66]

    Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM ’20). Association for Computing Machinery,...

  67. [67]

    Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. 2021. Graph Contrastive Learning with Adaptive Augmentation. In Proceedings of the Web Conference 2021 (WWW ’21). Association for Computing Machinery, New York, NY, USA, 2069–2080. doi:10.1145/3442381.3449802