pith. machine review for the scientific record.

arxiv: 2605.11145 · v1 · submitted 2026-05-11 · 💻 cs.IR · cs.LG

Recognition: 2 theorem links


Debiasing Message Passing to Mitigate Popularity Bias in GNN-based Collaborative Filtering

Ahmed Sayeed Faruk, Elena Zheleva, Md Aminul Islam, Sourav Medya

Authors on Pith · no claims yet

Pith reviewed 2026-05-13 02:29 UTC · model grok-4.3

classification 💻 cs.IR cs.LG
keywords collaborative filtering · graph neural networks · popularity bias · debiasing · message passing · recommender systems · long-tail items · aggregation weighting

The pith

Adaptive embedding-aware weights and layer-wise amplification during GNN message passing reduce popularity bias in collaborative filtering.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

GNN-based collaborative filtering models propagate signals over user-item graphs but repeatedly aggregate high-degree items, which skews recommendations toward popular products and buries long-tail ones. Prior debiasing strategies that adjust loss functions or post-process outputs leave the core aggregation step untouched, so bias persists through multiple layers. DPAA intervenes inside message passing by deriving interaction weights from current embedding representations and by scaling the contribution of each hop to favor distant, less popular neighbors. A smooth switch from fixed pre-trained embeddings to the model's own evolving representations keeps the weighting stable while training proceeds.
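The amplification mechanism described above can be reproduced in a toy setting. The sketch below is illustrative, not the paper's setup: it assumes LightGCN-style symmetric normalization, a Zipf-skewed interaction matrix, and random initial embeddings, then checks how strongly item-embedding norms track item degree after a few rounds of message passing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 200, 50

# Zipf-skewed item popularity: a few head items receive most interactions.
pop = 1.0 / np.arange(1, n_items + 1)
pop /= pop.sum()
R = (rng.random((n_users, n_items)) < 20 * pop).astype(float)

# LightGCN-style symmetric normalization of the bipartite adjacency.
du = np.maximum(R.sum(axis=1, keepdims=True), 1.0)
di = np.maximum(R.sum(axis=0, keepdims=True), 1.0)
A = R / np.sqrt(du) / np.sqrt(di)

U = rng.normal(size=(n_users, 16))
I = rng.normal(size=(n_items, 16))
for _ in range(3):          # three rounds of user<->item message passing
    U, I = A @ I, A.T @ U

deg = R.sum(axis=0)
r = np.corrcoef(deg, np.linalg.norm(I, axis=1))[0, 1]
print(f"degree vs. embedding-norm correlation after 3 layers: {r:.2f}")
```

A strongly positive correlation is the bias the paper attributes to repeated aggregation: representation magnitude becomes a proxy for popularity.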

Core claim

DPAA counters popularity amplification by computing per-interaction weights from a representation-aware popularity signal and by applying layer-specific multipliers that increase the reach of higher-order neighborhoods containing more diverse items.

What carries the argument

Adaptive interaction weighting derived from embedding-aware popularity signals, stabilized by a pre-trained-to-current transition, plus layer-wise amplification of neighborhood contributions inside the GNN aggregation step.
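The exact update rules are not in the available text; as a hedged sketch, the two mechanisms might compose as follows. The function name, the norm-based popularity proxy, and the hyperparameters gamma and tau are all hypothetical, not the authors' implementation.

```python
import numpy as np

def dpaa_style_aggregate(R, user_emb, item_emb, n_layers=3, gamma=1.5, tau=0.5):
    """Illustrative sketch only: (a) per-interaction weights from an
    embedding-derived popularity proxy, (b) layer-wise amplification
    gamma**l of higher-order hops."""
    # (a) Embedding-aware popularity proxy: here, the item-embedding norm.
    prox = np.linalg.norm(item_emb, axis=1)
    w = np.exp(-prox / tau)                          # down-weight "popular" items
    Wu = R * w[None, :]
    Wu /= np.maximum(Wu.sum(axis=1, keepdims=True), 1e-12)   # users aggregate items
    Wi = R / np.maximum(R.sum(axis=0, keepdims=True), 1e-12) # items aggregate users

    U, I, out = user_emb, item_emb, user_emb.copy()
    for l in range(1, n_layers + 1):
        U, I = Wu @ I, Wi.T @ U
        out = out + gamma ** l * U                   # (b) amplify deeper hops
    return out / sum(gamma ** k for k in range(n_layers + 1))

rng = np.random.default_rng(0)
R = (rng.random((30, 12)) < 0.3).astype(float)
out = dpaa_style_aggregate(R, rng.normal(size=(30, 8)), rng.normal(size=(12, 8)))
print(out.shape)
```

With gamma > 1, later hops (which reach more distant, typically less popular items) contribute more to the final representation, which is the stated intent of the layer-wise weighting.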

Load-bearing premise

Adaptive embedding-aware interaction weighting combined with layer-wise amplification will directly counteract bias propagated through GNN aggregation without introducing training instability or new biases from the weighting itself.

What would settle it

An experiment in which DPAA is inserted into standard GNN layers yet popularity bias metrics remain unchanged or worsen on long-tail test items would show the weighting does not achieve the intended correction.

Figures

Figures reproduced from arXiv: 2605.11145 by Ahmed Sayeed Faruk, Elena Zheleva, Md Aminul Islam, Sourav Medya.

Figure 1. Training data distribution across real-world … [image: figures/full_fig_p001_1.png] (view at source)
Figure 2. Performance comparison of different methods for … [image: figures/full_fig_p008_2.png] (view at source)
Figure 3. By default, DPAA uses γ = 1, meaning that IIW is applied only at the first layer. [image: figures/full_fig_p008_3.png] (view at source)
Figure 3. Performance comparison of DPAA variants for … [image: figures/full_fig_p009_3.png] (view at source)
read the original abstract

Collaborative filtering (CF) models based on graph neural networks (GNNs) achieve strong performance in recommender systems by propagating user-item signals over interaction graphs. However, they are highly susceptible to popularity bias, since skewed interaction distributions and repeated message passing across high-order neighborhoods amplify the influence of popular items while suppressing long-tail ones. Existing debiasing approaches, including re-weighting objectives, regularization, causal methods, and post-processing, are less effective in GNN-based settings because they do not directly counteract bias propagated through the aggregation process, and recent in-aggregation weighting methods often rely on static heuristics or unstable embedding estimates. We propose Debiasing Popularity Amplification in Aggregation (DPAA), a popularity debiasing framework for GNN-based CF that integrates adaptive, embedding-aware interaction weighting and layer-wise weighting directly into message passing. DPAA assigns interaction-level weights from a representation-aware popularity signal, stabilized by a smooth transition from pre-trained to evolving model embeddings during training. It further introduces a layer-wise weighting that amplifies higher-order neighborhoods, surfacing long-range interactions with diverse and underexposed items. Experiments on real-world and semi-synthetic datasets show that DPAA outperforms state-of-the-art popularity-bias correction methods for GNN-based CF.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes DPAA (Debiasing Popularity Amplification in Aggregation), a framework for GNN-based collaborative filtering that integrates adaptive, embedding-aware interaction weighting—stabilized by a smooth transition from pre-trained to evolving embeddings—with layer-wise amplification directly into the message passing process to mitigate popularity bias propagation. It claims this directly counters bias amplified by skewed interactions and high-order neighborhoods, outperforming existing debiasing methods on real-world and semi-synthetic datasets.

Significance. If the central claims hold with rigorous validation, the work would advance bias mitigation in GNN-based CF by intervening at the aggregation level rather than through post-hoc or objective-level adjustments. The design choice of pre-training for weight stabilization and layer-wise amplification to surface long-range diverse interactions is a targeted contribution that could improve long-tail item exposure if shown to be stable.

major comments (2)
  1. [DPAA framework description] DPAA's interaction-level weights are computed from a representation-aware popularity signal that starts from pre-trained embeddings produced by standard GNN training on the same skewed interaction graph. This risks carrying the initial popularity bias forward into the adaptive weights; the transition schedule must be shown (via equations or analysis) to sufficiently suppress correlation with item degree before higher-order aggregation, or else the layer-wise amplification risks magnifying residual bias rather than surfacing long-tail items.
  2. [Experiments] The abstract and summary claim outperformance over SOTA popularity-bias correction methods but supply no concrete metrics, ablation results, statistical significance tests, dataset statistics, or baseline comparisons. Without these details, the central empirical claim cannot be assessed for robustness or practical impact.
minor comments (1)
  1. [Abstract] The abstract would be strengthened by briefly noting the specific datasets used and key quantitative improvements to better convey the scale of the claimed gains.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our work. We address each major comment below with clarifications and indicate the revisions we will make to strengthen the manuscript.

read point-by-point responses
  1. Referee: [DPAA framework description] DPAA's interaction-level weights are computed from a representation-aware popularity signal that starts from pre-trained embeddings produced by standard GNN training on the same skewed interaction graph. This risks carrying the initial popularity bias forward into the adaptive weights; the transition schedule must be shown (via equations or analysis) to sufficiently suppress correlation with item degree before higher-order aggregation, or else the layer-wise amplification risks magnifying residual bias rather than surfacing long-tail items.

    Authors: We agree that the initialization from pre-trained embeddings on the skewed graph requires careful handling to avoid propagating bias. The DPAA design uses a smooth transition schedule (detailed in Section 3.2) that progressively shifts weight computation from the fixed pre-trained embeddings to the model's evolving embeddings during training. To directly address the concern, we will add explicit equations for the transition function and new analysis (including plots of weight-degree correlation over epochs) demonstrating that correlation with item popularity drops substantially prior to higher-order message passing. This ensures the layer-wise amplification prioritizes diverse long-tail interactions. revision: yes
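The promised weight-degree correlation analysis could take roughly this form. This is a toy check under stated assumptions (a sigmoid schedule, degree-contaminated pre-trained embeddings, idealized degree-free evolving embeddings); none of it is from the paper.

```python
import numpy as np

def blend(e_pre, e_cur, t, T, k=20.0):
    """Smooth pre-trained -> evolving transition; the sigmoid schedule is illustrative."""
    alpha = 1.0 / (1.0 + np.exp(-k * (t / T - 0.5)))
    return (1.0 - alpha) * e_pre + alpha * e_cur

rng = np.random.default_rng(1)
deg = np.minimum(rng.zipf(2.0, size=100), 50).astype(float)   # skewed item degrees
e_pre = deg[:, None] * rng.normal(1.0, 0.1, size=(100, 8))    # degree-contaminated
e_cur = rng.normal(size=(100, 8))                             # degree-free (idealized)

# Correlation between the weighting signal (embedding norm) and item degree
# over the course of training: it should fall as the blend shifts to e_cur.
corr = {t: np.corrcoef(np.linalg.norm(blend(e_pre, e_cur, t, 100), axis=1), deg)[0, 1]
        for t in (0, 25, 50, 75, 100)}
print({t: round(c, 2) for t, c in corr.items()})
```

If the correlation stays high late in training, the referee's concern stands: amplification would act on a still-biased signal.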

  2. Referee: [Experiments] The abstract and summary claim outperformance over SOTA popularity-bias correction methods but supply no concrete metrics, ablation results, statistical significance tests, dataset statistics, or baseline comparisons. Without these details, the central empirical claim cannot be assessed for robustness or practical impact.

    Authors: The full experimental section (Section 4) already contains the requested details: concrete metrics (Recall@10/20, NDCG@10/20, and fairness-aware measures), ablation studies on each DPAA component, paired t-test significance results, dataset statistics (including interaction sparsity and popularity distributions), and comparisons against all listed baselines on both real-world and semi-synthetic data. To improve accessibility, we will revise the abstract to include one or two key quantitative highlights and add a compact results summary table to the introduction. revision: partial
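For reference, the two accuracy metrics cited in the response are standard; a minimal implementation, assuming binary relevance:

```python
import numpy as np

def recall_at_k(ranked, relevant, k):
    """Fraction of a user's held-out items that appear in the top-k list."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG: DCG of the top-k list over the ideal DCG."""
    rel = set(relevant)
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in rel)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(rel), k)))
    return dcg / idcg

ranked = [5, 1, 9, 2, 7]       # a model's top-5 item ids for one user (toy data)
relevant = [1, 2, 3]           # held-out test items
print(recall_at_k(ranked, relevant, 5))   # 2 of 3 held-out items in the top-5
print(round(ndcg_at_k(ranked, relevant, 5), 3))
```

Note that both reward head-item hits equally; that is why the paper pairs them with fairness-aware measures for the long tail.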

Circularity Check

0 steps flagged

No significant circularity in DPAA derivation or claims

full rationale

The paper presents DPAA as an algorithmic framework that integrates adaptive interaction weighting (from a representation-aware popularity signal with smooth transition from pre-trained to evolving embeddings) and layer-wise amplification into GNN message passing. No equations or derivation steps are provided in the available text that reduce the debiasing outcome to a tautological re-expression of the input interaction graph or fitted parameters by construction. The pre-training step is described as stabilization rather than a load-bearing self-citation or fitted-input prediction that forces the final result. Experimental outperformance on real-world and semi-synthetic datasets is claimed as external validation, with no uniqueness theorem, ansatz smuggling, or renaming of known results invoked as the central justification. The method is self-contained as a practical debiasing technique whose independence from the target bias signal is asserted via the transition schedule and amplification, without mathematical reduction to inputs.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The approach rests on standard GNN propagation assumptions and introduces new weighting schemes whose parameters are learned from data; no new physical entities are postulated.

free parameters (2)
  • interaction-level weights
    Computed from representation-aware popularity signal and stabilized during training; values are data-dependent and not fixed a priori.
  • layer-wise amplification factors
    Control emphasis on higher-order neighborhoods; chosen or learned to surface long-range interactions.
axioms (2)
  • domain assumption Skewed interaction distributions combined with repeated message passing across high-order neighborhoods amplify popular-item influence.
    Explicitly stated as the source of bias in GNN-based CF.
  • standard math GNN-based collaborative filtering propagates user-item signals over interaction graphs.
    Foundational modeling choice for the entire framework.
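Neither free parameter is pinned down by equations in the available text; one plausible form, written here purely as a hypothesis, is

```latex
% Hypothetical form of the two free parameters (not taken from the paper).
% Per-interaction weight alpha_{ui} inside one aggregation step:
\[
  \mathbf{e}_u^{(l+1)} \;=\; \sum_{i \in \mathcal{N}(u)}
    \frac{\alpha_{ui}}{\sqrt{|\mathcal{N}(u)|\,|\mathcal{N}(i)|}}\,
    \mathbf{e}_i^{(l)},
  \qquad \alpha_{ui} = f\!\big(\mathbf{e}_u^{(l)}, \mathbf{e}_i^{(l)}\big).
\]
% Layer-wise amplification w_l in the final readout, increasing with depth:
\[
  \mathbf{e}_u \;=\; \sum_{l=0}^{L} w_l\, \mathbf{e}_u^{(l)},
  \qquad w_l \propto \gamma^{\,l},\ \gamma \ge 1 .
\]
```

Here f is the representation-aware popularity signal and γ controls how much higher-order neighborhoods are amplified; both symbols are placeholders for whatever the paper actually defines.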

pith-pipeline@v0.9.0 · 5532 in / 1538 out tokens · 74829 ms · 2026-05-13T02:29:26.516830+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

87 extracted references · 87 canonical work pages · 1 internal anchor
