pith. machine review for the scientific record.

arxiv: 2604.21750 · v1 · submitted 2026-04-23 · 💻 cs.IR

Recognition: unknown

Multistakeholder Impacts of Profile Portability in a Recommender Ecosystem

Anas Buhayh, Clement Canel, Elizabeth McKinnie, Robin Burke


Pith reviewed 2026-05-09 20:02 UTC · model grok-4.3

classification 💻 cs.IR
keywords recommender systems · data portability · algorithmic pluralism · multistakeholder evaluation · user utility · profile portability

The pith

Data portability when users switch between recommendation algorithms produces effects on user utility that vary across algorithms.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines structural changes to recommender systems through algorithmic pluralism, where algorithms operate separately from platforms and users can choose their preferred one. It focuses on data portability policies that let users move their profiles when switching, asking what this means for the accuracy of user models and resulting outcomes. Simulations of different portability scenarios show that these changes affect user utility in distinct ways depending on the underlying algorithm. The work emphasizes policy considerations for building recommendation ecosystems that remain equitable for users, providers, and platforms under emerging data ownership rules.

Core claim

Data portability scenarios in a decoupled recommender ecosystem produce varying effects on user utility across different recommendation algorithms, with direct consequences for multistakeholder outcomes.

What carries the argument

Simulations of profile portability between decoupled algorithms, tracking how transferred user data preserves or alters modeling fidelity and downstream utility for users and other stakeholders.
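The mechanism can be illustrated with a toy sketch (not the paper's actual simulator): two hypothetical algorithms whose utility depends differently on how much of a user's profile survives a portability scenario, so the same portability policy shifts their utilities by different amounts. The utility curves and scenario retention rates below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "algorithms": mean utility as a function of visible profile size n.
# A matrix-factorization-style model (assumed to warm up slowly with data)
# vs. a popularity model (assumed nearly insensitive to personal history).
def mf_utility(n):          # saturating warm-up curve (hypothetical)
    return 1.0 - np.exp(-n / 50.0)

def popularity_utility(n):  # mostly flat baseline (hypothetical)
    return 0.4 + 0.1 * (1.0 - np.exp(-n / 50.0))

# Portability scenarios as the fraction of profile data that transfers.
scenarios = {"full_port": 1.0, "partial_port": 0.5, "no_port": 0.0}
profile_sizes = rng.integers(10, 200, size=1000)  # interactions per user

for name, kept in scenarios.items():
    visible = profile_sizes * kept
    print(name,
          "MF:", round(float(mf_utility(visible).mean()), 3),
          "POP:", round(float(popularity_utility(visible).mean()), 3))
```

Under these assumptions, the history-hungry model loses far more utility under `no_port` than the popularity baseline does, which is the shape of the paper's claim: the same policy produces algorithm-specific utility shifts.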

If this is right

  • Some algorithms gain or lose relative appeal to users once profile portability is introduced.
  • Policy design for data portability must account for algorithm-specific utility shifts to avoid unintended inequities.
  • Multistakeholder balance in recommendation platforms depends on how data moves between independent algorithms.
  • Structural decoupling of algorithms from platforms interacts with portability rules in ways that affect niche consumers and providers.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Platforms may need standardized profile exchange formats to reduce the fidelity loss the simulations treat as variable.
  • The findings could be tested by applying the same portability scenarios to live user data from an existing platform.
  • Longer-term effects on provider diversity or user retention could be examined by extending the simulation horizon.

Load-bearing premise

User profiles can be ported between different recommendation algorithms while preserving enough modeling fidelity to support reliable utility calculations.

What would settle it

A real-world deployment in which users actually switch algorithms with ported profiles and measured changes in recommendation accuracy or satisfaction diverge from the simulated utility patterns.
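If such a deployment existed, the simulated-versus-measured comparison could be as simple as a per-algorithm divergence check; a minimal sketch, with all utility numbers hypothetical:

```python
import numpy as np

# Hypothetical per-algorithm mean utilities: simulated vs. observed
# in a (hypothetical) live deployment with ported profiles.
algorithms = ["item-knn", "mf", "popularity"]
simulated = np.array([0.58, 0.71, 0.44])
measured = np.array([0.55, 0.63, 0.46])

# Root-mean-square divergence, plus the algorithm that diverges most.
rmse = float(np.sqrt(np.mean((simulated - measured) ** 2)))
worst = algorithms[int(np.argmax(np.abs(simulated - measured)))]
print(f"RMSE={rmse:.3f}, largest divergence: {worst}")
```

A large divergence concentrated on one algorithm would indicate the simulation misses an algorithm-specific effect rather than being uniformly miscalibrated.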

Figures

Figures reproduced from arXiv: 2604.21750 by Anas Buhayh, Clement Canel, Elizabeth McKinnie, Robin Burke.

Figure 1
Figure 1. Profile portability options. We explore two dimensions of portability. The first is exclusivity, the extent to which a user's profile data remains tied to a given algorithm and does not follow the user when switching to a new recommender. Since algorithm operators compete, they may be unwilling to share user data with rivals, creating significant 'lock-in' as users may hesitate to switch if their data cann…
Figure 2
Figure 2. Platform portability impacts on mean consumer …
Figure 3
Figure 3. Algorithm sensitivity of mean provider utility for …
read the original abstract

Optimizing outcomes for multiple stakeholders in recommender systems has historically focused on algorithmic interventions, such as developing multi-objective models or re-ranking results from existing algorithms. However, structural changes to the recommendation ecosystem itself remain understudied. This paper explores the implications of algorithmic pluralism (also known as "middleware" in the governance literature), in which recommendation algorithms are decoupled from platforms, enabling users to select their preferred algorithm. Prior simulation work demonstrates that algorithmic choice benefits niche consumers and providers. Yet this approach raises critical questions about user modeling in the context of data portability: when users switch algorithms, what happens to their data? Noting that multiple data portability regulations have emerged to strengthen user data ownership and control, we examine how such policies affect user models and stakeholders' outcomes in a recommendation setting. Our findings reveal that data portability scenarios produce varying effects on user utility across different recommendation algorithms. We highlight key policy considerations and implications for designing equitable recommendation ecosystems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper examines structural changes in recommender systems via algorithmic pluralism (decoupled algorithms allowing user choice) and the role of data portability policies. It uses simulations to show that different portability scenarios produce varying effects on user utility across recommendation algorithms, while discussing multistakeholder implications and policy considerations.

Significance. If the simulation results hold under scrutiny, the work fills a gap in multistakeholder recommender research by shifting focus from algorithmic interventions to ecosystem-level changes like data portability. It builds on prior simulation studies of algorithmic choice by incorporating regulatory aspects, potentially informing equitable design of recommendation platforms.

major comments (2)
  1. [Methods / Experimental Setup] The simulation methodology (described in the methods and experimental setup sections) models profile portability by transferring user profiles between decoupled algorithms but provides no details on feature alignment, representation conversion, handling of algorithm-specific side information, or attrition/loss of interaction history. This assumption is load-bearing for the central claim that observed utility differences arise from portability policies rather than from unmodeled incompatibilities.
  2. [Results / Simulation Description] No information is given on simulation parameters (e.g., number of users/items, algorithm hyperparameters, number of runs), validation against real-world data, error bars, confidence intervals, or sensitivity analysis. Without these, the reported varying effects on user utility across algorithms cannot be assessed for robustness or generalizability.
minor comments (2)
  1. [Abstract / Introduction] The abstract and introduction use 'algorithmic pluralism' and 'middleware' interchangeably without a clear definition or reference to the governance literature on the first use.
  2. [Figures / Tables] Figure captions and table descriptions could be expanded to explicitly state what is being measured (e.g., which utility metric and stakeholder perspective) to improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback. We address each major comment below with clarifications on our simulation approach and commit to specific revisions that strengthen the manuscript without altering its core claims.

read point-by-point responses
  1. Referee: [Methods / Experimental Setup] The simulation methodology (described in the methods and experimental setup sections) models profile portability by transferring user profiles between decoupled algorithms but provides no details on feature alignment, representation conversion, handling of algorithm-specific side information, or attrition/loss of interaction history. This assumption is load-bearing for the central claim that observed utility differences arise from portability policies rather than from unmodeled incompatibilities.

    Authors: We acknowledge that the current description of profile portability in the Methods section is high-level and does not explicitly address feature alignment, representation conversion, side-information handling, or potential loss of interaction history. In the revised manuscript we will expand this section to specify our modeling choices: a shared user-item interaction matrix serves as the common representation for transfer; algorithm-specific embeddings are derived from this matrix where needed; side information is limited to the core profile data available under the portability policy; and interaction history is retained in full for the transferred profile (with a note on the simplifying assumption that no attrition occurs). These additions will make the scope of the simulation transparent and support the claim that utility differences stem from the portability scenarios themselves. revision: yes
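The representation transfer the rebuttal describes — a shared interaction matrix as the common format, with algorithm-specific embeddings derived on arrival — could be sketched via a standard least-squares fold-in. This is an illustrative reading, not the paper's confirmed implementation; the dimensions and the pre-trained item factors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, k, lam = 100, 8, 0.1

# Ported profile: one row of the shared user-item interaction matrix
# (the common representation assumed in the rebuttal).
r = (rng.random(n_items) < 0.1).astype(float)

# Target algorithm's item factors (hypothetical, trained elsewhere).
V = rng.normal(size=(n_items, k))

# Least-squares fold-in: derive the target algorithm's user embedding
# from ported interactions, u = argmin_u ||r - V u||^2 + lam ||u||^2.
u = np.linalg.solve(V.T @ V + lam * np.eye(k), V.T @ r)

scores = V @ u                   # target algorithm's scores for this user
top5 = np.argsort(-scores)[:5]   # recommendations available after porting
print(top5)
```

Any fidelity loss in this step (e.g., side information the target model cannot ingest) would show up directly in the utility comparisons, which is why the referee flags it as load-bearing.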

  2. Referee: [Results / Simulation Description] No information is given on simulation parameters (e.g., number of users/items, algorithm hyperparameters, number of runs), validation against real-world data, error bars, confidence intervals, or sensitivity analysis. Without these, the reported varying effects on user utility across algorithms cannot be assessed for robustness or generalizability.

    Authors: We agree that the Experimental Setup and Results sections currently omit key reproducibility details. In the revision we will add a dedicated paragraph listing the simulation scale (number of users and items), algorithm hyperparameters, number of independent runs, and statistical reporting (error bars and confidence intervals on utility metrics). We will also include a sensitivity analysis varying core parameters to demonstrate stability of the observed cross-algorithm differences. Our study is intentionally synthetic to isolate policy effects; we will explicitly note the absence of direct real-world validation as a limitation and outline how future empirical work could address it. revision: yes
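The statistical reporting the authors commit to — utility means with confidence intervals over independent runs — amounts to something like the following sketch (run count, utility scale, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs = 30

# Hypothetical per-run mean user utility from independent simulation runs.
run_means = 0.62 + 0.03 * rng.standard_normal(n_runs)

mean = float(run_means.mean())
# Normal-approximation 95% confidence half-width over runs.
half = float(1.96 * run_means.std(ddof=1) / np.sqrt(n_runs))
print(f"utility = {mean:.3f} ± {half:.3f}")
```

Cross-algorithm differences would then count as robust only when the reported intervals for different algorithms do not overlap under the same portability scenario.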

Circularity Check

0 steps flagged

No circularity in simulation-based policy analysis

full rationale

The paper conducts a simulation study of data portability effects on multistakeholder outcomes in recommender systems. No mathematical derivation chain, fitted parameters renamed as predictions, or self-citation load-bearing premises are present in the provided text. The central findings on varying user utility across algorithms arise from explicit scenario modeling rather than any reduction to input assumptions by construction. The work is self-contained against external benchmarks of simulation validity.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on unstated simulation assumptions about profile transfer fidelity and stakeholder utility functions; no free parameters, axioms, or invented entities are explicitly listed in the abstract.

axioms (1)
  • domain assumption User profiles remain sufficiently informative after portability for algorithm-specific utility calculations
    Implicit in the claim that portability scenarios affect user utility differently across algorithms

pith-pipeline@v0.9.0 · 5465 in / 1067 out tokens · 30495 ms · 2026-05-09T20:02:25.376661+00:00 · methodology

