pith. machine review for the scientific record.

arxiv: 2605.01503 · v1 · submitted 2026-05-02 · 📡 eess.SY · cs.SY


Recommender Systems as Control Systems


Pith reviewed 2026-05-09 18:10 UTC · model grok-4.3

classification 📡 eess.SY · cs.SY
keywords recommender systems · control theory · fairness · dynamical systems · long-term dynamics · bias · performance optimization

The pith

Recommender systems modeled as control systems show that fairness can improve long-term performance instead of trading off against utility.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes viewing recommender systems through control theory to examine the long-term effects of fairness interventions on users and creators. It challenges the idea that fairness is always a trade-off with utility by demonstrating that optimizing for fairness over time can enhance overall system performance. This insight matters because recommender systems shape information access and content creation, and understanding their dynamics can lead to better designs that avoid biases like polarization and popularity skew. The analysis requires grasping the feedback loops between recommendations and user behavior.

Core claim

By interpreting recommender systems as control systems, the authors analyze how fairness interventions shape long-term system behavior. Fairness concerns for users include opinion polarization and representation bias, while for creators it involves popularity bias. The central claim is that fairness should not be viewed as a simple trade-off against utility; when optimized over time, it can benefit overall system performance, provided the underlying dynamics are understood.

What carries the argument

A control-theoretic model of recommender systems, where users and creators are treated as a dynamical system with state transitions that can be influenced by fairness interventions.
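The abstraction can be sketched as a discrete-time closed loop. The dynamics, gains, and state variables below are invented for illustration — the paper states no explicit model — but they show the shape of the argument: a policy allocates exposure, and the user/creator state transitions in response.

```python
import numpy as np

# Minimal sketch of the control-theoretic abstraction (all dynamics are
# hypothetical). State: user opinions x and creator popularity p.
# Input: an exposure policy with a fairness knob.

rng = np.random.default_rng(0)
n_users, n_creators = 50, 10

x = rng.normal(0.0, 1.0, n_users)               # user opinion state
p = rng.dirichlet(np.ones(n_creators))          # creator popularity (sums to 1)
positions = np.linspace(-1.0, 1.0, n_creators)  # where each creator "sits"

def step(x, p, fairness_weight=0.3, alpha=0.1):
    """One closed-loop update: allocate exposure, then transition the state."""
    # Popularity-driven exposure (popularity bias) blended with a uniform
    # fairness term that re-surfaces under-exposed creators.
    exposure = (1 - fairness_weight) * p + fairness_weight / len(p)
    exposure /= exposure.sum()
    # Toy opinion dynamics: users drift toward the exposure-weighted content.
    x_next = (1 - alpha) * x + alpha * (exposure @ positions)
    # Popularity feeds back on exposure, closing the loop.
    p_next = 0.9 * p + 0.1 * exposure
    return x_next, p_next

for _ in range(100):
    x, p = step(x, p)

print(round(float(x.std()), 4))            # opinion spread after 100 steps
print(round(float(p.max() - p.min()), 4))  # residual popularity skew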

If this is right

  • Fairness interventions, when optimized over time, can lead to better overall system performance.
  • Understanding the dynamical interactions is necessary to achieve these performance gains from fairness.
  • Addressing user-side issues like opinion polarization and creator-side popularity bias can have positive long-term effects.
  • Recommender system design should incorporate control-theoretic analysis for fairness policies.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This framework could be applied to design new algorithms that explicitly optimize fairness for sustained performance improvements.
  • Real-world platforms might test this by comparing fairness-aware policies against utility-only ones in A/B tests over extended periods.
  • Similar control interpretations could help analyze fairness in other algorithmic systems like search engines or social feeds.

Load-bearing premise

That modeling users and creators as a controllable dynamical system with well-defined state transitions adequately represents the real feedback loops and incentive structures in actual recommender systems.

What would settle it

A controlled experiment or long-horizon simulation of a recommender system in which time-optimized fairness interventions produce lower long-term utility or engagement than non-fair baselines — an outcome that would falsify the central claim.
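Such a settling experiment could be instrumented as a simple comparison harness. Everything here — the engagement proxy, the dynamics, the parameters — is invented for illustration; the sketch shows the shape of the comparison, not its outcome, which only a real long-horizon A/B test could settle.

```python
import numpy as np

def run_policy(fairness_weight, horizon=200, n_creators=10, seed=1):
    """Cumulative engagement under one exposure policy (toy model).

    fairness_weight = 0 is the utility-only baseline; > 0 blends in a
    uniform fairness term. "Engagement" is a stand-in metric, not the
    paper's; a real test would measure platform engagement at scale.
    """
    rng = np.random.default_rng(seed)
    quality = rng.uniform(0.2, 1.0, n_creators)  # latent content quality
    p = rng.dirichlet(np.ones(n_creators))       # initial popularity
    total = 0.0
    for _ in range(horizon):
        exposure = (1 - fairness_weight) * p + fairness_weight / n_creators
        exposure /= exposure.sum()
        total += float(exposure @ quality)       # per-step engagement proxy
        # Popularity reinforces exposure, weighted by how well content lands.
        gain = exposure * quality
        p = 0.9 * p + 0.1 * gain / gain.sum()
    return total

utility_only = run_policy(fairness_weight=0.0)
fairness_aware = run_policy(fairness_weight=0.3)
print(utility_only, fairness_aware)  # settled by the sign of this gap at scale
```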

Figures

Figures reproduced from arXiv: 2605.01503 by Giulia De Pasquale, Paolo Frasca, Sarah Dean.

Figure 1: Formal interaction model for a content recom…
Figure 2: In the user-oriented neighborhood method, Peter…
Figure 3: A simplified illustration of the latent factor approach…
Figure 5: Representation bias propagated via the sampling…
Figure 7: Trade-off between engagement and polarization as…
Figure 6: Comparison of individual feedback-loop mecha…
Figure 9: Outcomes for users and creators under recommen…
original abstract

We propose a control-theoretic interpretation of recommender systems and use this perspective to analyze how fairness interventions shape long-term system behavior. Fairness concerns arise for both users and creators, ranging from opinion polarization and representation bias on the user side to popularity bias on the creator side. A central insight of our analysis is that fairness should not be viewed as a simple trade-off against utility. When optimized over time, it can in fact be beneficial for overall system performance. Realizing these gains, however, requires a clear understanding of the underlying dynamics.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes a control-theoretic interpretation of recommender systems, modeling users and creators as components of a dynamical system subject to feedback loops. It examines fairness concerns including opinion polarization and representation bias for users and popularity bias for creators, and advances the qualitative insight that fairness interventions, when optimized over time, can improve rather than merely trade off against long-term system performance. The work concludes by stressing the importance of understanding these underlying dynamics to realize potential gains.

Significance. If the control-theoretic abstraction can be formalized and validated, the perspective would usefully reframe fairness in recommender systems as a dynamic optimization problem rather than a static trade-off, potentially informing long-horizon algorithm design. The interdisciplinary bridge between control theory and recommendation research is a positive contribution, but the absence of explicit models, derivations, or empirical tests limits the result to an interpretive lens whose practical significance remains to be demonstrated.

major comments (2)
  1. [Abstract] Abstract and opening sections: the central claim that fairness 'can in fact be beneficial for overall system performance' when optimized over time is presented as emerging from the control-theoretic analysis, yet no state-space representation, difference equations, or stability/optimality conditions are supplied to show how fairness interventions alter the closed-loop dynamics or yield net performance gains.
  2. [Introduction] The modeling assumption that users and creators form a controllable dynamical system with well-defined state transitions is invoked to support the long-term insight, but no concrete state vector, input/output mapping, or disturbance model is given, leaving the abstraction untestable against real recommender feedback loops.
minor comments (2)
  1. Notation for control-theoretic concepts (e.g., state, input, output) should be introduced explicitly with reference to standard recommender variables such as user-item matrices or engagement signals.
  2. The paper would benefit from a short related-work subsection contrasting the proposed view with existing dynamic models of recommendation (e.g., multi-armed bandits or reinforcement-learning formulations).

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We value the recognition of the control-theoretic perspective as a potential bridge between fields and the reframing of fairness as dynamic optimization. We address each major comment below, clarifying the conceptual scope of the work while committing to revisions that improve clarity without altering the manuscript's interpretive focus.

point-by-point responses
  1. Referee: [Abstract] Abstract and opening sections: the central claim that fairness 'can in fact be beneficial for overall system performance' when optimized over time is presented as emerging from the control-theoretic analysis, yet no state-space representation, difference equations, or stability/optimality conditions are supplied to show how fairness interventions alter the closed-loop dynamics or yield net performance gains.

    Authors: We acknowledge that the manuscript advances the claim through qualitative application of control-theoretic principles (e.g., feedback stabilization and long-horizon optimization) rather than through explicit derivations. The analysis draws on general properties of dynamical systems to argue that time-optimized interventions need not trade off against performance. We agree this could be better anchored. In revision we will add a concise conceptual state-space sketch in the abstract and introduction, defining example states (user opinion vectors, creator popularity) and inputs (recommendation policies) to illustrate how fairness adjustments can influence closed-loop trajectories, while preserving the paper's non-formal character. revision: yes

  2. Referee: [Introduction] The modeling assumption that users and creators form a controllable dynamical system with well-defined state transitions is invoked to support the long-term insight, but no concrete state vector, input/output mapping, or disturbance model is given, leaving the abstraction untestable against real recommender feedback loops.

    Authors: The contribution is framed as an interpretive abstraction to highlight cross-disciplinary implications, not as a platform-specific testable model. A single concrete state vector would reduce generality across diverse recommender systems. We will revise the introduction to state the abstraction level explicitly, provide illustrative mappings (e.g., states as distributions over user preferences and creator visibility, disturbances as exogenous content shifts), and note that full instantiation and empirical validation remain open directions for subsequent research. revision: partial
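The conceptual state-space sketch the rebuttal commits to might read, in standard control notation (symbols illustrative, not taken from the paper):

```latex
x_{t+1} = f(x_t, u_t, w_t), \qquad
x_t = \begin{pmatrix} o_t \\ p_t \end{pmatrix}, \qquad
J = \sum_{t=0}^{T} \big[\, r(x_t, u_t) + \lambda\, \phi(x_t) \,\big],
```

where \(o_t\) stacks user opinion vectors, \(p_t\) collects creator popularity, the input \(u_t\) is the recommendation policy, \(w_t\) models exogenous content shifts (the disturbance), \(r\) is per-step utility, \(\phi\) a fairness measure, and \(\lambda \ge 0\) their coupling. The paper's qualitative claim then amounts to: for suitable dynamics \(f\) and long horizons \(T\), maximizing \(J\) with \(\lambda > 0\) can exceed the \(\lambda = 0\) baseline in utility terms alone.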

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper advances a control-theoretic modeling lens for recommender systems and derives qualitative insights about long-term fairness effects from that abstraction. No equations, parameter-fitting procedures, or self-citation chains are present that would reduce the central claims to their own inputs by construction. The modeling assumptions are stated as an interpretive framework rather than as fitted predictions or uniqueness theorems imported from prior author work, rendering the analysis self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The abstract invokes a control-systems abstraction without specifying the state-space model, transition dynamics, or cost functions; these constitute domain assumptions whose validity is not demonstrated.

axioms (1)
  • domain assumption Recommender systems can be faithfully represented as a controllable dynamical system whose state includes user opinions and creator popularity.
    Stated in the proposal of the control-theoretic interpretation.

pith-pipeline@v0.9.0 · 5375 in / 1140 out tokens · 40155 ms · 2026-05-09T18:10:41.849831+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

82 extracted references · 39 canonical work pages · 1 internal anchor

  1. [1]

    Matrix factorization techniques for recommender systems,

    Y . Koren, R. Bell, and C. V olinsky, “Matrix factorization techniques for recommender systems,” Computer, vol. 42, no. 8, pp. 30–37, 2009

  2. [2]

    In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization

    N. Pagan, J. Baumann, E. Elokda, G. De Pasquale, S. Bolognani, and A. Hannák, “A classification of feedback loops and their relation to biases in automated decision-making systems,” in Proceedings of the 3rd 202X « IEEE CONTROL SYSTEMS 15 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23, New York, NY , USA, 2...

  3. [3]

    Feedback loop and bias amplification in recommender systems,

    M. Mansoury, H. Abdollahpouri, M. Pechenizkiy, B. Mobasher, and R. Burke, “Feedback loop and bias amplification in recommender systems,” inProceedings of the 29th ACM International Conference on Information & Knowledge Management, ser. CIKM ’20. New York, NY , USA: Association for Computing Machinery, 2020, p. 2145–2148. [Online]. Available: https://doi.o...

  4. [4]

    ACM Trans

    J. Chen, H. Dong, X. Wang, F. Feng, M. Wang, and X. He, “Bias and debias in recommender system: A survey and future directions,” ACM Trans. Inf. Syst. , vol. 41, no. 3, Feb. 2023. [Online]. Available: https://doi.org/10.1145/3564284

  5. [5]

    Fairness in social influence maximization via optimal transport,

    S. Chowdhary, G. De Pasquale, N. Lanzetti, A.-A. Stoica, and F. Dörfler, “Fairness in social influence maximization via optimal transport,” in Advances in Neural Information Processing Systems , A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, Eds., vol. 37. Curran Associates, Inc., 2024, pp. 10 380– 10 413. [Online]. Ava...

  6. [6]

    Algorithmic glass ceiling in social networks: The effects of social recommendations on network diversity,

    A.-A. Stoica, C. Riederer, and A. Chaintreau, “Algorithmic glass ceiling in social networks: The effects of social recommendations on network diversity,” ser. WWW ’18. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee, 2018, p. 923–932. [Online]. Available: https://doi.org/10.1145/3178876.3186140

  7. [7]

    Diversified social influ- ence maximization,

    T. Fangshuang, Q. Liu, H. Zhu, E. Chen, and F. Zhu, “Diversified social influ- ence maximization,” in 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) , 2014, pp. 455–459

  8. [8]

    Ranking with long-term constraints,

    K. Brantley, Z. Fang, S. Dean, and T. Joachims, “Ranking with long-term constraints,” 2024. [Online]. Available: https://arxiv.org/abs/2307.04923

  9. [9]

    Group fairness for content creators: the role of human and algorithmic biases under popularity-based recommendations,

    S. Ionescu, A. Hannak, and N. Pagan, “Group fairness for content creators: the role of human and algorithmic biases under popularity-based recommendations,” in Proceedings of the 17th ACM Conference on Recommender Systems , ser. RecSys ’23. New York, NY , USA: Association for Computing Machinery, 2023, p. 863–870. [Online]. Available: https://doi.org/10.1...

  10. [10]

    The order of recommendation matters: Structured exploration for improving the fairness of content creators,

    S. Jaoua, N. Pagan, A. Hannák, and S. Ionescu, “The order of recommendation matters: Structured exploration for improving the fairness of content creators,”

  11. [11]

    Available: https://arxiv.org/abs/2510.20698

    [Online]. Available: https://arxiv.org/abs/2510.20698

  12. [12]

    The role of luck in the success of social media influencers,

    S. Ionescu, A. Hannák, and N. Pagan, “The role of luck in the success of social media influencers,” Applied Network Science , vol. 8, no. 1, p. 46, 2023

  13. [13]

    Content creation within the algorithmic environment: A systematic review,

    Y . Liang, J. Li, J. Aroles, and E. Granter, “Content creation within the algorithmic environment: A systematic review,” Work, Employment and Society , vol. 39, no. 4, pp. 787–813, 2025

  14. [14]

    Accounting for AI and users shaping one another: The role of mathematical models,

    S. Dean, E. Dong, M. Jagadeesan, and L. Leqi, “Accounting for AI and users shaping one another: The role of mathematical models,” arXiv preprint arXiv:2404.12366, 2024

  15. [15]

    In: Proceedings of the 10th International Conference on World Wide Web

    B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, “Item-based collaborative filtering recommendation algorithms,” in Proceedings of the 10th International Conference on World Wide Web , ser. WWW ’01. New York, NY , USA: Association for Computing Machinery, 2001, p. 285–295. [Online]. Available: https://doi.org/10.1145/371920.372071

  16. [16]

    A unifying framework for fairness- aware influence maximization,

    G. Farnadi, B. Babaki, and M. Gendreau, “A unifying framework for fairness- aware influence maximization,” in Proceedings of the International World Wide Web Conference (WWW 2020) , 2020, pp. 714–722

  17. [17]

    Seeding network influence in biased networks and the benefits of diversity,

    A.-A. Stoica, J. X. Han, and A. Chaintreau, “Seeding network influence in biased networks and the benefits of diversity,” in Proceedings of The Web Conference (WWW 2020) , 2020, pp. 2089–2098

  18. [18]

    Homophily influences ranking of minorities in social networks,

    F. Karimi, M. Génois, C. Wagner, P. Singer, and M. Strohmaier, “Homophily influences ranking of minorities in social networks,”Scientific Reports, vol. 8, no. 1, p. 11077, 2018. [Online]. Available: https://doi.org/10.1038/s41598-018-29405-7

  19. [19]

    Gaps in information access in social networks,

    B. Fish, A. Bashardoust, D. Boyd, S. Friedler, C. Scheidegger, and S. Venkata- subramanian, “Gaps in information access in social networks,” in Proceedings of the International World Wide Web Conference (WWW), San Francisco, USA, 2019, pp. 480–490

  20. [20]

    Group influence maximization problem in social networks,

    J. Zhu, S. Ghosh, and W. Wu, “Group influence maximization problem in social networks,” IEEE Transactions on Computational Social Systems , vol. 6, no. 6, pp. 1156–1164, 2019

  21. [21]

    Group-fairness in influence maximization,

    A. Tsang, B. Wilder, E. Rice, M. Tambe, and Y . Zick, “Group-fairness in influence maximization,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) , 2019, pp. 5997–6005

  22. [22]

    ISBN 9781450340359

    P. Covington, J. Adams, and E. Sargin, “Deep neural networks for YouTube recommendations,” in Proceedings of the 10th ACM Conference on Recommender Systems , ser. RecSys ’16. New York, NY , USA: Association for Computing Machinery, 2016, p. 191–198. [Online]. Available: https: //doi.org/10.1145/2959100.2959190

  23. [23]

    Sampling-bias-corrected neural modeling for large corpus item recommendations,

    X. Yi, J. Yang, L. Hong, D. Z. Cheng, L. Heldt, A. Kumthekar, Z. Zhao, L. Wei, and E. Chi, “Sampling-bias-corrected neural modeling for large corpus item recommendations,” in Proceedings of the 13th ACM Conference on Recommender Systems , ser. RecSys ’19. New York, NY , USA: Association for Computing Machinery, 2019, p. 269–277. [Online]. Available: https...

  24. [25]

    BPR: Bayesian Personalized Ranking from Implicit Feedback

    [Online]. Available: http://arxiv.org/abs/1205.2618

  25. [26]

    Deep Learning Recommendation Model for Personalization and Recommendation Systems

    M. Naumov, D. Mudigere, H.-J. M. Shi, J. Huang, N. Sundaraman, J. Park, X. Wang, U. Gupta, C.-J. Wu, A. G. Azzolini, D. Dzhulgakov, A. Mallevich, I. Cherniavskii, Y . Lu, R. Krishnamoorthi, A. Yu, V . Kondratenko, S. Pereira, X. Chen, W. Chen, V . Rao, B. Jia, L. Xiong, and M. Smelyanskiy, “Deep learning recommendation model for personalization and recomm...

  26. [27]

    Top-k off-policy correction for a reinforce recommender system,

    M. Chen, A. Beutel, P. Covington, S. Jain, F. Belletti, and E. Chi, “Top-k off-policy correction for a reinforce recommender system,” 01 2019, pp. 456–464

  27. [28]

    ISBN 9798400701924

    D. Liu, V . Do, N. Usunier, and M. Nickel, “Group fairness without demographics using social networks,” in 2023 ACM Conference on Fairness, Accountability, and Transparency , ser. FAccT ’23. ACM, 2023, p. 1432–1449. [Online]. Available: http://dx.doi.org/10.1145/3593013.3594091

  28. [29]

    When collaborative filtering is not collaborative: Unfairness of pca for recommendations,

    D. Liu, J. Baek, and T. Eliassi-Rad, “When collaborative filtering is not collaborative: Unfairness of pca for recommendations,” 2025. [Online]. Available: https://arxiv.org/abs/2310.09687

  29. [30]

    Identifying and upweighting power-niche users to mitigate popularity bias in recommendations,

    D. Liu, E. Weis, M. Laber, T. Eliassi-Rad, and B. Klein, “Identifying and upweighting power-niche users to mitigate popularity bias in recommendations,” arXiv preprint arXiv:2509.17265 , 2025

  30. [31]

    A survey on the fairness of recommender systems,

    Y . Wang, W. Ma, M. Zhang, Y . Liu, and S. Ma, “A survey on the fairness of recommender systems,” vol. 41, no. 3, Feb. 2023. [Online]. Available: https://doi.org/10.1145/3547333

  31. [32]

    The impact of recommendation systems on opinion dynamics: Microscopic versus macroscopic effects,

    N. Lanzetti, F. Dörfler, and N. Pagan, “The impact of recommendation systems on opinion dynamics: Microscopic versus macroscopic effects,” in 2023 62nd IEEE Conference on Decision and Control (CDC) , 2023, pp. 4824–4829

  32. [33]

    Learning to control misinforma- tion: a closed-loop approach for misinformation mitigation over social networks,

    N. Pagan, A. Philippou, and G. De Pasquale, “Learning to control misinforma- tion: a closed-loop approach for misinformation mitigation over social networks,” in 8th Annual Learning for Dynamics & Control Conference (L4DC) , 2026

  33. [34]

    The closed loop between opinion formation and personalized recommendations,

    W. S. Rossi, J. W. Polderman, and P. Frasca, “The closed loop between opinion formation and personalized recommendations,” IEEE Transactions on Control of Network Systems, vol. 9, no. 3, pp. 1092–1103, 2021

  34. [35]

    Policy design for two-sided platforms with participation dynamics,

    H. Kiyohara, F. Yao, and S. Dean, “Policy design for two-sided platforms with participation dynamics,” in Forty-second International Conference on Machine Learning, 2025. [Online]. Available: https://openreview.net/forum?id=qr4a4uS82y

  35. [36]

    Degenerate feedback loops in recommender systems,

    R. Jiang, S. Chiappa, T. Lattimore, A. György, and P. Kohli, “Degenerate feedback loops in recommender systems,” in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society , 2019, pp. 383–390

  36. [37]

    Modelling opinion dynamics in the age of algorithmic personalisation,

    N. Perra and L. E. C. Rocha, “Modelling opinion dynamics in the age of algorithmic personalisation,” Scientific Reports , vol. 9, no. 1, p. 7261, 2019. [Online]. Available: https://doi.org/10.1038/s41598-019-43830-2

  37. [38]

    Network- aware recommender system via online feedback optimization,

    S. Chandrasekaran, G. De Pasquale, G. Belgioioso, and F. Dörfler, “Network- aware recommender system via online feedback optimization,” IEEE Transactions on Automatic Control , pp. 1–16, 2025

  38. [39]

    Control strategies for recommendation systems in social networks,

    B. Sprenger, G. De Pasquale, R. Soloperto, J. Lygeros, and F. Dörfler, “Control strategies for recommendation systems in social networks,” IEEE Control Systems Letters, vol. 8, pp. 634–639, 2024

  39. [40]

    Socially-aware recommender systems mitigate opinion clusterization,

    L. Schüepp, C. Amo Alonso, F. Dörfler, and G. De Pasquale, “Socially-aware recommender systems mitigate opinion clusterization,” 2026. [Online]. Available: https://arxiv.org/abs/2601.02412

  40. [42]

    On the long-term impact of algorithmic decision policies: Effort unfairness and feature segregation through social learning,

    H. Heidari, V . Nanda, and K. P. Gummadi, “On the long-term impact of algorithmic decision policies: Effort unfairness and feature segregation through social learning,” in Proceedings of the 36th International Conference on Machine 16 IEEE CONTROL SYSTEMS » 202X Learning (ICML), 2019, pp. 4787–4796

  41. [43]

    How do classifiers induce agents to invest effort strategically?

    J. Kleinberg and M. Raghavan, “How do classifiers induce agents to invest effort strategically?” ACM Transactions on Economics and Computation , vol. 8, no. 4, 2020. [Online]. Available: https://doi.org/10.1145/3417742

  42. [45]

    How do fair decisions fare in long-term qualification?

    X. Zhang, R. Tu, Y . Liu, M. Liu, H. Kjellström, K. Zhang, and C. Zhang, “How do fair decisions fare in long-term qualification?” in Advances in Neural Information Processing Systems (NeurIPS) , 2020, pp. 1–13

  43. [46]

    The Ethics of Emotion in Artificial Intelligence Systems,

    C. Hertweck, C. Heitz, and M. Loi, “On the moral justification of statistical parity,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency , ser. FAccT ’21. New York, NY , USA: Association for Computing Machinery, 2021, p. 747–757. [Online]. Available: https://doi.org/10.1145/3442188.3445936

  44. [47]

    Fairness without demographics in repeated loss minimization,

    T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang, “Fairness without demographics in repeated loss minimization,” in Proceedings of the 35th International Conference on Machine Learning (ICML) , ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 1929–1938. [Online]. Available: https://proceedings.mlr.press/v80/hashimoto18a.html

  45. [48]

    Long-term impacts of fair machine learning,

    X. Zhang, M. M. Khalili, and M. Liu, “Long-term impacts of fair machine learning,” Ergonomics in Design , vol. 28, no. 3, pp. 7–11, 2020. [Online]. Available: https://doi.org/10.1177/1064804619884160

  46. [49]

    Group retention when using machine learning in sequential decision making: The interplay between user dynamics and fairness,

    X. Zhang, M. M. Khalili, C. Tekin, and M. Liu, “Group retention when using machine learning in sequential decision making: The interplay between user dynamics and fairness,” in Advances in Neural Information Processing Systems (NeurIPS 2019), 2019

  47. [50]

    A framework for understanding sources of harm throughout the machine learning life cycle,

    H. Suresh and J. Guttag, “A framework for understanding sources of harm throughout the machine learning life cycle,” in Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization , ser. EAAMO ’21. New York, NY , USA: Association for Computing Machinery,

  48. [51]

    Available: https://doi.org/10.1145/3465416.3483305

    [Online]. Available: https://doi.org/10.1145/3465416.3483305

  49. [52]

    Fake views removal and popularity on YouTube,

    M. Castaldo, P. Frasca, T. Venturini, and F. Gargiulo, “Fake views removal and popularity on YouTube,” Scientific Reports, vol. 14, no. 1, p. 15443, 2024

  50. [53]

    White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes

    A. D’Amour, H. Srinivasan, J. Atwood, P. Baljekar, D. Sculley, and Y . Halpern, “Fairness is not static: deeper understanding of long term fairness via simulation studies,” ser. FAT* ’20. New York, NY , USA: Association for Computing Machinery, 2020, p. 525–534. [Online]. Available: https://doi.org/10.1145/3351095.3372878

  51. [54]

    Delayed impact of fair machine learning,

    L. T. Liu, S. Dean, E. Rolf, M. Simchowitz, and M. Hardt, “Delayed impact of fair machine learning,” in Proceedings of the 35th International Conference on Machine Learning , ser. Proceedings of Machine Learning Research, J. Dy and A. Krause, Eds., vol. 80. PMLR, 10–15 Jul 2018, pp. 3150–3158. [Online]. Available: https://proceedings.mlr.press/v80/liu18c.html

  52. [55]

    In: Proceedings of the 2018 World Wide Web Conference, pp

    L. Hu and Y . Chen, “A short-term intervention for long-term fairness in the labor market,” in Proceedings of the 2018 World Wide Web Conference (WWW), 2018, pp. 1389–1398. [Online]. Available: https://doi.org/10.1145/ 3178876.3186044

  53. [56]

    Fairness-aware fake news mitigation using counter information propagation,

    A. Saxena, C. G. Bierbooms, and M. Pechenizkiy, “Fairness-aware fake news mitigation using counter information propagation,” vol. 53, pp. 27 483–27 504, Feb. 2023

  54. [57]

    Investigating potential factors associated with gender discrimination in collaborative recommender systems

    M. Mansoury, H. Abdollahpouri, J. Smith, A. Dehpanah, M. Pechenizkiy, and B. Mobasher, “Investigating potential factors associated with gender discrimination in collaborative recommender systems.” 2020

  55. [58]

    Modelling the closed loop dynamics between a social media recommender system and users’ opinions,

    E. C. Davidson and M. Ye, “Modelling the closed loop dynamics between a social media recommender system and users’ opinions,” 2025. [Online]. Available: https://arxiv.org/abs/2507.19792

  56. [59]

    Optimal control synthesis of closed-loop recom- mendation systems over social networks,

    S. Mariano and P. Frasca, “Optimal control synthesis of closed-loop recom- mendation systems over social networks,” arXiv preprint arXiv:2603.10275, 2026

  57. [60]

    Emotion shapes the diffusion of moralized content in social networks,

    W. J. Brady, J. A. Wills, J. T. Jost, J. A. Tucker, J. J. Van Bavel, and S. T. Fiske, “Emotion shapes the diffusion of moralized content in social networks,” Proceedings of the National Academy of Sciences , vol. 114, no. 28, pp. 7313– 7318, 2017

  58. [61]

    Out-group animosity drives engagement on social media,

    S. Rathje, J. J. Van Bavel, and S. van der Linden, “Out-group animosity drives engagement on social media,” Proceedings of the National Academy of Sciences , vol. 118, no. 26, p. e2024292118, 2021

  59. [62]

    Optimizing social network interventions via hypergradient-based recommender system design,

    M. Kühne, P. D. Grontas, G. De Pasquale, G. Belgioioso, F. Dorfler, and J. Lygeros, “Optimizing social network interventions via hypergradient-based recommender system design,” in Proceedings of the 42nd International Conference on Machine Learning , ser. Proceedings of Machine Learning Research, A. Singh, M. Fazel, D. Hsu, S. Lacoste-Julien, F. Berkenkam...

  60. [63]

    Minimizing polarization and disagreement in social networks,

    C. Musco, C. Musco, and C. E. Tsourakakis, “Minimizing polarization and disagreement in social networks,” in Proceedings of the 2018 World Wide Web Conference, ser. WWW ’18. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee, 2018, p. 369–378. [Online]. Available: https://doi.org/10.1145/3178876.3186103

  61. [64]

    Analyzing the

    U. Chitra and C. Musco, “Analyzing the impact of filter bubbles on social network polarization,” in Proceedings of the 13th International Conference on Web Search and Data Mining , ser. WSDM ’20. New York, NY , USA: Association for Computing Machinery, 2020, p. 115–123. [Online]. Available: https://doi.org/10.1145/3336191.3371825

  62. [65]

    Xu, J., Yang, Y ., Chen, J., Jiang, X., Wang, C., Lu, J., and Sun, Y

    X. Chen, J. Lijffijt, and T. De Bie, “Quantifying and minimizing risk of conflict in social networks,” ser. KDD ’18. New York, NY , USA: Association for Computing Machinery, 2018, p. 1197–1205. [Online]. Available: https://doi.org/10.1145/3219819.3220074

  63. [66]

    L. Zhu, Q. Bao, and Z. Zhang, “Minimizing polarization and disagreement in social networks via link recommendation,” in Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS). Red Hook, NY, USA: Curran Associates, Inc., 2021.

  64. [67]

    T. Lin, K. Jin, A. Estornell, X. Zhang, Y. Chen, and Y. Liu, “User-creator feature polarization in recommender systems with dual influence,” ser. NIPS ’24. Red Hook, NY, USA: Curran Associates Inc., 2024.

  65. [68]

    S. Ionescu, N. Pagan, and A. Hannák, “Individual fairness for social media influencers,” in Complex Networks and Their Applications XI, ser. Studies in Computational Intelligence, H. Cherifi, R. N. Mantegna, L. M. Rocha, C. Cherifi, and S. Miccichè, Eds. Cham: Springer, 2023, vol. 1077, pp. —

  66. [69]

    N. Pagan, W. Mei, C. Li, S. Ionescu, F. Menczer, and A. Flammini, “A meritocratic network formation model for the rise of social media influencers,” Nature Communications, vol. 12, no. 1, p. 6865, 2021.

  67. [70]

    S. Ionescu, R. Forsberg, E. Lichtenegger, S. Jaoua, K. Jaglan, F. Dörfler, and A. Hannák, “Visibility allocation systems: How algorithmic design shapes online visibility and societal outcomes,” 2025. [Online]. Available: https://arxiv.org/abs/2510.17241

  68. [71]

    E. Amigó, Y. Deldjoo, S. Mizzaro, and A. Bellogín, “A unifying and general account of fairness measurement in recommender systems,” Information Processing & Management, vol. 60, no. 1, p. 103115, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0306457322002163

  69. [72]

    M. Mansoury, H. Abdollahpouri, M. Pechenizkiy, B. Mobasher, and R. Burke, “Fairmatch: A graph-based approach for improving aggregate diversity in recommender systems,” in Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, ser. UMAP ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 154–162. [Online...

  70. [73]

    Y. Li, H. Chen, Z. Fu, Y. Ge, and Y. Zhang, “User-oriented fairness in recommendation,” in Proceedings of the Web Conference 2021 (WWW ’21). New York, NY, USA: Association for Computing Machinery, 2021, pp. 624–632.

  71. [74]

    S. Borar, H. Weerts, B. Gebre, and M. Pechenizkiy, “Improving recommender system diversity with variational autoencoders,” in Advances in Bias and Fairness in Information Retrieval, ser. Communications in Computer and Information Science, L. Boratto, S. Faralli, M. Marras, and G. Stilo, Eds. Cham: Springer, 2023, vol. 1840.

  72. [75]

    M. Mansoury, H. Abdollahpouri, M. Pechenizkiy, B. Mobasher, and R. Burke, “A graph-based approach for mitigating multi-sided exposure bias in recommender systems,” vol. 40, no. 2, Nov. 2021. [Online]. Available: https://doi.org/10.1145/3470948

  73. [76]

    M. Mladenov, E. Creager, O. Ben-Porat, K. Swersky, R. Zemel, and C. Boutilier, “Optimizing long-term social welfare in recommender systems: A constrained matching approach,” in Proceedings of the 37th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, H. D. III and A. Singh, Eds., vol. 119. PMLR, 13–18 Jul 2020, p...

  74. [77]

    O. Ben-Porat and R. Torkan, “Learning with exposure constraints in recommendation systems,” ser. WWW ’23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 3456–3466. [Online]. Available: https://doi.org/10.1145/3543507.3583320

  75. [78]

    M. Morik, A. Singh, J. Hong, and T. Joachims, “Controlling fairness and bias in dynamic learning-to-rank,” in Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2020, pp. 429–438.

  76. [79]

    S. Agrawal and N. R. Devanur, “Fast algorithms for online stochastic convex programming,” in Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2014, pp. 1405–1424.

  77. [80]

    B. Hu and L. Lessard, “Control interpretations for first-order optimization methods,” in 2017 American Control Conference (ACC). IEEE, 2017, pp. 1583–1588.

  78. [81]

    H. Mouzannar, M. I. Ohannessian, and N. Srebro, “From fair decision making to social equality,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 359–368. [Online]. Available: https://doi.org/10.1145/3287560.3287599

  79. [82]

    L. T. Liu, A. Wilson, N. Haghtalab, A. T. Kalai, C. Borgs, and J. Chayes, “The disparate equilibria of algorithmic decision making when individuals invest rationally,” ser. FAT* ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 381–391. [Online]. Available: https://doi.org/10.1145/3351095.3372861

  80. [83]

    N. Immorlica, M. Jagadeesan, and B. Lucier, “Clickbait vs. quality: How engagement-based optimization shapes the content landscape in online platforms,” in Proceedings of the ACM Web Conference 2024, ser. WWW ’24. New York, NY, USA: Association for Computing Machinery, 2024, pp. 36–45. [Online]. Available: https://doi.org/10.1145/3589334.3645353
