pith. machine review for the scientific record.

arxiv: 2604.14035 · v1 · submitted 2026-04-15 · 💻 cs.LG · cs.AI

Recognition: unknown

First-See-Then-Design: A Multi-Stakeholder View for Optimal Performance-Fairness Trade-Offs

Christoph Heitz, Isabel Valera, Kavya Gupta, Nektarios Kalampalikis

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 13:48 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords algorithmic fairness · multi-stakeholder decision making · stochastic policies · performance-fairness trade-off · welfare economics · distributive justice · outcome uncertainty · post-hoc optimization

The pith

Stochastic policies can outperform deterministic ones on performance-fairness trade-offs when stakeholder utilities reward outcome uncertainty.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper moves fairness analysis beyond prediction accuracy to the actual welfare effects of decisions on both the decision maker and the affected individuals. It grounds the approach in welfare economics by defining a social planner's utility that tracks inequality in outcomes across groups according to chosen justice principles. Fair decision-making is then cast as a post-hoc optimization that traces the best achievable pairs of decision-maker utility and social-planner utility. The authors derive conditions on the utilities under which randomized policies dominate fixed ones and show empirically that even simple stochastic rules can expand the attainable trade-off frontier by exploiting uncertainty in final outcomes.
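The post-hoc formulation can be sketched in a few lines: enumerate candidate policies, score each on both stakeholder objectives, and keep the nondominated set. The scores, group labels, and utility constants below are invented stand-ins for illustration, not the paper's specifications.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=500)             # hypothetical risk scores
group = rng.integers(0, 2, size=500)       # two socially salient groups
repaid = rng.uniform(size=500) < scores    # outcome correlates with score

def utilities(threshold):
    accept = scores >= threshold
    # DM utility: profit on accepted repayers, loss on accepted defaulters.
    u_dm = np.mean(np.where(accept, np.where(repaid, 1.0, -1.0), 0.0))
    # DS utility per group: here simply each group's acceptance rate.
    u_g = [accept[group == g].mean() for g in (0, 1)]
    # Egalitarian-style planner utility: penalize the inter-group gap.
    u_sp = -abs(u_g[0] - u_g[1])
    return u_dm, u_sp

# Trace the attainable set for 101 deterministic threshold policies...
points = np.array([utilities(t) for t in np.linspace(0.0, 1.0, 101)])

# ...and keep the Pareto-nondominated (DM utility, planner utility) pairs.
def pareto(points):
    keep = [i for i, p in enumerate(points)
            if not any((q >= p).all() and (q > p).any() for q in points)]
    return points[keep]

front = pareto(points)
```

Stochastic and group-specific policy classes would enter by enlarging the candidate set passed through `utilities`; the nondominated filter stays the same.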

Core claim

The paper proposes a multi-stakeholder framework for fair algorithmic decision-making grounded in welfare economics and distributive justice, explicitly modeling the utilities of both the decision maker and decision subjects, and defining fairness via a social planner's utility that captures inequalities in decision subjects' utilities across groups under different justice-based fairness notions. It formulates fair decision-making as a post-hoc multi-objective optimization problem, characterizing the achievable performance-fairness trade-offs in the two-dimensional utility space under deterministic versus stochastic and shared versus group-specific policy classes, and identifies conditions (in terms of the stakeholders' utilities) under which stochastic policies outperform deterministic ones.

What carries the argument

Post-hoc multi-objective optimization over the joint space of decision-maker utility and social-planner utility, comparing deterministic and stochastic policy classes.

Load-bearing premise

The framework assumes that the utilities of the decision maker and decision subjects can be accurately elicited or specified in advance and that the social planner's utility correctly encodes the chosen justice notion across groups.

What would settle it

A controlled experiment in which stakeholder utilities are deliberately misspecified or only partially known, followed by measurement of whether stochastic policies still produce strictly superior utility pairs compared with deterministic policies.
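A minimal version of that experiment, with invented utility numbers rather than the paper's: choose a policy using noise-corrupted (misspecified) stakeholder utilities, evaluate the choice on the true utilities, and record how often the stochastic mixture still strictly beats every deterministic option under a Rawlsian planner.

```python
import numpy as np

rng = np.random.default_rng(1)

def rawlsian(u):
    return min(u)  # planner utility: the worst-off group

def trial(sigma):
    # True (u_groupA, u_groupB) of two deterministic policies; hypothetical numbers.
    true = [(1.0, 0.2), (0.2, 1.0)]
    true_mix = tuple(0.5 * (a + b) for a, b in zip(*true))
    # Misspecified utilities seen by the designer: true values plus noise.
    seen = [tuple(max(1e-6, v + rng.normal(0.0, sigma)) for v in u) for u in true]
    seen_mix = tuple(0.5 * (a + b) for a, b in zip(*seen))
    # Pick the planner-best candidate under the *misspecified* utilities...
    candidates_seen = seen + [seen_mix]
    pick = int(np.argmax([rawlsian(u) for u in candidates_seen]))
    # ...then score that pick on the *true* utilities.
    candidates_true = true + [true_mix]
    return rawlsian(candidates_true[pick]) > max(rawlsian(u) for u in true)

# Fraction of trials in which the stochastic mixture's strict advantage
# survives utility misspecification of scale sigma.
share = float(np.mean([trial(0.1) for _ in range(1000)]))
```

Sweeping `sigma` upward would show where the advantage degrades, which is the measurement the settling experiment calls for.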

Figures

Figures reproduced from arXiv: 2604.14035 by Christoph Heitz, Isabel Valera, Kavya Gupta, Nektarios Kalampalikis.

Figure 1. Multi-stakeholder view of decision-making…
Figure 3. Conditions under which stochastic policies expand the…
Figure 4. Visual representation for calculation of the…
Figure 5. Egalitarian fairness gain across the utility spectrum, for both shared and group-specific policies, on the German Credit dataset. Datasets: two synthetic (credit, hiring) and three real (German Credit [29], Home Credit [48, 50], MIMIC-III Sepsis [31]), spanning finance, hiring, and healthcare and differing in group imbalance; for each, domain-motivated DM and DS utilities are specified.
Figure 6. PFs comparing deterministic and stochastic policies. Stochastic policies consistently expand the PF, outperforming their deterministic counterparts in both shared and group-specific settings. Notably, stochastic group-specific policies trace broader and fairer regions, approaching the utopia point, where both stakeholder utilities are optimal.
Figure 8. Effect of stochasticity. Shared stochastic policies (colored by β) populate interior regions of the utility-space PFs, enabling smoother and more flexible trade-offs than deterministic policies.
Figure 7. PFs, nHV, and AUC_fair when each DS group has different utility constants. For completeness, German Credit results are reported with heterogeneous DS utilities (i.e., female applicants have a higher benefit when a loan is approved and repaid).
Figure 9. Hypervolume gain as a function of the utility…
Figure 10. Group-specific stochastic policies apply…
Figure 11. Shared stochastic policies (colored by β). Unlike…
Figure 12. Decision function of stochastic policies across different…
Figure 13. Test-time Pareto fronts, obtained by evaluating training Pareto-optimal policies on the test set and comparing…
Figure 14. Fairness gain under Egalitarian justice achieved by switching from deterministic to stochastic policies, for both…
Figure 15. Hypervolume gain of stochastic policies over deterministic policies as a function of the utility symmetry ratio…
Original abstract

Fairness in algorithmic decision-making is often defined in the predictive space, where predictive performance - used as a proxy for decision-maker (DM) utility - is traded off against prediction-based fairness notions, such as demographic parity or equality of opportunity. This perspective, however, ignores how predictions translate into decisions and ultimately into utilities and welfare for both DM and decision subjects (DS), as well as their allocation across social-salient groups. In this paper, we propose a multi-stakeholder framework for fair algorithmic decision-making grounded in welfare economics and distributive justice, explicitly modeling the utilities of both the DM and DS, and defining fairness via a social planner's utility that captures inequalities in DS utilities across groups under different justice-based fairness notions (e.g., Egalitarian, Rawlsian). We formulate fair decision-making as a post-hoc multi-objective optimization problem, characterizing the achievable performance-fairness trade-offs in the two-dimensional utility space of DM utility and the social planner's utility, under different decision policy classes (deterministic vs. stochastic, shared vs. group-specific). Using the proposed framework, we then identify conditions (in terms of the stakeholders' utilities) under which stochastic policies are more optimal than deterministic ones, and empirically demonstrate that simple stochastic policies can yield superior performance-fairness trade-offs by leveraging outcome uncertainty. Overall, we advocate a shift from prediction-centric fairness to a transparent, justice-based, multi-stakeholder approach that supports the collaborative design of decision-making policies.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper proposes a multi-stakeholder framework for fair algorithmic decision-making grounded in welfare economics and distributive justice. It explicitly models utilities for the decision maker (DM) and decision subjects (DS), defines fairness through a social planner's utility that encodes group inequalities under notions such as Egalitarian and Rawlsian justice, and casts fair decision-making as a post-hoc multi-objective optimization problem. The framework characterizes achievable trade-offs in the DM-utility vs. social-planner-utility plane across policy classes (deterministic vs. stochastic, shared vs. group-specific), derives conditions on stakeholder utilities under which stochastic policies are strictly superior, and provides empirical evidence that simple stochastic policies can improve the performance-fairness frontier by exploiting outcome uncertainty.

Significance. If the derived conditions are general and the empirical demonstrations hold under the stated utility specifications, the work is significant because it shifts fairness research from prediction-space proxies to an explicit, transparent utility-based formulation that incorporates multiple stakeholders and justice principles. The identification of stochastic-superiority conditions and the concrete empirical results on simple randomization constitute a clear, falsifiable contribution that could guide collaborative policy design.

major comments (1)
  1. [§4] §4 (Conditions for stochastic superiority): The central claim that stochastic policies can be more optimal than deterministic ones rests on conditions expressed in terms of the stakeholders' utilities. However, these conditions appear to be derived under particular functional forms chosen for the social planner's utility (e.g., how group inequalities are aggregated under Egalitarian or Rawlsian notions). For linear or only weakly concave forms, Jensen-type arguments imply that deterministic policies remain optimal, so the claimed advantage from outcome uncertainty would not hold. The manuscript should either supply a general proof that covers the tested regimes or include a sensitivity analysis demonstrating that the reported superiority is robust to reasonable variations in the social-planner utility.
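The referee's Jensen point can be checked with two-line arithmetic (hypothetical group utilities, not the paper's): under a Rawlsian planner, which is concave, a 50/50 mixture of two complementary deterministic policies strictly beats both, while under a linear (utilitarian) planner the same mixture only ties.

```python
# Hypothetical (u_groupA, u_groupB) pairs induced by two deterministic policies.
det = [(1.0, 0.2), (0.2, 1.0)]
mix = tuple(0.5 * (a + b) for a, b in zip(*det))   # 50/50 stochastic mixture

def rawlsian(u):                 # concave: utility of the worst-off group
    return min(u)

def utilitarian(u):              # linear: average group utility
    return sum(u) / len(u)

# Concave planner: the mixture equalizes the groups and gains 0.4.
raw_gain = rawlsian(mix) - max(rawlsian(u) for u in det)
# Linear planner: the mixture lies on the segment between the two
# deterministic points, so there is no gain, matching the referee's claim.
lin_gain = utilitarian(mix) - max(utilitarian(u) for u in det)
```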
minor comments (2)
  1. [Abstract] Abstract: the phrases 'stochastic superiority' and 'simple stochastic policies' are used without a one-sentence definition or illustrative example; a brief clarification would aid readers unfamiliar with the utility-plane formulation.
  2. [§5] §5 (Empirical evaluation): the reported experiments would benefit from an explicit statement of the exact functional forms and parameter values used to instantiate the social planner's utility for each justice notion, together with the precise definition of the 'simple stochastic policies' that were tested.
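For concreteness, one plausible instantiation of the two planner utilities named in the abstract (the paper's exact functional forms and constants may differ):

```python
import numpy as np

def egalitarian(u_groups):
    """Penalize the spread of group utilities; zero iff all groups are equal."""
    u = np.asarray(u_groups, dtype=float)
    return float(-(u.max() - u.min()))

def rawlsian(u_groups):
    """Utility of the worst-off group (a concave, piecewise-linear aggregator)."""
    return float(np.min(np.asarray(u_groups, dtype=float)))
```

With group utilities (0.3, 0.7), `egalitarian` returns about -0.4 and `rawlsian` returns 0.3; stating such forms and the tested 'simple stochastic policies' explicitly would make the experiments reproducible.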

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for their thoughtful and constructive review. We are encouraged by the recognition of the framework's potential to shift fairness research toward explicit multi-stakeholder utilities and justice principles. We address the major comment on the stochastic superiority conditions below.

Point-by-point responses
  1. Referee: [§4] §4 (Conditions for stochastic superiority): The central claim that stochastic policies can be more optimal than deterministic ones rests on conditions expressed in terms of the stakeholders' utilities. However, these conditions appear to be derived under particular functional forms chosen for the social planner's utility (e.g., how group inequalities are aggregated under Egalitarian or Rawlsian notions). For linear or only weakly concave forms, Jensen-type arguments imply that deterministic policies remain optimal, so the claimed advantage from outcome uncertainty would not hold. The manuscript should either supply a general proof that covers the tested regimes or include a sensitivity analysis demonstrating that the reported superiority is robust to reasonable variations in the social-planner utility.

    Authors: We appreciate the referee's observation on the scope of the stochastic superiority conditions. The conditions are stated directly in terms of the DM, DS, and social planner utilities and are derived for any social planner utility satisfying the monotonicity and (strict) concavity properties associated with the Egalitarian and Rawlsian justice principles used in the paper. We agree that, for linear or only weakly concave social planner utilities, Jensen's inequality implies deterministic policies are optimal. To address this, the revised manuscript will include (i) an explicit clarification of the concavity threshold required for stochastic superiority and (ii) a sensitivity analysis that varies the concavity parameter of the social planner utility across the tested regimes, confirming that the reported advantage of simple stochastic policies holds under the justice notions examined and remains robust to moderate relaxations of concavity. revision: partial
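The promised sensitivity analysis could look like the following sketch, using an isoelastic (Atkinson-style) aggregator whose parameter gamma controls concavity; the numbers and the aggregator are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def planner(u, gamma):
    """Isoelastic aggregator over group utilities.
    gamma = 0 is linear (utilitarian); larger gamma is more inequality-averse."""
    u = np.asarray(u, dtype=float)
    if gamma == 1.0:
        return float(np.mean(np.log(u)))
    return float(np.mean(u ** (1.0 - gamma)) / (1.0 - gamma))

# Hypothetical group utilities of two deterministic policies and their mixture.
det = [(1.0, 0.2), (0.2, 1.0)]
mix = tuple(0.5 * (a + b) for a, b in zip(*det))

def stochastic_gain(gamma):
    return planner(mix, gamma) - max(planner(u, gamma) for u in det)

# Gain vanishes at gamma = 0 (linear) and grows with inequality aversion.
gains = {g: stochastic_gain(g) for g in (0.0, 0.5, 1.0, 2.0)}
```

Reporting such a curve alongside the paper's justice notions would make the claimed concavity threshold explicit.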

Circularity Check

0 steps flagged

No significant circularity; framework derives conditions from explicit utility definitions without reduction to inputs

full rationale

The paper grounds its multi-stakeholder model in external welfare-economics and distributive-justice concepts, explicitly defines DM/DS utilities and the social-planner utility as functions of group-wise outcomes, then analytically characterizes the DM-utility vs. social-planner-utility frontier for deterministic vs. stochastic policies. The claimed conditions for stochastic superiority are stated as functions of those utilities rather than being fitted or self-defined; empirical demonstrations use the same definitions but do not rename fitted parameters as predictions. No self-citation chain, ansatz smuggling, or renaming of known results is load-bearing for the central claims. The derivation remains self-contained against the stated assumptions.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Based on abstract only; the framework rests on the ability to define and optimize stakeholder utilities and to apply justice notions via a social planner utility. No free parameters or invented entities are explicitly named.

axioms (2)
  • domain assumption Utilities of decision makers and decision subjects can be modeled and used as the basis for post-hoc optimization.
    The entire trade-off analysis depends on these utilities being available and correctly specified.
  • domain assumption A social planner's utility can capture inequalities across groups under Egalitarian or Rawlsian justice notions.
    This defines the fairness objective in the multi-objective problem.

pith-pipeline@v0.9.0 · 5587 in / 1468 out tokens · 44269 ms · 2026-05-10T13:48:16.208018+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

74 extracted references · 3 canonical work pages · 1 internal anchor

  1. [1]

    Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. 2018. A reductions approach to fair classification. InInternational conference on machine learning. PMLR, 60–69

  2. [2]

    Richard J Arneson. 1999. Egalitarianism and responsibility.The Journal of Ethics3, 3 (1999), 225–247

  3. [3]

    Charles Audet, Jean Bigeon, Dominique Cartier, Sébastien Le Digabel, and Ludovic Salomon. 2021. Performance indicators in multiob- jective optimization.European journal of operational research292, 2 (2021), 397–422

  4. [4]

    Pranjal Awasthi, Matthäus Kleindessner, and Jamie Morgenstern. 2020. Equalized odds postprocessing under imperfect group information. InInternational conference on artificial intelligence and statistics. PMLR, 1770–1780

  5. [5]

    Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2023. Fairness and machine learning: Limitations and opportunities. MIT Press

  6. [6]

    Solon Barocas and Andrew D Selbst. 2016. Big data’s disparate impact.Calif. L. Rev.104 (2016), 671

  7. [7]

    Joachim Baumann, Corinna Hertweck, Michele Loi, and Christoph Heitz. 2022. Distributive justice as the foundational premise of fair ML: Unification, extension, and interpretation of group fairness metrics.ArXiv(2022)

  8. [8]

    Fabian Beigang. 2022. On the advantages of distinguishing between predictive and allocative fairness in algorithmic decision-making. Minds and Machines32, 4 (2022), 655–682

  9. [9]

    Ruta Binkyte. 2025. Data for Causal Mediation Analysis. doi:10.5281/zenodo.16359243

  10. [10]

    Rūta Binkytė, Ljupcho Grozdanovski, and Sami Zhioua. 2022. On the need and applicability of causality for fair machine learning. ArXiv (2022)

  11. [11]

    Jakob Bossek. 2018. Performance assessment of multi-objective evolutionary algorithms with the R package ecr. InProceedings of the Genetic and Evolutionary Computation Conference Companion. 1350–1356

  12. [12]

    Sílvia Casacuberta, Isaac Robinson, and Connor Wagaman. 2023. Augmenting Fairness With Welfare: A Framework for Algorithmic Justice. (2023)

  13. [13]

    L Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K Vishnoi. 2019. Classification with fairness constraints: A meta-algorithm with provable guarantees. InProceedings of the conference on fairness, accountability, and transparency. 319–328

  14. [14]

    Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments.Big data (2017)

  15. [15]

    Sam Corbett-Davies, Johann D Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. 2023. The measure and mismeasure of fairness.Journal of Machine Learning Research(2023), 1–117

  16. [16]

    Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. InProceedings of the acm international conference on knowledge discovery and data mining. 797–806

  17. [17]

    William Dieterich, Christina Mendoza, and Tim Brennan. 2016. COMPAS risk scales: Demonstrating accuracy equity and predictive parity.Northpointe Inc(2016)

  18. [18]

    Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. 2018. Empirical risk minimization under fairness constraints.Advances in neural information processing systems31 (2018)

  19. [19]

    Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. InProceedings of the 3rd innovations in theoretical computer science conference. 214–226

  20. [20]

    Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making.Commun. ACM64, 4 (2021), 136–143

  21. [21]

    Hafsa Habehh and Suril Gohel. 2021. Machine learning in healthcare.Current genomics(2021), 291–300

  22. [22]

    Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning.Advances in neural information processing systems29 (2016)

  23. [23]

    Hoda Heidari, Claudio Ferrari, Krishna Gummadi, and Andreas Krause. 2018. Fairness behind a veil of ignorance: A welfare analysis for automated decision making.Advances in neural information processing systems31 (2018). First-See-Then-Design Framework FAccT ’26, June 25–28, 2026, Montréal, Canada

  24. [24]

    Hoda Heidari, Michele Loi, Krishna P Gummadi, and Andreas Krause. 2019. A moral framework for understanding fair ml through economic models of equality of opportunity. InProceedings of the conference on fairness, accountability, and transparency. 181–190

  25. [25]

    Corinna Hertweck, Joachim Baumann, Michele Loi, Eleonora Viganò, and Christoph Heitz. 2023. A justice-based framework for the analysis of algorithmic fairness-utility trade-offs.ArXiv(2023)

  26. [26]

    Corinna Hertweck, Christoph Heitz, and Michele Loi. 2021. On the moral justification of statistical parity. InProceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 747–757

  27. [27]

    Corinna Hertweck, Michele Loi, and Christoph Heitz. 2024. Group Fairness Refocused: Assessing the Social Impact of ML Systems. In 2024 11th IEEE Swiss Conference on Data Science (SDS). IEEE, 189–196

  28. [28]

    Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network.arXiv preprint arXiv:1503.02531 (2015)

  29. [29]

    Hans Hofmann. 1994. Statlog (german credit data) data set.UCI Repository of Machine Learning Databases(1994)

  30. [30]

    Nils Holtug. 2017. Prioritarianism. (2017)

  31. [31]

    Nianzong Hou, Mingzhe Li, Lu He, Bing Xie, Lin Wang, Rumin Zhang, Yong Yu, Xiaodong Sun, Zhengsheng Pan, and Kai Wang. 2020. Predicting 30-days mortality for MIMIC-III patients with sepsis-3: a machine learning approach using XGboost.Journal of Translational Medicine18, 1 (07 Dec 2020), 462. doi:10.1186/s12967-020-02620-5

  32. [32]

    Lily Hu and Yiling Chen. 2020. Fair classification and social welfare. InProceedings of the 2020 conference on fairness, accountability, and transparency. 535–545

  33. [33]

    Matthew Joseph, Michael Kearns, Jamie H Morgenstern, and Aaron Roth. 2016. Fairness in learning: Classic and contextual bandits. Advances in neural information processing systems(2016)

  34. [34]

    Shida Kang, Kaiwen Li, and Rui Wang. 2024. A survey on Pareto front learning for multi-objective optimization.Journal of Membrane Computing(2024), 1–7

  35. [35]

    Amir-Hossein Karimi, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. 2020. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach.Advances in neural information processing systems(2020)

  36. [36]

    Michael Kearns and Aaron Roth. 2019. The ethical algorithm: The science of socially aware algorithm design. Oxford University Press

  37. [37]

    Niki Kilbertus, Manuel Gomez Rodriguez, Bernhard Schölkopf, Krikamol Muandet, and Isabel Valera. 2020. Fair decisions despite imperfect predictions. InInternational Conference on Artificial Intelligence and Statistics. PMLR, 277–287

  38. [38]

    Julian Lamont. 2017. Distributive justice. Routledge

  39. [39]

    Michelle Seng Ah Lee, Luciano Floridi, and Jatinder Singh. 2021. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.AI and Ethics1, 4 (2021), 529–544

  40. [40]

    Annie Liang and Jay Lu. 2024. Algorithmic Fairness and Social Welfare. InAEA Papers and Proceedings, Vol. 114. American Economic Association 2014 Broadway, Suite 305, Nashville, TN 37203, 628–632

  41. [41]

    Annie Liang, Jay Lu, and Xiaosheng Mu. 2022. Algorithmic design: Fairness versus accuracy. InProceedings of the 23rd ACM Conference on Economics and Computation. 58–59

  42. [42]

    Suyun Liu and Luis Nunes Vicente. 2022. Accuracy and fairness trade-offs in machine learning: A stochastic multi-objective approach. Computational Management Science(2022), 513–537

  43. [43]

    Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, and David C Parkes. 2017. Calibrated fairness in bandits.ArXiv (2017)

  44. [44]

    Alan Lundgard. 2020. Measuring justice in machine learning.ArXiv(2020)

  45. [45]

    Ali A Mahmoud, Tahani AL Shawabkeh, Walid A Salameh, and Ibrahim Al Amro. 2019. Performance predicting in hiring process and performance appraisals using machine learning. In2019 10th international conference on information and communication systems (ICICS). IEEE, 110–115

  46. [46]

    Ayan Majumdar, Deborah D Kanubala, Kavya Gupta, and Isabel Valera. 2025. A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination.ArXiv(2025)

  47. [47]

    Natalia Martinez, Martin Bertran, and Guillermo Sapiro. 2020. Minimax pareto fairness: A multi objective perspective. InInternational conference on machine learning. PMLR, 6755–6764

  48. [48]

    Caspar Matthys. 2019. Predicting an applicant’s capability of repaying a bank loan. (2019)

  49. [49]

    Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning.ACM computing surveys (CSUR)(2021), 1–35

  50. [50]

    Anna Montoya, inversion, KirillOdintsov, and Martin Kotek. 2018. Home Credit Default Risk. https://kaggle.com/competitions/home- credit-default-risk. Kaggle

  51. [51]

    Vincenzo Moscato, Antonio Picariello, and Giancarlo Sperlí. 2021. A benchmark of machine learning approaches for credit score prediction.Expert Systems with Applications(2021)

  52. [52]

    Rashmi Nagpal, Rasoul Shahsavarifar, Vaibhav Goyal, and Amar Gupta. 2025. Optimizing fairness and accuracy: a Pareto optimal approach for decision-making. AI and Ethics 5, 2 (2025), 1743–1756

  53. [53]

    Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations.Science(2019), 447–453

  54. [54]

    Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. 2017. On fairness and calibration.Advances in neural information processing systems30 (2017)

  55. [55]

    Ashesh Rambachan, Jon Kleinberg, Jens Ludwig, and Sendhil Mullainathan. 2020. An economic perspective on algorithmic fairness. In AEA Papers and Proceedings. American Economic Association 2014 Broadway, Suite 305, Nashville, TN 37203, 91–95

  56. [56]

    Miriam Rateike, Ayan Majumdar, Olga Mineeva, Krishna P Gummadi, and Isabel Valera. 2022. Don’t throw it away! the utility of unlabeled data in fair decision making. InProceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 1421–1433

  57. [57]

    John Rawls. 2001. Justice as fairness: A restatement. Harvard University Press

  58. [58]

    John Rawls. 2017. A theory of justice. InApplied ethics. Routledge, 21–29

  59. [59]

    Nery Riquelme, Christian Von Lücken, and Benjamin Baran. 2015. Performance metrics in multi-objective optimization. In2015 Latin American computing conference (CLEI). IEEE, 1–11

  60. [60]

    John E Roemer and Alain Trannoy. 2015. Equality of opportunity. InHandbook of income distribution. Elsevier, 217–300

  61. [61]

    Nir Rosenfeld and Haifeng Xu. 2025. Machine Learning Should Maximize Welfare, but Not by (Only) Maximizing Accuracy.ArXiv (2025)

  62. [62]

    Teresa Scantamburlo, Joachim Baumann, and Christoph Heitz. 2025. On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model.Ai & Society40, 2 (2025), 353–369

  63. [63]

    Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. InProceedings of the conference on fairness, accountability, and transparency. 59–68

  64. [64]

    Amartya Sen. 1979. Equality of what? (1979)

  65. [65]

    Ke Shang, Hisao Ishibuchi, Linjun He, and Lie Meng Pang. 2020. A survey on the hypervolume indicator in evolutionary multiobjective optimization.IEEE Transactions on Evolutionary Computation(2020), 1–20

  66. [66]

    Liam Shields. 2020. Sufficientarianism 1.Philosophy Compass(2020), 1–10

  67. [67]

    Vittoria Vineis, Giuseppe Perelli, and Gabriele Tolomei. 2025. Beyond Predictions: A Participatory Framework for Multi-Stakeholder Decision-Making.ArXiv(2025)

  68. [68]

    Lequn Wang, Yiwei Bai, Wen Sun, and Thorsten Joachims. 2021. Fairness of Exposure in Stochastic Bandits. InInternational conference on machine learning. PMLR

  69. [69]

    Dennis Wei. 2021. Decision-making under selective labels: Optimal finite-domain policies and beyond. InInternational Conference on Machine Learning. PMLR, 11035–11046

  70. [70]

    Susan Wei and Marc Niethammer. 2022. The fairness-accuracy Pareto front.Statistical Analysis and Data Mining: The ASA Data Science Journal15, 3 (2022), 287–302

  71. [71]

    Lyndon While, Lucas Bradstreet, and Luigi Barone. 2011. A fast way of calculating exact hypervolumes.IEEE Transactions on Evolutionary Computation16, 1 (2011), 86–95

  72. [72]

    Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. 2017. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. InProceedings of the 26th international conference on world wide web. 1171–1180

  73. [73]

    Eckart Zitzler. 1999. Evolutionary algorithms for multiobjective optimization: Methods and applications. Vol. 63. Shaker Ithaca

  74. [74]

    Eckart Zitzler and Lothar Thiele. 1998. Multiobjective optimization using evolutionary algorithms—a comparative case study. In International conference on parallel problem solving from nature. Springer, 292–301