pith. machine review for the scientific record.

arxiv: 2605.10604 · v1 · submitted 2026-05-11 · 💻 cs.LG · cs.AI · cs.CY

Recognition: 2 theorem links

Fairness vs Performance: Characterizing the Pareto Frontier of Algorithmic Decision Systems

Christoph Heitz, Mieke Wilms


Pith reviewed 2026-05-12 04:55 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · cs.CY
keywords algorithmic fairness · Pareto frontier · multi-objective optimization · threshold rules · group fairness · binary decision systems · success probability

The pith

The Pareto frontier of fairness versus performance in algorithmic decisions consists of deterministic, group-specific threshold rules on success probability.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper frames the design of fair binary decision systems as a multi-objective optimization that simultaneously maximizes decision-maker utility and group fairness. It establishes that the resulting Pareto-optimal rules are always deterministic thresholds applied separately to each group's predicted success probabilities. This characterization holds for arbitrary utility functions, any population distribution, and many fairness metrics, and the same frontier is attainable by pre-, in-, or post-processing methods alike. The work extends prior optimality results, which identified only lower-bound thresholds, by demonstrating that upper-bound thresholds can also appear on the frontier for certain fairness definitions.

Core claim

For arbitrary utility functions, population distributions, and a wide range of group fairness metrics, the Pareto-optimal decision rules are deterministic and take the form of group-specific thresholds applied to individuals' success probability. Depending on the fairness metric, these thresholds may be upper bounds that prefer lower success probabilities within a group.

What carries the argument

Multi-objective optimization of decision-maker utility against group fairness metrics, which reduces all Pareto-optimal solutions to deterministic group-specific thresholds on predicted success probability.
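The reduction is easy to see numerically. The sketch below (a toy, not one of the paper's examples: the group distributions, utility values, and the demographic-parity-style gap metric are all assumptions of mine) sweeps group-specific lower-bound thresholds on a discretized population and keeps the non-dominated (utility, unfairness) pairs:

```python
import numpy as np

# Hypothetical two-group population: probability mass over a discrete
# grid of success probabilities p = P(Y=1 | X).
p_grid = np.linspace(0.05, 0.95, 19)
mass = {
    0: np.exp(-((p_grid - 0.6) ** 2) / 0.05),  # group A=0 skews high-p
    1: np.exp(-((p_grid - 0.4) ** 2) / 0.05),  # group A=1 skews low-p
}
for a in mass:
    mass[a] /= mass[a].sum()

def accept_gain(p):
    """Expected utility of accepting someone with success probability p,
    for an assumed u(d,y): accept a success +1, accept a failure -0.5,
    reject anything 0."""
    return p * 1.0 + (1 - p) * (-0.5)

def evaluate(t0, t1):
    """(utility, selection-rate gap) of group-specific thresholds (t0, t1)."""
    util, rate = 0.0, {}
    for a, t in ((0, t0), (1, t1)):
        sel = p_grid >= t                      # lower-bound threshold rule
        util += 0.5 * (mass[a][sel] * accept_gain(p_grid[sel])).sum()
        rate[a] = mass[a][sel].sum()           # group selection rate
    return util, abs(rate[0] - rate[1])

# Sweep all threshold pairs, then keep the Pareto-optimal points
# (maximize utility, minimize unfairness).
points = [evaluate(t0, t1) for t0 in p_grid for t1 in p_grid]
pareto = [(u, g) for (u, g) in points
          if not any(u2 >= u and g2 <= g and (u2, g2) != (u, g)
                     for (u2, g2) in points)]
```

Per the paper's claim, the entire frontier is traced by such pairs; the two dimensions of the sweep are the only degrees of freedom that matter.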

If this is right

  • The same Pareto frontier applies whether fairness is enforced by pre-processing, in-processing, or post-processing.
  • For some fairness metrics the frontier includes upper-bound thresholds that select lower-probability individuals.
  • The frontier's location is fully determined by population statistics, the chosen utility, and the fairness metric.
  • Results extend existing lower-bound threshold theorems to generalized fairness metrics and partial fairness regimes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Regulators could audit deployed systems by checking whether their observed trade-off lies on the computed threshold frontier for the relevant population and metric.
  • When optimality is the goal, simple per-group threshold rules may suffice and complex models may add no further improvement.
  • The same reduction-to-thresholds approach could be tested for individual fairness notions or for non-binary decision problems.

Load-bearing premise

That every possible decision rule can be considered in the optimization without any model-class or computational restrictions, allowing joint optimization of fairness metrics with arbitrary utilities over any population.

What would settle it

A specific population distribution, utility function, and fairness metric for which some non-threshold rule achieves a strictly better utility-fairness combination than any group-specific threshold rule.
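On a small enough discretization this search can actually be run exhaustively. The sketch below (toy numbers of my own, with identical uniform groups and a demographic-parity-style gap) enumerates every deterministic per-bin rule, separates threshold from non-threshold rules, and looks for a non-threshold rule that no threshold rule weakly dominates:

```python
import itertools
import numpy as np

# Toy discretization (hypothetical, not from the paper): four
# success-probability bins per group, equal mass in every bin.
p_bins = np.array([0.2, 0.4, 0.6, 0.8])
mass = np.full(4, 0.25)
gain = p_bins * 1.0 + (1 - p_bins) * (-0.5)  # u(1,1)=1, u(1,0)=-0.5, u(0,.)=0

def score(sel0, sel1):
    """(utility, demographic-parity gap) of arbitrary per-bin decisions."""
    util, rates = 0.0, []
    for sel in (np.array(sel0), np.array(sel1)):
        util += 0.5 * (mass * sel * gain).sum()
        rates.append((mass * sel).sum())
    return util, abs(rates[0] - rates[1])

def is_threshold(sel):
    """Monotone 0/1 pattern over increasing p: lower- or upper-bound rule."""
    s = list(sel)
    return s == sorted(s) or s == sorted(s, reverse=True)

rules = list(itertools.product([0, 1], repeat=4))
thr = [score(s0, s1) for s0 in rules for s1 in rules
       if is_threshold(s0) and is_threshold(s1)]
other = [score(s0, s1) for s0 in rules for s1 in rules
         if not (is_threshold(s0) and is_threshold(s1))]

# A decisive counterexample would be a non-threshold rule that no
# threshold rule weakly dominates on (utility, gap).
counterexamples = [(u, g) for (u, g) in other
                   if not any(ut >= u and gt <= g for (ut, gt) in thr)]
```

On this toy instance `counterexamples` comes back empty, consistent with the paper's claim; a population, utility, and metric for which it is non-empty would be the refutation the section describes.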

Figures

Figures reproduced from arXiv:2605.10604 by Christoph Heitz and Mieke Wilms.

Figure 1. The Pareto frontier for different settings of DS utility matrices. The plots show a sample of all possible threshold … (figures/full_fig_p012_1.png)
Figure 2. Comparison of the Pareto frontier (PF) obtained by the PF-SMG algorithm and our approach (performed on a Logistic … (figures/full_fig_p013_2.png)
Figure 3. (a)–(d) show the four possible shapes of … (figures/full_fig_p021_3.png)
Figure 4. This plot shows the distributions 𝑔(𝑝|𝑎) for the two groups 𝐴 = 0 and 𝐴 = 1 … (figures/full_fig_p023_4.png)
original abstract

Designing fair algorithmic decision systems requires balancing model performance with fairness toward affected individuals: More fairness might require sacrificing some performance and vice versa, yet the space of possible trade-offs is still poorly understood. We investigate fairness in binary prediction-based decision problems by conceptualizing decision making as a multi-objective optimization problem that simultaneously considers decision-maker utility and group fairness. We investigate the set of Pareto-optimal decision rules for arbitrary utility functions for decision maker, arbitrary population distributions, and a wide range of group fairness metrics. We find that the Pareto frontier consists of deterministic, group-specific threshold rules applied to individuals' success probability. This complements existing optimality theorems from literature which, for specific fairness constraints, posit lower-bound threshold rules only. However we also show that, depending on the used fairness metric, the Pareto frontier may include upper-bound threshold rules, thus preferring individuals with lower success probabilities. We show that the location of the Pareto frontier depends only on population characteristics, utility functions and fairness score, but not on the technical design of the algorithm - our findings hold for pre-, in-, and post-processing approaches alike. Our results generalize existing optimality theorems for fairness-constrained classification and extend them to generalized fairness metrics and fairness principles, and to partial fairness regimes. This paper connects formal fairness research with legal and ethical requirements to search for less discriminatory alternatives, offering a principled foundation for evaluating and comparing algorithmic decision systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper frames fairness-performance trade-offs in binary decision systems as a multi-objective optimization problem over decision-maker utility and group fairness metrics. It claims that, for arbitrary utilities, arbitrary population distributions, and a wide range of fairness metrics, the Pareto frontier consists exclusively of deterministic group-specific threshold rules (lower- or upper-bound) applied to individuals' success probability p(y=1|x). This generalizes prior optimality results that were limited to specific fairness constraints and lower-bound thresholds only; the location of the frontier depends only on population characteristics, utilities, and the fairness metric, independent of pre-, in-, or post-processing algorithmic design. The work also extends the results to generalized fairness metrics, fairness principles, and partial fairness regimes.

Significance. If the characterization is correct, the result is significant: it supplies a simple, structural description of the entire fairness-performance Pareto frontier that applies beyond the narrow settings of existing theorems, directly supporting legal and ethical mandates to identify less discriminatory alternatives. The independence from algorithmic design choices is a notable strength, as is the explicit inclusion of upper-bound thresholds for certain metrics.

major comments (3)
  1. [Abstract / §3] Abstract and the multi-objective formulation (likely §3): the claim that the result holds for 'arbitrary utility functions' is load-bearing for the central generality statement, yet the derivation appears to require that utility depends only on (d, y). If the decision-maker utility can depend on additional covariates x (e.g., u(d, y, x) where x is not fully captured by p(y=1|x)), individuals with identical success probabilities are no longer interchangeable and the optimal rule for fixed fairness level generally thresholds on a linear combination of p and x, destroying the pure threshold-on-p structure. The manuscript should either restrict the claim to utilities of the form E[u(d, y)] or provide a counter-example showing the property survives.
  2. [§4 (Pareto frontier theorem)] Theorem characterizing the Pareto frontier (likely §4): the proof that all Pareto-optimal rules are group-specific thresholds on p must explicitly derive why the fairness metric and utility together induce monotonicity in p alone. Without the explicit step showing that the Lagrangian or weighted objective is monotone in p within each group, it is unclear whether the result extends to the 'wide range of group fairness metrics' asserted, especially for metrics that are not linear in the confusion matrix.
  3. [Abstract / §5] Independence from algorithmic design (claim in abstract and §5): while the Pareto set is characterized at the level of decision rules, the manuscript should verify that the same frontier is attained by pre-, in-, and post-processing methods under the stated assumptions; otherwise the 'holds for pre-, in-, and post-processing approaches alike' statement is not fully supported.
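The monotonicity step that major comment 2 asks for can be sketched directly, in notation of my own choosing rather than the paper's, under the assumptions the rebuttal later confirms (utility of the form E[u(d, y)], fairness score affine in the per-individual selection indicators):

```latex
% Sketch, not the paper's proof. Scalarize the two objectives:
W_\lambda(d) = \mathbb{E}\big[u(d(X,A),Y)\big] + \lambda\, F(d).

% Marginal gain of deciding d=1 over d=0 for an individual with
% success probability p in group a, assuming F contributes an
% affine term \gamma_a p + \delta_a per selected individual:
\Delta_a(p) = p\,\big[u(1,1)-u(0,1)\big] + (1-p)\,\big[u(1,0)-u(0,0)\big]
              + \lambda\,\big(\gamma_a\, p + \delta_a\big)
            = \alpha_a\, p + \beta_a .

% Any maximizer of W_\lambda selects exactly \{\Delta_a(p) > 0\}:
% a lower-bound threshold on p when \alpha_a > 0, and an upper-bound
% threshold when \alpha_a < 0 -- the case behind the paper's
% upper-bound rules for some fairness metrics.
```

For metrics not affine in the confusion matrix this affine form is exactly what the referee asks the authors to establish, so the sketch marks the gap rather than closing it.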
minor comments (3)
  1. [Abstract] The abstract states the main findings but contains no equations or proof sketches; adding a compact statement of the key optimality condition would improve readability.
  2. [Throughout] Notation for the success probability p(y=1|x) and the group-specific thresholds should be introduced once and used consistently; occasional shifts between 'success probability' and 'score' are mildly confusing.
  3. [Introduction / Related Work] The paper cites existing optimality theorems but does not include a short table comparing the new result to the most closely related prior theorems (e.g., Hardt et al., Corbett-Davies et al.); such a table would help readers locate the contribution.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major comment below, indicating revisions where appropriate to strengthen the manuscript.

point-by-point responses
  1. Referee: [Abstract / §3] Abstract and the multi-objective formulation (likely §3): the claim that the result holds for 'arbitrary utility functions' is load-bearing for the central generality statement, yet the derivation appears to require that utility depends only on (d, y). If the decision-maker utility can depend on additional covariates x (e.g., u(d, y, x) where x is not fully captured by p(y=1|x)), individuals with identical success probabilities are no longer interchangeable and the optimal rule for fixed fairness level generally thresholds on a linear combination of p and x, destroying the pure threshold-on-p structure. The manuscript should either restrict the claim to utilities of the form E[u(d, y)] or provide a counter-example showing the property survives.

    Authors: We thank the referee for this important clarification. Our derivations and proofs throughout the paper assume that the decision-maker utility takes the form E[u(d, y)], which is the standard decision-theoretic setup for binary classification-based decisions in the fairness literature. Utilities that depend on additional covariates x beyond what is captured by p(y=1|x) would indeed violate the interchangeability of individuals with the same p and generally lead to non-threshold rules. We will revise the abstract, §3, and related sections to explicitly restrict the claim to utilities of the form E[u(d, y)], thereby aligning the stated generality with the actual assumptions and proofs. revision: yes

  2. Referee: [§4 (Pareto frontier theorem)] Theorem characterizing the Pareto frontier (likely §4): the proof that all Pareto-optimal rules are group-specific thresholds on p must explicitly derive why the fairness metric and utility together induce monotonicity in p alone. Without the explicit step showing that the Lagrangian or weighted objective is monotone in p within each group, it is unclear whether the result extends to the 'wide range of group fairness metrics' asserted, especially for metrics that are not linear in the confusion matrix.

    Authors: We agree that an explicit derivation of monotonicity would improve the clarity and rigor of the proof, particularly for the generalized (possibly nonlinear) fairness metrics. The current proof of the Pareto frontier theorem proceeds by formulating the multi-objective problem via a weighted Lagrangian and arguing that, within each group, the contribution to both the utility and the fairness constraint is monotone in p; this induces the threshold structure. To address the concern, we will add an explicit intermediate step or lemma in §4 that derives the monotonicity of the effective objective in p for each group, covering both linear confusion-matrix-based metrics and the broader class of generalized metrics considered in the paper. revision: yes

  3. Referee: [Abstract / §5] Independence from algorithmic design (claim in abstract and §5): while the Pareto set is characterized at the level of decision rules, the manuscript should verify that the same frontier is attained by pre-, in-, and post-processing methods under the stated assumptions; otherwise the 'holds for pre-, in-, and post-processing approaches alike' statement is not fully supported.

    Authors: The Pareto frontier is characterized over the space of all feasible decision rules (group-specific thresholds on p), which is independent of implementation method. Post-processing directly realizes any such rule by applying the identified thresholds to a model's output probabilities. For pre- and in-processing, any threshold rule on p can be attained by training a model whose outputs are (or can be calibrated to) p(y=1|x) and then applying the group-specific thresholds, or by incorporating the fairness metric into the training objective to reach the same effective rules. We will add a concise verification paragraph in §5 explaining how each of the three approaches can attain the characterized frontier under the paper's assumptions. revision: yes
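The post-processing route in response 3 is mechanically simple. A minimal sketch (threshold values, names, and the helper itself are hypothetical, standing in for values read off a computed frontier):

```python
import numpy as np

# Hypothetical group-specific thresholds, e.g. read off the computed
# Pareto frontier for the chosen utility/fairness trade-off.
THRESHOLDS = {0: 0.55, 1: 0.45}   # group label -> threshold on p(y=1|x)

def post_process(p_hat, group, lower_bound=True):
    """Apply a group-specific threshold rule to calibrated scores.

    p_hat: array of calibrated success probabilities p(y=1|x)
    group: array of group labels (keys of THRESHOLDS)
    lower_bound: True selects p >= t; False gives an upper-bound rule
                 (p <= t), which the paper shows can also be
                 Pareto-optimal for some fairness metrics.
    """
    t = np.array([THRESHOLDS[a] for a in group])
    return (p_hat >= t) if lower_bound else (p_hat <= t)

decisions = post_process(np.array([0.7, 0.5, 0.4]), np.array([0, 0, 1]))
# group 0: p=0.7 clears t=0.55, p=0.5 does not; group 1: p=0.4 misses t=0.45
```

Pre- and in-processing reach the same rules less directly, which is why the frontier characterization is indifferent between the three.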

Circularity Check

0 steps flagged

No circularity: Pareto frontier derived from multi-objective optimization without reduction to inputs by construction

full rationale

The paper sets up decision-making as a multi-objective optimization balancing arbitrary decision-maker utility and group fairness metrics over arbitrary population distributions. It then characterizes the Pareto frontier as deterministic group-specific threshold rules on success probability p(y=1|x). This structure follows directly from the optimization under the problem's linearity in conditional probabilities (individuals with identical p are interchangeable for the stated objectives), without any fitted parameters being renamed as predictions, self-definitional loops, or load-bearing self-citations. No equations or steps reduce the claimed result to its own inputs by construction; the derivation is self-contained against the stated assumptions and generalizes prior optimality theorems without circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 3 axioms · 0 invented entities

The analysis rests on standard domain assumptions from algorithmic fairness research regarding the availability of success probabilities and the definability of fairness metrics. No free parameters or new entities are introduced in the abstract.

axioms (3)
  • domain assumption Decision-making in binary prediction problems can be represented using individual success probabilities.
    This underpins the use of threshold rules on these probabilities.
  • domain assumption Group fairness can be measured using a wide range of metrics that can be optimized jointly with utility.
    Assumed to define the multi-objective problem and Pareto frontier.
  • domain assumption The set of possible decision rules includes all possible mappings from predictions to decisions.
    Necessary for characterizing the full Pareto frontier.

pith-pipeline@v0.9.0 · 5552 in / 1518 out tokens · 85338 ms · 2026-05-12T04:55:53.088358+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

43 extracted references · 43 canonical work pages

  1. [1] Richard J. Arneson. 1997. Egalitarianism and the Undeserving Poor. Journal of Political Philosophy 5, 4 (1997), 327–350. doi:10.1111/1467-9760.00037

  2. [2] Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2023. Fairness and Machine Learning: Limitations and Opportunities. MIT Press.

  3. [3] Joachim Baumann, Anikó Hannák, and Christoph Heitz. 2022. Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). ACM, 2315–2326. doi:10.1145/3531146.3534645

  4. [4] Joachim Baumann, Corinna Hertweck, Michele Loi, and Christoph Heitz. 2023. Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics. arXiv:2206.02897 [cs.CY]. https://arxiv.org/abs/2206.02897

  5. [5] Andrew L. Beam and Isaac S. Kohane. 2018. Big data and machine learning in health care. JAMA 319, 13 (2018), 1317–1318. doi:10.1001/jama.2017.18391

  6. [6] Emily Black, John Logan Koepke, Pauline T. Kim, Solon Barocas, and Mingwei Hsu. 2024. Less Discriminatory Algorithms. The Georgetown Law Journal (2024), 53–120. https://civilrights.org/2014/02/27/civil-rights-principles-era-

  7. [7] Toon Calders and Sicco Verwer. 2010. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery 21, 2 (2010), 277–292. doi:10.1007/s10618-010-0190-x

  8. [8] Simon Caton and Christian Haas. 2020. Fairness in Machine Learning: A Survey. CoRR abs/2010.04053 (2020). https://arxiv.org/abs/2010.04053

  9. [9] Sarah H. Cen, Salil Goyal, Zaynah Javed, Ananya Karthik, Percy Liang, and Daniel E. Ho. 2025. Audits Under Resource, Data, and Access Constraints: Scaling Laws For Less Discriminatory Alternatives. arXiv:2509.05627 [cs.CY]. https://arxiv.org/abs/2509.05627

  10. [10] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic Decision Making and the Cost of Fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’17). ACM, 797–806. doi:10.1145/3097983.3098095

  11. [11] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS ’12). ACM, 214–226. doi:10.1145/2090236.2090255

  12. [12] Marco Favier and Toon Calders. 2025. Cherry on the cake: fairness is NOT an optimization problem. Machine Learning 114, 7 (2025), 160. doi:10.1007/s10994-025-06792-3

  13. [13] Will Fleisher. 2021. What’s Fair about Individual Fairness?. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21). ACM, 480–490. doi:10.1145/3461702.3462621

  14. [14] Harry Frankfurt. 1987. Equality as a Moral Ideal. Ethics 98, 1 (1987), 21–43. doi:10.1086/292913

  15. [15] Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2016. On the (im)possibility of fairness. CoRR abs/1609.07236 (2016). http://arxiv.org/abs/1609.07236

  16. [16] Andreas Fuster, Paul Goldsmith-Pinkham, Tarun Ramadorai, and Ansgar Walther. 2017. Predictably Unequal? The Effects of Machine Learning on Credit Markets. The Journal of Finance 77, 1 (2017), 5–47. doi:10.1111/jofi.13090

  17. [17] Hafsa Habehh and Suril Gohel. 2021. Machine learning in healthcare. Current Genomics 22, 4 (2021), 291–300.

  18. [18] Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of Opportunity in Supervised Learning. CoRR abs/1610.02413 (2016). http://arxiv.org/abs/1610.02413

  19. [19] Corinna Hertweck, Joachim Baumann, Michele Loi, Eleonora Viganò, and Christoph Heitz. 2023. A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs. https://arxiv.org/abs/2206.02891

  20. [20] Corinna Hertweck, Christoph Heitz, and Michele Loi. 2024. What’s Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from a Perspective of Approximate Justice. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 1 (Oct. 2024), 597–608. doi:10.1609/aies.v7i1.31661

  21. [21] Corinna Hertweck, Michele Loi, and Christoph Heitz. 2024. Group Fairness Refocused: Assessing the Social Impact of ML Systems. In 2024 11th IEEE Swiss Conference on Data Science (SDS). 189–196. doi:10.1109/SDS60720.2024.00034

  22. [22] Nils Holtug. 2007. Prioritarianism. In Egalitarianism: New Essays on the Nature and Value of Equality, Nils Holtug and Kasper Lippert-Rasmussen (Eds.). Clarendon Press, 125–156.

  23. [23] Lily Hu and Yiling Chen. 2018. Welfare and distributional impacts of fair classification. https://arxiv.org/abs/1807.01134

  24. [24] Taeuk Jang, Pengyi Shi, and Xiaoqian Wang. 2022. Group-aware threshold adaptation for fair classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 6988–6995. doi:10.1609/aaai.v36i6.20657

  25. [25] Nikita Kozodoi, Johannes Jacob, and Stefan Lessmann. 2022. Fairness in credit scoring: Assessment, implementation and profit implications. European Journal of Operational Research 297, 3 (2022), 1083–1094. doi:10.1016/j.ejor.2021.06.023

  26. [26] Francesca Lagioia, Riccardo Rovatti, and Giovanni Sartor. 2023. Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. AI & Society 38, 2 (2023), 459–478. doi:10.1007/s00146-022-01441-y

  27. [27] Benjamin Laufer, Manish Raghavan, and Solon Barocas. 2025. What Constitutes a Less Discriminatory Algorithm?. In Proceedings of the Symposium on Computer Science and Law (CSLAW ’25). ACM, 136–151. doi:10.1145/3709025.3712214

  28. [28] Suyun Liu and Luis Nunes Vicente. 2022. Accuracy and fairness trade-offs in machine learning: A stochastic multi-objective approach. Computational Management Science 19, 3 (2022), 513–537. doi:10.1007/s10287-022-00425-z

  29. [29] Shira Mitchell, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2021. Algorithmic Fairness: Choices, Assumptions, and Definitions. Annual Review of Statistics and Its Application (2021). https://api.semanticscholar.org/CorpusID:228893833

  30. [30] Vincenzo Moscato, Antonio Picariello, and Giancarlo Sperlí. 2021. A benchmark of machine learning approaches for credit score prediction. Expert Systems with Applications 165 (2021), 113986. doi:10.1016/j.eswa.2020.113986

  31. [31] Dana Pessach and Erez Shmueli. 2022. A Review on Fairness in Machine Learning. ACM Computing Surveys (CSUR) 55, 3, Article 51 (Feb. 2022), 44 pages. doi:10.1145/3494672

  32. [32] John Rawls. 1999. A Theory of Justice: Revised Edition. Harvard University Press, Cambridge, MA and London, England.

  33. [33] K. Shailaja, Banoth Seetharamulu, and M. A. Jabbar. 2018. Machine learning in healthcare: A review. In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA). IEEE, 910–914. doi:10.1109/ICECA.2018.8474918

  34. [34] Silvia Chiappa, Ray Jiang, Tom Stepleton, Aldo Pacchiano, Heinrich Jiang, and John Aslanides. 2020. A general approach to fairness with optimal transport. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 3633–3640. doi:10.1609/aaai.v34i04.5771

  35. [35] Hua Tang, Lu Cheng, Ninghao Liu, and Mengnan Du. 2023. A theoretical approach to characterize the accuracy-fairness trade-off Pareto frontier. https://arxiv.org/abs/2310.12785

  36. [36] Ana Valdivia, Javier Sánchez-Monedero, and Jorge Casillas. 2021. How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness. International Journal of Intelligent Systems 36, 4 (2021), 1619–1643. doi:10.1002/int.22354

  37. [37] Marvin Van Bekkum and Frederik Zuiderveen Borgesius. 2021. Digital welfare fraud detection and the Dutch SyRI judgment. European Journal of Social Security 23, 4 (2021), 323–340. doi:10.1177/13882627211031257

  38. [38] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness (FairWare ’18). ACM, 1–7. doi:10.1145/3194770.3194776

  39. [39] Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2021. Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review 41 (2021), 105567. doi:10.1016/j.clsr.2021.105567

  40. [40] Hilde Weerts, Lambèr Royakkers, and Mykola Pechenizkiy. 2026. Are There Exceptions to Goodhart’s Law? On the Moral Justification of Fairness-Aware Machine Learning. ACM Journal on Responsible Computing 3, 1 (2026). doi:10.1145/3771734

  41. [41] Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, and Mykola Pechenizkiy. 2023. Algorithmic unfairness through the lens of EU non-discrimination law: Or why the law is not a decision tree. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23). ACM, 805–816. doi:10.1145/3593013.3594044

  42. [42] Susan Wei and Marc Niethammer. 2022. The fairness-accuracy Pareto front. Statistical Analysis and Data Mining: The ASA Data Science Journal 15, 3 (2022), 287–302. doi:10.1002/sam.11560

  43. [43] Qingquan Zhang, Jialin Liu, Zeqi Zhang, Junyi Wen, Bifei Mao, and Xin Yao. 2022. Mitigating unfairness via evolutionary multiobjective ensemble learning. IEEE Transactions on Evolutionary Computation 27, 4 (2022), 848–862. doi:10.1109/TEVC.2022.3209544