pith. machine review for the scientific record.

arxiv: 2604.02641 · v3 · submitted 2026-04-03 · 💻 cs.HC

Recognition: no theorem link

The Paradox of Prioritization in Public Sector Algorithms


Pith reviewed 2026-05-13 19:12 UTC · model grok-4.3

classification 💻 cs.HC
keywords algorithmic prioritization · public sector algorithms · resource allocation · intersectional identities · disparities · inequality perceptions · scarcity · efficiency

The pith

Prioritization algorithms in the public sector generate significant relative disparities between groups of intersecting identities as resources grow scarce.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Public agencies increasingly use algorithmic tools to rank and allocate limited resources such as benefits or services. This paper demonstrates that the ranking process itself produces larger gaps in outcomes across groups defined by overlapping identities, with the gaps widening as the pool of available resources shrinks. A sympathetic reader would care because prevailing arguments treat prioritization as a neutral way to improve efficiency, yet the study shows it can heighten feelings of inequality among those affected. The authors emphasize that real resource limits prevent any simple assumption that algorithmic ranking lets agencies serve more people without creating new relative losses.

Core claim

We demonstrate the fallibility of adopting a prioritization approach in the public sector by showing how the underlying mechanisms of prioritization generate significant relative disparities between groups of intersectional identities as resources become increasingly scarce. We argue that despite prevailing arguments that prioritization of resources can lead to efficient allocation outcomes, prioritization can intensify perceptions of inequality for impacted individuals. Efficiencies generated by algorithmic tools should not be conflated with the dominant rhetoric that efficiency necessarily entails doing more with less under real-world constraints.

What carries the argument

The structural design of prioritization, which ranks individuals for scarce resources and produces relative comparisons that grow more uneven as availability declines.
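This mechanism can be made concrete with a toy simulation. Everything below is invented for illustration, not the paper's model: two hypothetical groups whose need-score distributions differ slightly, ranked in one pool, with the top of the list served until capacity runs out.

```python
import random

random.seed(0)

N = 2000  # applicants per group
# Hypothetical need scores: group A's distribution sits slightly above group B's.
pool = [(random.gauss(0.50, 0.2), "A") for _ in range(N)] + \
       [(random.gauss(0.35, 0.2), "B") for _ in range(N)]
pool.sort(reverse=True)  # prioritization: highest score served first

def receipt_rates(capacity_fraction):
    """Serve the top-ranked applicants up to capacity; return per-group receipt rates."""
    served = pool[: int(capacity_fraction * len(pool))]
    return {g: sum(1 for _, grp in served if grp == g) / N for g in ("A", "B")}

for frac in (0.5, 0.3, 0.1):
    r = receipt_rates(frac)
    print(f"capacity {frac:.0%}: A={r['A']:.2f}  B={r['B']:.2f}  A/B={r['A'] / r['B']:.2f}")
```

Even with a modest difference in score distributions, the ratio of receipt rates between groups grows as capacity shrinks, because selection moves further into the distributions' tails.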

If this is right

  • Prioritization mechanisms produce greater relative disparities between intersectional groups when resources become more limited.
  • Efficiencies from algorithmic ranking do not automatically mean serving more people without raising perceptions of inequality.
  • Real resource constraints in public programs must be modeled explicitly rather than assumed away when evaluating these tools.
  • The experiences of people subject to prioritization can worsen even when overall allocation metrics improve.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Agencies could test non-ranking allocation methods such as lotteries for certain services to limit the relative-disparity effect.
  • The result connects to wider questions about how automated systems handle scarcity in welfare and housing programs.
  • Measuring both objective gaps and subjective fairness perceptions in live deployments would give a fuller picture of the trade-offs.
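The first extension above can be sketched in a few lines. This is again a toy with invented score distributions, not an evaluation of any deployed system: at the same scarcity level, a lottery ignores scores and so equalizes expected receipt rates across groups, where ranking does not.

```python
import random

random.seed(1)

N = 2000  # applicants per group; scores are hypothetical
pool = [(random.gauss(0.50, 0.2), "A") for _ in range(N)] + \
       [(random.gauss(0.35, 0.2), "B") for _ in range(N)]
k = len(pool) // 5  # only 20% of applicants can be served

def rates(served):
    return {g: sum(1 for _, grp in served if grp == g) / N for g in ("A", "B")}

ranked = rates(sorted(pool, reverse=True)[:k])  # prioritization: top scores win
lottery = rates(random.sample(pool, k))         # lottery: uniform draw, scores ignored

print("ranked :", ranked)   # sizable gap between groups
print("lottery:", lottery)  # rates near-equal up to sampling noise
```

Whether a lottery is acceptable depends on the service; the sketch only shows that the relative-disparity effect is a property of ranking, not of scarcity alone.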

Load-bearing premise

The structural design of prioritization itself can be isolated to show that it generates relative disparities between intersectional groups under realistic public sector conditions, independent of any specific data set or algorithmic model.

What would settle it

A simulation or deployment study of public resource allocation in which relative outcome gaps between intersectional groups stay constant or shrink as the total resources available decrease would falsify the central claim.

Figures

Figures reproduced from arXiv: 2604.02641 by Erina Seh-Young Moon, Shion Guha.

Figure 1. Two Types of Prioritization Approaches for Resource Allocation: a strict hierarchical prioritization approach (1st row)
Figure 2. Comparing Subgroup Resource Receipt Rates between Two Unhoused Groups, Families and Single Adults, for Weighted
read the original abstract

Public sector agencies perform the critical task of implementing the redistributive role of the State by acting as the leading provider of critical public services that many rely on. In recent years, public agencies have been increasingly adopting algorithmic prioritization tools to determine which individuals should be allocated scarce public resources. Prior work on these tools has largely focused on assessing and improving their fairness, accuracy, and validity. However, what remains understudied is how the structural design of prioritization itself shapes both the effectiveness of these tools and the experiences of those subject to them under realistic public sector conditions. In this study, we demonstrate the fallibility of adopting a prioritization approach in the public sector by showing how the underlying mechanisms of prioritization generate significant relative disparities between groups of intersectional identities as resources become increasingly scarce. We argue that despite prevailing arguments that prioritization of resources can lead to efficient allocation outcomes, prioritization can intensify perceptions of inequality for impacted individuals. We contend that efficiencies generated by algorithmic tools should not be conflated with the dominant rhetoric that efficiency necessarily entails "doing more with less" and we highlight the risks of overlooking resource constraints present in real-world implementation contexts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that algorithmic prioritization tools in public sector resource allocation generate significant relative disparities across intersectional identity groups as resources become scarcer, intensifying perceptions of inequality; it argues that this structural effect undermines claims of efficiency gains and that efficiencies should not be conflated with 'doing more with less' under real-world constraints.

Significance. If the demonstration holds with concrete mechanisms, the result would contribute to HCI and public-sector algorithm literature by isolating how prioritization design itself shapes inequality experiences, beyond standard fairness metrics, and by cautioning against efficiency rhetoric in constrained settings.

major comments (2)
  1. [Abstract and demonstration section] The central demonstration that prioritization mechanisms generate disparities as scarcity increases (Abstract; § on demonstration) relies on an unspecified prioritization function, tie-breaking rule, operationalization of intersectional identities, group attribute distributions, and simulation/experimental protocol. Without these, it is impossible to isolate the claimed effect from implicit modeling assumptions about eligibility or need.
  2. [Methods/demonstration and results] The claim of 'significant relative disparities' and intensified inequality perceptions lacks any formal model, equations, simulation results, or empirical data (no methods, tables, or figures referenced in the provided text). This renders the argument conceptual rather than a demonstration, weakening the load-bearing assertion that prioritization itself produces the effect under realistic conditions.
minor comments (1)
  1. [Abstract and Introduction] The abstract and introduction could more clearly distinguish the paper's contribution from prior fairness/accuracy work by citing specific studies on prioritization mechanics.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive feedback, which highlights important areas for strengthening the rigor of our demonstration. We address each major comment below and commit to revisions that add the requested formal details without altering the core conceptual argument.

read point-by-point responses
  1. Referee: [Abstract and demonstration section] The central demonstration that prioritization mechanisms generate disparities as scarcity increases (Abstract; § on demonstration) relies on an unspecified prioritization function, tie-breaking rule, operationalization of intersectional identities, group attribute distributions, and simulation/experimental protocol. Without these, it is impossible to isolate the claimed effect from implicit modeling assumptions about eligibility or need.

    Authors: We agree that the current demonstration section presents the mechanism at a conceptual level without explicit parameterization. In the revised manuscript we will specify a formal prioritization function (a linear scoring model combining need and eligibility criteria), tie-breaking rules (random lottery among tied cases), operationalization of intersectional identities (combinations of race, gender, and income brackets drawn from U.S. Census and administrative data distributions), and a complete simulation protocol (Monte Carlo runs across scarcity ratios from 10% to 90% allocation). These additions will allow readers to isolate the prioritization effect from eligibility assumptions. revision: yes

  2. Referee: [Methods/demonstration and results] The claim of 'significant relative disparities' and intensified inequality perceptions lacks any formal model, equations, simulation results, or empirical data (no methods, tables, or figures referenced in the provided text). This renders the argument conceptual rather than a demonstration, weakening the load-bearing assertion that prioritization itself produces the effect under realistic conditions.

    Authors: The manuscript is currently framed as a conceptual analysis supported by illustrative scenarios. We acknowledge that this leaves the quantitative claims of 'significant relative disparities' under-supported. We will add a dedicated methods section containing the formal model equations, describe the simulation design, and include new tables and figures reporting allocation rates and disparity metrics (e.g., relative group shares and inequality indices) across scarcity levels. These results will directly ground the demonstration in observable outputs while preserving the paper's focus on structural effects. revision: yes
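The protocol the simulated rebuttal commits to could look roughly like the sketch below. All weights, distributions, and group definitions here are placeholders standing in for the promised Census-derived parameterization, not the forthcoming revision: a linear score over need and eligibility, random-lottery tie-breaking, and Monte Carlo runs across scarcity levels reporting a relative-disparity metric.

```python
import random
import statistics

def run_once(rng, n=500, capacity_fraction=0.3):
    """One simulated allocation: linear score over need + eligibility, lottery tie-breaks."""
    people = []
    for _ in range(n):
        group = rng.choice(["A", "B"])              # placeholder for an intersectional subgroup
        need = rng.random() + (0.2 if group == "A" else 0.0)
        eligible = rng.random()                     # placeholder eligibility criterion
        score = 0.7 * need + 0.3 * eligible         # linear scoring model (weights assumed)
        people.append((score, rng.random(), group)) # second key = random lottery tie-break
    people.sort(reverse=True)
    served = people[: int(capacity_fraction * n)]
    shares = {g: sum(1 for *_, grp in served if grp == g) /
                 max(1, sum(1 for *_, grp in people if grp == g))
              for g in ("A", "B")}
    return shares["A"] / max(shares["B"], 1e-9)     # relative disparity: receipt-rate ratio

def monte_carlo(capacity_fraction, runs=200, seed=42):
    rng = random.Random(seed)
    return statistics.mean(run_once(rng, capacity_fraction=capacity_fraction)
                           for _ in range(runs))

for frac in (0.9, 0.5, 0.1):
    print(f"allocation {frac:.0%}: mean disparity ratio {monte_carlo(frac):.2f}")
```

Under these assumed parameters the mean disparity ratio rises as the allocation fraction falls, which is the pattern the revised demonstration would need to exhibit (or fail to exhibit) across its Census-grounded scarcity sweep.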

Circularity Check

0 steps flagged

No circularity: conceptual argument with no derivations or fitted inputs

full rationale

The paper advances a conceptual claim that prioritization mechanisms in public sector algorithms generate relative disparities across intersectional groups as resources grow scarce. The provided text contains no equations, parameters, simulations, or derivations. The argument is presented as a demonstration of structural effects without reducing any prediction to a fitted input, self-definition, or self-citation chain. No load-bearing step equates output to input by construction, satisfying the default expectation that most papers are non-circular.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that prioritization mechanisms generate disparities under scarcity, which is not supported by specific evidence, models, or derivations in the provided abstract.

axioms (1)
  • domain assumption The structural design of prioritization shapes effectiveness and experiences under realistic public sector conditions
    Invoked as the basis for demonstrating fallibility and disparities but not derived or evidenced in the abstract.

pith-pipeline@v0.9.0 · 5492 in / 1279 out tokens · 63447 ms · 2026-05-13T19:12:43.813796+00:00 · methodology

