pith. machine review for the scientific record.

arxiv: 2605.12643 · v1 · submitted 2026-05-12 · 💻 cs.HC

Recognition: no theorem link

Co-Designing Organizational Justice Indicators for Algorithmic Systems

Amy Voida, Fujiko Robledo Yamamoto, Nicholas Mattei, Pradeep Ragothaman, Robin Burke

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 20:07 UTC · model grok-4.3

classification 💻 cs.HC
keywords organizational justice · algorithmic fairness · recommender systems · co-design · normative concerns · micro-lending · stakeholder workshops

The pith

Organizational justice subsumes distributional fairness and supplies concrete metrics for algorithmic recommenders.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper claims that conventional fairness definitions in machine learning, limited to comparative distributions of outcomes, fail to capture the full set of normative issues that arise in real organizations. It advances organizational justice as an encompassing framework that integrates distributional concerns with procedural and interactional dimensions. Through co-design workshops at Kiva Microfunds, employees from different roles surfaced specific justice goals tied to the organization's personalized recommendation system for micro-loans. From these goals the authors derive a suite of monitorable metrics that the organization can use to track the system's effects and to guide configuration decisions. If the framework holds, organizations gain a practical language and set of indicators for negotiating trade-offs among competing stakeholder values rather than defaulting to narrow statistical parity measures.

Core claim

We propose organizational justice as a framework that subsumes distributional fairness as well as other normative concerns. In the Kiva case study, workshops with employees across departments reveal design trade-offs among the normative goals each group prioritizes; these goals are then translated into a concrete suite of metrics that the organization can apply to monitor and assess the recommender system's impact on organizational justice concerns and to inform configuration and deployment choices.

What carries the argument

The organizational justice framework (distributive, procedural, and interactional dimensions) applied to a personalized loan recommendation system, turned into monitorable metrics through employee co-design workshops.

If this is right

  • Kiva can track whether its recommender system advances or undermines the justice goals its employees articulated.
  • Design trade-offs between different justice dimensions become explicit and negotiable inside the organization.
  • Metrics can seed ongoing internal discussions about appropriate system configuration rather than treating fairness as a one-time technical fix.
  • The same process can be repeated when the organization updates its recommendation logic or adds new stakeholder groups.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Similar co-design exercises could surface justice metrics for other algorithmic systems inside nonprofits or public agencies.
  • Organizations adopting these metrics might need to decide how to weight conflicting justice dimensions when they cannot be simultaneously satisfied.
  • The framework invites longitudinal study of whether the metrics actually change deployment decisions over time.

Load-bearing premise

The normative concerns voiced by the participating Kiva employees are representative enough of the organization's overall values to support stable, generalizable metrics.

What would settle it

Run the derived metrics on historical Kiva recommendation logs and find that they produce conflicting signals or fail to flag known stakeholder complaints that the workshops had surfaced.
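
As a concrete illustration of this settling test, the sketch below replays a hypothetical recommendation log through two candidate signals, one distributive (exposure share) and one outcome-oriented (funding rate), and flags regions where the two point in opposite directions. The log schema, metric definitions, and tolerance are assumptions for illustration only; the paper's actual metric suite is not reproduced here.

```python
# Hypothetical settling test: replay a recommendation log through two
# candidate justice signals and flag regions where they conflict.
# The schema, metrics, and tolerance `tol` are illustrative assumptions,
# not the paper's derived metric suite.
from dataclasses import dataclass

@dataclass
class Impression:
    borrower_region: str  # assumed grouping attribute
    recommended: bool     # loan surfaced by the recommender
    funded: bool          # loan ultimately funded

def exposure_share(log, region):
    """Distributive signal: share of all recommendations going to a region."""
    recs = [i for i in log if i.recommended]
    return sum(i.borrower_region == region for i in recs) / max(len(recs), 1)

def funding_rate(log, region):
    """Outcome signal: funding rate among a region's recommended loans."""
    recs = [i for i in log if i.recommended and i.borrower_region == region]
    return sum(i.funded for i in recs) / max(len(recs), 1)

def conflicting_signal(log, region, base_exposure, base_funding, tol=0.05):
    """True when one signal says the region is favored while the other says harmed."""
    over_exposed = exposure_share(log, region) > base_exposure + tol
    under_funded = funding_rate(log, region) < base_funding - tol
    return over_exposed and under_funded
```

If historical logs produced many such conflicts, or failed to flag complaints the workshops themselves surfaced, the metrics would not be doing the monitoring work the paper claims for them.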

Figures

Figures reproduced from arXiv: 2605.12643 by Amy Voida, Fujiko Robledo Yamamoto, Nicholas Mattei, Pradeep Ragothaman, Robin Burke.

Figure 1: Kiva search options when visiting the loan page.
read the original abstract

Fairness in machine learning is often conceptualized narrowly in comparative, distributional terms. In studying stakeholders' concepts of fairness, we find that this framing is insufficient to capture the full range of issues raised. As an alternative, we propose organizational justice as a framework that subsumes distributional fairness as well as other normative concerns. We conduct a case study of organizational justice relative to personalized recommendation in the context of Kiva Microfunds, a nonprofit micro-lending organization whose mission is to increase financial access for underserved communities across the world. We report on the results of co-design workshops conducted with Kiva employees who are involved in different departments and whose roles often lead them to prioritize normative concerns that are most supportive of the stakeholders with whom they work most closely. We apply organizational justice to understand design trade-offs among different normative goals stakeholders invoke. Based on these goals, we identify a suite of metrics that Kiva employees can use to monitor and assess the recommender system's impact on their organizational justice concerns and to seed discussions within the organization about appropriate configuration and deployment of this system in context.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, a circularity audit, and an axiom ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that narrow distributional fairness in ML is insufficient to capture stakeholder normative concerns in organizational settings. It proposes organizational justice as a subsuming framework and reports a case study at Kiva Microfunds in which co-design workshops with employees from multiple departments elicited concerns about a personalized recommender system; from these inputs the authors derive a suite of metrics intended for ongoing monitoring and configuration of the system.

Significance. If the derived metrics demonstrate stability and cross-stakeholder validity, the work would provide a concrete bridge between qualitative justice frameworks and deployable indicators for algorithmic systems, advancing HCI and fairness research beyond purely statistical approaches in mission-driven organizations.

major comments (3)
  1. [Methods] Methods section: The description of how workshop transcripts were coded and mapped onto organizational justice dimensions (distributive, procedural, interactional) is not sufficiently detailed to allow verification of the metric extraction process or assessment of inter-rater reliability.
  2. [Results] Results section: The central claim that the elicited metrics can serve as stable, organization-wide indicators rests on a single snapshot of employee input; no longitudinal follow-up, comparison with direct beneficiary data, or test of metric stability across time or departments is reported, undermining the generalizability asserted in the abstract.
  3. [Discussion] Discussion section: The subsumption argument (organizational justice encompasses distributional fairness plus other concerns) is illustrated by the Kiva examples but lacks an explicit mapping table or comparison against standard fairness metrics (e.g., demographic parity or equalized odds; see the sketch after this report) that would make the subsumption claim load-bearing rather than interpretive.
minor comments (2)
  1. [Abstract] Abstract: The phrase 'co-design workshops conducted with Kiva employees' should be accompanied by a parenthetical note on participant count and departmental distribution to give readers immediate context.
  2. [Figures] Any framework diagram (if present) would benefit from explicit visual indication of how distributional fairness is positioned as a subset within the organizational justice dimensions.
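
For readers outside the fairness-in-ML literature, the two statistical metrics the referee names have standard, purely distributive definitions; the sketch below states them in code (textbook formulations, not anything taken from the paper, with `group` a hypothetical protected attribute). Their narrowness is exactly what the subsumption argument trades on: neither says anything about procedure or interaction.

```python
# Standard statistical fairness metrics (textbook definitions, not the
# paper's promised mapping table). y_true: realized outcomes, y_pred:
# binary recommender decisions, group: hypothetical protected attribute;
# all three are numpy arrays of equal length. Assumes each group contains
# both positive and negative instances.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest gap in selection rate P(Y_hat = 1 | A = a) across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # group TPR
        fprs.append(y_pred[m & (y_true == 0)].mean())  # group FPR
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))
```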

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for their constructive and detailed feedback, which identifies key areas where the manuscript can be clarified and strengthened. We address each major comment below and indicate the revisions we plan to make.

read point-by-point responses
  1. Referee: [Methods] Methods section: The description of how workshop transcripts were coded and mapped onto organizational justice dimensions (distributive, procedural, interactional) is not sufficiently detailed to allow verification of the metric extraction process or assessment of inter-rater reliability.

    Authors: We agree that the Methods section requires greater detail to support verification. In the revised manuscript we will expand this section with a step-by-step account of the transcript analysis process, including how segments were identified, how they were mapped to the distributive, procedural, and interactional justice dimensions, and how the resulting metrics were derived. We will also clarify that the analysis was performed collaboratively by the research team through iterative discussion rather than independent coding, and we will explicitly note the implications for reliability in a qualitative study of this type. revision: yes

  2. Referee: [Results] Results section: The central claim that the elicited metrics can serve as stable, organization-wide indicators rests on a single snapshot of employee input; no longitudinal follow-up, comparison with direct beneficiary data, or test of metric stability across time or departments is reported, undermining the generalizability asserted in the abstract.

    Authors: We accept this critique. The study reports a single round of workshops and therefore cannot demonstrate stability or cross-stakeholder validity. In revision we will (a) rewrite the abstract to describe the output as candidate metrics generated through co-design rather than validated organization-wide indicators, and (b) add an explicit limitations subsection that acknowledges the single-snapshot design, the employee-only sample, and the absence of longitudinal or beneficiary data. These changes will temper the generalizability language while preserving the exploratory contribution of the case study. We cannot supply new longitudinal or beneficiary data within the scope of this revision. revision: partial

  3. Referee: [Discussion] Discussion section: The subsumption argument (organizational justice encompasses distributional fairness plus other concerns) is illustrated by the Kiva examples but lacks an explicit mapping table or comparison against standard fairness metrics (e.g., demographic parity or equalized odds) that would make the subsumption claim load-bearing rather than interpretive.

    Authors: We agree that an explicit comparison would make the subsumption argument more rigorous. We will insert a new table in the Discussion that systematically maps each organizational justice dimension to representative statistical fairness metrics (demographic parity, equalized odds, etc.), showing which workshop-elicited concerns are addressed by the latter and which are not. This table will be accompanied by brief textual analysis that uses the Kiva examples to illustrate the gaps. revision: yes

standing simulated objections not resolved
  • The lack of longitudinal follow-up, cross-department stability testing, and direct beneficiary input cannot be addressed without new empirical data collection, which lies beyond the scope of the present study.

Circularity Check

0 steps flagged

No circularity: empirical case study with independent workshop input

full rationale

The paper conducts co-design workshops with Kiva employees to elicit normative concerns and then applies the organizational justice framework interpretively to identify metrics. No mathematical derivations, fitted parameters, predictions, or self-citations are used to justify the central subsumption claim; the framework is adopted from prior literature and instantiated via the collected qualitative data. The derivation chain is self-contained because the metrics and trade-off analysis are directly grounded in the workshop outputs rather than reducing to any input by construction or self-referential definition.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The claim rests on the domain assumption that organizational justice theory is an appropriate superset for fairness concerns in algorithmic systems and that workshop participants' views can be translated into actionable metrics without additional validation steps.

axioms (1)
  • domain assumption: Organizational justice framework subsumes and extends distributional fairness for algorithmic systems
    Invoked directly in the proposal as the alternative framing.

pith-pipeline@v0.9.0 · 5495 in / 1130 out tokens · 60941 ms · 2026-05-14T20:07:12.002080+00:00 · methodology

