From Universal to Individualized Actionability: Revisiting Personalization in Algorithmic Recourse
Pith reviewed 2026-05-10 17:38 UTC · model grok-4.3
The pith
Personalizing algorithmic recourse with individual actionability constraints degrades recommendation validity and plausibility while revealing socio-demographic disparities.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that shifting from universal to individualized actionability in causal algorithmic recourse, operationalized through pre-hoc user prompting for hard constraints on which features can change and soft constraints on preferences, substantially degrades the validity and plausibility of generated recommendations across both amortized and non-amortized approaches, with hard constraints exerting the strongest negative effect. At the same time, individualized actionability exposes disparities in recourse cost and plausibility across socio-demographic groups.
What carries the argument
Individual actionability, defined by hard constraints that limit which features can be changed and soft individualized constraints that encode preferences over action values and costs, operationalized via pre-hoc user prompting inside the causal algorithmic recourse framework.
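The paper itself ships no code; as a rough illustration of how the two constraint types could be encoded, the sketch below (NumPy only, with hypothetical names and a made-up ranking-to-weight rule, not the paper's elicitation scheme) treats hard constraints as a boolean actionability mask and soft constraints as per-feature cost weights derived from a user ranking.

```python
import numpy as np

def rank_to_weights(ranking):
    """Map a user's ranking of features (most to least preferred to change)
    onto per-feature cost weights: preferred features become cheaper to move.
    The linear rule here is illustrative, not the paper's elicitation scheme."""
    d = len(ranking)
    weights = np.empty(d)
    for position, feature in enumerate(ranking):
        weights[feature] = (position + 1) / d  # 1/d (cheap) ... 1.0 (expensive)
    return weights

def intervention_cost(x, x_cf, actionable_mask, weights):
    """Weighted L1 cost of moving from x to x_cf.
    Hard constraint: any change to a non-actionable feature is infeasible.
    Soft constraint: feasible changes are priced by user-specific weights."""
    delta = np.abs(x_cf - x)
    if np.any(delta[~actionable_mask] > 0):
        return np.inf  # violates a hard individual-actionability constraint
    return float(np.sum(weights * delta))

# Toy example: 4 features; this user can act only on features 1 and 3
x = np.array([0.2, 1.0, 3.0, 0.5])
x_cf = np.array([0.2, 2.0, 3.0, 1.5])
mask = np.array([False, True, False, True])
weights = rank_to_weights([3, 1, 0, 2])  # user would rather change feature 3 first
print(intervention_cost(x, x_cf, mask, weights))  # 0.75
```

A recourse search would then minimize this cost subject to the classifier flipping its prediction; in the causal framework the paper adopts, candidate actions are additionally propagated through the structural causal model rather than applied feature by feature.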
If this is right
- Hard individual actionability constraints substantially degrade the plausibility and validity of recourse recommendations.
- The degradation appears across both amortized and non-amortized recourse approaches.
- Incorporating individual actionability reveals disparities in the cost and plausibility of actions across socio-demographic groups.
- Personalization through pre-hoc user prompting creates measurable trade-offs with other recourse desiderata such as validity and plausibility.
Where Pith is reading between the lines
- If user preferences change over time, repeating the pre-hoc prompting step or shifting to online updates would be needed to keep recommendations aligned.
- The uncovered group disparities imply that universal-actionability methods may hide fairness problems that only become visible once individual constraints are modeled.
- Testing the same constraints on datasets with weaker causal assumptions could clarify how much the observed degradation depends on accurate causal graphs.
- Similar individual constraint modeling could be applied to other personalized decision systems beyond recourse, such as credit or hiring recommendations.
Load-bearing premise
User-provided rankings or scores accurately reflect stable individual preferences and the chosen causal models correctly capture the data-generating process for the tested datasets.
What would settle it
Empirical results on standard datasets showing no measurable drop in validity or plausibility scores when hard constraints are enforced compared with universal actionability, or no observable differences in cost and plausibility across socio-demographic groups.
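Operationally, that criterion amounts to measuring the same metrics with and without hard constraints and checking for group gaps. A minimal sketch with synthetic stand-in data follows (the array names, simulated rates, and gap statistic are assumptions, not the paper's evaluation code):

```python
import numpy as np

def validity(y_cf):
    """Fraction of recourse recommendations that actually flip the outcome."""
    return float(np.mean(y_cf == 1))

def validity_drop(y_cf_universal, y_cf_hard):
    """Drop in validity when hard individual constraints are enforced."""
    return validity(y_cf_universal) - validity(y_cf_hard)

def max_group_gap(values, groups):
    """Largest pairwise difference in a metric's group means."""
    means = [values[groups == g].mean() for g in np.unique(groups)]
    return float(max(means) - min(means))

# Hypothetical outcomes for 1000 individuals (stand-ins, not the paper's data)
rng = np.random.default_rng(0)
y_univ = rng.binomial(1, 0.9, 1000)  # recourse valid under universal actionability
y_hard = rng.binomial(1, 0.7, 1000)  # recourse valid under hard individual constraints
costs = rng.gamma(2.0, 1.0, 1000)
groups = rng.integers(0, 2, 1000)

print("validity drop:", validity_drop(y_univ, y_hard))
print("max group cost gap:", max_group_gap(costs, groups))
```

If both quantities were statistically indistinguishable from zero, the paper's headline claims would not hold.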
Original abstract
Algorithmic recourse aims to provide actionable recommendations that enable individuals to change unfavorable model outcomes, and prior work has extensively studied properties such as efficiency, robustness, and fairness. However, the role of personalization in recourse remains largely implicit and underexplored. While existing approaches incorporate elements of personalization through user interactions, they typically lack an explicit definition of personalization and do not systematically analyze its downstream effects on other recourse desiderata. In this paper, we formalize personalization as individual actionability, characterized along two dimensions: hard constraints that specify which features are individually actionable, and soft, individualized constraints that capture preferences over action values and costs. We operationalize these dimensions within the causal algorithmic recourse framework, adopting a pre-hoc user-prompting approach in which individuals express preferences via rankings or scores prior to the generation of any recourse recommendation. Through extensive empirical evaluation, we investigate how personalization interacts with key recourse desiderata, including validity, cost, and plausibility. Our results highlight important trade-offs: individual actionability constraints, particularly hard ones, can substantially degrade the plausibility and validity of recourse recommendations across amortized and non-amortized approaches. Notably, we also find that incorporating individual actionability can reveal disparities in the cost and plausibility of recourse actions across socio-demographic groups. These findings underscore the need for principled definitions, careful operationalization, and rigorous evaluation of personalization in algorithmic recourse.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper formalizes personalization in algorithmic recourse as individual actionability, consisting of hard constraints on which features are actionable for a given individual and soft constraints capturing individualized preferences over action values and costs. It operationalizes this via a pre-hoc user-prompting approach (users supply rankings or scores before recommendation generation) embedded in the causal recourse framework, then empirically evaluates interactions with validity, cost, and plausibility across amortized and non-amortized methods. The central findings are that hard constraints substantially degrade plausibility and validity while revealing socio-demographic disparities in cost and plausibility.
Significance. If the empirical results hold after addressing the assumptions, the work is significant for explicitly defining and analyzing personalization—an aspect previously left implicit—thereby surfacing concrete trade-offs that affect the practical utility and fairness of recourse. The pre-hoc prompting provides a pragmatic operationalization, and the amortized/non-amortized comparison broadens the scope. Credit is due for focusing on falsifiable empirical trade-offs rather than purely theoretical claims.
major comments (3)
- [Abstract] The claim of an 'extensive empirical evaluation' investigating trade-offs with validity, cost, and plausibility is presented without any mention of datasets, sample sizes, statistical tests, or controls for confounding factors. This omission is load-bearing because the headline results on degradation and disparities cannot be assessed without these details.
- [Causal Framework and Experiments] Validity and plausibility are defined via interventions in the chosen causal models, yet no validation, sensitivity analysis, or goodness-of-fit checks are reported for whether these models recover the data-generating process on the experimental datasets. This assumption directly supports the central claim that hard constraints degrade validity and plausibility.
- [User Prompting and Soft Constraints] Soft constraints rest on user-supplied rankings or scores assumed to represent stable individual preferences. No robustness checks against elicitation noise, context dependence, or instability are described, which is load-bearing for the reported socio-demographic disparities in cost and plausibility.
minor comments (2)
- [Notation] The distinction between amortized and non-amortized recourse approaches is referenced repeatedly but never given a concise definition or forward reference in the main text.
- [Results] A summary table comparing key metrics (validity, cost, plausibility) across constraint types and methods would improve readability of the empirical results.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback, which identifies key areas for improving the clarity, rigor, and transparency of our empirical claims. We address each major comment point by point below, with a commitment to revisions that strengthen the manuscript without misrepresenting our contributions.
Point-by-point responses
- Referee: [Abstract] The claim of an 'extensive empirical evaluation' investigating trade-offs with validity, cost, and plausibility is presented without any mention of datasets, sample sizes, statistical tests, or controls for confounding factors. This omission is load-bearing because the headline results on degradation and disparities cannot be assessed without these details.
Authors: We agree that the abstract would benefit from greater specificity to allow readers to contextualize the headline results. Due to length constraints, we did not include these details originally. In the revised manuscript, we will add a concise clause to the abstract noting the use of three standard datasets (Adult, German Credit, COMPAS), that evaluations are averaged over 5 runs with 95% confidence intervals, and that socio-demographic disparities are assessed via stratified analysis. Full experimental details, including controls for confounding, remain in Section 4. revision: yes
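For the aggregation described here (per-metric means over 5 runs with 95% confidence intervals), a minimal normal-approximation sketch is below; the exact interval construction the authors use is not stated, so this is an assumption.

```python
import numpy as np

def mean_ci95(run_values):
    """Mean and normal-approximation 95% CI over per-run metric values."""
    vals = np.asarray(run_values, dtype=float)
    m = vals.mean()
    half = 1.96 * vals.std(ddof=1) / np.sqrt(len(vals))
    return m, (m - half, m + half)

# e.g. validity of one method across 5 seeds
print(mean_ci95([0.71, 0.69, 0.74, 0.70, 0.72]))
```

With only five runs a Student-t interval would be somewhat wider; which of the two the authors intend is not specified.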
- Referee: [Causal Framework and Experiments] Validity and plausibility are defined via interventions in the chosen causal models, yet no validation, sensitivity analysis, or goodness-of-fit checks are reported for whether these models recover the data-generating process on the experimental datasets. This assumption directly supports the central claim that hard constraints degrade validity and plausibility.
Authors: The referee is correct that the manuscript does not report explicit validation or sensitivity checks for the causal models. These models follow standard specifications from prior recourse literature on the same datasets. To address this directly, we will add a dedicated paragraph in the experimental setup (Section 4.1) reporting goodness-of-fit via interventional distribution comparisons and sensitivity analysis under edge-weight perturbations. This will substantiate that the models are sufficient for the validity and plausibility metrics used. revision: yes
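The promised checks are not spelled out; one plausible reading, sketched below, compares observed data against samples from the fitted SCM with per-feature two-sample KS tests and repeats the comparison under Gaussian perturbations of the edge weights. Function names, the toy SCM, and the perturbation scheme are all assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def fit_check(observed, simulate_scm, n_sim=5000):
    """Per-feature two-sample KS statistics between observed data and
    samples drawn from the fitted structural causal model."""
    simulated = simulate_scm(n_sim)
    return [ks_2samp(observed[:, j], simulated[:, j]).statistic
            for j in range(observed.shape[1])]

def sensitivity(observed, simulate_with_weights, weights, scale=0.1, n_rep=20, seed=0):
    """Repeat the fit check under Gaussian perturbations of the edge weights."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_rep):
        noisy = weights + rng.normal(0.0, scale, size=len(weights))
        stats.append(fit_check(observed, lambda n: simulate_with_weights(noisy, n)))
    return np.array(stats)  # shape (n_rep, n_features)

# Toy usage with a 2-variable linear SCM: X1 = U1, X2 = w*X1 + U2
def simulate(weights, n, rng=np.random.default_rng(1)):
    x1 = rng.normal(size=n)
    x2 = weights[0] * x1 + rng.normal(size=n)
    return np.column_stack([x1, x2])

obs = simulate(np.array([0.8]), 2000)
print(fit_check(obs, lambda n: simulate(np.array([0.8]), n)))
print(sensitivity(obs, simulate, np.array([0.8])).mean(axis=0))
```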
- Referee: [User Prompting and Soft Constraints] Soft constraints rest on user-supplied rankings or scores assumed to represent stable individual preferences. No robustness checks against elicitation noise, context dependence, or instability are described, which is load-bearing for the reported socio-demographic disparities in cost and plausibility.
Authors: We acknowledge that the pre-hoc prompting approach assumes user rankings and scores reflect stable preferences, without reported checks for elicitation noise or instability. This assumption underpins the disparity results. In revision, we will expand the discussion of limitations (Section 5) to explicitly note this and add simulated robustness experiments injecting Gaussian noise into user scores at varying levels, reporting how the socio-demographic cost and plausibility disparities change. This will quantify sensitivity without requiring new user studies. revision: partial
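A minimal sketch of the promised robustness experiment, assuming preferences are real-valued scores and that disparity is summarized as the largest gap in mean recourse cost between groups (both assumptions rather than details from the paper):

```python
import numpy as np

def perturb_scores(scores, sigma, rng):
    """Inject i.i.d. Gaussian elicitation noise into user preference scores."""
    return scores + rng.normal(0.0, sigma, size=scores.shape)

def disparity_under_noise(scores, groups, recourse_cost, sigmas, n_rep=50, seed=0):
    """Average between-group cost gap at each noise level.
    `recourse_cost(scores)` maps per-user scores to per-user recourse costs."""
    rng = np.random.default_rng(seed)
    result = {}
    for sigma in sigmas:
        gaps = []
        for _ in range(n_rep):
            costs = recourse_cost(perturb_scores(scores, sigma, rng))
            means = [costs[groups == g].mean() for g in np.unique(groups)]
            gaps.append(max(means) - min(means))
        result[sigma] = float(np.mean(gaps))
    return result

# Toy stand-in for the recourse pipeline: cost shrinks as scores grow
rng = np.random.default_rng(0)
scores = rng.normal(size=(500, 4))
groups = rng.integers(0, 2, 500)
cost_fn = lambda s: np.exp(-s).sum(axis=1)
print(disparity_under_noise(scores, groups, cost_fn, sigmas=[0.0, 0.1, 0.5]))
```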
Circularity Check
No circularity: new formalization and empirical evaluation are self-contained
Full rationale
The paper defines individual actionability (hard/soft constraints) as a new formalization within the causal recourse framework, then reports results from fresh experiments on validity, cost, plausibility, and group disparities. No quoted step reduces a claimed result to a fitted parameter, self-citation, or definitional tautology. The derivation chain consists of independent operationalization followed by data-driven evaluation rather than any of the enumerated circular patterns.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: causal models can be used to generate valid and plausible recourse actions.
Reference graph
Works this paper leans on
- [1] Carlo Abrate, Federico Siciliano, Francesco Bonchi, and Fabrizio Silvestri. 2024. Human-in-the-Loop Personalized Counterfactual Recourse. In Explainable Artificial Intelligence, Luca Longo, Sebastian Lapuschkin, and Christin Seifert (Eds.). Springer Nature Switzerland, Cham, 18–38. doi:10.1007/978-3-031-63800-8_2
- [3] Andrew Bell, Joao Fonseca, and Julia Stoyanovich. 2024. The Game Of Recourse: Simulating Algorithmic Recourse over Time to Improve Its Reliability and Fairness. In Companion of the 2024 International Conference on Management of Data (Santiago, Chile) (SIGMOD '24). Association for Computing Machinery, New York, NY, USA, 464–467. doi:10.1145/3626246.3654742
- [4] Marina Ceccon, Alessandro Fabris, Goran Radanović, Asia J. Biega, and Gian Antonio Susto. 2025. Reinforcement Learning for Durable Algorithmic Recourse. arXiv:2509.22102 [cs.LG]. https://arxiv.org/abs/2509.22102
- [5] Seung Hyun Cheon, Anneke Wernerfelt, Sorelle A. Friedler, and Berk Ustun. 2024. Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse. arXiv:2410.22598 [stat.ML]. doi:10.48550/arXiv.2410.22598
- [6] European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Pub. L. No. COM(2021) 206 final.
- [7] Susanne Dandl, Christoph Molnar, Martin Binder, and Bernd Bischl. 2020. Multi-objective counterfactual explanations. In International Conference on Parallel Problem Solving from Nature. Springer, 448–469.
- [8] Giovanni De Toni, Bruno Lepri, and Andrea Passerini. 2023. Synthesizing Explainable Counterfactual Policies for Algorithmic Recourse with Program Synthesis. Machine Learning 112, 4 (2023), 1389–1409.
- [9] Giovanni De Toni, Stefano Teso, Bruno Lepri, and Andrea Passerini. 2025. Time Can Invalidate Algorithmic Recourse. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). Association for Computing Machinery, New York, NY, USA, 89–107. doi:10.1145/3715275.3732008
- [11] Ricardo Dominguez-Olmedo, Amir H. Karimi, and Bernhard Schölkopf. 2022. On the adversarial robustness of causal algorithmic recourse. In International Conference on Machine Learning. PMLR, 5324–5342.
- [12] Seyedehdelaram Esfahani, Giovanni De Toni, Bruno Lepri, Andrea Passerini, Katya Tentori, and Massimo Zancanaro. 2024. Preference Elicitation in Interactive and User-centered Algorithmic Recourse: an Initial Exploration. In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP '24). ACM, 249–254. doi:10.1145/3627043.3659556
- [13] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and Removing Disparate Impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Sydney, NSW, Australia) (KDD '15). Association for Computing Machinery, New York, NY, USA, 259–268. doi:1...
- [14] João Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, and Julia Stoyanovich. 2023. Setting the Right Expectations: Algorithmic Recourse Over Time. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Boston, MA, USA) (EAAMO '23). Association for Computing Machinery, New York, NY, USA, Article 29, 11...
- [15] Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. 2016. Strategic Classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science (Cambridge, Massachusetts, USA) (ITCS '16). Association for Computing Machinery, New York, NY, USA, 111–122. doi:10.1145/2840728.2840730
- [17] Paul Hellwig, Victoria Buchholz, Stefan Kopp, and Günter W. Maier. 2023. Let the user have a say - voice in automated decision-making. Computers in Human Behavior 138 (Jan. 2023), 107446. doi:10.1016/j.chb.2022.107446
- [18] Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical Reparameterization with Gumbel-Softmax. arXiv:1611.01144 [stat.ML]
- [19] Adrián Javaloy, Pablo Sánchez-Martín, and Isabel Valera. 2023. Causal Normalizing Flows: from Theory to Practice. Advances in Neural Information Processing Systems 36 (2023), 58833–58864.
- [21] Kaggle. 2011. Give Me Some Credit. https://www.kaggle.com/competitions/GiveMeSomeCredit/overview. Accessed: 2026-01-12.
- [22] Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, and Hiroki Arimura. 2020. DACE: Distribution-aware Counterfactual Explanation by Mixed-Integer Linear Optimization. In IJCAI. 2855–2862.
- [23] Amir-Hossein Karimi, Gilles Barthe, Borja Balle, and Isabel Valera. 2020. Model-Agnostic Counterfactual Explanations for Consequential Decisions. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. PMLR, 895–905.
- [24] Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, and Isabel Valera. 2022. A Survey of Algorithmic Recourse: Contrastive Explanations and Consequential Recommendations. Comput. Surveys 55, 5 (2022), 1–29.
- [25] Amir-Hossein Karimi, Bernhard Schölkopf, and Isabel Valera. 2021. Algorithmic Recourse: from Counterfactual Explanations to Interventions. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 353–362.
- [26] Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, and Isabel Valera. 2020. Algorithmic Recourse under Imperfect Causal Knowledge: a Probabilistic Approach. Advances in Neural Information Processing Systems 33 (2020), 265–277.
- [27] Mark T. Keane, Eoin M. Kenny, Eoin Delaney, and Barry Smyth. 2021. If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-2021). International Joint Conferences on Artificial Intellige...
- [28] Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved Variational Inference with Inverse Autoregressive Flow. Advances in Neural Information Processing Systems 29 (2016).
- [29] Seunghun Koh, Byung Hyung Kim, and Sungho Jo. 2024. Understanding the User Perception and Experience of Interactive Algorithmic Recourse Customization. ACM Trans. Comput.-Hum. Interact. 31, 3, Article 43 (Aug. 2024), 25 pages. doi:10.1145/3674503
- [30] Markus Langer and Isabel Valera. 2024. Leveraging Actionable Explanations to Improve People's Reactions to AI-Based Decisions. Springer Nature Switzerland, 293–306. doi:10.1007/978-3-031-73741-1_18
- [31] Benton Li, Nativ Levy, Brit Youngmann, Sainyam Galhotra, and Sudeepa Roy. 2025. Fair and Actionable Causal Prescription Ruleset. Proceedings of the ACM on Management of Data 3, 3 (2025), 1–28.
- [33] Ayan Majumdar and Isabel Valera. 2024. CARMA: A Practical Framework to Generate Recommendations for Causal Algorithmic Recourse at Scale. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. 1745–1762.
- [34] G20 Trade Ministers and Digital Economy Ministers. 2019. G20 Ministerial Statement on Trade and Digital Economy. https://oecd.ai/en/wonk/documents/g20-ai-principles
- [35] Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). ACM. doi:10.1145/3351095.3372850
- [36] Daniel Nemirovsky, Nicolas Thiebaut, Ye Xu, and Abhishek Gupta. 2022. CounteRGAN: Generating Counterfactuals for Real-time Recourse and Interpretability using Residual GANs. In Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence. 1488–1497.
- [37] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. 2021. Normalizing Flows for Probabilistic Modeling and Inference. Journal of Machine Learning Research 22, 57 (2021), 1–64.
- [39] George Papamakarios, Theo Pavlakou, and Iain Murray. 2017. Masked Autoregressive Flow for Density Estimation. Advances in Neural Information Processing Systems 30 (2017).
- [40] Martin Pawelczyk, Klaus Broelemann, and Gjergji Kasneci. 2020. Learning Model-Agnostic Counterfactual Explanations for Tabular Data. In Proceedings of The Web Conference 2020. ACM, 3126–3132.
- [41] Martin Pawelczyk, Teresa Datta, Johan Van den Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. [n. d.]. Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse. In The Eleventh International Conference on Learning Representations.
- [43] Judea Pearl. 2009. Causal Inference in Statistics: An Overview. Statistics Surveys 3 (2009), 96–146.
- [44] Judea Pearl. 2009. Causality. Cambridge University Press.
- [45] Nicholas Perello, Cyrus Cousins, Yair Zick, and Przemyslaw Grabowicz. 2025. Discrimination Induced by Algorithmic Recourse Objectives. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. 1653–1663.
- [46] Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, and Peter Flach. 2020. FACE: Feasible and Actionable Counterfactual Explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 344–350.
- [47] Kaivalya Rawal and Himabindu Lakkaraju. 2020. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. Advances in Neural Information Processing Systems 33 (2020), 12187–12198.
- [48] Chris Russell. 2019. Efficient Search for Diverse Coherent Explanations. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 20–28.
- [49] AI Safety Summit. 2023. Bletchley Declaration. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
- [50] Avital Shulner-Tal, Tsvi Kuflik, and Doron Kliger. 2022. Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users' perceptions of fairness toward an algorithmic system. Ethics and Information Technology 24, 1 (Jan. 2022). doi:10.1007/s10676-022-09623-4
- [51] Ronal Singh, Tim Miller, Henrietta Lyons, Liz Sonenberg, Eduardo Velloso, Frank Vetere, Piers Howe, and Paul Dourish. 2023. Directive Explanations for Actionable Explainability in Machine Learning Applications. ACM Transactions on Interactive Intelligent Systems 13, 4 (Dec. 2023), 1–26. doi:10.1145/3579363
- [52] Tomu Tominaga, Naomi Yamashita, and Takeshi Kurashima. 2025. The Role of Initial Acceptance Attitudes Toward AI Decisions in Algorithmic Recourse. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 260, 20 pages. doi:10.1145/3706598.3713573
- [53] UNESCO. 2022. Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
- [54] Sohini Upadhyay, Shalmali Joshi, and Himabindu Lakkaraju. 2021. Towards Robust and Reliable Algorithmic Recourse. In Advances in Neural Information Processing Systems, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan (Eds.), Vol. 34. Curran Associates, Inc., 16926–16937. https://proceedings.neurips.cc/paper_files/paper/2021/file...
- [55] Sohini Upadhyay, Himabindu Lakkaraju, and Krzysztof Z. Gajos. 2025. Counterfactual Explanations May Not Be the Best Algorithmic Recourse Approach. In Proceedings of the 30th International Conference on Intelligent User Interfaces. 446–462.
- [56] Berk Ustun, Alexander Spangher, and Yang Liu. 2019. Actionable Recourse in Linear Classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 10–19.
- [57] Richard Uth, Nelli Niemitz, Isabel Valera, and Markus Langer. 2025. Personalizing Explanations in AI-based Decisions: The Effects of Personalization and (Mis)aligning with Individual Preferences. Computers in Human Behavior (2025).
- [58] Suresh Venkatasubramanian and Mark Alfano. 2020. The philosophical basis of algorithmic recourse. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). ACM, 284–293. doi:10.1145/3351095.3372876
- [60] Sahil Verma, Keegan Hines, and John P. Dickerson. 2022. Amortized Generation of Sequential Algorithmic Recourses for Black-Box Models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 8512–8519.
- [61] Julius von Kügelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, and Bernhard Schölkopf. 2022. On the Fairness of Causal Algorithmic Recourse. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 9584–9594.
- [63] Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.
- [64] Yongjie Wang, Qinxu Ding, Ke Wang, Yue Liu, Xingyu Wu, Jinglong Wang, Yong Liu, and Chunyan Miao. 2021. The Skyline of Counterfactual Explanations for Machine Learning Decision Models. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM '21). ACM, 2030–2039. doi:10.1145/3459637.3482397
- [66] Yongjie Wang, Hangwei Qian, Yongjie Liu, Wei Guo, and Chunyan Miao. 2023. Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23). ACM, 2596–2605. doi:10.1145/3583780.3614885
- [67] Zijie J. Wang, Jennifer Wortman Vaughan, Rich Caruana, and Duen Horng Chau. 2023. GAM Coach: Towards Interactive and User-centered Algorithmic Recourse. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). ACM, 1–20. doi:10.1145/3544548.3580816
- [69] Jayanth Yetukuri, Ian Hardy, and Yang Liu. 2023. Towards User Guided Actionable Recourse. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). ACM, 742–751. doi:10.1145/3600211.3604708
Recovered appendix fragment: structural equations of the semi-synthetic SCM.
f_Ge: Ge = U_Ge, with U_Ge ~ Bernoulli(0.5)
f_Ag: Ag = -35 + U_Ag, with U_Ag ~ Gamma(10, 3.5)
f_Ed: Ed = -0.5 + σ(-1 + 0.5·Ge + σ(0.1·Ag) + U_Ed), with U_Ed ~ N(0, 0.25)
f_LA: LA = 0.01·(Ag - 5)·(Ag - 5) + (1 - Ge) + U_LA, with U_LA ~ N(0, 4)
f_Dur: Dur = -1 + 0.1·Ag + 2·(1 - Ge) + LA + U_Dur, with U_Dur ~ N(0, 9)
f_Inc: Inc = -4 + 0.1·(Ag + 35) + 2·Ge + Ge·Ed + U_Inc, with U_Inc ~ N(0, 4)
f_Sav: Sav = -4 + 1.5·1{Inc > 0}·Inc + U_Sav, with U_Sav ~ N(0, 0.25)
f_Y: Y = 1{y ≥ 0}, where y = σ(0.3·(-LA - Dur + Inc + Sav + α·Inc·Sav)), with α = 1 if Inc > 0 and Sav > 0, and α = -1 otherwise.
Here σ denotes the sigmoid function, N(a, b) the Gaussian distribution with mean a and variance b, Gamma(a, b) the Gamma distribution with shape a and scale b, and Bernoulli(a) the Bernoulli distribution in which a is the probability that the variable takes the value 1.
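Read literally, the recovered structural equations can be sampled as below (a sketch; the feature-name comments are my reading of the abbreviations, and the label threshold follows the fragment as transcribed, which may not match the original appendix).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_scm(n, seed=0):
    """Draw n individuals from the structural equations transcribed above.
    Gamma(a, b): shape a, scale b; N(0, v): mean 0, variance v."""
    rng = np.random.default_rng(seed)
    Ge = rng.binomial(1, 0.5, n)                                        # Ge (gender?)
    Ag = -35 + rng.gamma(10, 3.5, n)                                    # Ag (age)
    Ed = -0.5 + sigmoid(-1 + 0.5 * Ge + sigmoid(0.1 * Ag)
                        + rng.normal(0, 0.5, n))                        # Ed (education)
    LA = 0.01 * (Ag - 5) ** 2 + (1 - Ge) + rng.normal(0, 2, n)          # LA (loan amount)
    Dur = -1 + 0.1 * Ag + 2 * (1 - Ge) + LA + rng.normal(0, 3, n)       # Dur (duration)
    Inc = -4 + 0.1 * (Ag + 35) + 2 * Ge + Ge * Ed + rng.normal(0, 2, n) # Inc (income)
    Sav = -4 + 1.5 * (Inc > 0) * Inc + rng.normal(0, 0.5, n)            # Sav (savings)
    alpha = np.where((Inc > 0) & (Sav > 0), 1.0, -1.0)
    y = sigmoid(0.3 * (-LA - Dur + Inc + Sav + alpha * Inc * Sav))
    Y = (y >= 0).astype(int)  # transcribed literally; sigmoid > 0, so the original
                              # appendix likely uses a different threshold
    return np.column_stack([Ge, Ag, Ed, LA, Dur, Inc, Sav, Y])

print(sample_scm(5))
```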