Aequitas: A Bias and Fairness Audit Toolkit
Abstract
Recent work has raised concerns about the risk of unintended bias in deployed AI systems that can affect individuals unfairly based on race, gender, or religion, among other characteristics. While many bias metrics and fairness definitions have been proposed in recent years, there is no consensus on which metric or definition should be used, and there are very few available resources to operationalize them. Consequently, despite growing awareness, auditing for bias and fairness when developing and deploying AI systems is not yet standard practice. We present Aequitas, an open-source bias and fairness audit toolkit that is an intuitive and easy-to-use addition to the machine learning workflow, enabling users to seamlessly test models against several bias and fairness metrics across multiple population sub-groups. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision-making systems for data scientists, machine learning researchers, and policymakers alike.
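To make the audit workflow the abstract describes concrete, here is a minimal sketch using the Group/Bias/Fairness classes of the open-source dssg/aequitas package. The toy dataframe, the `race` attribute, and the choice of `white` as reference group are illustrative assumptions; only the `score`/`label_value` input columns and the three-step API follow the toolkit's documented usage, and exact signatures may vary across versions.

```python
# Hedged sketch of an Aequitas audit; example data and reference
# groups are made up for illustration.
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Aequitas expects binary model scores, true labels, and one column
# per protected attribute (here: a toy 'race' attribute).
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 1, 0, 0, 1, 1, 0],
    "race":        ["white", "white", "black", "black",
                    "white", "black", "white", "black"],
})

# Step 1: cross-tabulate confusion-matrix counts and group metrics
# (FPR, FNR, predicted prevalence, ...) for each sub-group.
g = Group()
xtab, _ = g.get_crosstabs(df)

# Step 2: compute each group's metric disparities relative to a
# user-chosen reference group.
b = Bias()
bdf = b.get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "white"}, alpha=0.05
)

# Step 3: apply fairness thresholds (e.g. the 80% rule) to flag
# which groups pass or fail each fairness criterion.
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
print(fdf[["attribute_name", "attribute_value", "fpr_disparity"]])
```

The three-step separation mirrors how the toolkit is meant to slot into an existing pipeline: group metrics, then disparities against a reference group, then pass/fail fairness determinations that a data scientist or policymaker can read directly.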
Forward citations
Cited by 4 Pith papers
- Do Fair Models Reason Fairly? Counterfactual Explanation Consistency for Procedural Fairness in Credit Decisions
  Outcome-fair credit models often exhibit hidden procedural bias through inconsistent reasoning across groups, which the CEC framework mitigates by enforcing consistent feature attributions via counterfactuals.
- Fairness of Explanations in Artificial Intelligence (AI): A Unifying Framework, Axioms, and Future Direction toward Responsible AI
  A conditional invariance framework defines explanation fairness as explanations being statistically independent of protected attributes given task-relevant features, unifying existing metrics and enabling procedural b...
- MIFair: A Mutual-Information Framework for Intersectionality and Multiclass Fairness
  MIFair defines fairness via mutual information independence between predictions and sensitive attributes, supplies a flexible metric template plus regularization-based mitigation, proves equivalences to standard notio...
- FairLogue: A Toolkit for Intersectional Fairness Analysis in Clinical Machine Learning Models
  FairLogue provides modular tools to quantify intersectional fairness gaps in clinical ML using extended demographic parity, equalized odds, and counterfactual methods, shown on a glaucoma surgery prediction task from ...