pith. machine review for the scientific record.

arxiv: 1811.05577 · v2 · submitted 2018-11-14 · 💻 cs.LG · cs.AI · cs.CY

Recognition: unknown

Aequitas: A Bias and Fairness Audit Toolkit

Authors on Pith no claims yet
classification 💻 cs.LG · cs.AI · cs.CY
keywords bias · fairness · aequitas · recent · systems · audit · deploying · developing
Original abstract

Recent work has raised concerns about the risk of unintended bias in deployed AI systems, which can affect individuals unfairly based on race, gender, or religion, among other characteristics. While many bias metrics and fairness definitions have been proposed in recent years, there is no consensus on which metric or definition should be used, and very few resources are available to operationalize them. Therefore, despite growing awareness, auditing for bias and fairness when developing and deploying AI systems is not yet standard practice. We present Aequitas, an open-source bias and fairness audit toolkit that is an intuitive, easy-to-use addition to the machine learning workflow, enabling users to seamlessly test models against several bias and fairness metrics across multiple population sub-groups. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision-making systems for data scientists, machine learning researchers, and policymakers alike.
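To make the kind of audit the abstract describes concrete, here is a minimal sketch, in plain Python, of one metric family such a toolkit computes: the false positive rate per population sub-group and each group's disparity relative to a reference group. This is illustrative only and does not use the Aequitas API; the function names, the toy data, and the choice of metric are assumptions for the example.

```python
# Illustrative sketch (NOT the Aequitas API): group-wise false positive
# rates and their disparity versus a chosen reference group.

from collections import defaultdict

def group_fpr(rows):
    """False positive rate per group: FP / (FP + TN), computed over
    the rows whose true label is 0 (the actual negatives)."""
    fp = defaultdict(int)  # predicted 1, label 0
    tn = defaultdict(int)  # predicted 0, label 0
    for group, score, label in rows:
        if label == 0:
            if score == 1:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

def fpr_disparity(rows, ref_group):
    """Each group's FPR divided by the reference group's FPR;
    a value far from 1.0 flags a potential fairness problem."""
    fpr = group_fpr(rows)
    return {g: fpr[g] / fpr[ref_group] for g in fpr}

# Toy predictions as (group, predicted score, true label) triples.
rows = [
    ("A", 1, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 0, 1),
]
print(fpr_disparity(rows, ref_group="A"))
# Group A is wrongly flagged in 1 of 3 actual negatives, group B in
# 2 of 3, so B's disparity versus A is 2.0.
```

The same pattern extends to other confusion-matrix-based metrics (false discovery rate, false omission rate, and so on) by changing which cells are counted; auditing a model then amounts to computing these disparities for every protected attribute and comparing them against a tolerance threshold.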

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith reviews and papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Do Fair Models Reason Fairly? Counterfactual Explanation Consistency for Procedural Fairness in Credit Decisions

    cs.LG 2026-05 unverdicted novelty 7.0

    Outcome-fair credit models often exhibit hidden procedural bias through inconsistent reasoning across groups, which the CEC framework mitigates by enforcing consistent feature attributions via counterfactuals.

  2. Fairness of Explanations in Artificial Intelligence (AI): A Unifying Framework, Axioms, and Future Direction toward Responsible AI

    cs.AI 2026-05 unverdicted novelty 6.0

    A conditional invariance framework defines explanation fairness as explanations being statistically independent of protected attributes given task-relevant features, unifying existing metrics and enabling procedural b...

  3. MIFair: A Mutual-Information Framework for Intersectionality and Multiclass Fairness

    cs.LG 2026-04 unverdicted novelty 6.0

    MIFair defines fairness via mutual information independence between predictions and sensitive attributes, supplies a flexible metric template plus regularization-based mitigation, proves equivalences to standard notio...

  4. FairLogue: A Toolkit for Intersectional Fairness Analysis in Clinical Machine Learning Models

    cs.LG 2026-04 conditional novelty 5.0

    FairLogue provides modular tools to quantify intersectional fairness gaps in clinical ML using extended demographic parity, equalized odds, and counterfactual methods, shown on a glaucoma surgery prediction task from ...