pith. machine review for the scientific record.

arxiv: 1610.02413 · v1 · submitted 2016-10-07 · 💻 cs.LG

Recognition: unknown

Equality of Opportunity in Supervised Learning

Authors on Pith: no claims yet
classification 💻 cs.LG
keywords: oblivious, predictor, target, attribute, available, classification, discrimination, learning
read the original abstract

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
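Because the criterion is oblivious, it can be checked from the joint statistics of predictor, target, and protected attribute alone. A minimal sketch of such a check, for a binary predictor and the equal-true-positive-rate reading of the criterion (function names and toy data are illustrative, not from the paper):

```python
# Equality of opportunity asks that the true positive rate (TPR) of the
# predictor be equal across protected groups. This sketch computes the
# gap between group-wise TPRs; a gap of 0 means the criterion holds.

def true_positive_rate(y_true, y_pred, group, g):
    """TPR of the predictor restricted to qualified (y=1) members of group g."""
    qualified = [p for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1]
    if not qualified:
        return 0.0
    return sum(qualified) / len(qualified)

def opportunity_gap(y_true, y_pred, group):
    """Largest absolute TPR difference between any two groups."""
    groups = sorted(set(group))
    rates = [true_positive_rate(y_true, y_pred, group, g) for g in groups]
    return max(rates) - min(rates)

# Toy joint sample: target, prediction, protected attribute.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(opportunity_gap(y_true, y_pred, group))  # 2/3 - 1/2 = 0.1666...
```

The paper's post-processing construction goes further: it derives group-specific (possibly randomized) thresholds on an existing predictor that drive this gap to zero while losing as little accuracy as possible.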

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Fairness vs Performance: Characterizing the Pareto Frontier of Algorithmic Decision Systems

    cs.LG 2026-05 unverdicted novelty 6.0

    The Pareto frontier of fair algorithmic decisions consists of deterministic group-specific threshold rules on predicted success probabilities, which can include upper bounds for some fairness metrics and holds indepen...

  2. Revisiting Fairness Impossibility with Endogenous Behavior

    cs.GT 2026-04 unverdicted novelty 6.0

    Error-rate balance and predictive parity become compatible under endogenous behavior by adjusting stakes differently across groups, introducing a new form of unequal treatment in consequences.

  3. Ethical and social risks of harm from Language Models

    cs.CL 2021-12 accept novelty 6.0

    The authors provide a detailed taxonomy of 21 risks associated with language models, covering discrimination, information leaks, misinformation, malicious applications, interaction harms, and societal impacts like job...

  4. FAIR_XAI: Improving Multimodal Foundation Model Fairness via Explainability for Wellbeing Assessment

    cs.AI 2026-04 unverdicted novelty 4.0

    Vision-language models for wellbeing assessment exhibit dataset-dependent performance and demographic biases, with explainability interventions providing inconsistent fairness gains at potential accuracy costs.

  5. Man and machine: artificial intelligence and judicial decision making

    cs.AI 2026-03 unverdicted novelty 2.0

    A synthetic review across multiple fields concludes that AI decision aids have modest or nonexistent effects on judicial outcomes while identifying gaps in understanding human-AI interactions.