pith. machine review for the scientific record.

arxiv: 1610.08077 · v1 · submitted 2016-10-25 · 📊 stat.ML · cs.LG

Recognition: unknown

A statistical framework for fair predictive algorithms

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: data · models · predictions · predictive · bias · method · algorithms · computer
0 comments
read the original abstract

Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing human judgment with computer models in high-stakes settings -- such as sentencing, hiring, policing, college admissions, and parole decisions -- is the perceived "neutrality" of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it, since training data were inevitably generated by a process that is itself biased. In this paper, we provide a probabilistic definition of algorithmic bias. We propose a method to remove bias from predictive models by removing all information regarding protected variables from the permitted training data. Unlike previous work in this area, our framework is general enough to accommodate arbitrary data types, e.g. binary, continuous, etc. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and paroling, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce "race-neutral" predictions of re-arrest. In the process, we demonstrate that the most common approach to creating "race-neutral" models -- omitting race as a covariate -- still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.
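The abstract's central point -- that omitting a protected variable does not make predictions neutral when other covariates carry information about it -- can be illustrated with a small simulation. The sketch below is not the paper's method (the authors propose a more general framework for arbitrary data types); it uses a simple linear special case, removing group information from a proxy covariate by group-mean centering. All variable names and distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical binary protected variable (group indicator).
a = rng.binomial(1, 0.5, n)

# A covariate correlated with the protected variable (a "proxy").
x = 1.5 * a + rng.normal(0.0, 1.0, n)

# Outcome depends only on x, but x itself encodes group membership.
y = 2.0 * x + rng.normal(0.0, 1.0, n)

# "Neutral by omission": fit y ~ x without a.
# Predictions still differ by group, because x is a proxy for a.
coef = np.polyfit(x, y, 1)
pred_omit = np.polyval(coef, x)
gap_omit = pred_omit[a == 1].mean() - pred_omit[a == 0].mean()

# Information removal (linear special case): residualize x on a by
# subtracting each group's mean of x, then refit on the adjusted covariate.
x_adj = x - np.where(a == 1, x[a == 1].mean(), x[a == 0].mean())
coef_adj = np.polyfit(x_adj, y, 1)
pred_adj = np.polyval(coef_adj, x_adj)
gap_adj = pred_adj[a == 1].mean() - pred_adj[a == 0].mean()

print(f"group gap, omission only:    {gap_omit:.2f}")  # large (~3.0 here)
print(f"group gap, after adjustment: {gap_adj:.2f}")   # near zero
```

With the simulated effect sizes above, omission alone leaves a group gap of roughly the proxy's group difference times the fitted slope, while the adjusted covariate yields predictions with essentially no gap -- mirroring the abstract's comparison, under these toy assumptions.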

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Causal Discovery via Statistical Power (CDSP)

    stat.ME · 2026-05 · unverdicted · novelty 6.0

    CDSP uses an effect-size asymmetry assumption and statistical power to estimate causal directions from bivariate data with uncertainty, reducing false discoveries by 18% on 100 benchmark pairs.