Eliminating Latent Discrimination: Train Then Mask

Authors

  • Soheil Ghili, Yale University
  • Ehsan Kazemi, Yale University
  • Amin Karbasi, Yale University

DOI:

https://doi.org/10.1609/aaai.v33i01.33013672

Abstract

How can we control for latent discrimination in predictive models? How can we provably remove it? Such questions are at the heart of algorithmic fairness and its impacts on society. In this paper, we define a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics. Our notion of fairness effectively controls for sensitive features and provides diagnostics for deviations from fair decision making. We then establish analytical and algorithmic results about the existence of a fair classifier in the context of supervised learning. Our results readily imply a simple, but rather counter-intuitive, strategy for eliminating latent discrimination. In order to prevent other features from serving as proxies for sensitive features, we need to include sensitive features in the training phase, but exclude them in the test/evaluation phase while controlling for their effects. We evaluate the performance of our algorithm on several real-world datasets and show how fairness for these datasets can be improved with a very small loss in accuracy.
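The following is a minimal sketch of the train-then-mask strategy described in the abstract: the sensitive feature is included during training (so that other features cannot silently proxy for it) and then held fixed at a single constant value for every individual at evaluation time. The synthetic data, column layout, and the choice of logistic regression are illustrative assumptions, not the paper's exact experimental setup.

```python
# Hedged sketch of "train then mask": train WITH the sensitive feature,
# then overwrite it with a constant at prediction time. All names and
# the data-generating process below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: column 0 is the sensitive feature (a binary group
# indicator); columns 1-3 are ordinary features correlated with it.
n = 1000
sensitive = rng.integers(0, 2, size=n)
other = rng.normal(size=(n, 3)) + 0.5 * sensitive[:, None]  # partial proxy
X = np.column_stack([sensitive, other])
y = (other[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train phase: fit on ALL features, sensitive one included, so the model
# attributes the sensitive feature's effect to that column rather than
# to its correlated proxies.
clf = LogisticRegression().fit(X, y)

# Mask phase: at evaluation, replace the sensitive column with a fixed
# constant so predictions cannot vary with group membership.
def predict_masked(model, X, sensitive_col=0, mask_value=0):
    X_masked = X.copy()
    X_masked[:, sensitive_col] = mask_value
    return model.predict(X_masked)

y_hat = predict_masked(clf, X)
```

Masking at test time rather than dropping the column before training is the counter-intuitive part: a model trained without the sensitive feature would simply reconstruct it from correlated features, whereas here its effect is isolated during training and then neutralized by the constant.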

Published

2019-07-17

How to Cite

Ghili, S., Kazemi, E., & Karbasi, A. (2019). Eliminating Latent Discrimination: Train Then Mask. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3672-3680. https://doi.org/10.1609/aaai.v33i01.33013672

Section

AAAI Technical Track: Machine Learning