Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference

NeurIPS 2020

Dec 06, 2020
We investigate the problem of reliably assessing group fairness when labeled examples are few but unlabeled examples are plentiful. We propose a general Bayesian framework that augments labeled data with unlabeled data to produce more accurate and lower-variance estimates than methods based on labeled data alone. Our approach estimates calibrated scores for unlabeled examples in each group using a hierarchical latent variable model conditioned on labeled examples. This in turn allows for inference of posterior distributions, with associated notions of uncertainty, for a variety of group fairness metrics. We demonstrate that our approach leads to significant and consistent reductions in estimation error across multiple well-known fairness datasets, sensitive attributes, and predictive models. The results demonstrate the value of using both unlabeled data and Bayesian inference when assessing whether a prediction model is fair.

Speakers: Disi Ji, Padhraic Smyth, Mark Steyvers
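To give a flavor of the idea, here is a minimal sketch of Bayesian fairness assessment from a small labeled sample. It is not the paper's hierarchical latent variable model (which also exploits unlabeled examples); it only illustrates the simpler building block of placing a Beta-Bernoulli posterior over each group's accuracy and deriving a posterior, with a credible interval, for the accuracy gap between groups. The group data below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled outcomes per group: 1 if the model's
# prediction was correct on that example, 0 otherwise.
labeled_group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
labeled_group_b = np.array([1, 0, 0, 1, 0, 1])

def posterior_samples(outcomes, n_samples=10_000, alpha=1.0, beta=1.0):
    """Draw samples from the Beta posterior over a group's accuracy,
    given Bernoulli outcomes and a Beta(alpha, beta) prior."""
    k, n = outcomes.sum(), len(outcomes)
    return rng.beta(alpha + k, beta + n - k, size=n_samples)

acc_a = posterior_samples(labeled_group_a)
acc_b = posterior_samples(labeled_group_b)

# Monte Carlo posterior over the between-group accuracy gap,
# summarized by its mean and a 95% credible interval.
gap = acc_a - acc_b
lo, hi = np.percentile(gap, [2.5, 97.5])
print(f"posterior mean gap: {gap.mean():.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

With only a handful of labeled examples per group, the credible interval is wide, which is exactly the kind of uncertainty the paper's framework quantifies and then shrinks by additionally leveraging plentiful unlabeled data.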
