Censored and Fair Universal Representations using Generative Adversarial Models

by Peter Kairouz, Jiachun Liao, Chong Huang, and Lalitha Sankar

Released as an article.

2020  

Abstract

We present a data-driven framework for learning censored and fair universal representations (CFUR) that ensure statistical fairness guarantees for all downstream learning tasks, including those not known a priori. Our framework leverages recent advancements in adversarial learning to allow a data holder to learn censored and fair representations that decouple a set of sensitive attributes from the rest of the dataset. The resulting problem of finding the optimal randomizing mechanism with specific fairness/censoring guarantees is formulated as a constrained minimax game between an encoder and an adversary, where the constraint ensures a measure of usefulness (utility) of the representation. We show that for appropriately chosen adversarial loss functions, our framework enables defining demographic parity for fair representations and also clarifies the optimal adversarial strategy against strong information-theoretic adversaries. We evaluate the performance of our proposed framework on multi-dimensional Gaussian mixture models and publicly available datasets including UCI Census, GENKI, Human Activity Recognition (HAR), and UTKFace. Our experimental results show that multiple sensitive features can be effectively censored while ensuring accuracy for several a priori unknown downstream tasks. Finally, our results also make precise the tradeoff between censoring and fidelity for the representation as well as the fairness-utility tradeoffs for downstream tasks.
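The constrained minimax game described in the abstract (an encoder censoring sensitive attributes against an adversary, subject to a utility constraint) can be illustrated with a short training sketch. The following is a minimal sketch, assuming a PyTorch implementation in which the utility constraint is relaxed to a distortion penalty with weight lam; the module names, architectures, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the encoder-vs-adversary minimax game, with the utility
# constraint relaxed to a distortion penalty (weight `lam`). All architectures
# and names here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Randomizing mechanism: maps data X (plus noise) to a representation Z."""
    def __init__(self, x_dim, z_dim, noise_dim=8):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(x_dim + noise_dim, 64), nn.ReLU(),
                                 nn.Linear(64, z_dim))

    def forward(self, x):
        eps = torch.randn(x.size(0), self.noise_dim, device=x.device)
        return self.net(torch.cat([x, eps], dim=1))

class Adversary(nn.Module):
    """Tries to infer the sensitive attribute S from the representation Z."""
    def __init__(self, z_dim, s_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, s_classes))

    def forward(self, z):
        return self.net(z)

def train_step(x, s, enc, adv, opt_enc, opt_adv, lam=1.0):
    ce = nn.CrossEntropyLoss()
    # 1) Adversary step: improve its ability to recover S from Z.
    z = enc(x).detach()
    adv_loss = ce(adv(z), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) Encoder step: censor S (maximize the adversary's loss) while keeping
    #    Z close to X so downstream utility is preserved. This sketch assumes
    #    z_dim == x_dim so the distortion is a simple squared error.
    z = enc(x)
    distortion = ((z - x) ** 2).mean()
    enc_loss = -ce(adv(z), s) + lam * distortion
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
    return adv_loss.item(), enc_loss.item()
```

In this sketch the penalty weight lam plays the role of the utility constraint: larger values keep the representation closer to the data (higher fidelity), smaller values allow stronger censoring, mirroring the censoring-fidelity tradeoff discussed in the abstract.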

Archived Files and Locations

application/pdf  3.7 MB
file_n46hp2s6ejherietk4nbck6roe
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-03-24
Version   v5
Language   en
arXiv  1910.00411v5
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 1d275ffa-6edd-413d-b6cc-62d63f5ddd4f