Differentially Private Bayesian Learning on Distributed Data
by
Mikko Heikkilä, Eemil Lagerspetz, Samuel Kaski, Kana Shimizu, Sasu Tarkoma and Antti Honkela
2017
Abstract
Many applications of machine learning, for example in health care, would
benefit from methods that can guarantee privacy of data subjects. Differential
privacy (DP) has become established as a standard for protecting learning
results. The standard DP algorithms require a single trusted party to have
access to the entire data, which is a clear weakness. We consider DP Bayesian
learning in a distributed setting, where each party only holds a single sample
or a few samples of the data. We propose a learning strategy based on a secure
multi-party sum function for aggregating summaries from data holders and the
Gaussian mechanism for DP. Our method builds on an asymptotically optimal and
practically efficient DP Bayesian inference with rapidly diminishing extra
cost.
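The abstract describes combining a secure multi-party sum over data holders' summaries with the Gaussian mechanism for DP. The paper's actual protocol is not reproduced here, but the core idea can be sketched as follows, under two assumptions made for illustration: additive secret sharing stands in for the secure sum primitive, and each of the n parties adds Gaussian noise of variance sigma^2/n so that the aggregate carries total noise variance sigma^2 (function names such as `distributed_dp_sum` are hypothetical, not from the paper).

```python
import numpy as np


def additive_shares(value, n_parties, rng, scale=1e6):
    """Split a value into n_parties random additive shares (toy
    stand-in for a secure multi-party sum primitive)."""
    shares = rng.uniform(-scale, scale, size=n_parties - 1)
    # The last share is chosen so all shares sum back to the value.
    return np.append(shares, value - shares.sum())


def distributed_dp_sum(local_summaries, sigma, rng):
    """Toy sketch of the distributed scheme: each party perturbs its
    local summary with Gaussian noise of variance sigma^2 / n, so the
    aggregated sum carries total Gaussian noise of variance sigma^2,
    then secret-shares the noisy value; only the sum is revealed."""
    n = len(local_summaries)
    noisy = [x + rng.normal(0.0, sigma / np.sqrt(n))
             for x in local_summaries]
    # Each party splits its noisy summary into one share per party.
    all_shares = np.array([additive_shares(x, n, rng) for x in noisy])
    # Each party sums the shares it receives; the partial sums are
    # then combined publicly into the final DP-protected sum.
    partial_sums = all_shares.sum(axis=0)
    return partial_sums.sum()
```

A real deployment would use a cryptographic secure-sum protocol rather than plain floating-point shares, and the per-party noise level would be calibrated to the sensitivity of the summaries and the (epsilon, delta) budget of the Gaussian mechanism.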
Archived Files and Locations
application/pdf, 512.0 kB: arxiv.org (repository), web.archive.org (webarchive), arXiv:1703.01106v2