Differentially Private Bayesian Learning on Distributed Data

by Mikko Heikkilä and Eemil Lagerspetz and Samuel Kaski and Kana Shimizu and Sasu Tarkoma and Antti Honkela

Released as an article.

2017  

Abstract

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee the privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. Standard DP algorithms require a single trusted party to have access to the entire data set, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party holds only a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from the data holders, combined with the Gaussian mechanism for DP. Our method builds on asymptotically optimal and practically efficient DP Bayesian inference, adding only a rapidly diminishing extra cost.
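The strategy described in the abstract can be illustrated with a minimal sketch: each party clips its summary to bound sensitivity, the parties compute an exact sum under pairwise additive masks that cancel in the aggregate, and Gaussian noise calibrated to (ε, δ)-DP is added to the revealed sum. All function names, the masking scheme, and the parameter values below are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative sketch (not the authors' implementation) of DP aggregation
# via a secure multi-party sum plus the Gaussian mechanism.
import numpy as np

rng = np.random.default_rng(0)

def secure_sum(contributions):
    """Sum per-party summaries under pairwise additive masks that cancel
    in the aggregate, so no single party's raw summary is revealed."""
    n = len(contributions)
    masked = [np.asarray(c, dtype=float).copy() for c in contributions]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=masked[0].shape)
            masked[i] += mask   # party i adds the shared pairwise mask
            masked[j] -= mask   # party j subtracts it; masks cancel in the sum
    return sum(masked)

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Add Gaussian noise with the classic calibration
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(scale=sigma, size=np.shape(value))

# Each party holds one sample; summaries are norm-clipped so the sum
# has bounded sensitivity (2 * clip under replace-one neighbouring data sets).
clip = 1.0
data = [rng.normal(size=3) for _ in range(10)]
summaries = [x / max(1.0, np.linalg.norm(x) / clip) for x in data]

exact_sum = secure_sum(summaries)   # equals the plain sum; masks cancel
private_sum = gaussian_mechanism(exact_sum, 2 * clip, epsilon=1.0, delta=1e-5)
```

Adding the noise once, to the aggregate rather than to each party's contribution, is what keeps the extra cost of the distributed setting small relative to trusted-aggregator DP.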

Archived Files and Locations

application/pdf  512.0 kB
file_5wu44mqhkvcinhmp6rf5fi2pvu
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2017-05-29
Version   v2
Language   en
arXiv  1703.01106v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 96d0d688-c23d-4ff6-af25-41fb2f85f789