Privacy-Preserving Deep Inference for Rich User Data on The Cloud
by
Seyed Ali Osia, Ali Shahin Shamsabadi, Ali Taheri, Kleomenis Katevas,
Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi
2017
Abstract
Deep neural networks are increasingly being used in a variety of machine
learning applications applied to rich user data on the cloud. However, this
approach introduces a number of privacy and efficiency challenges, as the cloud
operator can perform secondary inferences on the available data. Recently,
advances in edge processing have paved the way for more efficient, private
data processing at the source for simple tasks and lighter models, though
larger and more complicated models remain a challenge. In this paper, we
present a hybrid approach for breaking down large, complex deep models for
cooperative, privacy-preserving analytics. We do this by splitting popular
deep architectures and fine-tuning them in a particular way. We then
evaluate the privacy benefits of this approach based on the information
exposed to the cloud service. We also assess the local inference cost of
the different layers on a modern handset for mobile applications. Our
evaluations show that, by using certain kinds of fine-tuning and embedding
techniques and at a small processing cost, we can greatly reduce the level
of information available to unintended tasks applied to the data features
on the cloud, and hence achieve the desired tradeoff between privacy and
performance.
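The split the abstract describes can be sketched as follows: early layers of a deep model run on the device, and only the resulting intermediate feature is uploaded, so the cloud never sees the raw input. This is a minimal, hypothetical NumPy sketch; the layer sizes, the choice of split point, and the toy fully connected network are illustrative assumptions, not the paper's actual architecture or fine-tuning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A toy 4-layer fully connected "deep model" as a list of weight matrices
# (shapes are arbitrary assumptions for illustration).
layers = [rng.standard_normal((16, 12)),
          rng.standard_normal((12, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 4))]

SPLIT = 2  # hypothetical split point: first two layers stay on the device

def device_part(x):
    """Run the private, on-device layers and return the feature to upload."""
    for w in layers[:SPLIT]:
        x = relu(x @ w)
    return x

def cloud_part(feature):
    """Finish inference on the cloud from the uploaded feature alone."""
    x = feature
    for w in layers[SPLIT:]:
        x = relu(x @ w)
    return x

raw_input = rng.standard_normal(16)   # rich user data; never leaves the device
feature = device_part(raw_input)      # 8-dim embedding is all that is sent
prediction = cloud_part(feature)      # the cloud completes the inference

print(feature.shape, prediction.shape)
```

The privacy argument in the paper rests on choosing the split point and fine-tuning the on-device layers so that the uploaded feature supports the intended task while leaking little about unintended secondary inferences.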
Archived Files and Locations: arXiv:1710.01727v2, application/pdf, 1.3 MB (arxiv.org; web.archive.org)