Privacy Enhancement for Cloud-Based Few-Shot Learning

by Archit Parnami, Muhammad Usama, Liyue Fan, Minwoo Lee

Released as an article.

2022  

Abstract

Requiring less data for accurate models, few-shot learning has shown robustness and generality in many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns, e.g., attacks or adversaries that may breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud, by establishing a novel privacy-preserved embedding space that preserves the privacy of data and maintains the accuracy of the model. We examine the impact of various image privacy methods such as blurring, pixelization, Gaussian noise, and differentially private pixelization (DP-Pix) on few-shot image classification, and propose a method that learns privacy-preserved representations through a joint loss. The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
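To illustrate one of the image privacy methods named in the abstract, the sketch below is a minimal, hypothetical implementation of differentially private pixelization (DP-Pix): average each b x b block, then add Laplace noise calibrated to the per-block sensitivity. This is not the authors' code; the block size, epsilon, and sensitivity parameter m are illustrative assumptions.

```python
import numpy as np

def dp_pixelize(image: np.ndarray, b: int = 4, epsilon: float = 1.0, m: int = 1) -> np.ndarray:
    """Sketch of DP-Pix: pixelize `image` (grayscale, values in [0, 255]) into
    b x b blocks, then add Laplace noise with scale 255*m / (b^2 * epsilon),
    where m is the number of pixels an individual may influence (assumed here)."""
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    scale = (255.0 * m) / (b * b * epsilon)  # Laplace scale for epsilon-DP
    for i in range(0, h, b):
        for j in range(0, w, b):
            block = image[i:i + b, j:j + b]
            avg = block.mean()                           # pixelization step
            out[i:i + b, j:j + b] = avg + np.random.laplace(0.0, scale)  # DP noise
    return np.clip(out, 0, 255)

# Example: privatize a random 28 x 28 "image" before sending it to an untrusted cloud model.
img = np.random.randint(0, 256, size=(28, 28)).astype(np.float64)
private_img = dp_pixelize(img, b=4, epsilon=2.0)
```

Larger blocks or smaller epsilon give stronger privacy but blur more detail, which is the privacy-performance trade-off the paper studies for few-shot classification.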

Archived Files and Locations

application/pdf  13.4 MB
file_l5e45m25ijh3zpwz3ulb2rrpni
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-05-10
Version   v1
Language   en
arXiv  2205.07864v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: fdf24ff9-f4cd-448c-928b-f70f7271b459
API URL: JSON