Analysis and Mitigations of Reverse Engineering Attacks on Local Feature Descriptors
by
Deeksha Dangwal, Vincent T. Lee, Hyo Jin Kim, Tianwei Shen, Meghan Cowan, Rajvi Shah, Caroline Trippel, Brandon Reagen, Timothy Sherwood, Vasileios Balntas, Armin Alaghi, Eddy Ilg
2021
Abstract
As autonomous driving and augmented reality evolve, data privacy becomes a
practical concern. In particular, these applications rely on localization
based on user images. The widely adopted technology uses local feature
descriptors, which are derived from the images, and it was long thought that
the descriptors could not be inverted to recover the original images. However,
recent work has demonstrated that, under certain conditions, reverse
engineering attacks are possible and allow an adversary to reconstruct RGB
images. This poses a potential risk to user privacy. We take this a step
further and model potential adversaries using a privacy threat model.
Subsequently, we show under controlled conditions a reverse engineering attack
on sparse feature maps and analyze the vulnerability of popular descriptors,
including FREAK, SIFT, and SOSNet. Finally, we evaluate potential mitigation
techniques that select a subset of descriptors to carefully balance privacy
reconstruction risk against image matching accuracy; our results show that
similar accuracy can be obtained while revealing less information.
Archived Files and Locations
arXiv preprint 2105.03812v1, available via arxiv.org and web.archive.org (application/pdf, 8.6 MB)