Timing Cache Accesses to Eliminate Side Channels in Shared Software
by
Divya Ojha, Sandhya Dwarkadas
2021
Abstract
Timing side channels have been used to extract cryptographic keys and
sensitive documents, even from trusted enclaves. In this paper, we focus on
cache side channels created by access to shared code or data in the memory
hierarchy. This vulnerability is exploited by several known attacks, e.g.,
Evict+Reload for recovering an RSA key and Spectre variants for leaking data
accessed speculatively. The key insight in this paper is the importance of the
first access to the shared data after a victim brings the data into the cache.
To eliminate the timing side channel, we ensure that the first access by a
process to any cache line loaded by another process results in a miss. We
accomplish this goal by using a combination of timestamps and a novel hardware
design to allow efficient parallel comparisons of the timestamps. The solution
works at all the cache levels and defends against an attacker process running
on another core, same core, or another hyperthread. Our design retains the
benefits of a shared cache: allowing processes to utilize the entire cache for
their execution and retaining a single copy of shared code and data (data
deduplication). Our implementation in the gem5 simulator demonstrates that the
system is able to defend against RSA key extraction. We evaluate performance
using SPEC CPU2006 and observe a 2.17% overhead due to the first-access delay;
the security-context bookkeeping adds overhead on the order of 0.3%.
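The first-access rule the abstract describes can be illustrated with a small functional simulation. This is a sketch, not the paper's design: the actual mechanism uses per-line timestamps compared in parallel by novel hardware, whereas the sketch below uses a per-line set of security contexts as a stand-in for that bookkeeping. The class and method names (`CacheLine`, `TimestampedCache`, `access`) are hypothetical.

```python
# Hypothetical simulation of the first-access rule: a process's first access
# to a cache line loaded by another process is forced to behave as a miss,
# so its timing reveals nothing about the other process's accesses.
# (The real design tracks this with timestamps and hardware comparators;
# the per-line set here is a functional stand-in for that bookkeeping.)

class CacheLine:
    def __init__(self, loader):
        self.loader = loader          # security context that loaded the line
        self.accessed_by = {loader}   # contexts that have touched it since load

class TimestampedCache:
    def __init__(self):
        self.lines = {}               # address -> CacheLine

    def access(self, ctx, addr):
        """Return 'miss' or 'hit' for an access by security context ctx."""
        line = self.lines.get(addr)
        if line is None:
            self.lines[addr] = CacheLine(ctx)
            return "miss"             # ordinary cold miss
        if ctx not in line.accessed_by:
            line.accessed_by.add(ctx)
            return "miss"             # forced first-access miss
        return "hit"                  # subsequent accesses hit as usual
```

In an Evict+Reload scenario, the victim loads a shared line and the attacker's reload probe is still served as a miss, so the probe's latency no longer indicates whether the victim touched the line; the single shared copy of the data is retained throughout.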
arXiv:2009.14732v2