Low Overhead Instruction Latency Characterization for NVIDIA GPGPUs
by
Yehia Arafa, Abdel-Hameed Badawy, Gopinath Chennupati, Nandakishore Santhi, Stephan Eidenbenz
2019
Abstract
The last decade has seen a shift in the computer systems industry where
heterogeneous computing has become prevalent. Graphics Processing Units (GPUs)
are now present in everything from supercomputers to mobile phones and tablets.
GPUs are used for graphics operations as well as general-purpose computing
(GPGPU) to boost the performance of compute-intensive applications. However,
many of their micro-architectural characteristics remain undisclosed beyond
what vendors document. In this paper, we introduce a very low overhead,
portable analysis for exposing the latency of each instruction executing in
the GPU pipeline(s) and the access overhead of the various memory hierarchies
found in GPUs at the micro-architecture level. Furthermore, we show the impact
of the various optimizations the CUDA compiler can perform on these latencies.
We perform our evaluation on seven different high-end NVIDIA GPUs from five
different generations/architectures: Kepler, Maxwell, Pascal, Volta, and
Turing. The results in this paper give architects an accurate characterization
of the latencies of these GPUs, which helps in modeling the hardware
accurately. Software developers can also use them to perform informed
optimizations of their applications.
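The abstract describes exposing per-instruction pipeline latencies with very low overhead. A common way such GPU microbenchmarks are built (a minimal sketch under assumptions, not the paper's actual harness; the kernel and variable names here are illustrative) is to time a long chain of dependent instructions with the on-chip cycle counter, so the per-instruction latency falls out as the average:

```cuda
// Sketch: measure approximate FADD latency by timing a dependent chain of
// adds in a single thread, using the device cycle counter (clock64()).
// Requires an NVIDIA GPU; compile with: nvcc fadd_latency.cu
#include <cstdio>

__global__ void fadd_latency(float *out, long long *cycles) {
    float a = out[0];                 // load operand so the compiler cannot fold the chain
    long long start = clock64();      // cycle counter before the chain
    #pragma unroll
    for (int i = 0; i < 256; ++i)     // 256 dependent FADDs: each waits on the previous
        a = a + 1.0f;
    long long stop = clock64();       // cycle counter after the chain
    out[0] = a;                       // keep the result live so the adds are not eliminated
    *cycles = (stop - start) / 256;   // average cycles per FADD; the clock64()
                                      // overhead is amortized over the chain
}

int main() {
    float *out; long long *cycles;
    cudaMalloc(&out, sizeof(float));
    cudaMalloc(&cycles, sizeof(long long));
    cudaMemset(out, 0, sizeof(float));
    fadd_latency<<<1, 1>>>(out, cycles);   // one thread, one block: no contention
    long long c;
    cudaMemcpy(&c, cycles, sizeof(c), cudaMemcpyDeviceToHost);
    printf("approx. FADD latency: %lld cycles\n", c);
    cudaFree(out);
    cudaFree(cycles);
    return 0;
}
```

The data dependence between iterations forces the pipeline to serialize the adds, so the elapsed cycles divided by the chain length approximates the issue-to-result latency of one instruction; the same skeleton, with the arithmetic replaced by pointer-chasing loads, is how memory-hierarchy access overheads are typically probed.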
Archived Files and Locations
application/pdf 512.5 kB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1905.08778v1