Flexible Performant GEMM Kernels on GPUs

by Thomas Faingnaert, Tim Besard, Bjorn De Sutter

Released as an article.

2020  

Abstract

General Matrix Multiplication (GEMM) kernels take center stage in high performance computing and machine learning. Recent NVIDIA GPUs include GEMM accelerators, such as NVIDIA's Tensor Cores. Their exploitation is hampered by the two-language problem: it requires either low-level programming, which implies low programmer productivity, or using libraries that only offer a limited set of components. Because rephrasing algorithms in terms of established components often introduces overhead, the libraries' lack of flexibility limits the freedom to explore new algorithms. Researchers using GEMMs hence cannot enjoy programming productivity, high performance, and research flexibility at once. In this paper we solve this problem. We present three sets of abstractions and interfaces to program GEMMs within the scientific Julia programming language. The interfaces and abstractions are co-designed for researchers' needs and Julia's features to achieve sufficient separation of concerns and the flexibility to easily extend basic GEMMs in many different ways without paying a performance price. Comparing our GEMMs to the state-of-the-art libraries cuBLAS and CUTLASS, we demonstrate that our performance is mostly on par with, and in some cases even exceeds, that of the libraries, without having to write a single line of code in CUDA C++ or assembly, and without facing flexibility limitations.
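To make the claim concrete, below is a minimal sketch of the lowest of these abstraction levels, assuming the WMMA wrappers documented in the Julia CUDA.jl package, which this line of work builds on: a single warp computes one 16x16x16 mixed-precision tile D = A*B + C on a Tensor Core, entirely in Julia. It is an illustrative fragment, not the paper's full tiled GEMM kernel.

    using CUDA

    # Host-side tiles: Tensor Cores multiply Float16 inputs into a Float32 accumulator.
    a = CuArray(rand(Float16, 16, 16))
    b = CuArray(rand(Float16, 16, 16))
    c = CuArray(rand(Float32, 16, 16))
    d = similar(c)

    # One warp cooperatively loads fragments, issues the matrix multiply-accumulate,
    # and stores the result; WMMA.Config fixes the 16x16x16 shape and accumulator type.
    function wmma_tile_kernel(a, b, c, d)
        conf = WMMA.Config{16, 16, 16, Float32}
        a_frag = WMMA.load_a(pointer(a), 16, WMMA.ColMajor, conf)
        b_frag = WMMA.load_b(pointer(b), 16, WMMA.ColMajor, conf)
        c_frag = WMMA.load_c(pointer(c), 16, WMMA.ColMajor, conf)
        d_frag = WMMA.mma(a_frag, b_frag, c_frag, conf)
        WMMA.store_d(pointer(d), d_frag, 16, WMMA.ColMajor, conf)
        return
    end

    @cuda threads=32 wmma_tile_kernel(a, b, c, d)   # 32 threads = one warp

The paper's higher abstraction levels generalize this single tile to full GEMMs, with layouts, data types, and epilogues supplied as ordinary Julia functions.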

Archived Files and Locations

application/pdf  7.1 MB
file_7q5qcrefvfdffi5v7s3nd3c7gm
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-09-28
Version   v2
Language   en
arXiv  2009.12263v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 5638ad71-955c-40b3-a6d6-3e316f80f882