GPUs: Using HMM to blur the lines between CPU and GPU programming (45-minute session) | Breakout session
HMM (Heterogeneous Memory Management) is an upcoming Linux kernel patchset, authored by Red Hat's Jerome Glisse. The patchset enables graphics processing unit (GPU) programmers (CUDA programmers, for example) to write code that treats "a pointer as a pointer": the same pointer values can be used in both central processing unit (CPU) and GPU code. This significantly simplifies both writing new CUDA programs and porting older C/C++ (or even Fortran) programs to use GPU acceleration. In other words, malloc(3) can be called to allocate a buffer on the CPU, and that buffer's address can be passed to a CUDA kernel that runs on the GPU; HMM migrates the pages automatically. This session will cover the improved programming model, some bandwidth and tuning considerations, and possibly kernel implementation details (upon request, if time allows).
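The malloc(3)-to-kernel flow described in the abstract might look like the following sketch. This is a hypothetical illustration only, assuming an HMM-enabled kernel and driver stack; the `scale` kernel and its launch parameters are invented for this example and are not part of the patchset itself:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// A trivial example kernel: multiply each element by a factor.
__global__ void scale(int *data, int n, int factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;

    // Ordinary CPU-side allocation: no cudaMalloc or cudaMemcpy needed
    // on a system where HMM handles the migration.
    int *data = (int *)malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) data[i] = i;

    // The same pointer value is passed straight to the GPU kernel;
    // HMM migrates the backing pages on demand.
    scale<<<(n + 255) / 256, 256>>>(data, n, 2);
    cudaDeviceSynchronize();

    // After the kernel completes, the CPU reads the same buffer directly.
    printf("data[42] = %d\n", data[42]);
    free(data);
    return 0;
}
```

Without HMM, the same program would need separate host and device buffers plus explicit cudaMemcpy calls in each direction, which is exactly the boilerplate the "a pointer is a pointer" model removes.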
John Hubbard
Principal Software Engineer, NVIDIA
Linux kernel driver engineer for NVIDIA since 2007, with various systems software programming jobs before that, going back to 1996. Eight years as an Unrestricted Line Officer in the U.S. Navy's submarine force, including two years as a certified Naval instructor (Pearl Harbor, Hawaii), finishing up as the Combat Systems Officer ("Weps") of the USS Bluefish (SSN 675). BSEE from Utah State University, 1987. VLSI certificate from UCSC (2015).
J. Glisse
Linux Kernel Engineer, Red Hat
Please ask Jerome for this Bio section, as well as his actual job title. thanks, --jhubbard
Room 151B
Thursday, 4th May, 16:30 - 17:15