pith. machine review for the scientific record.

arxiv: 1901.10008 · v2 · submitted 2019-01-28 · cs.DC · cs.LG


The OoO VLIW JIT Compiler for GPU Inference

keywords: inference, compiler, latency, multiplexing, SLOs, utilization, abstractions, accelerated
abstract

Current trends in Machine Learning (ML) inference on hardware-accelerated devices (e.g., GPUs, TPUs) point to alarmingly low utilization. As ML inference is increasingly time-bounded by tight latency SLOs, increasing data parallelism is not an option. The need for better efficiency motivates GPU multiplexing. Furthermore, existing GPU programming abstractions force programmers to micro-manage GPU resources in an early-binding, context-free fashion. We propose a VLIW-inspired Out-of-Order (OoO) Just-in-Time (JIT) compiler that coalesces and reorders execution kernels at runtime for throughput-optimal device utilization while satisfying latency SLOs. We quantify the inefficiencies of space-only and time-only multiplexing alternatives and demonstrate an achievable 7.7x opportunity gap through spatial coalescing.
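The core mechanism the abstract describes, coalescing queued kernels at runtime so long as every participant can still meet its latency SLO, can be sketched compactly. The Python below is a minimal illustration only, not the paper's implementation: the Kernel fields (est_runtime, sm_demand), the CoalescingScheduler class, and the earliest-deadline-first packing policy are all assumptions made for the sketch.

import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Kernel:
    deadline: float                            # absolute SLO deadline (epoch seconds)
    est_runtime: float = field(compare=False)  # estimated execution time (seconds)
    sm_demand: int = field(compare=False)      # spatial footprint, in streaming multiprocessors
    name: str = field(compare=False)

class CoalescingScheduler:
    # Hypothetical earliest-deadline-first queue that spatially packs
    # kernels into one fused launch while their combined SM demand fits
    # on the device and each can still finish before its deadline.
    def __init__(self, total_sms: int):
        self.total_sms = total_sms  # spatial capacity of the GPU
        self.queue = []             # min-heap ordered by deadline

    def submit(self, kernel: Kernel) -> None:
        heapq.heappush(self.queue, kernel)

    def next_batch(self, now: float) -> list:
        batch, sms_used = [], 0
        while self.queue:
            head = self.queue[0]
            if sms_used + head.sm_demand > self.total_sms:
                break                          # no spatial room left: launch what we have
            if now + head.est_runtime > head.deadline:
                heapq.heappop(self.queue)      # SLO already unmeetable: shed this kernel
                continue
            batch.append(heapq.heappop(self.queue))
            sms_used += head.sm_demand
        return batch

gpu = CoalescingScheduler(total_sms=80)
now = time.time()
gpu.submit(Kernel(now + 0.010, 0.002, 30, "resnet50_conv"))
gpu.submit(Kernel(now + 0.100, 0.005, 40, "bert_encoder"))
gpu.submit(Kernel(now + 0.015, 0.003, 30, "ssd_detect"))
print([k.name for k in gpu.next_batch(now)])   # -> ['resnet50_conv', 'ssd_detect']

Packing in deadline order is one plausible reading of the "out-of-order" aspect: kernels are dispatched by urgency rather than submission order, and the greedy spatial packing stands in for the coalescing that the abstract credits with the 7.7x opportunity gap.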

This paper has not been read by Pith yet.
