pith. machine review for the scientific record.

arxiv: 1802.04799 · v3 · submitted 2018-02-12 · 💻 cs.LG · cs.AI · cs.PL

Recognition: unknown

TVM: An Automated End-to-End Optimizing Compiler for Deep Learning

Authors on Pith no claims yet
classification 💻 cs.LG · cs.AI · cs.PL
keywords hardware · learning · deep · back-ends · accelerator · across · compiler · devices
0 comments
read the original abstract

There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms -- such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that is competitive with state-of-the-art, hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.
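The "learning-based cost modeling" idea in the abstract can be illustrated with a toy sketch: instead of measuring every candidate schedule on hardware (slow), a cheap model trained on past measurements predicts cost and prioritizes which candidates to measure next. This is not TVM's actual implementation; the `measure` function, the 1-nearest-neighbor "cost model", and the tile-size search space are all illustrative stand-ins.

```python
# Toy sketch of cost-model-guided schedule search, in the spirit of the
# approach described in the abstract. All names here are illustrative.
import random


def measure(tile):
    # Stand-in for compiling and timing a kernel with this tile size;
    # here we pretend tile=32 is optimal and add measurement noise.
    return abs(tile - 32) + random.random() * 0.1


def predict(history, tile):
    # Trivial 1-nearest-neighbor "cost model": predict the cost of the
    # closest configuration that has already been measured.
    nearest = min(history, key=lambda tc: abs(tc[0] - tile))
    return nearest[1]


def search(candidates, rounds=8, batch=4):
    random.seed(0)
    # Seed the model with a couple of real measurements.
    history = [(t, measure(t)) for t in random.sample(candidates, 2)]
    for _ in range(rounds):
        seen = {t for t, _ in history}
        unmeasured = [t for t in candidates if t not in seen]
        if not unmeasured:
            break
        # Rank unmeasured candidates by predicted cost; only the most
        # promising few pay the price of a real measurement.
        ranked = sorted(unmeasured, key=lambda t: predict(history, t))
        for t in ranked[:batch]:
            history.append((t, measure(t)))
    return min(history, key=lambda tc: tc[1])


best_tile, best_cost = search([2 ** k for k in range(1, 11)])
print(best_tile)  # -> 32
```

The design point this illustrates: real measurements are expensive (a full compile-and-run per candidate), so a model that is merely *rank-correlated* with true cost is enough to cut down how many candidates ever reach hardware.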

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Prism: Symbolic Superoptimization of Tensor Programs

    cs.PL 2026-04 unverdicted novelty 8.0

    Prism is the first symbolic superoptimizer for tensor programs that uses sGraph for compact representation of program families, two-level search, e-graph equivalence checking, and auto-tuning to achieve up to 2.2x spe...

  2. Nautilus: An Auto-Scheduling Tensor Compiler for Efficient Tiled GPU Kernels

    cs.PL 2026-04 unverdicted novelty 7.0

    Nautilus auto-compiles math-like tensor descriptions into optimized GPU kernels, delivering up to 42% higher throughput than prior compilers on transformer models across NVIDIA GPUs.

  3. Co-Design of CNN Accelerators for TinyML using Approximate Matrix Decomposition

    cs.AR 2026-04 unverdicted novelty 6.0

    A co-design framework using approximate matrix decomposition and genetic algorithms delivers 33% average latency reduction in TinyML CNN FPGA accelerators with 1.3% average accuracy loss versus standard systolic arrays.

  4. Record-Remix-Replay: Hierarchical GPU Kernel Optimization using Evolutionary Search

    cs.DC 2026-04 unverdicted novelty 6.0

    R^3 optimizes full scientific applications on GPUs better than tuning kernel parameters or compiler flags alone while running nearly an order of magnitude faster than modern evolutionary search methods.