pith. machine review for the scientific record.

arxiv: 2601.07160 · v2 · submitted 2026-01-12 · 💻 cs.AI · cs.LG


AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units

Bingxu Mu, Bin She, Bin Zhou, Cen Yan, Chang-Dong Wang, Dongyang Tao, Fan Xu, Feidiao Yang, Guanghuan Fang, Jianyang Zhai, Jiayu Li, Pengfei Li, Weicheng Xue, Xiansong Huang, Xinzi Cao, Yao Lu, Yihan Su, Yonghong Tian, Yutong Lu, Zhiheng Hu

keywords: ascendkernelgen, generation, kernel, kernels, llms, code, compilation, complex
Abstract

To meet the ever-increasing demand for computational efficiency, Neural Processing Units (NPUs) have become critical in modern AI infrastructure. However, unlocking their full potential requires developing high-performance compute kernels in vendor-specific Domain-Specific Languages (DSLs), a task that demands deep hardware expertise and is labor-intensive. While Large Language Models (LLMs) have shown promise in general code generation, they struggle with the strict constraints and scarcity of training data in the NPU domain. Our preliminary study reveals that state-of-the-art general-purpose LLMs fail to generate functional complex kernels for Ascend NPUs, yielding a near-zero success rate. To address these challenges, we propose AscendKernelGen, an integrated generation-and-evaluation framework for NPU kernel development. We introduce Ascend-CoT, a high-quality dataset incorporating chain-of-thought reasoning derived from real-world kernel implementations, and KernelGen-LM, a domain-adaptive model trained via supervised fine-tuning and reinforcement learning with execution feedback. Furthermore, we design NPUKernelBench, a comprehensive benchmark for assessing compilation, correctness, and performance across varying complexity levels. Experimental results demonstrate that our approach significantly bridges the gap between general LLMs and hardware-specific coding. Specifically, the compilation success rate on complex Level-2 kernels improves from 0% to 95.5% (Pass@10), while functional correctness reaches 64.3%, compared with the baseline's complete failure. These results highlight the critical role of domain-specific reasoning and rigorous evaluation in automating accelerator-aware code generation. AscendKernelGen is available at https://huggingface.co/AscendKernelGen and https://github.com/weich97/NPUKernelBench.
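The Pass@10 figures quoted in the abstract refer to the standard unbiased pass@k estimator widely used in code-generation benchmarks: given n sampled generations of which c pass (here, compile or run correctly), it estimates the probability that at least one of k randomly drawn samples passes. A minimal sketch (the function name and example numbers are illustrative, not taken from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations with c successes, passes."""
    if n - c < k:
        # Fewer than k failures exist, so any k-sample draw contains a success.
        return 1.0
    # 1 - P(all k drawn samples are failures)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: 10 generations, 3 compile successfully -> pass@10 is 1.0,
# since any draw of all 10 samples necessarily includes a success.
print(pass_at_k(10, 3, 10))
```

Note that pass@10 equal to 1.0 for a single task only requires one of the ten samples to succeed; the 95.5% and 64.3% figures above are averages of this estimator over the benchmark's tasks.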

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. InCoder-32B-Thinking: Industrial Code World Model for Thinking

    cs.AR · 2026-04 · unverdicted · novelty 6.0

    InCoder-32B-Thinking uses error-feedback synthesized thinking traces and a code world model to reach top open-source scores on general and industrial code benchmarks including 81.3% on LiveCodeBench and 84.0% on CAD-Coder.