---
language:
- en
---

# AscendKernelGen/KernelGen-LM-32B-RL

![License](https://img.shields.io/badge/License-Apache-yellow)
[![arXiv](https://img.shields.io/badge/arXiv-2601.07160-b31b1b.svg)](https://arxiv.org/abs/2601.07160)

KernelGen-LM-32B-RL is a state-of-the-art domain-adaptive large language model specialized for low-level NPU kernel generation, targeting the Huawei Ascend architecture and its AscendC programming language. Built on the Qwen3-32B backbone, it is trained on the Ascend-CoT dataset and refined via reinforcement learning with execution feedback. It achieves unprecedented success rates in generating complex, functional hardware kernels: on L2 tasks, compilation success improves from 0% (baseline) to 95.5% (Pass@10), and functional correctness reaches 64.3%, compared to the baseline's complete failure.
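The Pass@10 figures above can be reproduced with the standard unbiased pass@k estimator commonly used for code-generation benchmarks; the sketch below is an illustration of that formula, not code from the AKGen repository:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n generated samples of which
    c pass (compile / match the reference), estimate the probability
    that at least one of k samples succeeds."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 samples per task and at least one passing sample, `pass_at_k(10, c, 10)` is 1.0, while a task with no passing samples scores 0.0; averaging over tasks yields the benchmark's Pass@10.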

**Other artifacts:**
* The **AscendKernelGen Technical Report** is published at https://arxiv.org/abs/2601.07160.
* The **NPUKernelBench** evaluation framework is published at https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench.

## Introduction

Our framework, **AscendKernelGen (AKGen)**, bridges the gap between general-purpose code generation and hardware-specific programming through a closed-loop system of data construction, training, and evaluation. Key innovations include:

* **Ascend-CoT Dataset:** A high-quality, domain-specific dataset incorporating **Chain-of-Thought (CoT)** reasoning. It combines documentation-based reasoning, code-centric reasoning derived from real-world kernel implementations, and general reasoning chains to capture the structured logic required for low-level NPU programming.
* **Domain-Adaptive Post-Training:** A two-stage optimization process that yields **KernelGen-LM**. We first employ **Supervised Fine-Tuning (SFT)** with error-derived supervision (correcting API misuse and numerical errors). This is followed by **Reinforcement Learning (RL)** using Direct Preference Optimization (DPO), driven by execution-based correctness and performance signals.
* **Hardware-Grounded Evaluation:** Validated using **NPUKernelBench**, a comprehensive benchmark that assesses compilation success, functional correctness, and performance (latency) on real Ascend hardware across varying complexity levels.
* **Performance:** The model demonstrates significant improvement on complex Level-2 kernels compared to baselines, and effectively solves tasks on which general-purpose models (such as Qwen3 and Llama3.1) fail completely.
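The RL stage above optimizes a DPO objective over preference pairs of kernels (e.g. a functionally correct kernel preferred over a failing one). As a minimal sketch of that objective for a single pair, assuming summed token log-probabilities under the policy and a frozen reference model (names and the `beta` value are illustrative, not from the paper):

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    pi_* / ref_*: summed log-probs of the chosen and rejected
    completions under the policy and the frozen reference model.
    The loss is -log(sigmoid(beta * margin)), where the margin is the
    policy's log-ratio advantage over the reference.
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference the margin is zero and the loss is log 2; raising the chosen kernel's likelihood relative to the rejected one drives the loss down, which is how execution-based correctness signals shape the model without an explicit reward model.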

## Citation
```bibtex
@article{cao2026ascendkernelgen,
  title={AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units},
  author={Xinzi Cao and Jianyang Zhai and Pengfei Li and Zhiheng Hu and Cen Yan and Bingxu Mu and Guanghuan Fang and Bin She and Jiayu Li and Yihan Su and Dongyang Tao and Xiansong Huang and Fan Xu and Feidiao Yang and Yao Lu and Chang-Dong Wang and Yutong Lu and Weicheng Xue and Bin Zhou and Yonghong Tian},
  journal={arXiv preprint arXiv:2601.07160},
  year={2026},
  url={https://arxiv.org/abs/2601.07160}
}
```