## Usage

Instructions for using AscendKernelGen/KernelGen-LM-4B with libraries, inference providers, notebooks, and local apps.
### Libraries

#### Transformers

How to use AscendKernelGen/KernelGen-LM-4B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AscendKernelGen/KernelGen-LM-4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AscendKernelGen/KernelGen-LM-4B")
model = AutoModelForCausalLM.from_pretrained("AscendKernelGen/KernelGen-LM-4B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
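Since KernelGen-LM-4B is specialized for AscendC kernel generation, a more representative prompt asks for a kernel rather than a chat reply. A minimal sketch of such a call; the prompt wording and the `max_new_tokens` budget are illustrative assumptions, not values from the model card:

```python
# Hypothetical usage sketch: prompting KernelGen-LM-4B for an AscendC kernel.
from transformers import pipeline

pipe = pipeline("text-generation", model="AscendKernelGen/KernelGen-LM-4B")
messages = [
    {
        "role": "user",
        "content": (
            "Write an AscendC kernel that performs element-wise addition "
            "of two float16 tensors of shape (1024, 1024)."  # illustrative task
        ),
    },
]
# A generous token budget leaves room for chain-of-thought reasoning
# before the kernel code itself (2048 is an assumed value).
outputs = pipe(messages, max_new_tokens=2048)
print(outputs[0]["generated_text"][-1]["content"])  # the assistant reply
```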
### Notebooks

- Google Colab
- Kaggle
### Local Apps

#### vLLM

How to use AscendKernelGen/KernelGen-LM-4B with vLLM:

Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "AscendKernelGen/KernelGen-LM-4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AscendKernelGen/KernelGen-LM-4B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
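Since the vLLM server exposes an OpenAI-compatible API, the official `openai` Python client works as well. A minimal sketch, assuming `pip install openai` and the server from above running locally (the `api_key` value is a placeholder; vLLM does not check it unless configured to):

```python
# Call the local vLLM server through the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="AscendKernelGen/KernelGen-LM-4B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```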
#### SGLang

How to use AscendKernelGen/KernelGen-LM-4B with SGLang:

Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "AscendKernelGen/KernelGen-LM-4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AscendKernelGen/KernelGen-LM-4B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "AscendKernelGen/KernelGen-LM-4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AscendKernelGen/KernelGen-LM-4B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
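Whichever way the SGLang server is launched, it can also be called from Python over plain HTTP. A minimal sketch using the `requests` library (assumed installed):

```python
# POST a chat request to the local SGLang server's OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "AscendKernelGen/KernelGen-LM-4B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```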
#### Docker Model Runner

How to use AscendKernelGen/KernelGen-LM-4B with Docker Model Runner:
```bash
docker model run hf.co/AscendKernelGen/KernelGen-LM-4B
```
---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# AscendKernelGen / KernelGen-LM-4B

*Figure: AscendKernelGen overview.*

[Paper](https://huggingface.co/papers/2601.07160)
KernelGen-LM-4B is a state-of-the-art, domain-adaptive large language model specialized for low-level NPU kernel generation, targeting the Huawei Ascend architecture and the AscendC programming language. Built upon the Qwen3-4B backbone, it is trained on the Ascend-CoT dataset and refined via reinforcement learning with execution feedback.

This model was introduced in the paper [AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units](https://huggingface.co/papers/2601.07160).
**Other artifacts:**

* **GitHub Repository:** [NPUKernelBench](https://github.com/weich97/NPUKernelBench)
* **Evaluation Framework (OpenI):** [NPUKernelBench](https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench)
## Introduction

Our framework, **AscendKernelGen (AKGen)**, bridges the gap between general-purpose code generation and hardware-specific programming through a closed-loop system of data construction, training, and evaluation. Key innovations include:

* **Ascend-CoT Dataset:** A high-quality, domain-specific dataset incorporating **Chain-of-Thought (CoT)** reasoning. It combines documentation-based reasoning, code-centric reasoning derived from real-world kernel implementations, and general reasoning chains to capture the structured logic required for low-level NPU programming.
* **Domain-Adaptive Post-Training:** A two-stage optimization process that yields **KernelGen-LM**. We first employ **Supervised Fine-Tuning (SFT)** with error-derived supervision (correcting API misuse and numerical errors). This is followed by **Reinforcement Learning (RL)** using Direct Preference Optimization (DPO), driven by execution-based correctness and performance signals (see the sketch after this list).
* **Hardware-Grounded Evaluation:** Validated using **NPUKernelBench**, a comprehensive benchmark that assesses compilation success, functional correctness, and performance (latency) on real Ascend hardware across varying complexity levels.
* **Performance:** The model demonstrates significant improvement on complex Level-2 kernels compared to baselines, and effectively solves tasks where general-purpose models (such as Qwen3 and Llama3.1) fail completely.
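As referenced in the post-training bullet above, the following sketch illustrates how execution feedback can be folded into DPO preference pairs. The function, field names, and scoring rule are hypothetical illustrations of the idea, not code from the AscendKernelGen repository:

```python
# Illustrative only: turn execution feedback on two generated kernels
# into a (prompt, chosen, rejected) triple for DPO training.

def build_dpo_pair(prompt: str, candidate_a: dict, candidate_b: dict) -> dict:
    """Rank two candidate kernels by (compiles, passes tests, latency)."""

    def score(c: dict):
        # Correctness dominates; among correct kernels, lower latency wins.
        return (c["compiles"], c["passes_tests"], -c["latency_ms"])

    chosen, rejected = sorted([candidate_a, candidate_b], key=score, reverse=True)
    return {"prompt": prompt, "chosen": chosen["code"], "rejected": rejected["code"]}

pair = build_dpo_pair(
    "Write an AscendC kernel for element-wise addition.",
    {"code": "...", "compiles": True, "passes_tests": True, "latency_ms": 1.2},
    {"code": "...", "compiles": True, "passes_tests": False, "latency_ms": 0.9},
)
# The functionally correct kernel is "chosen" despite its higher latency.
```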
## Citation

```bibtex
@article{cao2026ascendkernelgen,
  title={AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units},
  author={Xinzi Cao and Jianyang Zhai and Pengfei Li and Zhiheng Hu and Cen Yan and Bingxu Mu and Guanghuan Fang and Bin She and Jiayu Li and Yihan Su and Dongyang Tao and Xiansong Huang and Fan Xu and Feidiao Yang and Yao Lu and Chang-Dong Wang and Yutong Lu and Weicheng Xue and Bin Zhou and Yonghong Tian},
  journal={arXiv preprint arXiv:2601.07160},
  year={2026},
  url={https://arxiv.org/abs/2601.07160}
}
```