arxiv:2601.15727

Towards Automated Kernel Generation in the Era of LLMs

Published on Jan 22 · Submitted by Guang Liu on Jan 23
Abstract

Large language models and agent-based systems are being leveraged to automate kernel generation and optimization, addressing the scalability challenges in hardware-specific code development through structured approaches and systematic benchmarking.

AI-generated summary

The performance of modern AI systems is fundamentally constrained by the quality of their underlying kernels, which translate high-level algorithmic semantics into low-level hardware operations. Achieving near-optimal kernels requires expert-level understanding of hardware architectures and programming models, making kernel engineering a critical but notoriously time-consuming and non-scalable process. Recent advances in large language models (LLMs) and LLM-based agents have opened new possibilities for automating kernel generation and optimization. LLMs are well-suited to compress expert-level kernel knowledge that is difficult to formalize, while agentic systems further enable scalable optimization by casting kernel development as an iterative, feedback-driven loop. Rapid progress has been made in this area; however, the field remains fragmented and lacks a systematic perspective on LLM-driven kernel generation. This survey addresses that gap by providing a structured overview of existing approaches, spanning LLM-based methods and agentic optimization workflows, and by systematically compiling the datasets and benchmarks that underpin learning and evaluation in this domain. Key open challenges and future research directions are also outlined, aiming to establish a comprehensive reference for the next generation of automated kernel optimization. To keep track of this field, we maintain an open-source GitHub repository at https://github.com/flagos-ai/awesome-LLM-driven-kernel-generation.

Community


Summary of Key Points

  • Kernel quality is a fundamental bottleneck for modern AI system performance, yet high-quality kernel engineering is expert-intensive, time-consuming, and difficult to scale.

  • Recent advances in large language models (LLMs) and LLM-based agents enable automated kernel generation and optimization by capturing expert knowledge and supporting iterative, feedback-driven optimization loops.

  • Despite rapid progress, existing work is fragmented and lacks a unified, systematic perspective.

  • This survey provides a structured overview of LLM-based kernel generation methods and agentic optimization workflows, and compiles the key datasets and benchmarks used for training and evaluation.

  • The paper further identifies open challenges and outlines future research directions, aiming to serve as a comprehensive reference for next-generation automated kernel optimization.
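The iterative, feedback-driven optimization loop that the survey attributes to agentic systems can be sketched roughly as follows. This is a minimal illustration, not the paper's method: `generate_kernel` stands in for an LLM call, `benchmark` for a compile-and-measure harness, and all names and values are hypothetical.

```python
import random


def generate_kernel(prompt: str) -> str:
    """Hypothetical LLM call: returns kernel source text for the prompt."""
    return f"// kernel variant for: {prompt}"


def benchmark(kernel_src: str) -> float:
    """Hypothetical harness: compiles and runs the kernel, returning runtime in ms.
    A random value stands in for a real measurement here."""
    return random.uniform(1.0, 10.0)


def optimize_kernel(task: str, iterations: int = 5) -> tuple[str, float]:
    """Feedback-driven loop: generate a candidate, measure it, and feed the
    measurement back into the next generation prompt, keeping the best."""
    best_src, best_time = "", float("inf")
    feedback = "none"
    for i in range(iterations):
        prompt = f"{task}. Previous feedback: {feedback}"
        src = generate_kernel(prompt)
        runtime = benchmark(src)
        if runtime < best_time:
            best_src, best_time = src, runtime
        feedback = f"iteration {i}: {runtime:.2f} ms (best so far {best_time:.2f} ms)"
    return best_src, best_time


best, t = optimize_kernel("fuse softmax with matmul")
print(f"best kernel found with runtime {t:.2f} ms")
```

Real systems surveyed in the paper replace the stubs with actual model calls and hardware benchmarking, and typically feed richer signals (compiler errors, profiler output) back into the loop.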

