KernelBench-X: A Comprehensive Benchmark for Evaluating LLM-Generated GPU Kernels
Abstract
The KernelBench-X benchmark reveals that task structure impacts the correctness of LLM-generated Triton kernels more than method design, that iterative refinement improves correctness at the expense of performance, and that correctness does not guarantee efficiency.
LLM-based Triton kernel generation has attracted significant interest, yet a fundamental empirical question remains unanswered: where does this capability break down, and why? We present KernelBench-X, a benchmark designed to answer this question through category-aware evaluation of correctness and hardware efficiency across 176 tasks in 15 categories. Our systematic comparison of five representative methods yields three main findings. First, task structure determines correctness more than method design: category explains nearly three times more variance in semantic correctness than method (9.4% vs 3.3% explained deviance), and 72% of Fusion tasks fail across all five methods while Math tasks are solved consistently. Second, iterative refinement improves correctness, but not performance: across GEAK iterations, the compile rate rises from 52.3% to 68.8% while the average speedup declines from 1.58× to 1.44×, and newly rescued kernels consistently underperform persistently correct ones (1.16× vs 1.58× speedup in round 0→1). Third, correctness does not imply efficiency: 46.6% of correct kernels are slower than the PyTorch eager baseline, and cross-hardware speedup variance reaches 21.4×. Moreover, quantization remains completely unsolved (0/30 successes) despite non-trivial compilation rates, revealing systematic misunderstanding of numerical computation contracts rather than surface-level syntax errors. These findings suggest that future progress depends on handling global coordination, explicitly modeling numerical precision, and incorporating hardware efficiency into generation. The code is available at https://github.com/BonnieW05/KernelBenchX.
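The category-aware evaluation described in the abstract can be sketched as a small scoring harness. A minimal illustration in plain Python, where the task records, field names, and the geometric-mean aggregation of speedups are my own assumptions, not the paper's actual harness:

```python
import math
from collections import defaultdict

# Hypothetical per-task results; the fields below are illustrative assumptions.
# "speedup" is measured against the PyTorch eager baseline (None if the kernel
# failed to compile or was incorrect).
results = [
    {"category": "Math",   "compiled": True,  "correct": True,  "speedup": 1.9},
    {"category": "Math",   "compiled": True,  "correct": True,  "speedup": 0.8},
    {"category": "Fusion", "compiled": True,  "correct": False, "speedup": None},
    {"category": "Fusion", "compiled": False, "correct": False, "speedup": None},
]

def category_report(results):
    """Aggregate compile rate, correctness rate, and geomean speedup per category."""
    by_cat = defaultdict(list)
    for r in results:
        by_cat[r["category"]].append(r)
    report = {}
    for cat, rows in by_cat.items():
        n = len(rows)
        speedups = [r["speedup"] for r in rows if r["speedup"] is not None]
        # Geometric mean is the conventional aggregate for speedup ratios.
        geomean = (math.exp(sum(math.log(s) for s in speedups) / len(speedups))
                   if speedups else None)
        report[cat] = {
            "compile_rate": sum(r["compiled"] for r in rows) / n,
            "correct_rate": sum(r["correct"] for r in rows) / n,
            "geomean_speedup": geomean,
        }
    return report

print(category_report(results))
```

Separating compile rate from correctness rate in the report is what lets a harness like this surface the paper's point that non-trivial compilation can coexist with zero semantic successes.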
Community
The striking part for me is that correctness tracks task structure far more than the generation method, yet quantization remains a total mismatch of numeric contracts rather than surface syntax. An ablation I'd like to see isolates the numeric contracts: keep the same kernel code but inject tiny precision variations and reorderings, then check whether quantized results drift across GPUs, which would explain the 0/30 successes. If you added a hardware-aware penalty or a numerical-fidelity signal during search, you might curb the repair bias where newly rescued kernels slow things down. Btw, the arxivlens breakdown helped me parse the method details; here's a quick walkthrough that covers this and might be nice companion reading: https://arxivlens.com/PaperView/Details/kernelbench-x-a-comprehensive-benchmark-for-evaluating-llm-generated-gpu-kernels-4543-7296c188
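The precision-ablation idea in the comment above can be prototyped without a GPU: round intermediate sums to float32 and vary the accumulation order, then check whether results drift. A self-contained sketch using only the standard library, where the data and the quantization helper are illustrative assumptions, not the paper's setup:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (f64) to float32, emulating lower-precision accumulation."""
    return struct.unpack("f", struct.pack("f", x))[0]

def reduced_sum(values, precision32: bool = True) -> float:
    """Sequential sum; optionally round each partial sum to float32."""
    acc = 0.0
    for v in values:
        acc += v
        if precision32:
            acc = to_f32(acc)
    return acc

# Illustrative data: one large value plus many tiny ones, a classic
# order-sensitivity trap for low-precision reductions.
data = [1e8] + [1e-2] * 10000

forward = reduced_sum(data)                   # big value first: tiny terms are absorbed
backward = reduced_sum(list(reversed(data)))  # tiny terms accumulate before the big one
exact = reduced_sum(data, precision32=False)  # full f64 reference

print(f"forward={forward} backward={backward} exact={exact} "
      f"drift={abs(forward - backward)}")
```

In the forward order every 0.01 falls below float32's resolution at 1e8 and is lost, while the backward order accumulates the small terms first, so the two orders disagree by roughly 100; a cross-GPU version of this probe would just swap the accumulation order the hardware's reduction actually uses.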
Get this paper in your agent:
    hf papers read 2605.04956
Don't have the latest CLI?
    curl -LsSf https://hf.co/cli/install.sh | bash
