---
title: "kernrl: Teaching LLMs to Write Fast GPU Kernels"
thumbnail: /blog/assets/kernrl/thumbnail.png
authors:
- user: Infatoshi
date: 2026-01-20
tags:
- openenv
- reinforcement-learning
- cuda
- triton
- gpu
- grpo
---

# kernrl: Teaching LLMs to Write Fast GPU Kernels

What if we could train language models to write optimized GPU code? Not just syntactically correct code, but kernels that actually run faster than PyTorch's defaults?

That's the goal of **kernrl** - an RL environment where agents learn to optimize GPU kernels through trial and error, receiving real performance feedback from actual hardware.

## The Problem: GPU Programming is Hard

Writing efficient GPU code requires understanding memory hierarchies, thread synchronization, and hardware-specific optimizations. Even experienced engineers spend significant time tuning kernels for different architectures.

Consider a simple softmax operation. PyTorch's implementation works, but a hand-tuned Triton kernel can be 2-5x faster by:
- Fusing the max, subtract, exp, sum, and divide operations
- Using efficient memory access patterns
- Avoiding unnecessary global memory round-trips

```python
import torch
import triton
import triton.language as tl

# PyTorch baseline - multiple kernel launches
def softmax(x):
    max_val = x.max(dim=-1, keepdim=True).values
    x = x - max_val
    exp_x = torch.exp(x)
    return exp_x / exp_x.sum(dim=-1, keepdim=True)

# Triton kernel - single fused operation
@triton.jit
def softmax_kernel(input_ptr, output_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    row_idx = tl.program_id(0)
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols

    # Load one row, reduce, and normalize without extra global memory round-trips
    row = tl.load(input_ptr + row_idx * n_cols + col_offsets, mask=mask, other=-float('inf'))
    row_max = tl.max(row, axis=0)
    numerator = tl.exp(row - row_max)
    softmax_output = numerator / tl.sum(numerator, axis=0)

    tl.store(output_ptr + row_idx * n_cols + col_offsets, softmax_output, mask=mask)
```
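
To actually run the Triton kernel you need a small launcher that picks a grid and block size. The helper below is a minimal sketch added here for illustration (it is not part of the original kernel listing): one program per row, with `BLOCK_SIZE` rounded up to a power of two, assuming a contiguous 2D input.

```python
def triton_softmax(x):
    # One Triton program per row; each program loads the whole row at once,
    # so BLOCK_SIZE must be a power of two >= n_cols.
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    softmax_kernel[(n_rows,)](x, out, n_cols, BLOCK_SIZE=BLOCK_SIZE)
    return out

# Sanity check against the PyTorch baseline
x = torch.randn(4096, 1024, device="cuda")
torch.testing.assert_close(triton_softmax(x), torch.softmax(x, dim=-1))
```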

## Enter kernrl

kernrl is an [OpenEnv](https://github.com/meta-pytorch/OpenEnv) environment that frames GPU kernel optimization as an RL problem:

- **State**: A PyTorch reference implementation + GPU info
- **Action**: CUDA/Triton kernel code
- **Reward**: Based on compilation, correctness, and speedup

The environment evaluates submitted kernels on real GPU hardware, providing concrete feedback:

```python
from kernrl import kernrl_env, KernelAction

env = kernrl_env(base_url="http://localhost:8000")
obs = env.reset(problem_id="L1_23_Softmax")

# Submit a kernel
result = env.step(KernelAction(code=triton_kernel_code))

print(f"Compiled: {result.observation.compilation_success}")
print(f"Correct: {result.observation.correctness_pass}")
print(f"Speedup: {result.observation.speedup}x")
```

## 89 Problems Across 10 Difficulty Levels

kernrl includes a diverse problem set spanning from basic operations to cutting-edge architectures:

| Level | Category | Examples |
|-------|----------|----------|
| 1 | Simple Ops | matmul, softmax, conv2d, layernorm |
| 2 | Fused Ops | matmul+GELU+softmax, conv+batchnorm |
| 3 | Attention | Vision attention, causal attention, transformer blocks |
| 4 | Novel Layers | DeepSeek MLA, MoE, GQA, FP8 matmul, INT4 GEMM |
| 5 | Scientific | N-body simulation, stencils, sparse matrix ops |
| 6 | Graphics | Ray tracing, histogram, bilateral filter |
| 7 | Signal | FFT, convolution, median filter |
| 8 | Video | Motion estimation, optical flow, deblocking |
| 9 | Primitives | Prefix scan, radix sort, stream compaction |
| 10 | Cryptography | SHA-256, AES, ChaCha20 |

Level 4 is particularly interesting - it includes architectures like DeepSeek's Multi-head Latent Attention and Mixture of Experts that weren't in most training data, testing whether models can truly reason about kernel optimization rather than memorize solutions.

## Training with GRPO

We use TRL's GRPOTrainer with a custom rollout function that interacts with the kernrl environment:

```python
from trl import GRPOConfig, GRPOTrainer
from trl.experimental.openenv import generate_rollout_completions

def rollout_func(prompts, trainer):
    # Generate kernel code
    outputs = generate_rollout_completions(trainer, prompts)

    # Evaluate each completion in the environment
    # (extract_code and compute_reward are user-defined helpers; env is the
    # kernrl client created earlier)
    env_rewards = []
    for completion in outputs:
        code = extract_code(completion)
        result = env.step(KernelAction(code=code))
        env_rewards.append(compute_reward(result))

    return {
        "prompt_ids": [...],
        "completion_ids": [...],
        "logprobs": [...],
        "env_reward": env_rewards,
    }

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Coder-1.5B-Instruct",
    reward_funcs=[reward_from_env],
    rollout_func=rollout_func,
    args=GRPOConfig(use_vllm=True),
)
trainer.train()
```

The reward structure encourages incremental progress:
- **+0.1** for successful compilation
- **+0.3** for correctness (output matches reference within tolerance)
- **+0.3** for beating the baseline
- **+0.0 to +0.6** bonus scaled by log2(speedup)

This means a model can learn even before achieving speedups - first learning to write valid code, then correct code, then fast code.
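
To make that staging concrete, here is a minimal sketch of what such a reward function could look like. This is an illustration, not kernrl's actual implementation: the observation fields come from the client example above, but the `0.15` scale on the log2 bonus is an assumed constant.

```python
import math

def compute_reward(obs) -> float:
    # Staged reward sketch: compile -> correct -> faster -> log-scaled bonus.
    reward = 0.0
    if not obs.compilation_success:
        return reward              # invalid code earns nothing
    reward += 0.1                  # compiles
    if not obs.correctness_pass:
        return reward
    reward += 0.3                  # matches the reference within tolerance
    if obs.speedup > 1.0:
        reward += 0.3              # beats the PyTorch baseline
        # Bonus grows with log2(speedup); 0.15 is an assumed scale chosen so
        # the bonus stays within the stated +0.6 cap.
        reward += min(0.6, 0.15 * math.log2(obs.speedup))
    return reward
```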

## Why This Matters

### For AI Research
GPU kernel optimization is a well-defined domain with clear metrics (correctness + speed), making it ideal for studying:
- How LLMs reason about low-level code optimization
- Whether RL can teach models skills not present in training data
- Transfer learning between GPU architectures (H100 vs B200)

### For Practical Applications
As AI models grow, inference costs dominate. A model that can automatically optimize kernels could:
- Reduce serving costs by finding faster implementations
- Adapt to new hardware without manual tuning
- Enable efficient deployment on edge devices

### For the Community
kernrl provides:
- A standardized benchmark for kernel optimization capabilities
- Real hardware evaluation (not just syntax checking)
- Integration with the broader OpenEnv ecosystem

## Try It Yourself

**HuggingFace Space**: [huggingface.co/spaces/Infatoshi/kernrl](https://huggingface.co/spaces/Infatoshi/kernrl)

**Training Notebook**: [huggingface.co/Infatoshi/kernrl-training](https://huggingface.co/Infatoshi/kernrl-training)

**OpenEnv PR**: [github.com/meta-pytorch/OpenEnv/pull/308](https://github.com/meta-pytorch/OpenEnv/pull/308)

To run locally with GPU:

```bash
# Clone OpenEnv
git clone https://github.com/meta-pytorch/OpenEnv.git
cd OpenEnv/envs/kernrl

# Install
pip install -e .

# Start server
uvicorn kernrl.server.app:app --host 0.0.0.0 --port 8000
```

Or with Docker:

```bash
docker build -t kernrl -f server/Dockerfile .
docker run --gpus all -p 8000:8000 kernrl
```

## What's Next

We're excited to see what the community builds with kernrl:

- **Curriculum learning**: Start with L1, progressively add harder problems
- **Multi-turn optimization**: Let models iterate based on profiling feedback (a rough sketch follows this list)
- **Architecture-specific training**: Specialize models for H100 vs B200
- **Novel reward shaping**: Incorporate memory bandwidth, occupancy metrics

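As a taste of the multi-turn direction, a naive iterate-on-feedback loop over the existing client API might look like the sketch below; `generate_kernel` is a hypothetical stand-in for whatever LLM call maps a prompt to kernel code.

```python
from kernrl import kernrl_env, KernelAction

env = kernrl_env(base_url="http://localhost:8000")
env.reset(problem_id="L1_23_Softmax")

feedback, best_speedup = "", 0.0
for turn in range(4):
    # generate_kernel is hypothetical: any LLM call that returns kernel code
    code = generate_kernel(prompt=f"Optimize softmax. Previous result: {feedback}")
    result = env.step(KernelAction(code=code))
    o = result.observation
    # Feed the environment's verdict back into the next prompt
    feedback = (f"compiled={o.compilation_success}, "
                f"correct={o.correctness_pass}, speedup={o.speedup}x")
    if o.correctness_pass:
        best_speedup = max(best_speedup, o.speedup)
```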

The code is open source and contributions are welcome. Whether you're interested in RL, GPU programming, or both, we'd love to see what optimizations your models can discover.

---

*kernrl was built for the OpenEnv Challenge. Special thanks to the Meta PyTorch team for the OpenEnv framework and Hugging Face for TRL.*