QKNorm Optimization Problem
============================
Problem Setting
---------------
Design and optimize a high-performance implementation of Query-Key Normalization (QKNorm) on GPU: an efficient kernel that applies RMSNorm to the query and key tensors.
This is a **tiny, memory-bound** operator (often even **launch-bound**). Performance optimization requires careful attention to:
1. **Memory Efficiency**: Focus on **vectorized memory access patterns**. Minimize memory transactions and maximize memory bandwidth utilization.
2. **Operation Fusion**: **Avoid additional transpose/contiguous kernels**. Fuse operations to reduce kernel launch overhead and memory traffic.
3. **Non-Contiguous Input Handling**: **Be aware that inputs may be non-contiguous** due to weight-QKV fusion. Your implementation should efficiently handle non-contiguous memory layouts without triggering expensive memory copies.
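To see how point 3 arises in practice, here is a small sketch (with made-up shapes) of how weight-QKV fusion produces non-contiguous inputs: a single fused projection writes q, k, and v side by side in one buffer, and splitting that buffer yields strided views rather than copies.

```python
import torch

# Hypothetical shapes for illustration: a fused QKV projection whose
# output buffer holds q, k, v side by side along the last dimension.
batch, hidden = 4, 512
qkv = torch.randn(batch, 3 * hidden)  # fused projection output

# Slicing creates views, not copies: the row stride stays 3 * hidden,
# so q and k are non-contiguous even though no data moved.
q = qkv[:, :hidden]
k = qkv[:, hidden:2 * hidden]

assert not q.is_contiguous() and not k.is_contiguous()
assert q.stride() == (3 * hidden, 1)  # strided view into the fused buffer
```

A naive kernel that calls `.contiguous()` on such inputs materializes a full copy of q and k, which is exactly the memory traffic this problem asks you to avoid.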
Target
------
- **Primary**: Ensure correctness across diverse tensor shapes
- **Secondary**: Maximize geometric mean speedup over baseline (higher is better)
- **Tertiary**: Minimize kernel launch overhead and memory usage
API Specification
-----------------
Implement a `Solution` class whose `solve` method returns a `qknorm` implementation:
```python
class Solution:
    def solve(self, spec_path: str = None) -> dict:
        """
        Returns a dict with either:
        - {"code": "python_code_string"}
        - {"program_path": "path/to/kernel.py"}
        """
        # Your implementation
        pass
```
Your kernel implementation must provide:
```python
import torch
import flashinfer
def qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):
    """
    Apply RMSNorm to query and key tensors.

    Args:
        q: Query tensor of arbitrary shape (will be reshaped to 2D)
        k: Key tensor of arbitrary shape (will be reshaped to 2D)
        norm_weight: Normalization weight tensor of shape (hidden_dim,)

    Returns:
        Tuple of (q_normalized, k_normalized) tensors
    """
    pass
```
Required Default Implementation:
```python
def default_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):
    q_2d = q.contiguous().view(-1, q.shape[-1])
    k_2d = k.contiguous().view(-1, k.shape[-1])
    q_o = torch.empty_like(q_2d)
    k_o = torch.empty_like(k_2d)
    flashinfer.norm.rmsnorm(q_2d, norm_weight, out=q_o)
    flashinfer.norm.rmsnorm(k_2d, norm_weight, out=k_o)
    return q_o.view(q.shape), k_o.view(k.shape)
```
Baseline Implementation:
```python
def customized_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):
    q_o = torch.empty(q.shape, device=q.device, dtype=q.dtype)
    k_o = torch.empty(k.shape, device=k.device, dtype=k.dtype)
    flashinfer.norm.rmsnorm(q, norm_weight, out=q_o)
    flashinfer.norm.rmsnorm(k, norm_weight, out=k_o)
    return q_o, k_o
```
API Usage Notes
---------------
- The evaluator looks for a `qknorm` function in the module namespace
- Function must handle tensor reshaping correctly (q and k may have arbitrary shapes)
- Must use `flashinfer.norm.rmsnorm` for normalization
- Function returns a tuple of (q_normalized, k_normalized) tensors
- **Important**: Inputs q and k may be **non-contiguous** due to weight-QKV fusion
- **Avoid**: Additional `.contiguous()` or `.transpose()` calls that trigger memory copies
- **Focus**: Vectorized memory access and operation fusion to minimize kernel launches
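For intuition, the semantics the kernel must reproduce can be sketched in pure torch. This is a reference for correctness checking only, not a submission (the spec requires calling `flashinfer.norm.rmsnorm`), and the `eps` value is an assumption. Note that every op below reads the inputs through their strides, so non-contiguous views are handled without a `.contiguous()` copy:

```python
import torch

def qknorm_reference(q: torch.Tensor, k: torch.Tensor,
                     norm_weight: torch.Tensor, eps: float = 1e-6):
    """Pure-torch RMSNorm sketch (hypothetical eps; not the required
    flashinfer path). Reductions and elementwise ops read strided views
    directly, so no .contiguous() copy is triggered."""
    def rmsnorm(x: torch.Tensor) -> torch.Tensor:
        # Accumulate in fp32 for stability, reduce over the hidden dim.
        var = x.float().pow(2).mean(dim=-1, keepdim=True)
        return (x.float() * torch.rsqrt(var + eps)
                * norm_weight.float()).to(x.dtype)
    return rmsnorm(q), rmsnorm(k)
```

Because the reduction runs over the last dimension, this works for arbitrary leading shapes without any reshape at all.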
Scoring (0-100)
---------------
Performance is measured against baseline implementations:
```
geometric_mean_speedup = geometric_mean(baseline_times / answer_times)
if correctness fails or speedup < 0.5:
    score = 0
elif speedup < 1.0:
    score = 50
else:
    score = 100
```
- 0 points = Speedup < 0.5x OR correctness fails
- 50 points = Speedup >= 0.5x and < 1.0x
- 100 points = Speedup >= 1.0x
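The scoring rule above can be sketched in plain Python (the helper names are illustrative, not the evaluator's actual code):

```python
import math

def geometric_mean_speedup(baseline_times, answer_times):
    # Per-shape speedup = baseline time / answer time (higher is better).
    ratios = [b / a for b, a in zip(baseline_times, answer_times)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def score(speedup: float, correct: bool = True) -> int:
    # Correctness gates everything: a wrong answer scores 0 regardless.
    if not correct or speedup < 0.5:
        return 0
    return 50 if speedup < 1.0 else 100
```

For example, being 2x faster on one shape and 8x faster on another gives a geometric mean speedup of 4x, which scores 100.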
Evaluation Details
------------------
- Test shapes vary batch size, head dim, number of KV heads, and number of QO heads, e.g.:
- (16, 8, 32, 128)
- (128, 32, 32, 64)
- Correctness verified with tolerance: rtol=1e-2, atol=5e-3
- Performance measured using median execution time
- Requires CUDA backend and GPU support
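The stated tolerance can be reproduced with `torch.allclose`; the tensor values below are made up purely to illustrate what passes:

```python
import torch

# Made-up values within rtol=1e-2, atol=5e-3 of the reference.
expected = torch.tensor([1.000, -2.000, 0.500])
actual = torch.tensor([1.004, -1.990, 0.498])

# Passes iff |actual - expected| <= atol + rtol * |expected| elementwise.
assert torch.allclose(actual, expected, rtol=1e-2, atol=5e-3)
```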