Mixed GEMM Optimization Problem
=================================

Problem Setting
---------------
Design and optimize high-performance Triton kernels for Mixed GEMM (Linear + Bias + GELU) computation on GPU. The task is to implement fused kernels that combine matrix multiplication, bias addition, and GELU activation using Triton's JIT compilation system.

The challenge involves optimizing:
- **Fused computation**: Efficiently combining linear layer (X @ W + B) with GELU activation
- **Memory access patterns**: Efficient loading and storing of X, W, B tensors
- **Mixed precision**: Handling float16 inputs/outputs with float32 bias and accumulation
- **GELU activation**: Implementing efficient GELU computation using CUDA libdevice functions
- **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes
- **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations
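
For reference, the unfused computation being targeted can be sketched in a few lines of PyTorch. This follows the baseline convention noted later in this spec (float32 compute for stability, float16 output); the harness's actual baseline code is not shown here, so treat this as illustrative:

```python
import torch
import torch.nn.functional as F

def linear_gelu_reference(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # Unfused: matmul, bias add, then exact (erf-based) GELU,
    # computed in float32 and cast back to the float16 output dtype.
    y = X.float() @ W.float() + B
    return F.gelu(y).to(torch.float16)
```

A fused kernel avoids the extra global-memory round trips this version incurs between the matmul, the bias add, and the activation.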

Target
------
- **Primary**: Maximize geometric mean speedup over baseline (higher is better)
- **Secondary**: Ensure correctness across diverse matrix sizes
- **Tertiary**: Minimize kernel launch overhead and memory usage

API Specification
-----------------
Implement a `Solution` class that returns a Triton kernel implementation:

```python
class Solution:
    def solve(self, spec_path: str = None) -> dict:
        """
        Returns a dict with either:
        - {"code": "python_code_string"}
        - {"program_path": "path/to/kernel.py"}
        """
        # Your implementation
        pass
```
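
As one possible packaging (the `kernel.py` file name and layout here are an assumption, not part of the spec), a `Solution` might simply point the harness at a kernel file shipped alongside the solution module:

```python
import os

class Solution:
    def solve(self, spec_path: str = None) -> dict:
        # Point at a kernel file next to this module; returning
        # {"code": "<source string>"} would work equally well.
        kernel_path = os.path.join(os.path.dirname(__file__), "kernel.py")
        return {"program_path": kernel_path}
```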

Your kernel implementation must provide:

```python
import torch
import triton
import triton.language as tl

def linear_gelu(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """
    Linear layer with GELU activation computation.
    
    Args:
        X: Input tensor of shape (M, K) - input features (float16)
        W: Weight tensor of shape (K, N) - weight matrix (float16)
        B: Bias tensor of shape (N,) - bias vector (float32)
    
    Returns:
        Output tensor of shape (M, N) - output with GELU activation (float16)
    """
    # Your implementation
    pass
```
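
A minimal, untuned sketch of one way to implement `linear_gelu` as a single fused Triton kernel is shown below. The block sizes, the absence of autotuning, and the assumption of row-major contiguous tensors are all simplifications; a competitive solution would at least add `triton.autotune` configurations over several block shapes:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def _linear_gelu_kernel(
    x_ptr, w_ptr, b_ptr, out_ptr,
    M, N, K,
    stride_xm, stride_xk,
    stride_wk, stride_wn,
    stride_om, stride_on,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr,
):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)

    x_ptrs = x_ptr + offs_m[:, None] * stride_xm + offs_k[None, :] * stride_xk
    w_ptrs = w_ptr + offs_k[:, None] * stride_wk + offs_n[None, :] * stride_wn

    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)  # float32 accumulation
    for k in range(0, K, BLOCK_K):
        x_mask = (offs_m[:, None] < M) & (offs_k[None, :] + k < K)
        w_mask = (offs_k[:, None] + k < K) & (offs_n[None, :] < N)
        x = tl.load(x_ptrs, mask=x_mask, other=0.0)
        w = tl.load(w_ptrs, mask=w_mask, other=0.0)
        acc += tl.dot(x, w)
        x_ptrs += BLOCK_K * stride_xk
        w_ptrs += BLOCK_K * stride_wk

    # Bias (float32) after the matmul, then erf-based GELU, all in float32.
    b = tl.load(b_ptr + offs_n, mask=offs_n < N, other=0.0)
    acc = acc + b[None, :]
    # Note: the libdevice module path can vary across Triton versions.
    acc = acc * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(acc * 0.7071067811865476))

    out_ptrs = out_ptr + offs_m[:, None] * stride_om + offs_n[None, :] * stride_on
    out_mask = (offs_m[:, None] < M) & (offs_n[None, :] < N)
    tl.store(out_ptrs, acc.to(tl.float16), mask=out_mask)


def linear_gelu(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    M, K = X.shape
    _, N = W.shape
    out = torch.empty((M, N), device=X.device, dtype=torch.float16)
    BLOCK_M, BLOCK_N, BLOCK_K = 64, 64, 32  # illustrative, untuned
    grid = (triton.cdiv(M, BLOCK_M), triton.cdiv(N, BLOCK_N))
    _linear_gelu_kernel[grid](
        X, W, B, out,
        M, N, K,
        X.stride(0), X.stride(1),
        W.stride(0), W.stride(1),
        out.stride(0), out.stride(1),
        BLOCK_M=BLOCK_M, BLOCK_N=BLOCK_N, BLOCK_K=BLOCK_K,
    )
    return out
```

Keeping the bias add and GELU inside the matmul epilogue is what removes the intermediate (M, N) tensor from global memory.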

Input Specifications
--------------------
- **X**: Input tensor of shape `(M, K)` where:
  - `M`: Batch size (tested with 512, 1024)
  - `K`: Input feature dimension (typically 4096)
  - dtype: `torch.float16`
- **W**: Weight tensor of shape `(K, N)` where:
  - `N`: Output feature dimension (typically 4096)
  - dtype: `torch.float16`
- **B**: Bias tensor of shape `(N,)` where:
  - dtype: `torch.float32`
- All inputs are on CUDA device

Output Specifications
---------------------
- Output tensor of shape `(M, N)` matching the input batch and output feature dimensions
- Output dtype: `torch.float16`
- Output device: Same as input (CUDA)

Correctness Requirements
------------------------
- Numerical correctness verified against PyTorch baseline implementation
- Relative tolerance: 1e-2, absolute tolerance: 5e-3 (see the self-check sketch after this list)
- All test cases must pass for any score above 0
- GELU activation must be correctly implemented
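
A self-check along these lines can catch tolerance failures before submission. The data distribution and weight scale used here are assumptions (the harness's generator is not specified beyond its fixed seed):

```python
import torch
import torch.nn.functional as F

def self_check(linear_gelu):
    torch.manual_seed(0)
    X = torch.randn(512, 4096, device="cuda", dtype=torch.float16)
    W = torch.randn(4096, 4096, device="cuda", dtype=torch.float16) * 0.02  # scaled to stay in fp16 range
    B = torch.randn(4096, device="cuda", dtype=torch.float32)
    ref = F.gelu(X.float() @ W.float() + B).to(torch.float16)
    out = linear_gelu(X, W, B)
    torch.testing.assert_close(out, ref, rtol=1e-2, atol=5e-3)
```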

Scoring (0-100)
---------------
Performance is measured against GPU baseline implementations:

```
import math

def geometric_mean(times):
    return math.exp(sum(math.log(t) for t in times) / len(times))

geometric_mean_gpu_time = geometric_mean(gpu_baseline_times)  # baseline timings
geometric_mean_answer_time = geometric_mean(answer_times)     # your kernel's timings

# Linear interpolation: 0 points = 1x GPU baseline, 100 points = 3x GPU baseline
target_time_0 = geometric_mean_gpu_time          # 0 points (1x GPU baseline)
target_time_100 = geometric_mean_gpu_time / 3.0  # 100 points (3x speedup over GPU)
score = 100 * (target_time_0 - geometric_mean_answer_time) / (target_time_0 - target_time_100)
```

- 0 points = 1x GPU baseline performance
- 100 points = 3x speedup over GPU baseline
- Score is linearly interpolated between these two points

Note: correctness is verified against the GPU baseline before any performance score is assigned.

Evaluation Details
------------------
- Test cases: M = 512, 1024 (with N = 4096, K = 4096)
- Warmup phase: 10 iterations to stabilize GPU clocks and caches (reproduced in the timing sketch after this list)
- Random seed: Fixed seed (0) for reproducible data generation
- Strict correctness: Any test failure results in score of 0
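
A typical way to reproduce the measurement methodology is warmup followed by averaged CUDA-event timing; the harness's exact iteration count is not specified, so 100 here is an assumption:

```python
import torch

def bench_ms(fn, iters=100, warmup=10):
    for _ in range(warmup):  # stabilize GPU clocks and caches
        fn()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # average milliseconds per call
```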

Additional Notes
----------------
- The benchmark computes the PyTorch baseline in float32 (for numerical stability) but evaluates answers in float16
- GELU formula: gelu(x) = x * 0.5 * (1.0 + erf(x * 0.7071067811865476)); see the verification sketch below
- Consider using the CUDA libdevice erf function: `tl.extra.cuda.libdevice.erf`
- Accumulation should use float32 for numerical stability
- Bias addition should be done after matrix multiplication but before GELU
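
The erf-based formula above matches PyTorch's default (exact) GELU, which is easy to confirm on the host side:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1024, dtype=torch.float32)
exact = F.gelu(x)  # default approximate="none" is the erf-based GELU
formula = x * 0.5 * (1.0 + torch.erf(x * 0.7071067811865476))
assert torch.allclose(exact, formula, atol=1e-6)
```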