Create README.md #1
by adarshxs - opened

Files changed (1)
  1. README.md +71 -0

README.md ADDED
---
license: bsd-3-clause
tags:
- kernels
---

# sglang-flash-attn3

Pre-built Flash Attention 3 (forward-only) CUDA kernels from [sgl-flash-attn](https://github.com/sgl-project/sgl-flash-attn), packaged for the [HuggingFace kernels library](https://github.com/huggingface/kernels). Requires Hopper (sm_90+).

Kernel source: [kernels-community/sgl-flash-attn3](https://huggingface.co/kernels-community/sgl-flash-attn3/tree/v1) (branch: `v1`)
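
Because the kernels are Hopper-only, it can help to gate on the device before fetching them. A minimal sketch using plain PyTorch (the loaded kernel's `is_fa3_supported()`, shown under Usage below, performs the equivalent check once the kernel is available):

```python
import torch

# The FA3 kernels in this repo require Hopper (sm_90 or newer).
major, minor = torch.cuda.get_device_capability()
if (major, minor) < (9, 0):
    raise RuntimeError(f"Hopper (sm_90+) required, found sm_{major}{minor}")
```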

## Usage

```bash
pip install kernels
```

```python
from kernels import get_kernel

fa3 = get_kernel("kernels-community/sgl-flash-attn3", revision="v1")

fa3.flash_attn_varlen_func(q, k, v, cu_seqlens_q, cu_seqlens_k, causal=True)
fa3.flash_attn_with_kvcache(q, k_cache, v_cache, cache_seqlens=cache_seqlens, causal=True)
fa3.is_fa3_supported()  # True on H100/H200
```
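
For a self-contained call, here is a sketch that packs two variable-length sequences. The `(total_tokens, num_heads, head_dim)` layout, bf16 dtype, and int32 `cu_seqlens` follow the usual Flash Attention varlen convention and are assumptions rather than this repo's documented API; some FA3 builds also expect explicit `max_seqlen_q`/`max_seqlen_k` arguments.

```python
import torch
from kernels import get_kernel

fa3 = get_kernel("kernels-community/sgl-flash-attn3", revision="v1")

# Two sequences of lengths 3 and 5 packed into one ragged batch.
# Assumed layout: (total_tokens, num_heads, head_dim), bf16, on a Hopper GPU.
cu_seqlens = torch.tensor([0, 3, 8], dtype=torch.int32, device="cuda")
total_tokens, num_heads, head_dim = 8, 8, 128

q = torch.randn(total_tokens, num_heads, head_dim, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Same call form as the snippet above; the cumulative offsets delimit each sequence.
# The output is expected to share q's packed layout (assumption).
out = fa3.flash_attn_varlen_func(q, k, v, cu_seqlens, cu_seqlens, causal=True)
```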

## SGLang Integration

Entry point: [`python/sglang/srt/layers/attention/flashattention_backend.py`](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/layers/attention/flashattention_backend.py)

Original:
```python
from sgl_kernel.flash_attn import flash_attn_varlen_func as flash_attn_varlen_func_fa3
from sgl_kernel.flash_attn import flash_attn_with_kvcache as flash_attn_with_kvcache_fa3
```

Replace with:
```python
from kernels import get_kernel
_fa3_mod = get_kernel("kernels-community/sgl-flash-attn3", revision="v1")
flash_attn_varlen_func_fa3 = _fa3_mod.flash_attn_varlen_func
flash_attn_with_kvcache_fa3 = _fa3_mod.flash_attn_with_kvcache
```

Same pattern in 5 other files:
- `dual_chunk_flashattention_backend.py`
- `nsa_backend.py`
- `xpu_backend.py`
- `vision.py`
- `multimodal_gen/runtime/layers/attention/backends/flash_attn.py`
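
A guarded variant of the same replacement (a sketch, not part of any upstream patch) keeps the native `sgl_kernel` import as a fallback when the Hub kernel cannot be fetched, e.g. on offline clusters:

```python
# Prefer the Hub-packaged FA3 kernel; fall back to the native sgl_kernel build
# if fetching or loading it fails (e.g. no network access). Hypothetical sketch.
try:
    from kernels import get_kernel

    _fa3_mod = get_kernel("kernels-community/sgl-flash-attn3", revision="v1")
    flash_attn_varlen_func_fa3 = _fa3_mod.flash_attn_varlen_func
    flash_attn_with_kvcache_fa3 = _fa3_mod.flash_attn_with_kvcache
except Exception:
    from sgl_kernel.flash_attn import flash_attn_varlen_func as flash_attn_varlen_func_fa3
    from sgl_kernel.flash_attn import flash_attn_with_kvcache as flash_attn_with_kvcache_fa3
```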

## Benchmarks

H100 NVL, Qwen2.5-3B-Instruct, FA3 attention backend. All deltas are within run-to-run noise: **no performance regression**.

| Scenario | Native `sgl_kernel` FA3 (tok/s) | HF Hub FA3 (tok/s) | Δ |
|:--|--:|--:|:--|
| Short Gen (128→32) | 40,934 | 39,878 | -2.6% |
| Long Gen (256→1024) | 25,054 | 26,239 | +4.7% |
| Long Prefill (2048→128) | 53,833 | 54,283 | +0.8% |
| Bursty (512→256, 16 rps) | 6,518 | 6,527 | +0.1% |
| High Concurrency (256→256) | 40,666 | 40,522 | -0.4% |

## Credits

- [Tri Dao](https://tridao.me/) - [Flash Attention 3](https://tridao.me/blog/2024/flash3/)
- [SGLang](https://github.com/sgl-project/sglang) - `sgl_kernel` FA3 implementation
- [HuggingFace](https://huggingface.co/kernels-community) - [kernel-builder](https://huggingface.co/blog/kernel-builder) infrastructure