---
library_name: kernels
license: apache-2.0
---

This is the repository card of `kernels-community/flash-attn3`, which has been pushed to the Hub. It is built to be used with the [`kernels` library](https://github.com/huggingface/kernels). This card was automatically generated.


## How to use

```python
# Make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

# Download and load the kernel from the Hub (change the ID if needed)
kernel_module = get_kernel("kernels-community/flash-attn3")
flash_attn_combine = kernel_module.flash_attn_combine

flash_attn_combine(...)
```

## Available functions

- `flash_attn_combine`
- `flash_attn_func`
- `flash_attn_qkvpacked_func`
- `flash_attn_varlen_func`
- `flash_attn_with_kvcache`
- `get_scheduler_metadata`

## Supported backends

- cuda

## CUDA capabilities

- 8.0
- 9.0a

## Benchmarks

A benchmarking script is available for this kernel; run `kernels benchmark kernels-community/flash-attn3` to execute it.

[TODO: provide benchmarks if available]

## Source code

[TODO: provide original source code and other relevant citations if available]

## Notes

[TODO: provide additional notes about this kernel if needed]