---
library_name: kernels
license: apache-2.0
---
This is the repository card of `kernels-community/flash-attn3`, which has been pushed to the Hub. It was built to be used with the [`kernels` library](https://github.com/huggingface/kernels).
## How to use
```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel
kernel_module = get_kernel("kernels-community/flash-attn3") # <- change the ID if needed
flash_attn_combine = kernel_module.flash_attn_combine
flash_attn_combine(...)
```
## Available functions
- `flash_attn_combine`
- `flash_attn_func`
- `flash_attn_qkvpacked_func`
- `flash_attn_varlen_func`
- `flash_attn_with_kvcache`
- `get_scheduler_metadata`
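
The attention functions above (e.g. `flash_attn_func`) compute scaled dot-product attention, fused into a single GPU kernel. As a reference for what is being computed, here is a minimal NumPy sketch; `reference_attention` is an illustrative helper of my own, and the real kernel operates on batched fp16/bf16 CUDA tensors with a different signature, so treat this only as a numerical description, not as the kernel's API:

```python
import numpy as np

def reference_attention(q, k, v):
    """Unbatched scaled dot-product attention, for reference only.

    q: (seqlen_q, nheads, headdim)
    k, v: (seqlen_k, nheads, headdim)
    returns: (seqlen_q, nheads, headdim)
    """
    scale = 1.0 / np.sqrt(q.shape[-1])
    # Attention scores per head: (nheads, seqlen_q, seqlen_k)
    scores = np.einsum("thd,shd->hts", q, k) * scale
    # Numerically stable softmax over the key axis
    scores -= scores.max(axis=-1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    # Weighted sum of values, back to (seqlen_q, nheads, headdim)
    return np.einsum("hts,shd->thd", probs, v)
```

With a single key/value position the softmax weights are all 1, so every query position simply returns that value vector, which is a quick sanity check on the implementation.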
## Supported backends
- cuda
## CUDA Capabilities
- 8.0
- 9.0a
## Benchmarks
A benchmarking script is available for this kernel; run `kernels benchmark kernels-community/flash-attn3` to execute it.
[TODO: provide benchmarks if available]
## Source code
[TODO: provide original source code and other relevant citations if available]
## Notes
[TODO: provide additional notes about this kernel if needed]