Instructions for using kernels-community/flash-attn2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Kernels
How to use kernels-community/flash-attn2 with Kernels:
```python
# !pip install kernels
from kernels import get_kernel

kernel = get_kernel("kernels-community/flash-attn2")
```
- Notebooks
- Google Colab
- Kaggle
Build variants:
- torch210-cxx11-cpu-x86_64-linux
- torch210-cxx11-cu126-x86_64-linux
- torch210-cxx11-cu128-x86_64-linux
- torch210-cxx11-cu130-x86_64-linux
- torch210-cxx11-xpu20253-x86_64-linux
- torch27-cxx11-cu118-x86_64-linux
- torch27-cxx11-cu126-x86_64-linux
- torch27-cxx11-cu128-x86_64-linux
- torch28-cxx11-cpu-x86_64-linux
- torch28-cxx11-cu126-x86_64-linux
- torch28-cxx11-cu128-x86_64-linux
- torch28-cxx11-cu129-x86_64-linux
- torch28-cxx11-xpu20251-x86_64-linux
- torch29-cxx11-cpu-x86_64-linux
- torch29-cxx11-cu126-x86_64-linux
- torch29-cxx11-cu128-x86_64-linux
- torch29-cxx11-cu130-x86_64-linux
- torch29-cxx11-xpu20252-x86_64-linux
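Each variant name above encodes one build configuration: PyTorch version, C++ ABI, compute backend (CPU, CUDA, or Intel XPU), CPU architecture, and OS. As a minimal sketch, the hypothetical helper below (not part of the kernels library) decodes such a name under the assumption that every tag follows the five-field pattern seen in the list:

```python
# Decode a build-variant name like "torch28-cxx11-cu128-x86_64-linux".
# parse_variant is an illustrative helper, not part of the kernels library.

def parse_variant(name: str) -> dict:
    torch_tag, abi, backend, arch, os_name = name.split("-")
    # "torch28" -> "2.8"; "torch210" -> "2.10"
    digits = torch_tag.removeprefix("torch")
    torch_version = f"{digits[0]}.{digits[1:]}"
    if backend == "cpu":
        backend_desc = "CPU-only"
    elif backend.startswith("cu"):
        # "cu128" -> CUDA 12.8 (last digit is the minor version)
        backend_desc = f"CUDA {backend[2:-1]}.{backend[-1]}"
    elif backend.startswith("xpu"):
        # "xpu20251" -> Intel XPU toolkit 2025.1 (assumed year.minor layout)
        backend_desc = f"Intel XPU {backend[3:7]}.{backend[7:]}"
    else:
        raise ValueError(f"unknown backend tag: {backend}")
    return {
        "torch": torch_version,
        "abi": abi,          # cxx11 = the C++11 ABI
        "backend": backend_desc,
        "arch": arch,
        "os": os_name,
    }

print(parse_variant("torch28-cxx11-cu128-x86_64-linux"))
```

When `get_kernel` runs, the library selects the variant matching your installed PyTorch, ABI, and accelerator, so you normally never handle these names directly; a parser like this is only useful for checking whether a prebuilt variant exists for your environment.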