lhallee committed
Commit c79d278 · verified · 1 Parent(s): 05b72df

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ The small version corresponds to the 300 million parameter version of ESMC.
 | Backend | Key | Notes |
 | :--- | :--- | :--- |
 | PyTorch SDPA | `"sdpa"` | Default. Exact numerics, stable on all hardware. |
-| Flash Attention | `"kernels_flash"` | Fastest on Ampere/Hopper GPUs. Requires `pip install kernels` (pre-built — no hours-long compilation). Outputs differ slightly from SDPA due to online softmax reordering, but differences are numerically harmless. |
+| Flash Attention | `"kernels_flash"` | Fastest on Ampere/Hopper GPUs. Requires `pip install kernels` (pre-built — no hours-long compilation). Outputs are not bitwise identical to SDPA due to online softmax reordering; differences are often small but not guaranteed to be inconsequential — use `"sdpa"` if exact numerics matter. |
 | Flex Attention | `"flex"` | Skips padding tokens via block mask — faster on variable-length batches. Near-exact numerics. First use compiles a Triton kernel (30–120 s). Best combined with `torch.compile`. |
 | Auto | `"auto"` | Picks the best available: `kernels_flash` → `flex` → `sdpa`. |
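
The "online softmax reordering" the changed row refers to can be illustrated with a small self-contained sketch. This is plain Python showing the algorithmic idea (a one-pass softmax with a running max and rescaled running sum, as used by Flash Attention), not the actual `kernels` implementation:

```python
import math

def softmax(xs):
    # Standard two-pass softmax: find the max, exponentiate, normalize.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def online_softmax(xs):
    # One-pass "online" softmax: maintain a running max and running sum,
    # rescaling previously accumulated terms whenever a new max appears.
    # Reordering the additions this way changes rounding, which is why
    # Flash Attention outputs are not bitwise identical to SDPA.
    m = float("-inf")
    s = 0.0
    exps = []
    for x in xs:
        new_m = max(m, x)
        scale = math.exp(m - new_m)  # 0.0 on the first element (m is -inf)
        s = s * scale + math.exp(x - new_m)
        exps = [e * scale for e in exps] + [math.exp(x - new_m)]
        m = new_m
    return [e / s for e in exps]

scores = [0.3, -1.2, 2.5, 0.9, -0.4]
a = softmax(scores)
b = online_softmax(scores)
max_diff = max(abs(x - y) for x, y in zip(a, b))
print(max_diff)  # on the order of float rounding error, not exactly zero in general
```

Both passes compute the same mathematical function; only the accumulation order differs, so discrepancies are at the level of floating-point rounding.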