update [markdown] - [cleaned] ✅
#8 by prithivMLmods - opened

README.md CHANGED

```diff
@@ -41,7 +41,8 @@ This repository provides pre-built wheels for `flash-attn` version **2.8.3** for

 ## Linux x86_64

-### Torch 2.9
+### **<span style="color:orangered;">Torch 2.9</span>**
+
 **ABI: `TRUE` (Implied)**

 #### Python 3.12 (`cp312`)
@@ -53,7 +54,7 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ---

-### Torch 2.8
+### **<span style="color:orangered;">Torch 2.8</span>**

 #### ABI: `FALSE`

@@ -111,7 +112,7 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ---

-### Torch 2.7
+### **<span style="color:orangered;">Torch 2.7</span>**

 #### ABI: `FALSE`

@@ -169,7 +170,7 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ---

-### Torch 2.6
+### **<span style="color:orangered;">Torch 2.6</span>**

 #### ABI: `FALSE`

@@ -227,7 +228,7 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ---

-### Torch 2.5
+### **<span style="color:orangered;">Torch 2.5</span>**

 #### ABI: `FALSE`

@@ -285,7 +286,7 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ---

-### Torch 2.4
+### **<span style="color:orangered;">Torch 2.4</span>**

 #### ABI: `FALSE`

@@ -335,7 +336,7 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ## Linux aarch64 (ARM)

-### Torch 2.9 (Nightly/Pre-release)
+### **<span style="color:orangered;">Torch 2.9 (Nightly/Pre-release)</span>**

 #### ABI: `TRUE`

@@ -348,5 +349,5 @@ flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse

 ## Acknowledgements and Note:

-- Dao-AILab [flash-attention]: [Faster Attention with Better Parallelism and Work Partitioning](https://
-- Note: This repository follows the same license, release notices, and other terms and conditions as the Dao-AILab [flash-attention](https://github.com/Dao-AILab/flash-attention) repository.
+- Dao-AILab [flash-attention]: [Faster Attention with Better Parallelism and Work Partitioning](https://arxiv.org/pdf/2205.14135).
+- Note: This repository follows the same license, release notices, and other terms and conditions as the Dao-AILab [flash-attention](https://github.com/Dao-AILab/flash-attention) repository.
```
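For readers of this diff: the `flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse/...` lines visible in the hunk context are PEP 508 direct-URL requirements pointing at the pre-built wheels the README catalogs. A minimal, hypothetical usage sketch follows; the wheel path is a placeholder because the full URLs are truncated in this diff view:

```bash
# Hypothetical sketch: installing one of the pre-built wheels via the direct-URL
# requirement syntax shown in the README. <path-to-wheel>.whl is a placeholder,
# not an actual file name taken from the repository.
pip install "flash-attn @ https://huggingface.co/strangertoolshf/flash_attention_2_wheelhouse/<path-to-wheel>.whl"
```

The same `flash-attn @ https://...` line can also be kept in a `requirements.txt` and installed with `pip install -r requirements.txt`.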