# Prebuilt wheels for popular packages with support for NVIDIA Blackwell RTX GPUs
**Used with Torch 2.7/2.8 nightly cu128**
# Flash Attention 2.7.4.post1
[Linux/python 3.12/pytorch 2.7](https://huggingface.co/Minthy/torch_2.7_dev_cu128_buildings/blob/main/flash_attn-2.7.4.post1-cp312-cp312-linux_x86_64.whl)
[Windows/python 3.10/pytorch 2.8](https://huggingface.co/Minthy/torch_2.7_dev_cu128_buildings/blob/main/flash_attn-2.7.4.post1-cp310-cp310-win_amd64.whl)
# xFormers 0.0.30
[Linux/python 3.12/pytorch 2.7](https://huggingface.co/Minthy/torch_2.7_dev_cu128_buildings/blob/main/xformers-0.0.30%2B52f96c05.d20250313-cp312-cp312-linux_x86_64.whl) **(compatible with the latest Triton and with distributed training)**
[Windows/python 3.10/pytorch 2.8](https://huggingface.co/Minthy/torch_2.7_dev_cu128_buildings/blob/main/xformers-0.0.30%2B9a2cd3ef.d20250324-cp310-cp310-win_amd64.whl)
# Triton 3.3.0
[Linux/python 3.12/pytorch 2.7](https://huggingface.co/Minthy/torch_2.7_dev_cu128_buildings/blob/main/triton-3.3.0%2Bgit1c5ea229-cp312-cp312-linux_x86_64.whl) **(works with the latest NCCL library carrying the Blackwell patch for distributed training)**
# NCCL fix for low-RAM rigs
[No longer needed](https://github.com/NVIDIA/nccl-tests/issues/287) — just install the latest version.
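Each wheel above is built for one specific CPython version and platform, encoded in its filename tags (e.g. `cp312-cp312-linux_x86_64`). A minimal helper, not part of this repo, to check which file matches your environment before downloading:

```python
import platform
import sys


def wheel_tags():
    """Return the (python-tag, platform-tag) pair that must appear in a
    compatible wheel filename, e.g. ('cp312', 'linux_x86_64')."""
    py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    plat_tag = "win_amd64" if platform.system() == "Windows" else "linux_x86_64"
    return py_tag, plat_tag


def matches(wheel_filename: str) -> bool:
    """True if the wheel filename's tags match the current interpreter/platform."""
    py_tag, plat_tag = wheel_tags()
    return py_tag in wheel_filename and plat_tag in wheel_filename
```

For example, `matches("flash_attn-2.7.4.post1-cp312-cp312-linux_x86_64.whl")` is true only on Linux with Python 3.12. Note that to feed a Hugging Face link directly to `pip install`, use the direct-download form of the URL (`/resolve/main/…`) rather than the `/blob/main/…` page link.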