Request for Python 3.12 wheel: flash-attn for CUDA 12.8 + PyTorch 2.9.1
#19 opened by StableDiffusion69
Can you please compile flash-attn for Python 3.12 with CUDA 12.8 + PyTorch 2.9.1?
Thank you.
I second this. :)
The CUDA 13.0 wheel listed at https://github.com/wildminder/AI-windows-whl?tab=readme-ov-file#available-wheels worked for me.
Doesn't work for me, sadly.