How to use the Wan2.2 model with a 5060 Ti 16GB GPU?
I tried installing it, but it’s not compatible. Are there any methods to make it work with this graphics card? Thanks!
You can try the GGUF version 😊: https://www.reddit.com/r/comfyui/comments/1nmx10l/wan22_animate_gguf_q4_test/
Same issue on a 5090, I guess?
Hello! I also have a 5060 Ti with 16 GB VRAM. I just bought another 32 GB of RAM, bringing my total to 64 GB.
In the case of WAN 2.2, I use Q8 GGUF models in ComfyUI without any issues. I generate 5-second videos at 1024×574 resolution in approximately 7 minutes with Sage Attention 2.2 enabled.
FP8 models also work well, although the I2V (image-to-video) model isn't available in FP8 yet. Back in WAN 2.1, I used FP8 models exclusively.
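A quick sanity check on why Q8/FP8 is the difference between fitting and not fitting on a 16 GB card. This is a minimal back-of-the-envelope sketch; the ~14B parameter count (going by WAN 2.2's "A14B" model naming) and the Q8_0 cost per weight are my assumptions, flagged in the comments:

```python
# Rough VRAM needed just for the model weights at different precisions.
# Assumptions (mine, not from the thread):
#   - ~14e9 parameters, based on WAN 2.2's "A14B" model naming
#   - Q8_0 GGUF costs ~8.5 bits/weight (32 int8 weights plus one fp16
#     scale per block = 34 bytes per 32 weights)
GIB = 1024 ** 3

def weight_size_gib(n_params: float, bytes_per_param: float) -> float:
    """Size of the weights alone, in GiB; activations, the text
    encoder, and the VAE all come on top of this."""
    return n_params * bytes_per_param / GIB

params = 14e9  # assumed parameter count
for name, bpp in [("FP16", 2.0), ("Q8_0 GGUF", 34 / 32), ("FP8", 1.0)]:
    print(f"{name:10s} ~{weight_size_gib(params, bpp):5.1f} GiB")
```

Under those assumptions, FP16 weights alone come to roughly 26 GiB, which blows past a 16 GB card before you've loaded anything else, while Q8/FP8 land around 13-14 GiB and leave headroom.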
I installed WAN 2.2 on a 5060 Ti 16GB and it reported an incompatibility error with the RTX 50-series card. Could you please give me a link to an installation guide? Thank you very much!
Hi! Well, it depends on the app you're using. If you're just getting started, install Pinokio (you can download it from pinokio.co). If you're using ComfyUI, you'll need to follow all the steps yourself: install the NVIDIA drivers, get CUDA from NVIDIA's site, choose a template that interests you, and then install the nodes the app asks for. As for installing Sage Attention, I had to watch tons of YouTube videos. It was really tough, haha!
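One note on the "incompatibility" error mentioned above: as far as I understand it, RTX 50-series (Blackwell, compute capability sm_120) GPUs need a PyTorch build compiled against CUDA 12.8 or newer (e.g. `pip install torch --index-url https://download.pytorch.org/whl/cu128`); older cu12x wheels simply don't ship sm_120 kernels. An illustrative sketch of that version logic (the arch/CUDA pairs are my understanding, not an official table):

```python
# Why stock PyTorch builds can fail on RTX 50-series cards: the GPU's
# compute capability must be covered by the CUDA toolkit the wheel was
# built against. Minimum versions below are my understanding.
MIN_CUDA = {
    "sm_86": (11, 1),   # RTX 30-series (Ampere)
    "sm_89": (11, 8),   # RTX 40-series (Ada)
    "sm_120": (12, 8),  # RTX 50-series (Blackwell)
}

def wheel_supports(arch: str, wheel_cuda: tuple[int, int]) -> bool:
    """True if a wheel built against `wheel_cuda` can target `arch`."""
    return wheel_cuda >= MIN_CUDA[arch]

# A cu121 wheel can't target a 5060 Ti, but a cu128 one can:
print(wheel_supports("sm_120", (12, 1)))   # False
print(wheel_supports("sm_120", (12, 8)))   # True
```

So if ComfyUI throws a "GPU not supported" style error on a 5060 Ti or 5090, reinstalling PyTorch from a cu128-or-newer index is usually the first thing to try.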
If anyone else is going through drama trying to get Sage Attention 2 installed, I just spent a day figuring it out. My advice as a total scrub: grab Torch 2.9 (the cu129 build) and Python 3.12, then grab the compatible wheel from here: https://github.com/woct0rdho/SageAttention/releases/tag/v2.2.0-windows.post3 (post4 is in prerelease, so stick with post3 until you know it's working).

While we're at it, if you're new to all this you might not know you can also grab prebuilt wheels for Flash Attention here: https://huggingface.co/ussoewwin/Flash-Attention-2_for_Windows/tree/main and Nunchaku here: https://huggingface.co/nunchaku-tech/nunchaku/tree/main

To install them, just activate your ComfyUI venv and run uv pip install c:/path/to/filename.whl (or drop the uv if you don't have uv installed).
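For anyone who'd rather copy-paste, those steps boil down to something like the following. The paths and wheel filenames are placeholders, not real filenames; swap in the wheels you actually downloaded, and make sure their tags match your Python/Torch versions:

```shell
# From your ComfyUI folder: activate the venv, sanity-check that your
# Python/Torch versions match the wheel's tags, then install the wheels.
# Windows-style paths; filenames are placeholders.
.\venv\Scripts\activate

python --version                                    # should report 3.12.x
python -c "import torch; print(torch.__version__)"  # should report 2.9.x+cu129

# Install each downloaded wheel (drop "uv" if you don't use uv):
uv pip install C:\path\to\sageattention_wheel.whl
uv pip install C:\path\to\flash_attn_wheel.whl
uv pip install C:\path\to\nunchaku_wheel.whl
```

If a wheel refuses to install with a "not a supported wheel on this platform" error, the Python or Torch version baked into its filename doesn't match your venv.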
The GGUF + Sage Attention setup for WAN 2.2 is genuinely painful – I feel you on the YouTube video marathon to get it working.
I built OpenFork specifically to skip all that setup torture. It's a desktop app (completely open source) that pulls pre-built Docker images for models like WAN 2.2, so there's no manual wheel installation, no Sage Attention compilation, no version conflicts.
Supports:
- WAN 2.2 (T2V, I2V, FP8 versions)
- LTX-2 (FP4 optimized)
- Hunyuan 1.5
- Plus 15+ other models including some that are nearly impossible to install normally
The desktop client also has a distributed compute feature: share your GPU when idle, earn credits, use those credits to run workflows on more powerful GPUs when you need them. Pretty handy when you hit VRAM limits.
Website: openfork.video
Desktop app: https://github.com/besch/openfork_desktop
Python client: https://github.com/besch/openfork_client
Happy to help if you give it a shot. No BS, just trying to make this stuff less painful for everyone.