🧠 WAN2GP Video Generator

A reproducible pipeline for generating videos using custom LoRA modules, TorchInductor, and CUDA 12.8. Built for portability, disaster recovery, and clean builds across systems.

🖼️ Sample Output

Here's a successful run showing model pinning, async shuttle setup, and Torch compilation:

[Sample output screenshot]

🧱 Environment Setup

This project requires a clean environment with Python 3.10, CUDA 12.8, and Visual Studio Build Tools 2022.

1. 🔧 Install Visual Studio Build Tools 2022

Required to compile TorchInductor kernels, xformers, and transformer modules.

  • Download: Visual Studio Build Tools 2022
  • Enable workload: ✅ Desktop development with C++
  • Launch from: x64 Native Tools Command Prompt for VS 2022

2. 🐍 Install Miniconda

Install Miniconda so that the conda command is available, then return to the x64 Native Tools Command Prompt for VS 2022.

3. 🧪 Create the Conda Environment

(Run all commands in the x64 Native Tools Command Prompt for VS 2022)

conda create -n wan2gp python=3.10
conda activate wan2gp
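Before proceeding, it can help to confirm the new environment really resolved to Python 3.10. A minimal illustrative check (the helper name is ours, not part of WAN2GP):

```python
# Illustrative helper (not part of WAN2GP): confirm the active interpreter
# matches the Python 3.10 requirement before installing anything into the env.
import sys

def is_supported_python(version_info=None):
    """Return True when the (major, minor) version is exactly (3, 10)."""
    version_info = version_info or sys.version_info
    return tuple(version_info[:2]) == (3, 10)

if __name__ == "__main__":
    if not is_supported_python():
        sys.exit(f"Python 3.10 required, found {sys.version.split()[0]}")
    print("Python version OK")
```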

4. 📥 Download WAN2GP from GitHub

Clone the base pipeline from deepbeepmeep's GitHub:

git clone https://github.com/deepbeepmeep/wan2gp.git
cd wan2gp
pip install -r requirements.txt

📦 Step-by-Step Installation from Custom Wheels

(Continue in the same x64 Native Tools Command Prompt for VS 2022)

Install the following wheels in this exact order to ensure compatibility.

1. Download Wheels via Browser

Visit the wheels_for_windows directory and manually download each .whl file (right-click → Save As) to a local folder (e.g., C:\wan2gp\wheels_for_windows).

2. Install Wheels Locally

Open your x64 Native Tools Command Prompt for VS 2022, activate your conda environment, navigate to the folder containing the wheels, and run:

pip install triton_windows-3.4.0.post20-cp310-cp310-win_amd64.whl
pip install torch-2.9.0.dev20250909-cu128-cp310-cp310-win_amd64.whl
pip install torchvision-0.24.0.dev20250909-cu128-cp310-cp310-win_amd64.whl
pip install torchaudio-2.8.0.dev20250909-cu128-cp310-cp310-win_amd64.whl
pip install flash_attn-2.8.3-cp310-cp310-win_amd64.whl
pip install xformers-0.0.33-f2043594.d20251008-cp39-abi3-win_amd64.whl
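After installing, a quick sanity check can confirm the pinned builds are actually the ones in the environment. A sketch using importlib.metadata; the expected version strings below are taken from the wheel filenames above:

```python
# Illustrative sanity check (not part of WAN2GP): verify the pinned wheels are
# installed at the expected versions. Version prefixes mirror the wheel
# filenames above; adjust if you installed different builds.
from importlib import metadata

EXPECTED = {
    "torch": "2.9.0.dev20250909",
    "torchvision": "0.24.0.dev20250909",
    "torchaudio": "2.8.0.dev20250909",
    "flash_attn": "2.8.3",
    "xformers": "0.0.33",
}

def check_wheels(expected=EXPECTED):
    """Return a dict mapping package name -> status string."""
    report = {}
    for pkg, want in expected.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            report[pkg] = "NOT INSTALLED"
            continue
        report[pkg] = "OK" if have.startswith(want) else f"version mismatch: {have}"
    return report

if __name__ == "__main__":
    for pkg, status in check_wheels().items():
        print(f"{pkg}: {status}")
```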

🔧 Update wgp.py and Enable Compilation

Before launching, follow these steps to enable Torch compilation and apply your patched wgp.py:

For WAN2GP Version 8.995:

  1. Launch the program once:

    python wgp.py
    
  2. Go to Configuration → Performance

  3. Enable ✅ Compile Transformers

  4. Click Apply Settings, then exit the program

  5. Overwrite the default wgp.py with your patched version (from this repo)

  6. Relaunch:

    python wgp.py
    

For WAN2GP Versions Newer Than 8.995:

Add the following lines to the top of wgp.py:

import torch._dynamo
torch._dynamo.config.accumulated_recompile_limit = 512  # or higher
torch._dynamo.config.verbose = True
torch._dynamo.config.suppress_errors = True

def compile_or_fallback(model, example_inputs):
    """Compile model with TorchInductor, falling back to eager mode if either
    compilation or a dry-run forward pass fails."""
    try:
        print("🧪 Attempting Torch compile...")
        compiled_model = torch.compile(model, backend="inductor")
        try:
            # Dry run: Inductor compiles lazily, so backend errors only
            # surface on the first real forward pass.
            compiled_model(*example_inputs)
        except Exception as runtime_error:
            print("⚠️ Runtime error during dry run. Falling back to eager mode.")
            print("Runtime error:", runtime_error)
            return model
        print("✅ Compilation succeeded.")
        return compiled_model
    except Exception as compile_error:
        print("⚠️ Compilation failed. Falling back to eager mode.")
        print("Compile error:", compile_error)
        return model

This ensures graceful fallback and verbose diagnostics during Torch compilation.

πŸ” Upgrading WAN2GP Code

If you upgrade the WAN2GP codebase (e.g., by pulling updates from GitHub):

  • ✅ Re-run pip install -r requirements.txt to install any new dependencies
  • ⚠️ Then reinstall all custom wheels in the same order as above β€” upgrading may overwrite them
  • 🧩 Reapply your patched wgp.py unless deepbeepmeep integrates these changes upstream

🚀 Launch Instructions

Launch the pipeline using:

python wgp.py

Important: Always run from the x64 Native Tools Command Prompt for VS 2022 to ensure compiler visibility (cl.exe) and proper environment variables.
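To catch a missing compiler before a long run, a small preflight check can be used (an illustrative helper, not part of WAN2GP):

```python
# Illustrative preflight check (not part of WAN2GP): verify the MSVC compiler
# is visible on PATH before TorchInductor needs it to build kernels.
import shutil

def compiler_available(tool="cl"):
    """Return True if the given compiler executable is found on PATH."""
    return shutil.which(tool) is not None

if __name__ == "__main__":
    if compiler_available():
        print("cl.exe found; TorchInductor should be able to compile kernels.")
    else:
        print("cl.exe NOT found; launch from the x64 Native Tools Command Prompt.")
```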

🧠 Runtime Behavior

  • TorchInductor compiles kernels on the first run; this takes a few minutes, so be patient
  • LoRA modules are injected and pinned to RAM
  • Async shuttles handle memory transfers across GPU and CPU
  • Output videos saved to outputs/ with timestamped filenames
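The timestamped naming can be sketched like this (the exact pattern WAN2GP uses may differ; the helper below is illustrative):

```python
# Illustrative sketch of a timestamped output path under outputs/; the exact
# naming scheme WAN2GP uses may differ.
from datetime import datetime
from pathlib import Path

def output_path(outdir="outputs", prefix="video", ext=".mp4", now=None):
    """Build a path like outputs/video_20250909_142301.mp4."""
    now = now or datetime.now()
    stamp = now.strftime("%Y%m%d_%H%M%S")
    return Path(outdir) / f"{prefix}_{stamp}{ext}"
```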

⚠️ Optional Windows Tweaks (for long async runs)

If you encounter WinError 10055 (socket buffer exhaustion), apply these registry tweaks:

Registry Key        Type   Value
MaxUserPort         DWORD  65534
TcpTimedWaitDelay   DWORD  30

Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Reboot required after applying.
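Equivalently, both values can be applied in one step with a .reg file (values in hex: 65534 = 0xFFFE, 30 = 0x1E):

```
Windows Registry Editor Version 5.00

; MaxUserPort = 65534 (0xFFFE), TcpTimedWaitDelay = 30 (0x1E)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe
"TcpTimedWaitDelay"=dword:0000001e
```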
