# Nunchaku Prebuilt Wheels for Z-image (Windows)
This repository provides pre-compiled binaries (.whl) for the Nunchaku library. These builds include Z-image support and are specifically optimized for Windows x64 environments. The latest wheels now support:
- RTX 3000 Series (Ampere)
- RTX 4000 Series (Ada Lovelace)
- Data Center GPUs (A100, H100)
## Compatibility Matrix (Windows Only)
All wheels target CUDA 12.8.
| Python Version | Torch 2.7.0 (cu128) | Torch 2.8.0 (cu128) |
|---|---|---|
| Python 3.11 | ✅ Available | ✅ Available |
| Python 3.12 | ✅ Available | ✅ Available |
| Python 3.13 | ✅ Available | ✅ Available |
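To pick the right wheel from the matrix, match the CPython ABI tag in the filename (e.g. `cp312` for Python 3.12) to your interpreter. A minimal sketch of that mapping, assuming the `nunchaku-1.1.0` naming scheme shown in the Installation example (the `expected_wheel` helper is illustrative, not part of Nunchaku):

```python
import sys

def expected_wheel(torch_version: str, py_minor: int) -> str:
    """Build the wheel filename this repo uses for a given Torch/Python combo.

    Assumes the nunchaku-1.1.0 naming scheme from the Installation example;
    adjust the version string if the repo publishes a newer build.
    """
    tag = f"cp3{py_minor}"  # CPython ABI tag, e.g. cp312 for Python 3.12
    return f"nunchaku-1.1.0+torch{torch_version}-{tag}-{tag}-win_amd64.whl"

# Wheel for the running interpreter, assuming Torch 2.8.0:
print(expected_wheel("2.8", sys.version_info.minor))
```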
## Build Methodology
Built in December 2025 with the following configuration:
- OS: Windows 11 x64
- Compiler: MSVC (Visual Studio 2022) + NVCC 12.8
- CUDA: v12.8
### Build Commands (for local compilation)
If you wish to build the library yourself, run the commands below; the build script automatically detects and optimizes for your specific GPU architecture:
```bat
set CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8
set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8
set DISTUTILS_USE_SDK=1
uv pip install torch==2.x.0 torchvision --index-url https://download.pytorch.org/whl/cu128
uv pip install numpy ninja setuptools packaging wheel
uv build --wheel --no-build-isolation
```
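The GPU generations listed above correspond to specific CUDA compute capabilities, which is what the build ultimately targets. A hedged sketch of that mapping (the `arch_for` helper is illustrative and not part of Nunchaku; with PyTorch installed you could feed it `torch.cuda.get_device_capability()`):

```python
# CUDA compute capabilities for the GPU generations these wheels cover:
# sm_80/sm_86 = Ampere, sm_89 = Ada Lovelace, sm_90 = Hopper.
SUPPORTED_ARCHS = {
    (8, 0): "Ampere (A100)",
    (8, 6): "Ampere (RTX 3000 series)",
    (8, 9): "Ada Lovelace (RTX 4000 series)",
    (9, 0): "Hopper (H100)",
}

def arch_for(capability: tuple[int, int]) -> str:
    """Name the architecture for a compute capability, or flag it as unsupported."""
    major, minor = capability
    return SUPPORTED_ARCHS.get(
        capability, f"unsupported by these wheels (sm_{major}{minor})"
    )

# With PyTorch installed you could query the local GPU:
#   import torch; print(arch_for(torch.cuda.get_device_capability()))
print(arch_for((8, 9)))
```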
## Installation
1. Download the `.whl` file matching your Python and Torch versions from the **Files and versions** tab.
2. Install via pip:

   ```bash
   # Example for Python 3.12 and Torch 2.8.0
   pip install nunchaku-1.1.0+torch2.8-cp312-cp312-win_amd64.whl
   ```

3. Verify the installation:

   ```python
   import nunchaku
   print("Success: Nunchaku (Z-image) loaded.")
   ```
## Disclaimer
This is an unofficial community repository for Windows users. For the original source code and official updates, please visit the Nunchaku-tech GitHub.