Release Notes - Nunchaku OOM Fix v1.0

Release Information

  • Version: 1.0
  • Release Date: September 9, 2025
  • Type: Hotfix/Patch

Overview

This patch addresses a critical CUDA out-of-memory (OOM) error that occurs in ComfyUI-nunchaku on high-VRAM GPUs (14 GB or more).

Problem Solved

CUDA error: out of memory (at C:\Users\muyangl\actions-runner\_work\nunchaku\nunchaku\src\Tensor.h:95)

  • Issue: Nunchaku automatically disables CPU offload for GPUs with ≥14GB VRAM
  • Impact: CUDA OOM errors despite sufficient VRAM
  • Affected GPUs: RTX 3090 (24GB), RTX 4080 (16GB), RTX 4090 (24GB), and similar
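The heuristic described above can be sketched as a simple threshold check. This is an illustrative reconstruction based only on the behavior described in these notes; the function name and the exact 14 GB cutoff are assumptions, not nunchaku's actual code.

```python
def auto_offload_decision(total_vram_gb: float, threshold_gb: float = 14.0) -> bool:
    """Sketch of the 'auto' heuristic described in these notes:
    CPU offload is enabled only when total VRAM is below the threshold."""
    return total_vram_gb < threshold_gb

# A 24 GB RTX 3090 gets offload disabled under this heuristic,
# which is exactly the case that can still hit OOM in practice:
print(auto_offload_decision(24.0))  # False -> offload off
print(auto_offload_decision(12.0))  # True  -> offload on
```

The problem is that total VRAM is a poor proxy for *available* VRAM: other processes, the OS, and already-loaded models all consume memory the heuristic never sees.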

Solution

When CPU offload is set to "auto", the patch forces it to remain enabled instead of letting nunchaku disable it on high-VRAM GPUs, preventing memory allocation failures.
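The fix can be sketched as follows. The function name, setting strings, and threshold are illustrative assumptions inferred from these notes, not the actual patch code; explicit "enable"/"disable" settings are assumed to be left untouched.

```python
def resolve_cpu_offload(setting: str, total_vram_gb: float) -> bool:
    """Sketch of the patched resolution logic (hypothetical names).

    Explicit settings are respected; only 'auto' changes behavior.
    """
    if setting == "enable":
        return True
    if setting == "disable":
        return False
    # 'auto': the unpatched heuristic effectively returned
    # `total_vram_gb < 14.0` here, disabling offload on big GPUs.
    # The patch forces offload on regardless of reported VRAM.
    return True
```

Keeping the explicit settings intact means users who deliberately disabled offload are unaffected; only the "auto" default changes.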

Files Included (6 files)

  1. apply_nunchaku_patch.py - Core patch script
  2. nunchaku_force_cpu_offload.patch - Git patch file
  3. APPLY_PATCH.bat - Windows quick apply
  4. UNDO_PATCH.bat - Windows quick restore
  5. nunchaku_patch_tool.bat - Windows interactive tool
  6. README_NUNCHAKU_PATCH.md - Full documentation

Installation

Windows

1. Extract all files to ComfyUI folder
2. Double-click APPLY_PATCH.bat
3. Restart ComfyUI

Linux/Mac

python apply_nunchaku_patch.py
# Restart ComfyUI

Key Features

  • ✅ Automatic backup creation
  • ✅ One-click apply/restore
  • ✅ Cross-platform support
  • ✅ No dependencies required
  • ✅ Safe and reversible
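The backup-and-restore pattern behind "safe and reversible" can be sketched like this. The function names, the `.bak` suffix, and the file-rewrite approach are assumptions for illustration; the actual script may work differently.

```python
import shutil
from pathlib import Path

def apply_with_backup(target: Path, patched_text: str) -> Path:
    """Copy `target` to a sibling `.bak` file before overwriting it,
    so the original can be restored later (hypothetical sketch)."""
    backup = target.with_suffix(target.suffix + ".bak")
    if not backup.exists():  # keep the first, pristine backup
        shutil.copy2(target, backup)
    target.write_text(patched_text, encoding="utf-8")
    return backup

def restore_from_backup(target: Path) -> None:
    """Undo: copy the `.bak` file back over the patched target."""
    backup = target.with_suffix(target.suffix + ".bak")
    shutil.copy2(backup, target)
```

Guarding the copy with `backup.exists()` ensures that re-applying the patch never overwrites the pristine backup with an already-patched file.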

Compatibility

  • ComfyUI-nunchaku (all 2025 versions)
  • Python 3.8+
  • Windows/Linux/macOS
  • All NVIDIA GPUs with CUDA support

Performance Impact

  • Memory: Prevents OOM errors
  • Speed: ~5-10% slower due to CPU offload
  • Stability: Significantly improved

Support

Report issues at: [Your GitHub/Discord/Forum Link]

License

Public Domain / MIT - Free to use and distribute


Thank you to the ComfyUI community for reporting and testing this fix.
