---
license: mit
---

# Release Notes - Nunchaku OOM Fix v1.0

## Release Information

- **Version:** 1.0
- **Release Date:** September 9, 2025
- **Type:** Hotfix/Patch

## Overview

This patch addresses critical CUDA out-of-memory (OOM) errors occurring in ComfyUI-nunchaku on high-VRAM GPUs (14GB+).

## Problem Solved

```
CUDA error: out of memory (at C:\Users\muyangl\actions-runner_work\nunchaku\nunchaku\src\Tensor.h:95)
```

- **Issue:** Nunchaku automatically disables CPU offload for GPUs with ≥14GB VRAM
- **Impact:** CUDA OOM errors despite sufficient VRAM
- **Affected GPUs:** RTX 3090 (24GB), RTX 4080 (16GB), RTX 4090 (24GB), and similar

## Solution

Forces CPU offload to remain enabled when set to "auto" mode, preventing memory allocation failures.

## Files Included (6 files)

1. `apply_nunchaku_patch.py` - Core patch script
2. `nunchaku_force_cpu_offload.patch` - Git patch file
3. `APPLY_PATCH.bat` - Windows quick apply
4. `UNDO_PATCH.bat` - Windows quick restore
5. `nunchaku_patch_tool.bat` - Windows interactive tool
6. `README_NUNCHAKU_PATCH.md` - Full documentation

## Installation

### Windows

1. Extract all files to the ComfyUI folder
2. Double-click `APPLY_PATCH.bat`
3. Restart ComfyUI

### Linux/Mac

```bash
python apply_nunchaku_patch.py
# Restart ComfyUI
```

## Key Features

- ✅ Automatic backup creation
- ✅ One-click apply/restore
- ✅ Cross-platform support
- ✅ No dependencies required
- ✅ Safe and reversible

## Compatibility

- ComfyUI-nunchaku (all 2025 versions)
- Python 3.8+
- Windows/Linux/macOS
- All NVIDIA GPUs with CUDA support

## Performance Impact

- **Memory:** Prevents OOM errors
- **Speed:** ~5-10% slower due to CPU offload
- **Stability:** Significantly improved

## Support

Report issues at: [Your GitHub/Discord/Forum Link]

## License

Public Domain / MIT - Free to use and distribute

---

*Thank you to the ComfyUI community for reporting and testing this fix.*
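## Appendix: How the Fix Works

The change described under "Solution" can be sketched as follows. This is a minimal, hypothetical illustration of the logic (the function name `resolve_cpu_offload` and its parameters are illustrative, not the actual ComfyUI-nunchaku API): in "auto" mode, the unpatched code disables offload once VRAM reaches roughly 14GB, while the patched behavior keeps offload enabled regardless of VRAM size.

```python
def resolve_cpu_offload(setting: str, vram_gb: float) -> bool:
    """Decide whether CPU offload should be enabled.

    Hypothetical sketch: upstream "auto" disables offload when
    VRAM >= 14 GB, which can still trigger CUDA OOM. The patch
    keeps offload ON in "auto" mode regardless of VRAM size.
    """
    if setting == "enable":
        return True
    if setting == "disable":
        return False
    # "auto": patched behavior always offloads, instead of the
    # unpatched `return vram_gb < 14` threshold check.
    return True

# Patched: offload stays enabled even on a 24 GB GPU in "auto" mode.
print(resolve_cpu_offload("auto", 24.0))
```

Explicit "enable"/"disable" settings are left untouched; only the "auto" branch is changed, which is why the patch remains safe and reversible.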