---
license: apache-2.0
---
## Official Fix (2025-07-28)
The official fix is being tested: https://github.com/nunchaku-tech/nunchaku/pull/557
---
## βœ… Patched LoRAs
| Patched LoRAs | Original LoRAs |
|-----------------|----------------|
| [flux_kontext_deblur_nunchaku.safetensors](https://huggingface.co/lym00/comfyui_nunchaku_lora_patch/blob/main/flux_kontext_deblur_nunchaku.safetensors) | [civitai](https://civitai.com/models/1737381) or [civitaiarchive](https://civitaiarchive.com/models/1737381) |
| [flux_kontext_face_detailer_nunchaku.safetensors](https://huggingface.co/lym00/comfyui_nunchaku_lora_patch/blob/main/flux_kontext_face_detailer_nunchaku.safetensors) | [civitai](https://civitai.com/models/1752776) or [civitaiarchive](https://civitaiarchive.com/models/1752776) |
## ⚑ Usage
- These patched LoRAs are **compatible with** [ComfyUI-nunchaku](https://github.com/mit-han-lab/ComfyUI-nunchaku).
- Use the **Nunchaku FLUX LoRA Loader** node to load LoRA modules for **SVDQuant FLUX** models.
## πŸ› οΈ Patch References
Some original FLUX LoRA files are missing the `final_layer.adaLN` weights that **ComfyUI-nunchaku's FLUX LoRA Loader** expects, which causes a `KeyError` at load time.
The patch script adds **dummy adaLN tensors** so the LoRA loads cleanly with **SVDQuant FLUX** models.
**Script:** [patch_comfyui_nunchaku_lora.py](https://huggingface.co/lym00/comfyui_nunchaku_lora_patch/blob/main/patch_comfyui_nunchaku_lora.py)
**Based on:**
- **Nunchaku Issue:** [ComfyUI-nunchaku #340](https://github.com/mit-han-lab/ComfyUI-nunchaku/issues/340)
> Node Type: `NunchakuFluxLoraLoader`
> Exception Type: `KeyError`
> Exception Message: `'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'`
- **Example Gist:** [akedia/e0a132b5...](https://gist.github.com/akedia/e0a132b587e30413665d299ad893a60e)
---
## πŸ”„ Example Patch Log
```text
Running patch_comfyui_nunchaku_lora.py
πŸ”„ Universal final_layer.adaLN LoRA patcher (.safetensors)
Enter path to input LoRA .safetensors file: Flux_kontext_deblur.safetensors
Enter path to save patched LoRA .safetensors file: flux_kontext_deblur_nunchaku.safetensors
βœ… Loaded 610 tensors from: Flux_kontext_deblur.safetensors
πŸ”‘ Found final_layer-related keys:
- lora_unet_final_layer_linear.lora_down.weight
- lora_unet_final_layer_linear.lora_up.weight
πŸ” Checking for final_layer keys with prefix 'lora_unet_final_layer'
Linear down: lora_unet_final_layer_linear.lora_down.weight
Linear up: lora_unet_final_layer_linear.lora_up.weight
βœ… Has final_layer.linear: True
βœ… Has final_layer.adaLN_modulation_1: False
βœ… Added dummy adaLN weights:
- lora_unet_final_layer_adaLN_modulation_1.lora_down.weight (shape: torch.Size([16, 3072]))
- lora_unet_final_layer_adaLN_modulation_1.lora_up.weight (shape: torch.Size([64, 16]))
βœ… Patched file saved to: flux_kontext_deblur_nunchaku.safetensors
Total tensors now: 612
πŸ” Verifying patched keys:
- lora_unet_final_layer_adaLN_modulation_1.lora_down.weight
- lora_unet_final_layer_adaLN_modulation_1.lora_up.weight
- lora_unet_final_layer_linear.lora_down.weight
- lora_unet_final_layer_linear.lora_up.weight
βœ… Contains adaLN after patch: True