lym00 committed
Commit 8403b94 · verified · 1 Parent(s): 1603286

Update README.md

Files changed (1): README.md (+31 -1)
README.md CHANGED
@@ -4,4 +4,34 @@ license: apache-2.0
 
 https://github.com/mit-han-lab/ComfyUI-nunchaku/issues/340
 
-https://gist.github.com/akedia/e0a132b587e30413665d299ad893a60e
+https://gist.github.com/akedia/e0a132b587e30413665d299ad893a60e
+
+```
+Running convert_to_comfyui_lora.py
+🔄 Universal final_layer.adaLN LoRA patcher (.safetensors)
+Enter path to input LoRA .safetensors file: Flux_kontext_deblur.safetensors
+Enter path to save patched LoRA .safetensors file: flux_kontext_deblur_comfyui.safetensors
+
+✅ Loaded 610 tensors from: Flux_kontext_deblur.safetensors
+
+🔑 Found these final_layer-related keys:
+  lora_unet_final_layer_linear.lora_down.weight
+  lora_unet_final_layer_linear.lora_up.weight
+
+🔍 Checking for final_layer keys with prefix: 'lora_unet_final_layer'
+  Linear down: lora_unet_final_layer_linear.lora_down.weight
+  Linear up:   lora_unet_final_layer_linear.lora_up.weight
+✅ Has final_layer.linear: True
+✅ Has final_layer.adaLN_modulation_1: False
+✅ Added dummy adaLN weights:
+  lora_unet_final_layer_adaLN_modulation_1.lora_down.weight (shape: torch.Size([16, 3072]))
+  lora_unet_final_layer_adaLN_modulation_1.lora_up.weight (shape: torch.Size([64, 16]))
+
+✅ Patched file saved to: flux_kontext_deblur_comfyui.safetensors
+Total tensors now: 612
+
+🔍 Verifying patched keys:
+  lora_unet_final_layer_adaLN_modulation_1.lora_down.weight
+  lora_unet_final_layer_adaLN_modulation_1.lora_up.weight
+  lora_unet_final_layer_linear.lora_down.weight
+  lora_unet_final_layer_linear.lora_up.weight
+✅ Contains adaLN after patch: True
+```
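
The log above shows the script loading the LoRA's tensors, detecting that `final_layer.adaLN_modulation_1` LoRA weights are missing, inserting zero-valued dummies with the logged shapes, and saving (610 tensors in, 612 out). A minimal sketch of that patch step, assuming the actual implementation lives in the linked gist; the function name and defaults below are illustrative, not the gist's API:

```python
# Hypothetical sketch of the adaLN dummy-weight patch step seen in the log.
# Key names and shapes are taken from the log output above; everything else
# (function name, parameter defaults) is an assumption for illustration.
import torch

ADALN_DOWN = "lora_unet_final_layer_adaLN_modulation_1.lora_down.weight"
ADALN_UP = "lora_unet_final_layer_adaLN_modulation_1.lora_up.weight"

def add_dummy_adaln(tensors, rank=16, in_dim=3072, out_dim=64):
    """Add zero-valued adaLN LoRA weights if the LoRA lacks them.

    Zero matrices make the patch inert at inference time: the LoRA delta
    they contribute is lora_up @ lora_down, i.e. an all-zero matrix.
    """
    if ADALN_DOWN not in tensors:
        tensors[ADALN_DOWN] = torch.zeros(rank, in_dim)  # torch.Size([16, 3072])
        tensors[ADALN_UP] = torch.zeros(out_dim, rank)   # torch.Size([64, 16])
    return tensors
```

For file I/O, `safetensors.torch.load_file` and `safetensors.torch.save_file` would round-trip the `.safetensors` dict the way the log describes; loading, patching with a function like the one above, and saving adds exactly the two keys the verification step lists.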