---
license: apache-2.0
---

This is a version of Z-Image-Turbo quantized to 4-bit and converted to Apple Silicon MLX format for use with this tool, with my New Mecha LoRA merged in.

The script used to convert the model and merge the LoRA is included. It relies on the previously mentioned tool to work.

```
(.venv) [user@MacM1 MLX_z-image] $ python convert_comfy.py --src_model models/z_image_turbo_bf16.safetensors --dst_model alpha.safetensors --lora_model loras/new_mecha_zit.safetensors --lora_scale 0.6
Starting conversion!
Loading models/z_image_turbo_bf16.safetensors file...
Reverting ComfyUI format...
100%|█████████████████████████████████████████████| 453/453 [00:00<00:00, 533114.40it/s]
Merging LoRA loras/new_mecha_zit.safetensors at scale 0.6...
100%|█████████████████████████████████████████████| 481/481 [01:21<00:00,  5.91it/s]
Merged 240 weight keys
Converting to MLX...
100%|█████████████████████████████████████████████| 521/521 [01:52<00:00,  4.62it/s]
Loading converted weights...
Quantizing (bits=4, group_size=32)...
Saving alpha.safetensors file...
Done!
```
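The "Merging LoRA ... at scale 0.6" step above folds the low-rank update into the base weights. Below is a minimal NumPy sketch of the standard LoRA merge formula (W' = W + scale · (alpha/rank) · up @ down); the function name and exact key/alpha conventions are assumptions for illustration, not the actual code of `convert_comfy.py`.

```python
import numpy as np

def merge_lora(base, lora_down, lora_up, scale=0.6, alpha=None):
    """Fold one LoRA pair into a base weight matrix.

    base:      (out_features, in_features)
    lora_down: (rank, in_features)
    lora_up:   (out_features, rank)
    """
    rank = lora_down.shape[0]
    if alpha is None:
        alpha = rank  # common default: alpha == rank, so the factor is 1
    return base + scale * (alpha / rank) * (lora_up @ lora_down)

# Toy example: a 4x4 base weight with a rank-2 LoRA update.
base = np.zeros((4, 4), dtype=np.float32)
down = np.ones((2, 4), dtype=np.float32)   # rank x in_features
up   = np.ones((4, 2), dtype=np.float32)   # out_features x rank
merged = merge_lora(base, down, up, scale=0.5)
print(merged[0, 0])  # 1.0  (0.5 * up@down entry of 2.0)
```

Since the update is merged into the weights once, inference afterwards carries no LoRA overhead, which is why the output model is a single safetensors file.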
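The final "Quantizing (bits=4, group_size=32)" step compresses each weight tensor with per-group scales. As a rough illustration, here is a NumPy sketch of group-wise 4-bit affine quantization; it is similar in spirit to MLX's quantization but does not reproduce MLX's actual bit-packing or storage layout.

```python
import numpy as np

def quantize_groups(w, bits=4, group_size=32):
    """Quantize a flat float array per group of `group_size` values.

    Returns integer codes plus the per-group scale and minimum
    needed to reconstruct an approximation of the original weights.
    """
    levels = 2**bits - 1                      # 15 levels for 4-bit
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - w_min) / levels
    scale[scale == 0] = 1.0                   # guard flat groups against divide-by-zero
    q = np.clip(np.round((w - w_min) / scale), 0, levels).astype(np.uint8)
    return q, scale, w_min

def dequantize_groups(q, scale, w_min):
    """Reconstruct approximate floats from codes, scales, and minima."""
    return q * scale + w_min

x = np.linspace(-1.0, 1.0, 64).astype(np.float32)
q, s, m = quantize_groups(x)                  # two groups of 32
x_hat = dequantize_groups(q, s, m).reshape(-1)
print(np.abs(x - x_hat).max())                # small per-group rounding error
```

Smaller groups (here 32) mean more scale/offset metadata but tighter error bounds per group, which is the usual trade-off behind the `group_size` parameter.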