# wan-lora-pruned

These LoRAs were pruned with `resize_lora.py` from Kohya's sd-scripts. Their file sizes are greatly reduced, which helps save VRAM when loading them.
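A typical invocation looks like the sketch below. The input/output filenames and the specific rank and retention values are hypothetical placeholders, not the exact settings used for these files; `--dynamic_method sv_fro` keeps singular values until the given fraction of the Frobenius norm is retained.

```shell
# Hypothetical example; adjust paths, rank, and retention to your model.
python networks/resize_lora.py \
  --model input_lora.safetensors \
  --save_to pruned_lora.safetensors \
  --new_rank 32 \
  --dynamic_method sv_fro \
  --dynamic_param 0.9
```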

Pruning also gives a rough measure of how much information a LoRA has learned: of two LoRAs pruned to the same rank with the same Frobenius-norm retention (`sv_fro`), the one with the larger resulting file has learned more.
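The idea behind `sv_fro` pruning can be sketched as follows. This is a minimal NumPy illustration of Frobenius-norm-based rank truncation, not the actual sd-scripts implementation; the function name and the factorization into up/down matrices are illustrative assumptions.

```python
import numpy as np

def resize_by_frobenius(delta_w, target_fro=0.9):
    """Truncate a weight delta to the smallest rank whose singular values
    retain `target_fro` of the squared Frobenius norm (cf. sv_fro).
    Illustrative sketch, not the sd-scripts code."""
    u, s, vh = np.linalg.svd(delta_w, full_matrices=False)
    # Cumulative fraction of the squared Frobenius norm per singular value
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    # Smallest rank reaching the target retention
    new_rank = int(np.searchsorted(energy, target_fro)) + 1
    # Factor back into an up/down pair, as a LoRA would store them
    up = u[:, :new_rank]
    down = np.diag(s[:new_rank]) @ vh[:new_rank]
    return up, down, new_rank
```

A LoRA that has learned more structure needs more singular values to reach the same retention, so it prunes to a higher rank and a larger file.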

See PR 2194 for a fix that further reduces LoRA file size, and PR 2240 for a change that greatly reduces the time needed to prune a LoRA.