OneToAllAnimation FP8 model issue with Wan2.1-Fun-14B-InP-MPS lora
Hi,
I was testing the OneToAllAnimation FP8 model with the Wan2.1-Fun-14B-InP-MPS.safetensors lora and received the following error message:
(output shortened below; the same errors repeat for every block up to block 39)
Loading LoRA: WAN\Wan2.1\Wan2.1-Fun-14B-InP-MPS with strength: 1.0
lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.alpha
lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.lora_A.weight
lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.lora_B.weight
lora key not loaded: diffusion_model.blocks.0.cross_attn.v_img.alpha
...
lora key not loaded: diffusion_model.blocks.9.cross_attn.v_img.lora_B.weight
lora key not loaded: diffusion_model.img_emb.proj.1.alpha
lora key not loaded: diffusion_model.img_emb.proj.1.lora_A.weight
lora key not loaded: diffusion_model.img_emb.proj.1.lora_B.weight
lora key not loaded: diffusion_model.img_emb.proj.3.alpha
lora key not loaded: diffusion_model.img_emb.proj.3.lora_A.weight
lora key not loaded: diffusion_model.img_emb.proj.3.lora_B.weight
Is the Wan2.1-Fun-14B-InP-MPS.safetensors lora only compatible with the full FP16 model (which I cannot test due to memory constraints), or is something else wrong?
Two other loras load fine with the FP8 model in the same ComfyUI workflow:
Loading LoRA: WAN\Wan2.1\DetailEnhancerV1 with strength: 0.65
Loading LoRA: WAN\Wan2.1\lightx2v_t2v_14b_cfg_step_distill_v2_lora_rank32_bf16 with strength: 1.0
The workflow runs on the latest ComfyUI with all WanVideo custom nodes updated to the latest version (WanVideoWrapper 1.4.5), on an RTX 5080 with 16 GB of VRAM and CUDA 12.8.
I tried different settings for base_precision, quantization, etc., but none of them had any effect on loading the lora.
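For what it's worth, my rough understanding of how such loaders behave (a minimal sketch with made-up key lists and simplified logic, not the actual WanVideoWrapper code) is that LoRA keys whose target module doesn't exist in the base model's state dict are simply skipped with a "not loaded" message:

```python
# Sketch of the skip behavior (assumed logic, illustrative key names only):
# a LoRA key is applied only if the module it targets exists in the model.
lora_keys = [
    "diffusion_model.blocks.0.cross_attn.k_img.lora_A.weight",
    "diffusion_model.blocks.0.cross_attn.q.lora_A.weight",
    "diffusion_model.img_emb.proj.1.lora_A.weight",
]

# Hypothetical subset of module names present in the loaded base model.
model_modules = {
    "diffusion_model.blocks.0.cross_attn.q",
    "diffusion_model.blocks.0.cross_attn.k",
}

def module_of(key: str) -> str:
    # Strip the LoRA-specific suffix to recover the target module name.
    for suffix in (".lora_A.weight", ".lora_B.weight", ".alpha"):
        if key.endswith(suffix):
            return key[: -len(suffix)]
    return key

for key in lora_keys:
    if module_of(key) not in model_modules:
        print(f"lora key not loaded: {key}")
```

If that's what is happening here, the skipped keys would mean the image-conditioning modules (k_img, v_img, img_emb.proj) don't exist in the model variant I'm loading, rather than it being an FP8 precision problem — but I'm not sure, hence this issue.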
