I used one LoRA with FramePack-F1 in ComfyUI and it worked fine, though it did add to the required VRAM. When I tried it with two LoRAs it spilled too much into shared RAM, so I didn't wait to see if it would finish. Here's a link to an NSFW video I made: https://civitai.com/images/74621941. When the model and LoRA first loaded, usage spilled into DRAM, but it settled at 94% VRAM shortly after and stayed there for the rest of the generation. This was on an RTX 4090 with 24GB of VRAM.
It was a movement LoRA, not a character LoRA.
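For anyone who wants to watch the VRAM behaviour themselves, here's a rough Python sketch for polling dedicated VRAM in a separate terminal while the gen runs. It assumes you've done `pip install nvidia-ml-py` (the pynvml bindings); it's just my own illustration, not part of ComfyUI or FramePack. Note that NVML only reports dedicated VRAM, so the shared-RAM spill itself shows up in Task Manager on Windows rather than here.

```python
# Poll dedicated VRAM usage on GPU 0 (the RTX 4090 in my case) every 2 seconds.
# Run alongside the generation; Ctrl+C to stop.
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # bytes: .used / .total / .free
        used_gb = mem.used / 1024**3
        total_gb = mem.total / 1024**3
        pct = 100 * mem.used / mem.total
        print(f"VRAM: {used_gb:.1f}/{total_gb:.1f} GB ({pct:.0f}%)")
        time.sleep(2)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```

On the run above, this kind of readout climbed past 100% equivalent (the overflow going to shared memory) during load, then sat around 94% for the rest of the generation.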
The base F1 fp8 model uses 19.4GB of VRAM at 544x704 for 10 seconds of video. Adding the LoRA (314k) pushed that to 23.8GB of VRAM at 400x544 for 10 seconds, so roughly 4.4GB more despite the lower resolution, and the run took twice as long.
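To put those two runs side by side, here's a quick back-of-the-envelope script using only the figures I measured above; nothing in it is official, it just makes the trade-off explicit.

```python
# Compare the two runs reported above: base model vs. base + LoRA.
runs = {
    "base f1 fp8":   {"res": (544, 704), "vram_gb": 19.4},
    "f1 fp8 + LoRA": {"res": (400, 544), "vram_gb": 23.8},
}

for name, r in runs.items():
    w, h = r["res"]
    print(f"{name}: {w}x{h} = {w * h:,} px, {r['vram_gb']} GB VRAM")

delta = runs["f1 fp8 + LoRA"]["vram_gb"] - runs["base f1 fp8"]["vram_gb"]
pixel_ratio = (400 * 544) / (544 * 704)
print(f"LoRA run renders {pixel_ratio:.0%} of the base pixel count "
      f"yet needs +{delta:.1f} GB VRAM")
```

Output: the LoRA run renders about 57% of the base pixel count yet needs 4.4GB more VRAM, which is why it tipped a 24GB card into shared memory.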