Mega I2V Model Workflow for FirstLastFrameToVideo
Hello,
I've been using your mega models for a few months now. While most of the projects I've finished used your Mega All-in-One models, I've found over time that the effects they produce are really hard to prompt through VACE, especially when dealing with NSFW material. The extra testing and prompting I have to do to get the results right is time-consuming and resource-intensive. The V10 I2V model gives much better prompt adherence and better image results (crisper and closer to the original image, whereas the VACE NSFW model "smooths" the image too much and occasionally introduces artifacts).
That being said, my projects primarily use first-last frame. VACE handles this fine, but when experimenting with the V10 I2V model I get a strange result in the last few frames of the generation: the image's colors suddenly and dramatically oversaturate. This is especially pronounced when the first and last frames are identical, as in the example below:
Examples (note that I'm using KL_Optimal because it has the least dramatic effect on color saturation; with the simple or beta scheduler the effect is even worse):
With careful color correction I can mostly remove the effect, but it's still noticeable:
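For what it's worth, the kind of color correction I mean can be sketched as per-channel mean/std matching of the oversaturated tail frames against a stable reference frame. This is a minimal NumPy illustration, not the actual node I use; the frame arrays and the "last 5 frames" window are assumptions:

```python
import numpy as np

def match_color_stats(frame, reference, eps=1e-6):
    """Shift and scale each RGB channel of `frame` so its mean and std
    match those of `reference` (a simple global color transfer)."""
    frame = frame.astype(np.float64)
    reference = reference.astype(np.float64)
    out = np.empty_like(frame)
    for c in range(3):
        f_mean, f_std = frame[..., c].mean(), frame[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        # Re-center and re-scale the channel to the reference statistics.
        out[..., c] = (frame[..., c] - f_mean) * (r_std / (f_std + eps)) + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: `frames` is a (T, H, W, 3) uint8 video array;
# correct the last few (oversaturated) frames against a mid-video frame.
# reference = frames[len(frames) // 2]
# for i in range(len(frames) - 5, len(frames)):
#     frames[i] = match_color_stats(frames[i], reference)
```

A global mean/std match like this tames the saturation jump but can't fix localized color shifts, which is roughly why the effect is still visible after correction.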
Is there some secret sauce to make this model work properly with first-last frame? I've attached an image of the simple workflow I'm currently using:
Any help or insight would be greatly appreciated.
I already use the Mega All-In-One models on a regular basis and am aware that they're the "superior" models, but the I2V model's output is much more desirable for the projects I'm working on, and I'd really like to get this working properly.
I believe the "WanFirstLastFrameToVideo" node is meant to work with models like this:
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-FLF2V-14B-720P_fp8_e4m3fn.safetensors
... and not "I2V" models. I'm not aware of a way to do it with an I2V model.
Your model actually seemed to do pretty well with FLF2V, despite the node being made for a different model, so I was hoping you knew of a way to fix this issue. The problem with those base FLF2V models is that the motion is a little too dampened and requires additional LoRAs to do anything interesting, whereas your model gives great motion and results, minus this last-few-frames saturation issue.
Can you share this workflow?
