Workflow Request: LTX-2 V2V + I2V Hybrid with First-Frame Masked Character Swap
Hello,
I have been using your I2V workflow and it works extremely well.
I have another question, and if you have time, I would like to request an additional workflow.
Previously, I used a Wan 2.2 Animate model workflow. One of its V2V features allowed masking the character in the first frame of a video and replacing that character with a different one.
For example, I imported a dancing video, masked only the person, and swapped that person with another character. The result was interesting and worked quite well.
My question is:
Is it possible to implement a similar function using the LTX-2 model?
To summarize the core request:
- Based on LTX-2
- A combined V2V + I2V workflow
- Only the first frame is manually masked
- After defining the masked area, the user provides an image (character or object)
- The system swaps the masked region from the first frame and propagates it through the video, similar to I2V behavior
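For what it's worth, the core of the first-frame step is just a masked composite: paste the reference character/object into the masked region of frame 0, then let the model propagate it I2V-style. Here is a minimal illustration in plain NumPy, not the actual ComfyUI node graph; the function name and array conventions are my own assumptions:

```python
import numpy as np

def composite_first_frame(frame, mask, reference):
    """Swap the masked region of the first frame with the reference image.

    frame:     (H, W, 3) uint8 array, first frame of the source video
    mask:      (H, W) float array in [0, 1], 1.0 where the swap happens
    reference: (H, W, 3) uint8 array, character/object image (pre-resized to H x W)
    """
    m = mask[..., None]  # broadcast the mask over the RGB channels
    # Masked pixels come from the reference, everything else from the frame.
    out = frame.astype(np.float32) * (1.0 - m) + reference.astype(np.float32) * m
    return out.astype(np.uint8)
```

The resulting composited frame would then serve as the I2V conditioning image, while the rest of the source video drives motion V2V-style.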
Additionally, as an experimental extension:
Would it also be possible to insert images into the middle frame or last frame using masking in a similar way?
This would be similar to the workflow you uploaded, "LTX-2 - V2V Head Swap Experimental (BFS lora).json."
However, instead of replacing only the head from an image, the system would recognize and insert the entire reference image.
Also, this is not about generating something new like T2V. It is about swapping only the character or object within an existing video, similar to a standard V2V approach.
Thank you.
The workflow I used previously was sourced from this link:
https://civitai.com/models/2135612/wan-animate-character-replacement
Yes, I've been thinking of making something like that, because if you do a first-frame replacement with the exact same pose etc., the ic-pose LoRA should work far better ;-)
And I bet behind the scenes LTX does the same on their servers when users make dance videos etc.
Will try to make something soon ;-)