Workflow - First Last Frame & First Middle Last Frame
A workflow for those who want to play around with first last frame (credit to @kabachuha).
(download the video and drop it into Comfy to get the workflow)
Some useful resources for creating the first and last frame images:
- Qwen Image Edit (Comfy-Org): https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI
- Qwen Edit Multiple Angles lora (fal-ai), view your image from a different angle: https://huggingface.co/fal/Qwen-Image-Edit-2511-Multiple-Angles-LoRA
- Qwen Edit Multiple Angles lora (@dx8152), view your image from a different angle: https://huggingface.co/dx8152/Qwen-Edit-2509-Multiple-angles
- Qwen Edit Next Scene lora (@lovis93), create a new view or scene from your image: https://huggingface.co/lovis93/next-scene-qwen-image-lora-2509
Also a workflow here that can be downloaded (click the "download workflow" button):
https://www.runcomfy.com/comfyui-workflows/ltx-2-first-last-frame-in-comfyui-audio-visual-motion-control
There's also a custom node for LTX-2 first last frame, but I haven't tested that one: https://github.com/TTPlanetPig/Comfyui_TTP_Toolset
(it seems to work without it, although the node could improve on things)
And feel free to share your way, if something else works well ;-)
holy...
Yes, I don't know why the Video Combine node stopped embedding the workflow metadata.
Will upload the workflow ASAP ;-)
Added a First Middle Last Frame workflow as well.
(it can be used for first last too, just bypass the middle image if you want first last)
A bit experimental, but some interesting results ;-) (scene cuts etc.)
(you'll probably need to play around with the strength etc. to give the model a bit of flexibility; each image has its own strength setting)
(the cafe woman was just a test at very low res, but it shows the model does scene cuts depending on image, length and prompt)
Thank you for sharing your workflows.
Do you think it's possible to do a workflow where you only add the last frame?
By the way, in my tests I also see glitches in the generated video at the frames where the input images are injected. I don't know if there's a way to make the generated video more fluid when using this workflow; the strength parameter doesn't seem to help much.
Yes, it was a bit of an experiment. Will try to see if there is any reason for the glitching.
Saw Kijai made an LTXVAddGuideMulti node... it might work better with that, will try some alternatives ;-)
Should work better now; I used a KJNodes node for multi-ref (LTXVAddGuideMulti).
First Middle Last Frame:
https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/LTX-2%20-%20First%20Middle%20Last%20Frame%20v2%20(experimental).json
It works much better! Thank you so much for the workflows RuneXX.
Here's my old take on this - with injecting any number of frames anywhere:
https://www.reddit.com/r/StableDiffusion/comments/1q7gzrp/ltx2_multi_frame_injection_works_minimal_clean/
I guess it should also be possible to combine this with video extend, to continue a video towards a specific frame. Possibly this could help with the video deterioration on every next extension.
Interesting for sure, will give it a test run ;-)
I've been meaning to do an extend-video workflow a bit the "Wan way", with just the last "motion frames". The challenge is perhaps the audio: you'd need some audio "frames" too, to keep the audio consistent.
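For what it's worth, matching the audio overlap to the carried-over motion frames is simple arithmetic: the audio span covering N frames at a given fps is N/fps seconds, converted to samples at the audio sample rate. A small sketch (the frame count, fps and sample rate below are just example values, not LTX-2 defaults):

```python
# Compute how much audio must be carried over alongside the "motion frames"
# so an extended clip stays in sync with the previous segment.
def audio_overlap_samples(num_motion_frames: int, fps: int, sample_rate: int) -> int:
    """Number of audio samples spanning the same duration as the carried video frames."""
    return round(num_motion_frames * sample_rate / fps)

# e.g. carrying 8 motion frames at 24 fps with 48 kHz audio:
print(audio_overlap_samples(8, 24, 48000))  # 16000 samples, i.e. 1/3 second
```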
Is there a way to set multiple middle frames so that you can make a continuous film?
Yes, see my comment above with a simple workflow; it's possible to inject many frames anywhere. But you need to play with strength: too high can cause brightness flashes, too low generates noticeable differences. Also, if you use an upscale pass (which most people do), the same frames need to be injected again, otherwise important details will be lost.
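One gotcha when picking injection points: video VAEs in this family compress time by a fixed factor, so guide frames land cleanest on latent-timestep boundaries. A tiny helper to snap a desired keyframe index to the nearest boundary, assuming a temporal compression factor of 8 (this factor is an assumption here, check your model's VAE config):

```python
# Snap a pixel-frame index to the nearest frame aligned with a latent timestep
# boundary (multiples of the temporal compression factor). The default of 8
# is an assumption -- verify against your model's config.
def snap_to_latent_frame(frame_idx: int, temporal_factor: int = 8) -> int:
    return round(frame_idx / temporal_factor) * temporal_factor

# Desired middle keyframe at frame 45 of a 97-frame clip:
print(snap_to_latent_frame(45))  # 48, the nearest aligned frame
```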
Here's my more crazy convoluted workflow that shows how to do everything in one go - prepend generated fragment for a video clip, then add extended part with lipsync and multiple keyframes.
https://www.reddit.com/r/StableDiffusion/comments/1qt9ksg/ltx2_yolo_frankenworkflow_extend_a_video_from/
However, the more guides you add, the higher the risk of something going wrong. You may end up with hundreds of generations where only one transition is good. Unfortunately, LTX2 is not smart enough.
@runexx Since I couldn’t achieve the desired result using the downloaded workflows, could I kindly ask you to send your video as a zipped file? It seems that it’s unable to read the metadata.
hi @runexx this link is broken
https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/LTX-2%20-%20First%20Middle%20Last%20Frame%20v2%20(experimental).json
Oh, it was renamed after an update to it.
But you can find it here: https://huggingface.co/RuneXX/LTX-2-Workflows/tree/main ;-)


