FramePack One Frame (Single Frame) Inference and Training
Overview
This document explains advanced inference and training methods using the FramePack model, particularly focusing on "1-frame inference" and its extensions. These features aim to leverage FramePack's flexibility to enable diverse image generation and editing tasks beyond simple video generation.
The Concept and Development of 1-Frame Inference
While FramePack is originally a model for generating sequential video frames (or frame sections), it was discovered that by focusing on its internal structure, particularly how it handles temporal information with RoPE (Rotary Position Embedding), interesting control over single-frame generation is possible.
Basic 1-Frame Inference:
- It takes an initial image and a prompt as input, limiting the number of generated frames to just one.
- In this process, by intentionally setting a large RoPE timestamp (`target_index`) for the single frame to be generated, a single static image can be obtained that reflects temporal and semantic changes from the initial image according to the prompt.
- This utilizes FramePack's characteristic of being highly sensitive to RoPE timestamps, since it supports bidirectional contexts such as "Inverted anti-drifting". This allows for operations similar to natural-language-based image editing, albeit in a limited capacity, without requiring additional training.
Kisekaeichi Method (Feature Merging via Post-Reference):
- This method, an extension of basic 1-frame inference, was proposed by furusu. In addition to the initial image, it uses a reference image corresponding to a "next section-start image" (treated as `clean_latent_post`) as input.
- The RoPE timestamp (`target_index`) for the image to be generated is set to an intermediate value between the timestamps of the initial image and the section-end image.
- More importantly, masking (e.g., zeroing out specific regions) is applied to the latent representation of each reference image. For example, by setting masks to extract a character's face and body shape from the initial image and clothing textures from the reference image, an image can be generated that fuses the desired features of both, similar to a character "dress-up" or outfit swap. This method can also fundamentally be achieved without additional training.
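The masking step described above can be sketched in a few lines. This is a minimal illustration in plain Python, not the tool's actual implementation: the latent is modeled as a flat list of floats, the mask follows the 255 = referenced / 0 = ignored convention used by the inference options, and `apply_latent_mask` is a hypothetical name.

```python
def apply_latent_mask(latent, mask):
    """Zero out latent values where the mask is 0 and keep them where it is 255.

    latent: flat list of floats (a toy stand-in for a latent tensor)
    mask:   flat list of ints in [0, 255], same length as the latent
    """
    if len(latent) != len(mask):
        raise ValueError("latent and mask must have the same length")
    return [v * (m / 255.0) for v, m in zip(latent, mask)]

# Toy example: keep the first half of the start-image latent (e.g., face/body)
# and zero the second half (e.g., the clothing region taken from the reference).
start_latent = [0.5, -1.2, 0.3, 0.8]
face_mask = [255, 255, 0, 0]
masked_latent = apply_latent_mask(start_latent, face_mask)
```

Each masked reference latent is then supplied to the model as a separate clean latent, so the generated frame can draw different features from each image.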
1f-mc (one frame multi-control) Method (Proximal Frame Blending):
- This method was proposed by mattyamonaca. It takes two reference images as input: an initial image (e.g., at `t=0`) and a subsequent image (e.g., at `t=1`, the first frame of a section), and generates a single image blending their features.
- Unlike Kisekaeichi, latent masking is typically not performed.
- To fully leverage this method, additional training with LoRA (Low-Rank Adaptation) is recommended. Through training, the model can better learn the relationship between the two input images and how to blend them to achieve specific editing effects.
Integration into a Generalized Control Framework
The concepts utilized in the methods above (specifying reference images, manipulating timestamps, and applying latent masks) have been generalized to create a more flexible control framework. Users can arbitrarily specify the following elements for both inference and LoRA training:
- Control Images: Any set of input images intended to influence the model.
- Clean Latent Index (Indices): Timestamps corresponding to each control image. These are treated as the `clean latent index` internally by FramePack and can be set to any position on the time axis. This is specified as `control_index`.
- Latent Masks: Masks applied to the latent representation of each control image, allowing selective control over which features from the control images are utilized. These are specified via `control_image_mask_path` or the alpha channel of the control image.
- Target Index: The timestamp for the single frame to be generated.
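Taken together, one generation (or one training sample) under this framework is just a bundle of these elements. The sketch below is purely conceptual; the class name and field names are hypothetical and only mirror the options described above:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OneFrameControlSpec:
    """Conceptual bundle of the generalized one-frame control inputs."""
    control_image_paths: List[str]                  # control images
    control_indices: List[int]                      # clean latent index per image
    target_index: int                               # timestamp of the generated frame
    control_mask_paths: Optional[List[str]] = None  # optional latent masks

    def validate(self) -> bool:
        # every control image needs exactly one clean latent index
        if len(self.control_image_paths) != len(self.control_indices):
            raise ValueError("need one control index per control image")
        # masks, when given, must also pair one-to-one with control images
        if self.control_mask_paths is not None and len(
            self.control_mask_paths
        ) != len(self.control_image_paths):
            raise ValueError("need one mask per control image")
        return True

# e.g., a kisekaeichi-style setup: start image at index 0, post reference at 10
spec = OneFrameControlSpec(
    control_image_paths=["start.png", "post_ref.png"],
    control_indices=[0, 10],
    target_index=1,
)
```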
This generalized control framework, along with corresponding extensions to the inference and LoRA training tools, has enabled advanced applications such as:
- Development of LoRAs that stabilize 1-frame inference effects (e.g., a camera orbiting effect) that were previously unstable with prompts alone.
- Development of Kisekaeichi LoRAs that learn to perform desired feature merging under specific conditions (e.g., ignoring character information from a clothing reference image), thereby automating the masking process through learning.
These features maximize FramePack's potential and open up new creative possibilities in static image generation and editing. Subsequent sections will detail the specific options for utilizing these functionalities.
One Frame (Single Frame) Training
This feature is experimental. Training is performed in the same way as one frame inference.
The dataset must be an image dataset. If you use caption files, you need to specify `control_directory` and place the start images in that directory. The `image_directory` should contain the images after the change. The filenames in both directories must match. Caption files should be placed in `image_directory`.
If you use JSONL files, specify entries as `{"image_path": "/path/to/target_image1.jpg", "control_path": "/path/to/source_image1.jpg", "caption": "The object changes to red."}`. The `image_path` should point to the image after the change, and `control_path` to the starting image.
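A JSONL file in this format can be generated with the standard library alone; the paths and caption below are placeholders, and the output filename is arbitrary:

```python
import json

# Each record pairs a post-change target image with its pre-change start image.
records = [
    {
        "image_path": "/path/to/target_image1.jpg",    # image after the change
        "control_path": "/path/to/source_image1.jpg",  # starting image
        "caption": "The object changes to red.",
    },
]

with open("one_frame_dataset.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```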
For the dataset configuration, see here and here. There are also examples for kisekaeichi and 1f-mc settings.
For single frame training, specify `--one_frame` in `fpack_cache_latents.py` to create the cache. You can also use the `--one_frame_no_2x` and `--one_frame_no_4x` options, which have the same meaning as `no_2x` and `no_4x` during inference. It is recommended to set these options to match the inference settings.
If you change whether one frame training is used, or change these options, please overwrite the existing cache without specifying `--skip_existing`.
Specify `--one_frame` in `fpack_train_network.py` to change the inference method during sample generation.
The optimal training settings are currently unknown. Feedback is welcome.
Example of prompt file description for sample generation
The command-line option `--one_frame_inference` corresponds to `--of`, and `--control_image_path` corresponds to `--ci`.
Note that `--ci` can be specified multiple times: whereas `--control_image_path` takes multiple space-separated paths, as in `--control_image_path img1.png img2.png`, `--ci` is repeated per image, as in `--ci img1.png --ci img2.png`.
Normal single frame training:
The girl wears a school uniform. --i path/to/start.png --ci path/to/start.png --of no_2x,no_4x,target_index=1,control_index=0 --f 1 --s 10 --fs 7 --d 1234 --w 384 --h 576
Kisekaeichi training:
The girl wears a school uniform. --i path/to/start_with_alpha.png --ci path/to/ref_with_alpha.png --ci path/to/start_with_alpha.png --of no_post,no_2x,no_4x,target_index=5,control_index=0;10 --f 1 --s 10 --fs 7 --d 1234 --w 384 --h 576
One (single) Frame Inference
This feature is highly experimental and not officially supported. It is intended for users who want to explore the potential of FramePack for one frame inference, which is not a standard feature of the model.
This script also allows for one frame inference, which is not an official feature of FramePack but rather a custom implementation.
Theoretically, it generates an image after a specified time from the starting image, following the prompt. This means that, although limited, it allows for natural language-based image editing.
To perform one frame inference, specify at least one option in `--one_frame_inference`. Here is an example:
--video_sections 1 --output_type latent_images --one_frame_inference default --image_path start_image.png --control_image_path start_image.png
`--image_path` is used to obtain the SIGCLIP features for one frame inference; normally, specify the starting image here. `--control_image_path` is a newly added option for specifying control images; for normal one frame inference, specify the starting image here as well.
Setting `--one_frame_inference` to `default` or `no_2x,no_4x` is recommended. If you specify `--output_type` as `latent_images`, both the latent and the image will be saved.
You can specify the following strings in the `--one_frame_inference` option, separated by commas:
- `no_2x`: Generates without passing the zero-vector clean latents 2x to the model. Slightly improves generation speed. The impact on generation results is unknown.
- `no_4x`: Generates without passing the zero-vector clean latents 4x to the model. Slightly improves generation speed. The impact on generation results is unknown.
- `no_post`: Generates without passing the zero-vector clean latents post to the model. Improves generation speed by about 20%, but may result in unstable generation.
- `target_index=<integer>`: Specifies the index of the image to be generated. The default is the last frame (i.e., `latent_window_size`).
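The option string is just a comma-separated mix of bare flags and key=value pairs (the `control_index` value described later uses semicolons internally, so it survives a comma split). A rough parsing sketch, not the script's actual implementation:

```python
def parse_one_frame_options(spec):
    """Split a --one_frame_inference value such as 'no_2x,no_4x,target_index=9'
    into a set of flags and a dict of key/value options (illustrative only)."""
    flags, options = set(), {}
    if spec == "default":
        return flags, options  # 'default' sets no extra flags
    for part in spec.split(","):
        if "=" in part:
            key, value = part.split("=", 1)
            options[key] = value
        elif part:
            flags.add(part)
    return flags, options

flags, options = parse_one_frame_options("no_2x,no_4x,target_index=9")
```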
For example, use `--one_frame_inference default` to pass clean latents 2x, clean latents 4x, and post to the model, or `--one_frame_inference no_2x,no_4x` to skip passing clean latents 2x and 4x. `--one_frame_inference target_index=9` specifies the target index for the generated image. Note that even when clean latents 2x, 4x, and post are passed, their values are zero vectors; whether or not they are passed still changes the result. In particular, with `no_post`, generation may become unstable when `latent_window_size` is large.
The `--one_frame_inference` option also supports advanced inference, described in the next section, which allows more detailed control via additional parameters such as `target_index` and `control_index`.
Normally, specify --video_sections 1 to indicate only one section (one image).
Increasing target_index from the default of 9 may result in larger changes. It has been confirmed that generation can be performed without breaking up to around 40.
The --end_image_path is ignored for one frame inference.
kisekaeichi method (Post Reference Options) and 1f-mc (Multi-Control)
The kisekaeichi method was proposed by furusu. The 1f-mc method was proposed by mattyamonaca in pull request #304.
In this repository, these methods have been integrated and can be specified with the --one_frame_inference option. This allows for specifying any number of control images as clean latents, along with indices. This means you can specify multiple starting images and multiple clean latent posts. Additionally, masks can be applied to each image.
It is expected to work only with the standard FramePack model, not with the F1 model.
The following options have been added to `--one_frame_inference`. These can be used in conjunction with existing flags like `target_index`, `no_post`, `no_2x`, and `no_4x`.
- `control_index=<integer or semicolon-separated integers>`: Specifies the index(es) of the clean latent for the control image(s). You must specify the same number of indices as the number of control images given with `--control_image_path`.
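The pairing rule can be sketched as a small helper (the name `parse_control_indices` is hypothetical) that expands a value like 0;10 and checks it against the number of control images:

```python
def parse_control_indices(value, num_control_images):
    """Expand a control_index value such as '0;10' into a list of ints and
    verify the one-index-per-control-image rule (illustrative only)."""
    indices = [int(part) for part in value.split(";")]
    if len(indices) != num_control_images:
        raise ValueError(
            f"control_index supplies {len(indices)} indices, "
            f"but {num_control_images} control images were given"
        )
    return indices

# Two control images (e.g., start image and post reference) -> two indices.
indices = parse_control_indices("0;10", num_control_images=2)
```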
Additionally, the following command-line options have been added. These arguments are only valid when `--one_frame_inference` is specified.
- `--control_image_path <path1> [<path2> ...]`: Specifies the path(s) to control (reference) image(s) for one frame inference. Provide one or more paths separated by spaces. Images with an alpha channel can be specified; if an alpha channel is present, it is used as a mask for the clean latent.
- `--control_image_mask_path <path1> [<path2> ...]`: Specifies the path(s) to grayscale mask(s) to be applied to the control image(s). Provide one or more paths separated by spaces. Each mask is applied to the corresponding control image. Areas with value 255 are referenced, while areas with value 0 are ignored.
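The alpha-channel and grayscale-mask behaviors are equivalent: the alpha value of each pixel plays the role of the grayscale mask value (255 = referenced, 0 = ignored). A toy sketch in plain Python, without an image library, where pixels are modeled as (R, G, B, A) tuples:

```python
def alpha_to_mask(rgba_pixels):
    """Extract the alpha channel of RGBA pixels as a grayscale mask,
    mirroring the 255 = referenced / 0 = ignored convention above."""
    return [a for (_r, _g, _b, a) in rgba_pixels]

# A 3-pixel toy image: the middle pixel is fully transparent, so it is ignored.
pixels = [(10, 20, 30, 255), (40, 50, 60, 0), (70, 80, 90, 255)]
mask = alpha_to_mask(pixels)
```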
Example of specifying kisekaeichi:
The kisekaeichi method works without training, but using a dedicated LoRA may yield better results.
--video_sections 1 --output_type latent_images --image_path start_image.png --control_image_path start_image.png clean_latent_post_image.png \
--one_frame_inference "target_index=1,control_index=0;10,no_post,no_2x,no_4x" --control_image_mask_path ctrl_mask1.png ctrl_mask2.png
In this example, `start_image.png` (corresponding to `clean_latent_pre`) and `clean_latent_post_image.png` (corresponding to `clean_latent_post`) are the reference images. `target_index` specifies the index of the generated image. `control_index` specifies the clean latent index for each control image, so it is `0;10`. The masks for the control images are specified with `--control_image_mask_path`.
The optimal values for `target_index` and `control_index` are unknown. `target_index` should be specified as 1 or higher. `control_index` should be set to an appropriate value relative to `latent_window_size`. Specifying 1 for `target_index` results in less change from the starting image but may introduce noise; specifying 9 or 13 may reduce noise but results in larger changes from the original image.
`control_index` should be larger than `target_index`. Typically it is set to 10, but larger values (e.g., around 13-16) also seem to work.
Sample images and command lines for reproduction are as follows:
python fpack_generate_video.py --video_size 832 480 --video_sections 1 --infer_steps 25 \
--prompt "The girl in a school blazer in a classroom." --save_path path/to/output --output_type latent_images \
--dit path/to/dit --vae path/to/vae --text_encoder1 path/to/text_encoder1 --text_encoder2 path/to/text_encoder2 \
--image_encoder path/to/image_encoder --attn_mode sdpa --vae_spatial_tile_sample_min_size 128 --vae_chunk_size 32 \
--image_path path/to/kisekaeichi_start.png --control_image_path path/to/kisekaeichi_start.png path/to/kisekaeichi_ref.png \
--one_frame_inference "target_index=1,control_index=0;10,no_2x,no_4x,no_post" \
--control_image_mask_path path/to/kisekaeichi_start_mask.png path/to/kisekaeichi_ref_mask.png --seed 1234
Specify --fp8_scaled and --blocks_to_swap options according to your VRAM capacity.
Generation result: kisekaeichi_result.png
Example of 1f-mc (Multi-Control):
--video_sections 1 --output_type latent_images --image_path start_image.png --control_image_path start_image.png 2nd_image.png \
--one_frame_inference "target_index=9,control_index=0;1,no_2x,no_4x"
In this example, `start_image.png` is the starting image and `2nd_image.png` is the reference image. `target_index=9` specifies the index of the generated image, while `control_index=0;1` specifies the clean latent indices for each control image.
1f-mc is intended to be used in combination with a trained LoRA, so adjust `target_index` and `control_index` according to the LoRA's description.