FramePack
Overview
This document describes the usage of the FramePack architecture within the Musubi Tuner framework. FramePack is a novel video generation architecture developed by lllyasviel.
Key differences from HunyuanVideo:
- FramePack only supports Image-to-Video (I2V) generation. Text-to-Video (T2V) is not supported.
- It utilizes a different DiT model architecture and requires an additional Image Encoder. The VAE is the same as HunyuanVideo's. The Text Encoders appear to be the same as HunyuanVideo's, but we employ the original FramePack method to utilize them.
- Caching and training scripts are specific to FramePack (`fpack_*.py`).
- Due to its progressive generation nature, VRAM usage can be significantly lower than with other architectures, especially for longer videos.
The official FramePack documentation does not explain how to train the model in detail; this support is based on the FramePack implementation and paper.
This feature is experimental.
For one-frame inference and training, see here.
Download the model
You need to download the DiT, VAE, Text Encoder 1 (LLaMA), Text Encoder 2 (CLIP), and Image Encoder (SigLIP) models specifically for FramePack. Several download options are available for each component. If you have already run HunyuanVideo inference with ComfyUI, some of these models may already be downloaded locally.

*Note: The weights are also available in one place at maybleMyers/framepack_h1111 (https://huggingface.co/maybleMyers/framepack_h1111), except for FramePack-F1. Thank you, maybleMyers!*
DiT Model
Choose one of the following methods:
- From lllyasviel's Hugging Face repo: Download the three `.safetensors` files (starting with `diffusion_pytorch_model-00001-of-00003.safetensors`) from lllyasviel/FramePackI2V_HY. Specify the path to the first file (`...-00001-of-00003.safetensors`) as the `--dit` argument. For FramePack-F1, download from lllyasviel/FramePack_F1_I2V_HY_20250503.
- From local FramePack installation: If you have cloned and run the official FramePack repository, the model might be downloaded locally. Specify the path to the snapshot directory, e.g., `path/to/FramePack/hf_download/hub/models--lllyasviel--FramePackI2V_HY/snapshots/<hex-uuid-folder>`. FramePack-F1 is also available in the same way.
- From Kijai's Hugging Face repo: Download the single file `FramePackI2V_HY_bf16.safetensors` from Kijai/HunyuanVideo_comfy. Specify the path to this file as the `--dit` argument. No FramePack-F1 model is currently available here.
VAE Model
Choose one of the following methods:
- Use official HunyuanVideo VAE: Follow the instructions in the main README.md.
- From hunyuanvideo-community Hugging Face repo: Download `vae/diffusion_pytorch_model.safetensors` from hunyuanvideo-community/HunyuanVideo.
- From local FramePack installation: If you have cloned and run the official FramePack repository, the VAE might be downloaded locally within the HunyuanVideo community model snapshot. Specify the path to the snapshot directory, e.g., `path/to/FramePack/hf_download/hub/models--hunyuanvideo-community--HunyuanVideo/snapshots/<hex-uuid-folder>`.
Text Encoder 1 (LLaMA) Model
Choose one of the following methods:
- From Comfy-Org Hugging Face repo: Download `split_files/text_encoders/llava_llama3_fp16.safetensors` from Comfy-Org/HunyuanVideo_repackaged.
- From hunyuanvideo-community Hugging Face repo: Download the four `.safetensors` files (starting with `text_encoder/model-00001-of-00004.safetensors`) from hunyuanvideo-community/HunyuanVideo. Specify the path to the first file (`...-00001-of-00004.safetensors`) as the `--text_encoder1` argument.
- From local FramePack installation: (Same as VAE) Specify the path to the HunyuanVideo community model snapshot directory, e.g., `path/to/FramePack/hf_download/hub/models--hunyuanvideo-community--HunyuanVideo/snapshots/<hex-uuid-folder>`.
Text Encoder 2 (CLIP) Model
Choose one of the following methods:
- From Comfy-Org Hugging Face repo: Download `split_files/text_encoders/clip_l.safetensors` from Comfy-Org/HunyuanVideo_repackaged.
- From hunyuanvideo-community Hugging Face repo: Download `text_encoder_2/model.safetensors` from hunyuanvideo-community/HunyuanVideo.
- From local FramePack installation: (Same as VAE) Specify the path to the HunyuanVideo community model snapshot directory, e.g., `path/to/FramePack/hf_download/hub/models--hunyuanvideo-community--HunyuanVideo/snapshots/<hex-uuid-folder>`.
Image Encoder (SigLIP) Model
Choose one of the following methods:
- From Comfy-Org Hugging Face repo: Download `sigclip_vision_patch14_384.safetensors` from Comfy-Org/sigclip_vision_384.
- From lllyasviel's Hugging Face repo: Download `image_encoder/model.safetensors` from lllyasviel/flux_redux_bfl.
- From local FramePack installation: If you have cloned and run the official FramePack repository, the model might be downloaded locally. Specify the path to the snapshot directory, e.g., `path/to/FramePack/hf_download/hub/models--lllyasviel--flux_redux_bfl/snapshots/<hex-uuid-folder>`.
Pre-caching
The default resolution for FramePack is 640x640. See the source code for the default resolution of each bucket.
The training dataset must be a video dataset; image datasets are not supported. You can train on videos of any length: specify `frame_extraction` as `full` and set `max_frames` to a sufficiently large value. However, if a video is too long, you may run out of VRAM during VAE encoding.
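For reference, a minimal sketch of such a dataset config, written here as a bash heredoc. `frame_extraction` and `max_frames` are the keys described above; the other field names and values are assumptions based on the general Musubi Tuner dataset configuration format, so check them against the dataset configuration guide before use:

```bash
# Sketch only: adjust paths, resolution, and max_frames for your data.
cat > dataset.toml <<'EOF'
[general]
resolution = [640, 640]          # FramePack default resolution
caption_extension = ".txt"
batch_size = 1

[[datasets]]
video_directory = "/path/to/videos"
cache_directory = "/path/to/cache"
frame_extraction = "full"        # use the whole clip
max_frames = 129                 # large enough to cover the longest clip
EOF
```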
Latent Pre-caching
Latent pre-caching uses a dedicated script for FramePack. You must provide the Image Encoder model.
python src/musubi_tuner/fpack_cache_latents.py \
--dataset_config path/to/toml \
--vae path/to/vae_model.safetensors \
--image_encoder path/to/image_encoder_model.safetensors \
--vae_chunk_size 32 --vae_spatial_tile_sample_min_size 128
Key differences from HunyuanVideo caching:
- Uses `fpack_cache_latents.py`.
- Requires the `--image_encoder` argument pointing to the downloaded SigLIP model.
- The script generates multiple cache files per video, each corresponding to a different section, with the section index appended to the filename (e.g., `..._frame_pos-0000-count_...` becomes `..._frame_pos-0000-0000-count_...`, `..._frame_pos-0000-0001-count_...`, etc.).
- Image embeddings are calculated using the Image Encoder and stored in the cache files alongside the latents.
To save VRAM during VAE decoding, consider using `--vae_chunk_size` and `--vae_spatial_tile_sample_min_size`. If VRAM overflows into shared memory, it is recommended to set `--vae_chunk_size` to 16 or 8 and `--vae_spatial_tile_sample_min_size` to 64 or 32.

Specifying `--f1` is required for FramePack-F1 training. For one-frame training, specify `--one_frame`. If you change the presence of either of these options, overwrite the existing cache by running without `--skip_existing`.

The `--one_frame_no_2x` and `--one_frame_no_4x` options are available for one-frame training and are described in the next section.
FramePack-F1 support:

You can apply the FramePack-F1 sampling method by specifying `--f1` during caching. The training script also requires `--f1` so that the options used for sample generation match.

By default, the sampling method is Inverted anti-drifting (the same as inference with the original FramePack model, using the latent and index in reverse order), as described in the paper. Specifying `--f1` switches to FramePack-F1 sampling (Vanilla sampling, using the temporally ordered latent and index).
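For example, latent caching for FramePack-F1 is the same command as above with `--f1` appended (remember to overwrite the cache rather than passing `--skip_existing` when switching between the two modes):

```bash
python src/musubi_tuner/fpack_cache_latents.py \
    --dataset_config path/to/toml \
    --vae path/to/vae_model.safetensors \
    --image_encoder path/to/image_encoder_model.safetensors \
    --vae_chunk_size 32 --vae_spatial_tile_sample_min_size 128 \
    --f1
```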
Text Encoder Output Pre-caching
Text encoder output pre-caching also uses a dedicated script.
python src/musubi_tuner/fpack_cache_text_encoder_outputs.py \
--dataset_config path/to/toml \
--text_encoder1 path/to/text_encoder1 \
--text_encoder2 path/to/text_encoder2 \
--batch_size 16
Key differences from HunyuanVideo caching:
- Uses `fpack_cache_text_encoder_outputs.py`.
- Requires both the `--text_encoder1` (LLaMA) and `--text_encoder2` (CLIP) arguments.
- The `--fp8_llm` option runs the LLaMA Text Encoder 1 in fp8 mode to save VRAM (similar to `--fp8_t5` in Wan2.1).
- Saves the LLaMA embeddings, attention mask, and CLIP pooler output to the cache file.
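If Text Encoder 1 does not fit in VRAM, the same caching command can be run with `--fp8_llm`:

```bash
python src/musubi_tuner/fpack_cache_text_encoder_outputs.py \
    --dataset_config path/to/toml \
    --text_encoder1 path/to/text_encoder1 \
    --text_encoder2 path/to/text_encoder2 \
    --fp8_llm --batch_size 16
```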
Training
Training uses a dedicated script `fpack_train_network.py`. Remember FramePack only supports I2V training.
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 src/musubi_tuner/fpack_train_network.py \
--dit path/to/dit_model \
--vae path/to/vae_model.safetensors \
--text_encoder1 path/to/text_encoder1 \
--text_encoder2 path/to/text_encoder2 \
--image_encoder path/to/image_encoder_model.safetensors \
--dataset_config path/to/toml \
--sdpa --mixed_precision bf16 \
--optimizer_type adamw8bit --learning_rate 2e-4 --gradient_checkpointing \
--timestep_sampling shift --weighting_scheme none --discrete_flow_shift 3.0 \
--max_data_loader_n_workers 2 --persistent_data_loader_workers \
--network_module networks.lora_framepack --network_dim 32 \
--max_train_epochs 16 --save_every_n_epochs 1 --seed 42 \
--output_dir path/to/output_dir --output_name name-of-lora
If you use the command prompt (Windows, not PowerShell), you may need to write the command on a single line, or use `^` instead of `\` at the end of each line to continue it.

The maximum value for `--blocks_to_swap` is 36. The default resolution for FramePack (640x640) requires around 17GB of VRAM. If you run out of VRAM, consider lowering the dataset resolution.
Key differences from HunyuanVideo training:
- Uses `fpack_train_network.py`.
- The `--f1` option is available for FramePack-F1 model training. You need to specify the FramePack-F1 model as `--dit`. This option only changes sample generation during training; the training process itself is the same as for the original FramePack model.
- Requires specifying `--vae`, `--text_encoder1`, `--text_encoder2`, and `--image_encoder`.
- Requires specifying `--network_module networks.lora_framepack`.
- Optional `--latent_window_size` argument (default 9, should match caching).
- Memory saving options like `--fp8` (for DiT) and `--fp8_llm` (for Text Encoder 1) are available. `--fp8_scaled` is recommended when using `--fp8` for DiT.
- `--vae_chunk_size` and `--vae_spatial_tile_sample_min_size` options are available for the VAE to prevent out-of-memory errors during sampling (similar to caching).
- `--gradient_checkpointing` is available for memory savings.
- If you encounter an error when the batch size is greater than 1 (specifying `--sdpa` or `--xformers` in particular will always result in an error), specify `--split_attn`.
Training settings (learning rate, optimizers, etc.) are experimental. Feedback is welcome.
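To illustrate how the memory-saving options above combine, a low-VRAM variant of the training command might look like the following. The flags are those listed above; the specific values (swap count, VAE tiling sizes) are illustrative rather than tuned recommendations:

```bash
# Low-VRAM sketch: fp8 DiT (scaled), fp8 LLaMA text encoder, maximum block swap,
# and smaller VAE tiling for sample generation during training.
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 src/musubi_tuner/fpack_train_network.py \
    --dit path/to/dit_model \
    --vae path/to/vae_model.safetensors \
    --text_encoder1 path/to/text_encoder1 \
    --text_encoder2 path/to/text_encoder2 \
    --image_encoder path/to/image_encoder_model.safetensors \
    --dataset_config path/to/toml \
    --sdpa --mixed_precision bf16 \
    --fp8 --fp8_scaled --fp8_llm \
    --blocks_to_swap 36 \
    --vae_chunk_size 16 --vae_spatial_tile_sample_min_size 64 \
    --optimizer_type adamw8bit --learning_rate 2e-4 --gradient_checkpointing \
    --network_module networks.lora_framepack --network_dim 32 \
    --max_train_epochs 16 --save_every_n_epochs 1 --seed 42 \
    --output_dir path/to/output_dir --output_name name-of-lora
```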
Inference
Inference uses a dedicated script `fpack_generate_video.py`.
python src/musubi_tuner/fpack_generate_video.py \
--dit path/to/dit_model \
--vae path/to/vae_model.safetensors \
--text_encoder1 path/to/text_encoder1 \
--text_encoder2 path/to/text_encoder2 \
--image_encoder path/to/image_encoder_model.safetensors \
--image_path path/to/start_image.jpg \
--prompt "A cat walks on the grass, realistic style." \
--video_size 512 768 --video_seconds 5 --fps 30 --infer_steps 25 \
--attn_mode sdpa --fp8_scaled \
--vae_chunk_size 32 --vae_spatial_tile_sample_min_size 128 \
--save_path path/to/save/dir --output_type both \
--seed 1234 --lora_multiplier 1.0 --lora_weight path/to/lora.safetensors
Key differences from HunyuanVideo inference:
- Uses `fpack_generate_video.py`.
- The `--f1` option is available for FramePack-F1 model inference (forward generation). You need to specify the FramePack-F1 model as `--dit`.
- Requires specifying `--vae`, `--text_encoder1`, `--text_encoder2`, and `--image_encoder`.
- Requires specifying `--image_path` for the starting frame.
- Requires specifying `--video_seconds` or `--video_sections`. `--video_seconds` specifies the length of the video in seconds, while `--video_sections` specifies the number of sections. If `--video_sections` is specified, `--video_seconds` is ignored.
- `--video_size` is the size of the generated video; height and width are specified in that order.
- `--prompt`: Prompt for generation.
- Optional `--latent_window_size` argument (default 9, should match caching and training).
- The `--fp8_scaled` option is available for DiT to reduce memory usage; quality may be slightly lower. The `--fp8_llm` option is available to reduce memory usage of Text Encoder 1. `--fp8` alone is also an option for DiT, but `--fp8_scaled` potentially offers better quality.
- LoRA loading options (`--lora_weight`, `--lora_multiplier`, `--include_patterns`, `--exclude_patterns`) are available. `--lycoris` is also supported.
- `--embedded_cfg_scale` (default 10.0) controls the distilled guidance scale. Changing this is usually not recommended.
- `--guidance_scale` (default 1.0) controls the standard classifier-free guidance scale. Changing this from 1.0 is generally not recommended for the base FramePack model.
- `--guidance_rescale` (default 0.0) is available but typically not needed.
- The `--bulk_decode` option decodes all frames at once, which is potentially faster but uses more VRAM during decoding. The `--vae_chunk_size` and `--vae_spatial_tile_sample_min_size` options are recommended to prevent out-of-memory errors.
- `--sample_solver` (default `unipc`) is available, but only `unipc` is implemented.
- The `--save_merged_model` option saves the DiT model after merging the LoRA weights. Inference is skipped if this is specified.
- The `--latent_paddings` option overrides the default padding for each section. Specify it as a comma-separated list of integers, e.g., `--latent_paddings 0,0,0,0`. This option is ignored if `--f1` is specified.
- The `--custom_system_prompt` option overrides the default system prompt for the LLaMA Text Encoder 1. Specify it as a string. See here for the default system prompt.
- The `--rope_scaling_timestep_threshold` option is the RoPE scaling timestep threshold; the default is None (disabled). If set, RoPE scaling is applied only when the timestep exceeds the threshold. Start with around 800 and adjust as needed. This option is intended for one-frame inference and may not be suitable for other cases.
- The `--rope_scaling_factor` option is the RoPE scaling factor; the default is 0.5, assuming a resolution of 2x. For 1.5x resolution, around 0.7 is recommended.
Other options like `--video_size`, `--fps`, `--infer_steps`, `--save_path`, `--output_type`, `--seed`, `--attn_mode`, `--blocks_to_swap`, `--vae_chunk_size`, and `--vae_spatial_tile_sample_min_size` function similarly to HunyuanVideo/Wan2.1 where applicable.

`--output_type` supports `latent_images` in addition to the options available in HunyuanVideo/Wan2.1. This option saves both the latent and the image files in the specified directory.

The LoRA weights that can be specified in `--lora_weight` are not limited to the FramePack weights trained in this repository. You can also specify HunyuanVideo LoRA weights from this repository and HunyuanVideo LoRA weights from diffusion-pipe (automatic detection).

The maximum value for `--blocks_to_swap` is 38.
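As an example of the options above, FramePack-F1 inference driven by a section count instead of a duration might look like this (model paths are placeholders):

```bash
python src/musubi_tuner/fpack_generate_video.py \
    --f1 \
    --dit path/to/framepack_f1_dit_model \
    --vae path/to/vae_model.safetensors \
    --text_encoder1 path/to/text_encoder1 \
    --text_encoder2 path/to/text_encoder2 \
    --image_encoder path/to/image_encoder_model.safetensors \
    --image_path path/to/start_image.jpg \
    --prompt "A cat walks on the grass, realistic style." \
    --video_size 512 768 --video_sections 3 --fps 30 --infer_steps 25 \
    --attn_mode sdpa --fp8_scaled \
    --vae_chunk_size 32 --vae_spatial_tile_sample_min_size 128 \
    --save_path path/to/save/dir --output_type both --seed 1234
```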
Batch and Interactive Modes
In addition to single video generation, FramePack now supports batch generation from file and interactive prompt input:
Batch Mode from File
Generate multiple videos from prompts stored in a text file:
python src/musubi_tuner/fpack_generate_video.py --from_file prompts.txt \
    --dit path/to/dit_model --vae path/to/vae_model.safetensors \
    --text_encoder1 path/to/text_encoder1 --text_encoder2 path/to/text_encoder2 \
    --image_encoder path/to/image_encoder_model.safetensors --save_path output_directory
The prompts file format:
- One prompt per line
- Empty lines and lines starting with # are ignored (comments)
- Each line can include prompt-specific parameters using command-line style format:
A beautiful sunset over mountains --w 832 --h 480 --f 5 --d 42 --s 20 --i path/to/start_image.jpg
A busy city street at night --w 480 --h 832 --i path/to/another_start.jpg
Supported inline parameters (if omitted, default values from the command line are used):
- `--w`: Width
- `--h`: Height
- `--f`: Video seconds
- `--d`: Seed
- `--s`: Inference steps
- `--g` or `--l`: Guidance scale
- `--i`: Image path (for start image)
- `--im`: Image mask path
- `--n`: Negative prompt
- `--vs`: Video sections
- `--ei`: End image path
- `--ci`: Control image path (explained in the one-frame inference documentation)
- `--cim`: Control image mask path (explained in the one-frame inference documentation)
- `--of`: One-frame inference mode options (same as `--one_frame_inference` on the command line)
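A sketch of a prompts file that exercises more of these inline parameters (all image paths are placeholders):

```bash
cat > prompts.txt <<'EOF'
# Lines starting with # and empty lines are ignored
A cat stretches and walks across the grass --w 640 --h 640 --f 3 --d 1234 --s 25 --i ./cat_start.jpg --n blurry
A dog catches a ball and runs back --w 512 --h 768 --vs 3 --i ./dog_start.jpg --ei ./dog_end.jpg
EOF
```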
In batch mode, models are loaded once and reused for all prompts, significantly improving overall generation time compared to multiple single runs.
Interactive Mode
Interactive command-line interface for entering prompts:
python src/musubi_tuner/fpack_generate_video.py --interactive \
    --dit path/to/dit_model --vae path/to/vae_model.safetensors \
    --text_encoder1 path/to/text_encoder1 --text_encoder2 path/to/text_encoder2 \
    --image_encoder path/to/image_encoder_model.safetensors --save_path output_directory
In interactive mode:
- Enter prompts directly at the command line
- Use the same inline parameter format as batch mode
- Use Ctrl+D (or Ctrl+Z on Windows) to exit
- Models remain loaded between generations for efficiency
Advanced Video Control Features (Experimental)
This section describes experimental features added to the fpack_generate_video.py script to provide finer control over the generated video content, particularly useful for longer videos or sequences requiring specific transitions or states. These features leverage the Inverted Anti-drifting sampling method inherent to FramePack.
1. End Image Guidance (`--end_image_path`)
- Functionality: Guides the generation process to make the final frame(s) of the video resemble a specified target image.
- Usage: `--end_image_path <path_to_image_file>`
- Mechanism: The provided image is encoded using the VAE. This latent representation is used as a target or starting point during the generation of the final video section (which is the first step in Inverted Anti-drifting).
- Use Cases: Defining a clear ending for the video, such as a character striking a specific pose or a product appearing in a close-up.
This option is ignored if `--f1` is specified. The end image is not used with the FramePack-F1 model.
2. Section Start Image Guidance (`--image_path` Extended Format)
- Functionality: Guides specific sections within the video to start with a visual state close to a provided image.
  - You can force the start image by setting `--latent_paddings` to `0,0,0,0` (specify as many comma-separated zeros as there are sections). If `latent_paddings` is set to 1 or more, the specified image is used as a reference image instead (the default behavior). A variant of the combined usage example below shows how to force the start images.
- Usage: `--image_path "SECTION_SPEC:path/to/image.jpg;;;SECTION_SPEC:path/to/another.jpg;;;..."`
  - `SECTION_SPEC`: Defines the target section(s). Rules:
    - `0`: The first section of the video (generated last in Inverted Anti-drifting).
    - `-1`: The last section of the video (generated first).
    - `N` (non-negative integer): The N-th section (0-indexed).
    - `-N` (negative integer): The N-th section from the end.
    - `S-E` (range, e.g., `0-2`): Applies the same image guidance to sections S through E (inclusive).
  - Use `;;;` as a separator between definitions.
  - If no image is specified for a section, generation proceeds based on the prompt and the preceding (future in time) section context.
- Mechanism: When generating a specific section, if a corresponding start image is provided, its VAE latent representation is strongly referenced as the "initial state" for that section. This guides the beginning of the section towards the specified image while attempting to maintain temporal consistency with the subsequent (already generated) section.
- Use Cases: Defining clear starting points for scene changes, specifying character poses or attire at the beginning of certain sections.
3. Section-Specific Prompts (`--prompt` Extended Format)
- Functionality: Allows providing different text prompts for different sections of the video, enabling more granular control over the narrative or action flow.
- Usage: `--prompt "SECTION_SPEC:Prompt text for section(s);;;SECTION_SPEC:Another prompt;;;..."`
  - `SECTION_SPEC`: Uses the same rules as `--image_path`.
  - Use `;;;` as a separator.
  - If a prompt for a specific section is not provided, the prompt associated with index `0` (or the closest applicable specified prompt) is typically used. Check the behavior if defaults are critical.
- Mechanism: During the generation of each section, the corresponding section-specific prompt is used as the primary textual guidance for the model.
- Prompt Content Recommendation when using `--latent_paddings 0,0,0,0` without `--f1` (original FramePack model):
  - Recall that FramePack uses Inverted Anti-drifting and references future context.
  - It is recommended to describe "the main content or state change that should occur in the current section, and the subsequent events or states leading towards the end of the video" in the prompt for each section.
  - Including the content of subsequent sections in the current section's prompt helps the model maintain context and overall coherence.
  - Example: For section 1, the prompt might describe what happens in section 1 and briefly summarize section 2 (and beyond).
  - However, based on observations (e.g., the `latent_paddings` comment), the model's ability to perfectly utilize very long-term context might be limited. Experimentation is key. Describing just the "goal for the current section" might also work. Start by trying the "section and onwards" approach.
- Write the usual prompt content when `latent_paddings` is >= 1 or `--latent_paddings` is not specified, or when using `--f1` (FramePack-F1 model).
- Use Cases: Describing evolving storylines, gradual changes in character actions or emotions, step-by-step processes over time.
Combined Usage Example (with `--f1` not specified)
Generating a 3-section video of "A dog runs towards a thrown ball, catches it, and runs back":
python src/musubi_tuner/fpack_generate_video.py \
--prompt "0:A dog runs towards a thrown ball, catches it, and runs back;;;1:The dog catches the ball and then runs back towards the viewer;;;2:The dog runs back towards the viewer holding the ball" \
--image_path "0:./img_start_running.png;;;1:./img_catching.png;;;2:./img_running_back.png" \
--end_image_path ./img_returned.png \
--save_path ./output \
# ... other arguments
- Generation Order: Section 2 -> Section 1 -> Section 0
- Generating Section 2:
  - Prompt: "The dog runs back towards the viewer holding the ball"
  - Start Image: `./img_running_back.png`
  - End Image: `./img_returned.png` (initial target)
- Generating Section 1:
  - Prompt: "The dog catches the ball and then runs back towards the viewer"
  - Start Image: `./img_catching.png`
  - Future Context: Generated Section 2 latent
- Generating Section 0:
  - Prompt: "A dog runs towards a thrown ball, catches it, and runs back"
  - Start Image: `./img_start_running.png`
  - Future Context: Generated Section 1 & 2 latents
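To force the section start images in the example above rather than using them as references, the same command can add `--latent_paddings` with one zero per section (three sections here), as described in the section start image guidance above:

```bash
python src/musubi_tuner/fpack_generate_video.py \
    --prompt "0:A dog runs towards a thrown ball, catches it, and runs back;;;1:The dog catches the ball and then runs back towards the viewer;;;2:The dog runs back towards the viewer holding the ball" \
    --image_path "0:./img_start_running.png;;;1:./img_catching.png;;;2:./img_running_back.png" \
    --end_image_path ./img_returned.png \
    --latent_paddings 0,0,0 \
    --video_sections 3 \
    --save_path ./output \
    # ... other arguments
```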
Important Considerations
- Inverted Generation: Always remember that generation proceeds from the end of the video towards the beginning. Section `-1` (the last section, `2` in the example) is generated first.
- Continuity vs. Guidance: While start image guidance is powerful, drastically different images between sections might lead to unnatural transitions. Balance guidance strength with the need for smooth flow.
- Prompt Optimization: The prompt content recommendation is a starting point. Fine-tune prompts based on observed model behavior and desired output quality.
- ããã³ããã®æé©å: æšå¥šãããããã³ããå 容ã¯ãããŸã§ãåèã§ããã¢ãã«ã®èгå¯ãããæåãšæãŸããåºåå質ã«åºã¥ããŠããã³ããã埮調æŽããŠãã ããã