# Advanced configuration
## Table of contents
- [How to specify `network_args`](#how-to-specify-network_args)
- [LoRA+](#lora)
- [Select the target modules of LoRA](#select-the-target-modules-of-lora)
- [Save and view logs in TensorBoard format](#save-and-view-logs-in-tensorboard-format)
- [Save and view logs in wandb](#save-and-view-logs-in-wandb)
- [FP8 weight optimization for models](#fp8-weight-optimization-for-models)
- [PyTorch Dynamo optimization for model training](#pytorch-dynamo-optimization-for-model-training)
- [LoRA Post-Hoc EMA merging](#lora-post-hoc-ema-merging)
- [MagCache](#magcache)
## How to specify `network_args`
The `--network_args` option specifies detailed arguments for LoRA. Pass each argument to `--network_args` in the form `key=value`.
### Example
If you specify it on the command line, write as follows.
```bash
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 src/musubi_tuner/hv_train_network.py --dit ... \
    --network_module networks.lora --network_dim 32 \
    --network_args "key1=value1" "key2=value2" ...
```
If you specify it in the configuration file, write as follows.
```toml
network_args = ["key1=value1", "key2=value2", ...]
```
If you specify `"verbose=True"`, detailed information about LoRA will be displayed.
```bash
--network_args "verbose=True" "key1=value1" "key2=value2" ...
```
## LoRA+
LoRA+ is a method that improves training speed by increasing the learning rate of the UP side (LoRA-B) of LoRA. Specify the multiplier for the learning rate. The original paper recommends 16, but adjust as needed; starting from around 4 seems to work well. For details, see the [related PR in sd-scripts](https://github.com/kohya-ss/sd-scripts/pull/1233).
Specify `loraplus_lr_ratio` with `--network_args`.
### Example
```bash
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 src/musubi_tuner/hv_train_network.py --dit ... \
    --network_module networks.lora --network_dim 32 --network_args "loraplus_lr_ratio=4" ...
```
## Select the target modules of LoRA
*This feature is highly experimental and the specification may change.*
By specifying `exclude_patterns` and `include_patterns` with `--network_args`, you can select the target modules of LoRA.
`exclude_patterns` excludes modules that match the specified patterns. `include_patterns` targets only modules that match the specified patterns.
Specify the values as a list, for example `"exclude_patterns=[r'.*single_blocks.*', r'.*double_blocks\.[0-9]\..*']"`.
Each pattern is a regular expression matched against the module name. Module names look like `double_blocks.0.img_mod.linear` or `single_blocks.39.modulation.linear`. The regular expression must match the whole module name (full match, not partial match).
The patterns are applied in the order `exclude_patterns` → `include_patterns`. By default, the Linear layers of `img_mod`, `txt_mod`, and `modulation` in the double blocks and single blocks are excluded.
(`.*(img_mod|txt_mod|modulation).*` is specified.)
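For illustration, full-match semantics mean a pattern has to cover the entire module name. A quick check with Python's `re` module (independent of the training code, shown here only to illustrate the matching behavior):
```python
import re

module_name = "single_blocks.39.modulation.linear"

# Full match: the pattern must describe the whole module name.
print(bool(re.fullmatch(r".*single_blocks.*", module_name)))  # True
print(bool(re.fullmatch(r"single_blocks", module_name)))      # False: only a partial match
```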
### Example
To target only the modules of the double blocks:
```bash
--network_args "exclude_patterns=[r'.*single_blocks.*']"
```
To target only the Linear modules of the single blocks from the 10th onward:
```bash
--network_args "exclude_patterns=[r'.*']" "include_patterns=[r'.*single_blocks\.\d{2}\.linear.*']"
```
## Save and view logs in TensorBoard format
Specify the folder to save the logs with the `--logging_dir` option. Logs in TensorBoard format will be saved.
For example, if you specify `--logging_dir=logs`, a `logs` folder will be created in the working folder, and logs will be saved in a subfolder named with the date and time inside it.
Also, if you specify the `--log_prefix` option, the specified string will be added before the date. For example, use `--logging_dir=logs --log_prefix=lora_setting1_` for identification.
To view logs in TensorBoard, open another command prompt and activate the virtual environment. Then enter the following in the working folder.
```powershell
tensorboard --logdir=logs
```
(TensorBoard must be installed.)
Then open http://localhost:6006/ in a browser to view the logs.
## Save and view logs in wandb
The `--log_with wandb` option saves logs in wandb format. `tensorboard` or `all` can also be specified. The default is `tensorboard`.
Specify the project name with `--log_tracker_name` when using wandb.
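For example, a training command might look like the following (a sketch based on the command shown earlier; the project name is only an illustration):
```bash
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 src/musubi_tuner/hv_train_network.py --dit ... \
    --network_module networks.lora --network_dim 32 \
    --log_with wandb --log_tracker_name my_lora_project
```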
## FP8 weight optimization for models
The `--fp8_scaled` option is available to quantize the weights of the model to FP8 (E4M3) format with appropriate scaling. This reduces the VRAM usage while maintaining precision. Important weights are kept in FP16/BF16/FP32 format.
The model weights must be in fp16 or bf16. Weights that have been pre-converted to float8_e4m3 cannot be used.
Currently, only Wan2.1 inference and training are supported.
Specify the `--fp8_scaled` option in addition to the `--fp8` option during inference.
Specify the `--fp8_scaled` option in addition to the `--fp8_base` option during training.
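For example (a sketch only; `<inference_script>` and `<training_script>` are placeholders for your actual Wan2.1 inference and training commands and options):
```bash
# Inference: add --fp8_scaled in addition to --fp8
python <inference_script>.py ... --fp8 --fp8_scaled

# Training: add --fp8_scaled in addition to --fp8_base
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 <training_script>.py ... --fp8_base --fp8_scaled
```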
Acknowledgments: This feature is based on the [implementation](https://github.com/Tencent/HunyuanVideo/blob/7df4a45c7e424a3f6cd7d653a7ff1f60cddc1eb1/hyvideo/modules/fp8_optimization.py) of [HunyuanVideo](https://github.com/Tencent/HunyuanVideo). The selection of high-precision modules is based on the [implementation](https://github.com/tdrussell/diffusion-pipe/blob/407c04fdae1c9ab5e67b54d33bef62c3e0a8dbc7/models/wan.py) of [diffusion-pipe](https://github.com/tdrussell/diffusion-pipe). I would like to thank these repositories.
### Key features and implementation details
- Implements FP8 (E4M3) weight quantization for Linear layers
- Reduces VRAM requirements by using 8-bit weights for storage (VRAM usage is slightly higher than with the existing `--fp8`/`--fp8_base` options)
- Quantizes weights to FP8 format with appropriate scaling instead of simple cast to FP8
- Maintains computational precision by dequantizing to original precision (FP16/BF16/FP32) during forward pass
- Preserves important weights in FP16/BF16/FP32 format
The implementation:
1. Quantizes weights to FP8 format with appropriate scaling
2. Replaces weights by FP8 quantized weights and stores scale factors in model state dict
3. Applies monkey patching to Linear layers for transparent dequantization during computation
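The following is a minimal sketch of the idea (not the actual Musubi Tuner implementation; the function names and the simple per-tensor scaling are assumptions for illustration, and it requires a PyTorch build with float8 support):
```python
import torch
import torch.nn as nn

def quantize_fp8_scaled(weight: torch.Tensor):
    # Scale so the largest magnitude maps to the FP8 E4M3 maximum, then cast.
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scale = weight.abs().max().clamp(min=1e-12) / fp8_max
    return (weight / scale).to(torch.float8_e4m3fn), scale

def patch_linear_fp8(linear: nn.Linear) -> nn.Linear:
    # Store the weight in FP8 together with its scale, and monkey-patch forward
    # to dequantize back to the original precision at computation time.
    orig_dtype = linear.weight.dtype
    weight_fp8, scale = quantize_fp8_scaled(linear.weight.data)
    linear.weight.requires_grad_(False)
    linear.weight.data = weight_fp8
    linear.scale_weight = scale  # the scale factor would be kept in the state dict

    def forward(x: torch.Tensor) -> torch.Tensor:
        w = linear.weight.to(orig_dtype) * linear.scale_weight
        return nn.functional.linear(x, w, linear.bias)

    linear.forward = forward
    return linear

layer = patch_linear_fp8(nn.Linear(64, 64, dtype=torch.bfloat16))
print(layer(torch.randn(2, 64, dtype=torch.bfloat16)).shape)  # torch.Size([2, 64])
```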
## PyTorch Dynamo optimization for model training
The PyTorch Dynamo options are now available to optimize the training process. PyTorch Dynamo is a Python-level JIT compiler designed to make unmodified PyTorch programs faster by using TorchInductor, a deep learning compiler. This integration allows for potential speedups in training while maintaining model accuracy.
[PR #215](https://github.com/kohya-ss/musubi-tuner/pull/215) added this feature.
Specify the `--dynamo_backend` option to enable Dynamo optimization with one of the available backends from the `DynamoBackend` enum.
Additional options allow for fine-tuning the Dynamo behavior:
- `--dynamo_mode`: Controls the optimization strategy
- `--dynamo_fullgraph`: Enables fullgraph mode for potentially better optimization
- `--dynamo_dynamic`: Enables dynamic shape handling
The `--dynamo_dynamic` option has been reported to have many problems based on the validation in PR #215.
### Available options:
```
--dynamo_backend {NO, INDUCTOR, NVFUSER, CUDAGRAPHS, CUDAGRAPHS_FALLBACK, etc.}
Specifies the Dynamo backend to use (default is NO, which disables Dynamo)
--dynamo_mode {default, reduce-overhead, max-autotune}
Specifies the optimization mode (default is 'default')
- 'default': Standard optimization
- 'reduce-overhead': Focuses on reducing compilation overhead
- 'max-autotune': Performs extensive autotuning for potentially better performance
--dynamo_fullgraph
Flag to enable fullgraph mode, which attempts to capture and optimize the entire model graph
--dynamo_dynamic
Flag to enable dynamic shape handling for models with variable input shapes
```
### Usage example:
```bash
python src/musubi_tuner/hv_train_network.py --dynamo_backend INDUCTOR --dynamo_mode default
```
For more aggressive optimization:
```bash
python src/musubi_tuner/hv_train_network.py --dynamo_backend INDUCTOR --dynamo_mode max-autotune --dynamo_fullgraph
```
Note: The best combination of options may depend on your specific model and hardware. Experimentation may be necessary to find the optimal configuration.
## LoRA Post-Hoc EMA merging / LoRA Post-Hoc EMA merging
The LoRA Post-Hoc EMA (Exponential Moving Average) merging is a technique to combine multiple LoRA checkpoint files into a single, potentially more stable model. This method applies exponential moving average across multiple checkpoints sorted by modification time, with configurable decay rates.
The Post-Hoc EMA method works by:
1. Sorting checkpoint files by modification time (oldest to newest)
2. Using the oldest checkpoint as the base
3. Iteratively merging subsequent checkpoints with a decay rate (beta)
4. Optionally using linear interpolation between two beta values across the merge process
Pseudo-code for merging multiple checkpoints with beta=0.95 would look like this:
```
beta = 0.95
checkpoints = [checkpoint1, checkpoint2, checkpoint3] # List of checkpoints
merged_weights = checkpoints[0] # Use the first checkpoint as the base
for checkpoint in checkpoints[1:]:
    merged_weights = beta * merged_weights + (1 - beta) * checkpoint
```
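A self-contained version of this loop over actual safetensors files might look like the following (an illustration only, not the `lora_post_hoc_ema.py` script itself; it omits sorting, metadata, and hash handling):
```python
import torch
from safetensors.torch import load_file, save_file

def post_hoc_ema_merge(paths: list[str], beta: float = 0.95) -> dict[str, torch.Tensor]:
    base = load_file(paths[0])  # the oldest checkpoint is the base
    dtypes = {k: v.dtype for k, v in base.items()}
    # Work in float32 for numerical stability; non-float tensors keep the base values.
    merged = {k: v.to(torch.float32) if v.is_floating_point() else v for k, v in base.items()}
    for path in paths[1:]:
        for key, value in load_file(path).items():
            if value.is_floating_point():
                merged[key] = beta * merged[key] + (1 - beta) * value.to(torch.float32)
    # Restore the original dtypes before saving.
    return {k: v.to(dtypes[k]) for k, v in merged.items()}

merged = post_hoc_ema_merge(
    ["lora_epoch_001.safetensors", "lora_epoch_002.safetensors", "lora_epoch_003.safetensors"]
)
save_file(merged, "lora_ema_merged.safetensors")
```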
### Key features:
- **Temporal ordering**: Automatically sorts files by modification time
- **Configurable decay rates**: Supports single beta value or linear interpolation between two beta values
- **Metadata preservation**: Maintains and updates metadata from the last checkpoint
- **Hash updating**: Recalculates model hashes for the merged weights
- **Dtype preservation**: Maintains original data types of tensors
### Usage
The LoRA Post-Hoc EMA merging is available as a standalone script:
```bash
python src/musubi_tuner/lora_post_hoc_ema.py checkpoint1.safetensors checkpoint2.safetensors checkpoint3.safetensors --output_file merged_lora.safetensors --beta 0.95
```
### Command line options:
```
path [path ...]
List of paths to the LoRA weight files to merge
--beta BETA
Decay rate for merging weights (default: 0.95)
Higher values (closer to 1.0) give more weight to the accumulated average
Lower values give more weight to the current checkpoint
--beta2 BETA2
Second decay rate for linear interpolation (optional)
If specified, the decay rate will linearly interpolate from beta to beta2
across the merging process
--sigma_rel SIGMA_REL
Relative sigma for Power Function EMA (optional, mutually exclusive with beta/beta2)
This resolves the issue where the first checkpoint has a disproportionately large influence when beta is specified.
If specified, beta is calculated using the Power Function EMA method from the paper:
https://arxiv.org/pdf/2312.02696. This overrides beta and beta2.
--output_file OUTPUT_FILE
Output file path for the merged weights (required)
--no_sort
Disable sorting of checkpoint files (merge in specified order)
```
### Examples:
Basic usage with constant decay rate:
```bash
python src/musubi_tuner/lora_post_hoc_ema.py \
lora_epoch_001.safetensors \
lora_epoch_002.safetensors \
lora_epoch_003.safetensors \
--output_file lora_ema_merged.safetensors \
--beta 0.95
```
Using linear interpolation between two decay rates:
```bash
python src/musubi_tuner/lora_post_hoc_ema.py \
lora_epoch_001.safetensors \
lora_epoch_002.safetensors \
lora_epoch_003.safetensors \
--output_file lora_ema_interpolated.safetensors \
--beta 0.90 \
--beta2 0.95
```
Using Power Function EMA with `sigma_rel`:
```bash
python src/musubi_tuner/lora_post_hoc_ema.py \
lora_epoch_001.safetensors \
lora_epoch_002.safetensors \
lora_epoch_003.safetensors \
--output_file lora_power_ema_merged.safetensors \
--sigma_rel 0.2
```
#### betas for different σ-rel values:

### Recommended settings example (after training for 30 epochs, using `--beta`)
If you're unsure which settings to try, start with the following "General Recommended Settings".
#### 1. General Recommended Settings (start with these combinations)
- **Target Epochs:** `15-30` (the latter half of training)
- **beta:** `0.9` (a balanced value)
#### 2. If training converged early
- **Situation:** Loss dropped early and stabilized afterwards.
- **Target Epochs:** `10-30` (from the epoch where loss stabilized to the end)
- **beta:** `0.95` (wider range, smoother)
#### 3. If you want to avoid overfitting
- **Situation:** In the latter part of training, generated results are too similar to training data.
- **Target Epochs:** `15-25` (focus on the peak performance range)
- **beta:** `0.8` (more emphasis on the latter part of the range while maintaining diversity)
**Note:** The optimal values may vary depending on the model and dataset. It's recommended to experiment with multiple `beta` values (e.g., 0.8, 0.9, 0.95) and compare the generated results.
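For example, to produce several merged files for comparison in one go (a simple shell loop; the checkpoint glob is only an illustration, adjust it to your file names):
```bash
for beta in 0.8 0.9 0.95; do
  python src/musubi_tuner/lora_post_hoc_ema.py \
    lora_epoch_0*.safetensors \
    --output_file "lora_ema_beta_${beta}.safetensors" \
    --beta "${beta}"
done
```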
### Recommended Settings Example (30 epochs training, using `--sigma_rel`)
When using `--sigma_rel`, the beta decay schedule is determined by the Power Function EMA method. Here are some starting points:
#### 1. General Recommended Settings
- **Target Epochs:** All epochs (from the first to the last).
- **sigma_rel:** `0.2` (a general starting point).
#### 2. If training converged early
- **Situation:** Loss dropped early and stabilized afterwards.
- **Target Epochs:** All epochs.
- **sigma_rel:** `0.25` (gives more weight to earlier checkpoints, suitable for early convergence).
#### 3. If you want to avoid overfitting
- **Situation:** In the latter part of training, generated results are too similar to training data.
- **Target Epochs:** From the first epoch, omitting the last few potentially overfitted epochs.
- **sigma_rel:** `0.15` (gives more weight to later (but not the very last) checkpoints, helping to mitigate overfitting from the final stages).
**Note:** The optimal `sigma_rel` value can depend on the dataset, model, and training duration. Experimentation is encouraged. Values typically range from 0.1 to 0.5. A graph showing the relationship between `sigma_rel` and the calculated `beta` values over epochs will be provided later to help understand its behavior.
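As a rough illustration of how `sigma_rel` determines the decay schedule (based on the formulas in the cited paper, arXiv:2312.02696; the actual script may differ in details), gamma is solved from `sigma_rel` and the decay applied when merging the t-th checkpoint becomes `(1 - 1/t) ** (gamma + 1)`:
```python
import numpy as np

def sigma_rel_to_gamma(sigma_rel: float) -> float:
    # Solve sigma_rel^2 = (g + 1) / ((g + 2)^2 * (g + 3)) for g, as in the paper.
    t = sigma_rel ** -2
    return float(np.roots([1, 7, 16 - t, 12 - t]).real.max())

def betas_for_checkpoints(sigma_rel: float, num_checkpoints: int) -> list[float]:
    gamma = sigma_rel_to_gamma(sigma_rel)
    # Checkpoint 1 is the base; beta_t is applied when merging checkpoint t.
    return [(1 - 1 / t) ** (gamma + 1) for t in range(2, num_checkpoints + 1)]

print([round(b, 3) for b in betas_for_checkpoints(0.2, 30)])
```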
### Notes:
- Files are automatically sorted by modification time, so the order in the command line doesn't matter
- The `--sigma_rel` option is mutually exclusive with `--beta` and `--beta2`. If `--sigma_rel` is provided, it will determine the beta values, and any provided `--beta` or `--beta2` will be ignored.
- All checkpoint files to be merged should come from the same training run, saved per epoch or per step
- Merging other files is possible as long as tensor shapes match, but it may not work correctly as Post-Hoc EMA
- All checkpoint files must have the same alpha value
- The merged model will have updated hash values in its metadata
- The metadata of the merged model will be taken from the last checkpoint, with only the hash value recalculated
- Non-float tensors (long, int, bool, etc.) are not merged and will use the first checkpoint's values
- Processing is done in float32 precision to maintain numerical stability during merging. The original data types are preserved when saving
## MagCache
The following is quoted from the [MagCache github repository](https://github.com/Zehong-Ma/MagCache) "Magnitude-aware Cache (MagCache) for Video Diffusion Models":
> We introduce Magnitude-aware Cache (MagCache), a training-free caching approach that estimates and leverages the fluctuating differences among model outputs across timesteps based on the robust magnitude observations, thereby accelerating the inference. MagCache works well for Video Diffusion Models, Image Diffusion models.
We have implemented the MagCache feature in Musubi Tuner. Some of the code is based on the MagCache repository. It is currently available only for `fpack_generate_video.py`.
### Usage
1. Calibrate the mag ratios
    - Run the inference script as normal, but with the `--magcache_calibration` option to calibrate the mag ratios. You will get output like the following:
```
INFO:musubi_tuner.fpack_generate_video:Copy and paste following values to --magcache_mag_ratios argument to use them:
1.00000,1.26562,1.08594,1.02344,1.00781,1.01562,1.01562,1.03125,1.04688,1.00781,1.03125,1.00000,1.01562,1.01562,1.02344,1.01562,0.98438,1.05469,0.98438,0.97266,1.03125,0.96875,0.93359,0.95703,0.77734
```
    - It is recommended to run the calibration with your custom prompt and model.
    - If you run inference for a multi-section video, you will get mag ratios for each section. You can use the ratios from one of the sections, or average them.
2. Use the mag ratios
- Run the inference script with the `--magcache_mag_ratios` option to use the mag ratios. For example:
```bash
python fpack_generate_video.py --magcache_mag_ratios 1.00000,1.26562,1.08594,1.02344,1.00781,1.01562,1.01562,1.03125,1.04688,1.00781,1.03125,1.00000,1.01562,1.01562,1.02344,1.01562,0.98438,1.05469,0.98438,0.97266,1.03125,0.96875,0.93359,0.95703,0.77734
```
- Specify `--magcache_mag_ratios 0` to use the default mag ratios from the MagCache repository.
    - It is recommended to use the same number of steps as the calibration. If the number of steps is different, the mag ratios are interpolated to the specified number of steps.
- You can also specify the `--magcache_retention_ratio`, `--magcache_threshold`, and `--magcache_k` options to control the MagCache behavior. The default values are 0.2, 0.24, and 6, respectively (same as the MagCache repository).
```bash
python fpack_generate_video.py --magcache_retention_ratio 0.2 --magcache_threshold 0.24 --magcache_k 6
```
- The `--magcache_retention_ratio` option controls the ratio of the steps not to cache. For example, if you set it to 0.2, the first 20% of the steps will not be cached. The default value is 0.2.
- The `--magcache_threshold` option controls the threshold whether to use the cached output or not. If the accumulated error is less than the threshold, the cached output will be used. The default value is 0.24.
- The error is calculated by the accumulated error multiplied by the mag ratio.
      - The `--magcache_k` option controls the number of steps to use for the cache. The default value is 6, which means up to 6 consecutive steps will use the cache. The default of 6 is recommended for 50 steps, so you may want to lower it for a smaller number of steps.
### Generated video example
Using the F1 model without MagCache, approximately 90 seconds are required to generate a single-section video with 25 steps (without VAE decoding) in my environment.
https://github.com/user-attachments/assets/30b8d05e-9bd6-42bf-997f-5ba5b3dde876
With MagCache, default settings, approximately 30 seconds are required to generate with the same settings.
https://github.com/user-attachments/assets/080076ea-4088-443c-8138-4eeb00694ec5
With MagCache, `--magcache_retention_ratio 0.2 --magcache_threshold 0.12 --magcache_k 3`, approximately 35 seconds are required to generate with the same settings.
https://github.com/user-attachments/assets/27d6c7ff-e3db-4c52-8668-9a887441acef