# ImagePromptHelper-gemma3-270M
This model is a fine-tuned version of google/gemma-3-270m on the ImagePromptHelper-v02 (CC BY 4.0) dataset. It achieves the following results on the evaluation set:
- Loss: 0.2502
## Model description
This model expands short image prompts into long, detailed image prompts. The Muon optimizer was used to train this model to see what would happen. The result is much better than my previous attempts.
## Intended uses & limitations
This model is intended for image prompt expansion, in the styles covered by the dataset it was trained on. It is not intended for any other purpose.
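As a minimal sketch of how to call it, assuming the Gemma chat template is baked into the tokenizer config and applied automatically by the `transformers` chat pipeline (the user-message wording below is illustrative; the dataset's exact prompt format is not documented here):

```python
# Minimal inference sketch, assuming the model behaves as a plain chat model
# with its chat template stored in the tokenizer config. The user message is
# illustrative, not the dataset's documented prompt format.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="trollek/ImagePromptHelper-gemma3-270M",
    torch_dtype=torch.bfloat16,
)

# A short image prompt in; a long, detailed one out.
messages = [{"role": "user", "content": "a red fox in a misty forest at dawn"}]
result = generator(messages, max_new_tokens=512, do_sample=True, temperature=0.7)
print(result[0]["generated_text"][-1]["content"])
```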
## Training and evaluation data
I used the Muon optimizer to train this model. Here is the LLaMA-Factory config:
### LLaMA-Factory config

```yaml
### model
model_name_or_path: google/gemma-3-270m
### method
stage: sft
do_train: true
finetuning_type: full
use_muon: true
seed: 101
### dataset
dataset: image_prompter_v2
template: gemma
cutoff_len: 2048
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: Gemma3/270M/full/image_prompter
logging_steps: 1
save_steps: 2500
save_strategy: steps
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-04
num_train_epochs: 2.0
weight_decay: 0.01
adam_beta1: 0.90
adam_beta2: 0.98
max_grad_norm: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.075
bf16: true
### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 2500
```
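Assuming the YAML above is saved as, say, `image_prompter.yaml` (the filename is illustrative), such a run can be launched with LLaMA-Factory's CLI: `llamafactory-cli train image_prompter.yaml`.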
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 101
- gradient_accumulation_steps: 8
- total_train_batch_size: 8 (per-device batch size 1 × gradient accumulation steps 8)
- optimizer: AdamW (torch) with betas=(0.9, 0.98) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.075
- num_epochs: 2.0
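For reference, the results table below implies roughly 10,100 optimizer steps per epoch (2500 steps ≈ 0.2472 epochs), so the two-epoch run comes to about 20,200 steps, and the 0.075 warmup ratio to roughly 1,500 warmup steps.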
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.0308 | 0.2472 | 2500 | 1.0421 |
| 0.7823 | 0.4945 | 5000 | 0.8296 |
| 0.6441 | 0.7417 | 7500 | 0.6573 |
| 0.4683 | 0.9890 | 10000 | 0.5116 |
| 0.2582 | 1.2362 | 12500 | 0.4155 |
| 0.1799 | 1.4834 | 15000 | 0.3259 |
| 0.1587 | 1.7307 | 17500 | 0.2656 |
| 0.1782 | 1.9779 | 20000 | 0.2502 |
## Framework versions
- Transformers 4.52.4
- PyTorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1