---
license: gemma
datasets:
- trollek/ImagePromptHelper-v02
language:
- en
base_model:
- google/gemma-3-270m
library_name: transformers
tags:
- llama-factory
- muon
- full
pipeline_tag: text-generation
---
# ImagePromptHelper-gemma3-270M
This model is a fine-tuned version of [google/gemma-3-270m](https://huggingface.co/google/gemma-3-270m) on the [ImagePromptHelper-v02](https://huggingface.co/datasets/trollek/ImagePromptHelper-v02) (CC BY 4.0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2502
## Model description
This model expands short image prompts into long, detailed image prompts. It was trained with the Muon optimizer to see what would happen, and the result is much better than my previous attempts.
## Intended uses & limitations
This model is intended for image prompt expansion, in the styles and variations covered by the dataset it was trained on. It is not intended for any other purpose.
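Below is a minimal usage sketch with the `transformers` text-generation pipeline. The repository id and the sampling settings are assumptions; adjust them to your setup. The model takes a short prompt as the user turn and returns an expanded prompt.

```python
# Minimal sketch, assuming the repo id is trollek/ImagePromptHelper-gemma3-270M
# and the default Gemma 3 chat template.
from transformers import pipeline

expander = pipeline(
    "text-generation",
    model="trollek/ImagePromptHelper-gemma3-270M",  # assumed repo id
)

# A short image prompt goes in as the user turn.
messages = [{"role": "user", "content": "a red fox in a snowy forest"}]

out = expander(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
# The pipeline returns the full chat; the last message is the expanded prompt.
print(out[0]["generated_text"][-1]["content"])
```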
## Training and evaluation data
I used the Muon optimizer to train this model. Here is the LLaMA-Factory config:
<details>
<summary>LLaMA-Factory config</summary>

```yaml
### model
model_name_or_path: google/gemma-3-270m
### method
stage: sft
do_train: true
finetuning_type: full
use_muon: true
seed: 101
### dataset
dataset: image_prompter_v2
template: gemma
cutoff_len: 2048
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: Gemma3/270M/full/image_prompter
logging_steps: 1
save_steps: 2500
save_strategy: steps
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-04
num_train_epochs: 2.0
weight_decay: 0.01
adam_beta1: 0.90
adam_beta2: 0.98
max_grad_norm: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.075
bf16: true
### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 2500
```
</details>
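To reproduce the run, the config above can be launched with the standard LLaMA-Factory CLI, e.g. `llamafactory-cli train image_prompter.yaml` (the YAML filename here is illustrative; save the config above under any name).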
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 101
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH with betas=(0.9, 0.98) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.075
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.0308 | 0.2472 | 2500 | 1.0421 |
| 0.7823 | 0.4945 | 5000 | 0.8296 |
| 0.6441 | 0.7417 | 7500 | 0.6573 |
| 0.4683 | 0.9890 | 10000 | 0.5116 |
| 0.2582 | 1.2362 | 12500 | 0.4155 |
| 0.1799 | 1.4834 | 15000 | 0.3259 |
| 0.1587 | 1.7307 | 17500 | 0.2656 |
| 0.1782 | 1.9779 | 20000 | 0.2502 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1