# 5bcc5396b07e513c56df8d227037bba1
This model is a fine-tuned version of google/umt5-base on the Finnish–Norwegian (fi-no) pair of the Helsinki-NLP/opus_books dataset. It achieves the following results on the evaluation set:
- Loss: 2.8014
- Data Size: 1.0 (fraction of the training set used)
- Epoch Runtime: 22.4074 s
- BLEU: 5.4188
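
The checkpoint can be loaded with the standard `transformers` seq2seq API. A minimal usage sketch, assuming the model is hosted under the repository id shown in this card; the sample sentence and generation settings are illustrative only:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Repository id taken from this card; adjust if the checkpoint lives elsewhere.
model_id = "contemmcm/5bcc5396b07e513c56df8d227037bba1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Finnish source sentence (placeholder); the fine-tune targets Norwegian.
text = "Hyvää huomenta, miten voit?"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```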
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
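
The card names Helsinki-NLP/opus_books with the fi-no pair. A hedged loading sketch with the `datasets` library, assuming the pair is exposed as the `fi-no` configuration and follows the usual opus_books layout (a single `train` split with one `translation` dict per example):

```python
from datasets import load_dataset

# "fi-no" config name is taken from this card; opus_books ships only a
# "train" split, so an evaluation split has to be carved out manually.
dataset = load_dataset("Helsinki-NLP/opus_books", "fi-no")
dataset = dataset["train"].train_test_split(test_size=0.1, seed=42)

example = dataset["train"][0]
print(example["translation"]["fi"])  # Finnish source
print(example["translation"]["no"])  # Norwegian target
```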
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
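
A minimal sketch of the same configuration expressed as `Seq2SeqTrainingArguments`; the numeric values come from the list above, while `output_dir` and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="umt5-base-opus-books-fi-no",  # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,    # 4 GPUs -> total eval batch size 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    predict_with_generate=True,      # assumed, needed to compute BLEU at eval time
)
```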
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size (fraction) | Epoch Runtime (s) | BLEU |
|---|---|---|---|---|---|---|
| No log | 0.0 | 0 | 13.3193 | 0 | 2.3106 | 0.0333 |
| No log | 1.0 | 85 | 13.3120 | 0.0078 | 2.6831 | 0.0356 |
| No log | 2.0 | 170 | 13.0464 | 0.0156 | 3.6001 | 0.0370 |
| No log | 3.0 | 255 | 12.7136 | 0.0312 | 4.2199 | 0.0534 |
| No log | 4.0 | 340 | 12.1567 | 0.0625 | 5.2327 | 0.0324 |
| 1.1749 | 5.0 | 425 | 11.6050 | 0.125 | 6.5038 | 0.0318 |
| 1.1749 | 6.0 | 510 | 10.4120 | 0.25 | 8.6614 | 0.0432 |
| 4.5615 | 7.0 | 595 | 8.7903 | 0.5 | 13.4662 | 0.1144 |
| 10.1288 | 8.0 | 680 | 6.0192 | 1.0 | 22.9811 | 0.2258 |
| 6.163 | 9.0 | 765 | 4.0627 | 1.0 | 21.3025 | 2.5443 |
| 4.8451 | 10.0 | 850 | 3.4734 | 1.0 | 22.5832 | 1.9831 |
| 4.4945 | 11.0 | 935 | 3.2271 | 1.0 | 23.8550 | 2.8005 |
| 4.0415 | 12.0 | 1020 | 3.0752 | 1.0 | 21.7356 | 3.3590 |
| 3.768 | 13.0 | 1105 | 3.0154 | 1.0 | 22.7674 | 3.6688 |
| 3.6396 | 14.0 | 1190 | 2.9644 | 1.0 | 22.1751 | 3.9112 |
| 3.5757 | 15.0 | 1275 | 2.9374 | 1.0 | 21.2885 | 4.1408 |
| 3.42 | 16.0 | 1360 | 2.8955 | 1.0 | 22.3561 | 4.3539 |
| 3.3068 | 17.0 | 1445 | 2.8665 | 1.0 | 23.3444 | 4.5308 |
| 3.2127 | 18.0 | 1530 | 2.8577 | 1.0 | 24.0651 | 4.6450 |
| 3.1268 | 19.0 | 1615 | 2.8511 | 1.0 | 24.2527 | 4.7717 |
| 3.0505 | 20.0 | 1700 | 2.8305 | 1.0 | 21.8575 | 4.8608 |
| 2.991 | 21.0 | 1785 | 2.8234 | 1.0 | 22.0078 | 4.9905 |
| 2.9464 | 22.0 | 1870 | 2.8105 | 1.0 | 22.4925 | 4.9631 |
| 2.8467 | 23.0 | 1955 | 2.8010 | 1.0 | 22.3577 | 5.0732 |
| 2.8164 | 24.0 | 2040 | 2.7989 | 1.0 | 21.8246 | 5.1650 |
| 2.741 | 25.0 | 2125 | 2.7941 | 1.0 | 22.8675 | 5.1468 |
| 2.7071 | 26.0 | 2210 | 2.7833 | 1.0 | 23.4260 | 5.1629 |
| 2.6593 | 27.0 | 2295 | 2.7941 | 1.0 | 24.4490 | 5.3361 |
| 2.6095 | 28.0 | 2380 | 2.7955 | 1.0 | 24.7949 | 5.3050 |
| 2.5528 | 29.0 | 2465 | 2.7942 | 1.0 | 21.8180 | 5.3055 |
| 2.5013 | 30.0 | 2550 | 2.8014 | 1.0 | 22.4074 | 5.4188 |
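
The BLEU column is presumably corpus-level sacreBLEU over generated translations. A minimal scoring sketch with the `evaluate` library; the sentences below are placeholders, not data from this run:

```python
import evaluate

# sacreBLEU expects plain strings: one prediction per example and a list of
# reference lists (multiple references per example are allowed).
bleu = evaluate.load("sacrebleu")
predictions = ["God morgen , hvordan har du det ?"]
references = [["God morgen, hvordan har du det?"]]

result = bleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))  # corpus-level BLEU, same scale as the table
```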
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1