# bc82c2bd7429994a0bb5b27adffa4f33
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Finnish-French (`fi-fr`) subset of the [Helsinki-NLP/opus_books](https://huggingface.co/datasets/Helsinki-NLP/opus_books) dataset. It achieves the following results on the evaluation set:
- Loss: 2.1278
- Data Size: 1.0 (fraction of the training set used)
- Epoch Runtime: 21.6920 s
- BLEU: 4.2545
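
The checkpoint can be loaded with the standard `transformers` translation API. A minimal usage sketch, assuming the repo id `contemmcm/bc82c2bd7429994a0bb5b27adffa4f33`; the expected input format (e.g., whether a task prefix was used during fine-tuning) is not documented in this card, so the raw Finnish sentence is fed directly:

```python
# Minimal usage sketch for Finnish -> French translation with this checkpoint.
# The input format is undocumented; raw source text is assumed here.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "contemmcm/bc82c2bd7429994a0bb5b27adffa4f33"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode a Finnish sentence and generate its French translation.
inputs = tokenizer("Hyvää huomenta!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```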
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
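
No split details are documented, but the dataset named above is publicly available. A minimal loading sketch, assuming the `fi-fr` configuration of `Helsinki-NLP/opus_books` via the `datasets` library (opus_books ships only a `train` split, so any evaluation split was presumably carved out of it):

```python
# Sketch of loading the dataset named in this card; the train/eval
# split actually used for fine-tuning is not documented.
from datasets import load_dataset

dataset = load_dataset("Helsinki-NLP/opus_books", "fi-fr")
print(dataset)  # opus_books exposes a single "train" split

# Each example is a {"fi": ..., "fr": ...} translation pair.
example = dataset["train"][0]["translation"]
print(example["fi"], "->", example["fr"])
```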
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
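
As a rough sketch, these settings map onto `Seq2SeqTrainingArguments` as follows. The actual training script is not included in this card; `output_dir` and `predict_with_generate` are assumptions, and the multi-GPU setup (4 devices) is what turns the per-device batch size of 8 into the total of 32:

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above; the real script is not part of this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-opus-books-fi-fr",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,    # 4 GPUs -> total eval batch size 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    predict_with_generate=True,      # needed to compute BLEU at evaluation time
)
```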
### Training results

The Data Size column gives the fraction of the training set used in each epoch; the schedule doubles that fraction every epoch until the full set is reached at epoch 8. Although 50 epochs were configured, the log ends at epoch 31, which suggests training stopped early.
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime (s) | BLEU |
|---|---|---|---|---|---|---|
| No log | 0 | 0 | 15.4150 | 0 | 2.5465 | 0.0111 |
| No log | 1 | 88 | 15.2606 | 0.0078 | 2.5777 | 0.0121 |
| No log | 2 | 176 | 14.7890 | 0.0156 | 3.5348 | 0.0133 |
| No log | 3 | 264 | 14.0644 | 0.0312 | 4.5839 | 0.0131 |
| No log | 4 | 352 | 13.9049 | 0.0625 | 6.1717 | 0.0133 |
| No log | 5 | 440 | 11.0301 | 0.125 | 7.8851 | 0.0149 |
| 1.081 | 6 | 528 | 8.9344 | 0.25 | 10.6653 | 0.0150 |
| 3.7493 | 7 | 616 | 5.9982 | 0.5 | 14.5224 | 0.0147 |
| 4.5302 | 8 | 704 | 2.8173 | 1.0 | 24.9038 | 1.2415 |
| 3.7076 | 9 | 792 | 2.4542 | 1.0 | 21.9678 | 1.8482 |
| 3.2077 | 10 | 880 | 2.3253 | 1.0 | 21.8970 | 2.0676 |
| 2.9692 | 11 | 968 | 2.2765 | 1.0 | 22.6091 | 2.3553 |
| 2.8193 | 12 | 1056 | 2.2384 | 1.0 | 21.4813 | 2.6851 |
| 2.7487 | 13 | 1144 | 2.2033 | 1.0 | 22.5017 | 2.9447 |
| 2.6309 | 14 | 1232 | 2.1828 | 1.0 | 21.9026 | 3.2756 |
| 2.5517 | 15 | 1320 | 2.1783 | 1.0 | 23.0023 | 3.2794 |
| 2.4967 | 16 | 1408 | 2.1661 | 1.0 | 20.9630 | 3.5091 |
| 2.4337 | 17 | 1496 | 2.1488 | 1.0 | 21.0490 | 3.5234 |
| 2.3646 | 18 | 1584 | 2.1418 | 1.0 | 21.8422 | 3.5040 |
| 2.353 | 19 | 1672 | 2.1434 | 1.0 | 21.4715 | 3.8377 |
| 2.2909 | 20 | 1760 | 2.1283 | 1.0 | 22.0806 | 4.0097 |
| 2.2462 | 21 | 1848 | 2.1233 | 1.0 | 21.5011 | 3.9880 |
| 2.2333 | 22 | 1936 | 2.1252 | 1.0 | 22.1982 | 4.1439 |
| 2.1529 | 23 | 2024 | 2.1174 | 1.0 | 22.3929 | 3.9746 |
| 2.1306 | 24 | 2112 | 2.1253 | 1.0 | 22.8676 | 3.9714 |
| 2.0824 | 25 | 2200 | 2.1269 | 1.0 | 20.9700 | 4.0298 |
| 2.0616 | 26 | 2288 | 2.1224 | 1.0 | 21.8319 | 4.0111 |
| 2.0048 | 27 | 2376 | 2.1101 | 1.0 | 22.4927 | 3.9453 |
| 1.9662 | 28 | 2464 | 2.1191 | 1.0 | 23.3229 | 4.1390 |
| 1.9527 | 29 | 2552 | 2.1229 | 1.0 | 20.8483 | 4.3526 |
| 1.9034 | 30 | 2640 | 2.1229 | 1.0 | 20.7968 | 4.3342 |
| 1.871 | 31 | 2728 | 2.1278 | 1.0 | 21.6920 | 4.2545 |
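
The BLEU column was presumably produced by scoring generated translations against the reference French sentences. A minimal scoring sketch, assuming sacreBLEU via the `evaluate` library (the exact metric configuration used during training is not documented):

```python
# Hypothetical BLEU scoring sketch using sacreBLEU via `evaluate`;
# the metric setup actually used during training is not documented here.
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["Bonjour tout le monde."]   # model outputs (illustrative)
references = [["Bonjour tout le monde."]]  # one list of references per prediction
result = bleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))           # sacreBLEU score on a 0-100 scale
```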
### Framework versions
- Transformers 4.57.0
- PyTorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1