# 263ab4cb82d642995e2be12727b922a1
This model is a fine-tuned version of google/mt5-large on the fr-sv configuration of the Helsinki-NLP/opus_books dataset. It achieves the following results on the evaluation set:
- Loss: 2.2564
- Data Size: 1.0
- Epoch Runtime: 36.4600
- Bleu: 5.7704
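The card provides no usage snippet, so here is a minimal inference sketch. It assumes the checkpoint is published under the repo id contemmcm/263ab4cb82d642995e2be12727b922a1 and that the model translates raw French input to Swedish without a task prefix (mT5 defines no default prefix; verify against the training preprocessing):

```python
# Minimal inference sketch. The repo id and the absence of a task prefix
# are assumptions not confirmed by this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "contemmcm/263ab4cb82d642995e2be12727b922a1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```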
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
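The train/validation split used for fine-tuning is not documented. As a starting point, the dataset named in this card can be loaded as below (a sketch; the explicit split shown is an illustration, not the split actually used):

```python
# Sketch: loading the dataset named in this card. The 90/10 split below
# is an assumption for illustration only.
from datasets import load_dataset

dataset = load_dataset("Helsinki-NLP/opus_books", "fr-sv", split="train")
dataset = dataset.train_test_split(test_size=0.1, seed=42)  # assumed split
print(dataset["train"][0]["translation"])  # {'fr': ..., 'sv': ...}
```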
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
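These values map directly onto Transformers training arguments. A minimal sketch is below; the output_dir and predict_with_generate settings are placeholders, and the total batch size of 32 results from 8 per device across 4 GPUs when launched with a distributed launcher such as torchrun:

```python
# Sketch of how the listed hyperparameters translate to
# Seq2SeqTrainingArguments. output_dir and predict_with_generate are
# assumptions; 8 per device x 4 GPUs gives the reported total of 32.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-large-opus-books-fr-sv",  # placeholder name
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="constant",
    num_train_epochs=50,
    predict_with_generate=True,  # needed to compute BLEU at evaluation
)
```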
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Bleu |
|---|---|---|---|---|---|---|
| No log | 0 | 0 | 24.3890 | 0 | 3.3071 | 0.0025 |
| No log | 1 | 75 | 23.7084 | 0.0078 | 3.9388 | 0.0017 |
| No log | 2 | 150 | 21.1689 | 0.0156 | 6.5721 | 0.0040 |
| No log | 3 | 225 | 19.1706 | 0.0312 | 8.5808 | 0.0039 |
| No log | 4 | 300 | 19.6652 | 0.0625 | 11.0893 | 0.0036 |
| No log | 5 | 375 | 16.3233 | 0.125 | 14.1852 | 0.0045 |
| No log | 6 | 450 | 15.1590 | 0.25 | 18.4018 | 0.0031 |
| No log | 7 | 525 | 14.0679 | 0.5 | 23.5450 | 0.0052 |
| 15.0882 | 8.0 | 600 | 10.7494 | 1.0 | 38.4446 | 0.0085 |
| 10.1869 | 9.0 | 675 | 7.4117 | 1.0 | 36.7303 | 0.0055 |
| 6.8121 | 10.0 | 750 | 4.8242 | 1.0 | 35.4228 | 0.0132 |
| 4.9805 | 11.0 | 825 | 2.7453 | 1.0 | 37.8814 | 0.4070 |
| 3.1367 | 12.0 | 900 | 2.4081 | 1.0 | 35.7230 | 3.3449 |
| 2.855 | 13.0 | 975 | 2.3234 | 1.0 | 35.9308 | 3.7089 |
| 2.6102 | 14.0 | 1050 | 2.2760 | 1.0 | 36.5028 | 4.7021 |
| 2.4512 | 15.0 | 1125 | 2.2514 | 1.0 | 36.0870 | 5.0867 |
| 2.3104 | 16.0 | 1200 | 2.2298 | 1.0 | 37.8982 | 5.7355 |
| 2.2308 | 17.0 | 1275 | 2.2273 | 1.0 | 36.9682 | 5.7351 |
| 2.1351 | 18.0 | 1350 | 2.2185 | 1.0 | 36.9190 | 5.4201 |
| 2.0345 | 19.0 | 1425 | 2.2267 | 1.0 | 37.6474 | 5.6028 |
| 1.9969 | 20.0 | 1500 | 2.2154 | 1.0 | 36.1461 | 5.9516 |
| 1.9012 | 21.0 | 1575 | 2.2291 | 1.0 | 37.2582 | 5.7230 |
| 1.8474 | 22.0 | 1650 | 2.2239 | 1.0 | 35.4654 | 5.9171 |
| 1.7626 | 23.0 | 1725 | 2.2431 | 1.0 | 38.6345 | 5.7564 |
| 1.7088 | 24.0 | 1800 | 2.2564 | 1.0 | 36.4600 | 5.7704 |
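The Bleu column appears to be a corpus-level score on the validation set at each epoch. Below is a sketch of how such scores are typically computed with the evaluate library; whether this card's numbers came from sacrebleu specifically is an assumption:

```python
# Sketch: corpus-level BLEU with the evaluate library. The exact metric
# implementation behind this card's Bleu column is not documented.
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["Katten sover på soffan."]   # model outputs (example)
references = [["Katten sover på soffan."]]  # gold Swedish targets
result = bleu.compute(predictions=predictions, references=references)
print(result["score"])  # corpus BLEU on the same 0-100 scale as the table
```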
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1