# 969391e3edca42bb67265d889bb0f2de
This model is a fine-tuned version of [google/umt5-base](https://huggingface.co/google/umt5-base) on the Helsinki-NLP/opus_books [en-no] dataset. It achieves the following results on the evaluation set:
- Loss: 2.3579
- Data Size: 1.0 (fraction of the training set used)
- Epoch Runtime: 22.2381 s
- BLEU: 9.1771
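
A minimal inference sketch, assuming the checkpoint loads with the standard seq2seq auto classes (the repo id is taken from this card; the fine-tuning prompt format is not documented, so raw English source text is assumed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "contemmcm/969391e3edca42bb67265d889bb0f2de"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# umT5 has no task prefixes by default; this assumes the model was
# fine-tuned on raw English input mapped to Norwegian output.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```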
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
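
The card names the dataset but no split or preprocessing details; a minimal loading sketch with 🤗 Datasets (the train/eval split used for fine-tuning is an assumption, since opus_books ships only a train split):

```python
from datasets import load_dataset

# English-Norwegian pairing of OPUS Books; each row holds a translation dict.
ds = load_dataset("Helsinki-NLP/opus_books", "en-no")
example = ds["train"][0]["translation"]
print(example["en"], "->", example["no"])
```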
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
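
A sketch of how these settings map onto `Seq2SeqTrainingArguments` in transformers; the output path and `predict_with_generate` are assumptions, and batch sizes are per device (4 GPUs give the totals of 32 above):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="umt5-base-opus-books-en-no",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="constant",
    num_train_epochs=50,
    predict_with_generate=True,  # assumption: needed to compute BLEU at eval time
)
```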
### Training results

Data Size is the fraction of the training set used in each epoch; it doubles per epoch until the full set is reached at epoch 8. Although 50 epochs were configured, the recorded results end at epoch 33.
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime (s) | BLEU |
|---|---|---|---|---|---|---|
| No log | 0 | 0 | 12.6089 | 0 | 2.3110 | 0.0463 |
| No log | 1 | 87 | 12.3311 | 0.0078 | 2.5011 | 0.0285 |
| No log | 2 | 174 | 12.0040 | 0.0156 | 3.1168 | 0.0282 |
| No log | 3 | 261 | 12.0963 | 0.0312 | 4.6377 | 0.0334 |
| No log | 4 | 348 | 11.7581 | 0.0625 | 5.7346 | 0.0341 |
| 0.7837 | 5 | 435 | 11.2765 | 0.125 | 7.7011 | 0.0423 |
| 4.1787 | 6 | 522 | 9.1949 | 0.25 | 10.8425 | 0.0825 |
| 4.9454 | 7 | 609 | 8.1111 | 0.5 | 15.0277 | 0.1050 |
| 6.2734 | 8 | 696 | 5.4070 | 1.0 | 25.0478 | 1.0300 |
| 5.9352 | 9 | 783 | 3.4979 | 1.0 | 22.1364 | 6.2949 |
| 4.3914 | 10 | 870 | 2.9098 | 1.0 | 23.6211 | 5.3635 |
| 3.6986 | 11 | 957 | 2.7066 | 1.0 | 23.8962 | 6.2296 |
| 3.4338 | 12 | 1044 | 2.5942 | 1.0 | 21.7949 | 6.4528 |
| 3.1597 | 13 | 1131 | 2.5501 | 1.0 | 22.6695 | 6.8768 |
| 2.9923 | 14 | 1218 | 2.4953 | 1.0 | 22.8472 | 7.2702 |
| 2.8177 | 15 | 1305 | 2.4511 | 1.0 | 22.0388 | 7.5479 |
| 2.7483 | 16 | 1392 | 2.4291 | 1.0 | 22.3877 | 7.5955 |
| 2.6706 | 17 | 1479 | 2.4134 | 1.0 | 22.5722 | 7.8085 |
| 2.6043 | 18 | 1566 | 2.4011 | 1.0 | 23.2514 | 8.0208 |
| 2.5313 | 19 | 1653 | 2.3669 | 1.0 | 24.2886 | 8.0977 |
| 2.4446 | 20 | 1740 | 2.3642 | 1.0 | 22.3310 | 8.1760 |
| 2.3425 | 21 | 1827 | 2.3593 | 1.0 | 21.8849 | 8.3472 |
| 2.3573 | 22 | 1914 | 2.3436 | 1.0 | 23.0895 | 8.4311 |
| 2.2714 | 23 | 2001 | 2.3440 | 1.0 | 23.6487 | 8.6941 |
| 2.2094 | 24 | 2088 | 2.3409 | 1.0 | 22.1558 | 8.7230 |
| 2.1613 | 25 | 2175 | 2.3410 | 1.0 | 22.5434 | 8.7608 |
| 2.1342 | 26 | 2262 | 2.3345 | 1.0 | 22.8285 | 8.8175 |
| 2.0927 | 27 | 2349 | 2.3514 | 1.0 | 23.8641 | 8.7929 |
| 2.0233 | 28 | 2436 | 2.3373 | 1.0 | 23.6712 | 8.9700 |
| 1.9949 | 29 | 2523 | 2.3308 | 1.0 | 23.2570 | 8.9758 |
| 1.9759 | 30 | 2610 | 2.3329 | 1.0 | 23.2268 | 9.0151 |
| 1.901 | 31 | 2697 | 2.3509 | 1.0 | 23.0121 | 9.1556 |
| 1.8725 | 32 | 2784 | 2.3408 | 1.0 | 23.4961 | 9.2192 |
| 1.8202 | 33 | 2871 | 2.3579 | 1.0 | 22.2381 | 9.1771 |
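
The BLEU column can be computed with sacreBLEU via the `evaluate` library; the exact BLEU configuration used for this card is not documented, so the sketch below is generic, with hypothetical sentence pairs:

```python
import evaluate

# Corpus-level sacreBLEU; predictions and references here are hypothetical.
bleu = evaluate.load("sacrebleu")
predictions = ["Været er fint i dag."]
references = [["Været er fint i dag."]]  # one list of references per prediction
result = bleu.compute(predictions=predictions, references=references)
print(f"BLEU: {result['score']:.4f}")
```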
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1