---
library_name: transformers
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: df2ec487db69725dff7faeebdec684fd
  results: []
---
# df2ec487db69725dff7faeebdec684fd

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the es-it pair of the Helsinki-NLP/opus_books dataset.
It achieves the following results on the evaluation set at the final epoch:
- Loss: 2.7366
- Data size: 1.0 (fraction of the training set used in the epoch)
- Epoch runtime: 185.35 seconds
- BLEU: 5.3451

## Model description

This is a Spanish-Italian translation model obtained by fine-tuning [mBART-50 many-to-many](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt), a multilingual sequence-to-sequence transformer covering 50 languages, on the es-it pair of Helsinki-NLP/opus_books. The architecture and tokenizer are those of the base model.

## Intended uses & limitations

The model is intended for Spanish-Italian translation. Because opus_books consists of literary text, quality on other domains is untested. The final-epoch BLEU of 5.35 is also modest, and earlier checkpoints scored higher (7.70 at epoch 7; see the training results below), so the last checkpoint is not necessarily the best one.
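
A minimal inference sketch using the standard mBART-50 translation API; the model path below is a placeholder for the actual hub id or local directory:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "df2ec487db69725dff7faeebdec684fd"  # placeholder: substitute the real hub id or local path
model = MBartForConditionalGeneration.from_pretrained(model_id)
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)

# mBART-50 expects the source language set on the tokenizer and the target
# language forced as the first generated token.
tokenizer.src_lang = "es_XX"  # Spanish source
inputs = tokenizer("La vida es sueño.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"],  # Italian target
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```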

## Training and evaluation data

The model was fine-tuned on the es-it pair of [Helsinki-NLP/opus_books](https://huggingface.co/datasets/Helsinki-NLP/opus_books), a parallel corpus built from copyright-free books. The card does not record how the corpus was split for evaluation. The "Data Size" column in the results table shows that the training-set fraction was doubled each epoch (from 0.0078 up to 1.0), with full-data epochs starting at epoch 8.
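
Loading the pair is a one-liner with the `datasets` library (a sketch; opus_books ships only a `train` split, so any evaluation split must have been carved out of it):

```python
from datasets import load_dataset

# Load the Spanish-Italian pair named in this card.
books = load_dataset("Helsinki-NLP/opus_books", "es-it")

# Each row holds a {"es": ..., "it": ...} translation dict.
print(books["train"][0]["translation"])
```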

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `Seq2SeqTrainingArguments`):
- learning_rate: 5e-05
- train_batch_size: 8 (per device)
- eval_batch_size: 8 (per device)
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50 (the training log ends at epoch 12; see the results below)

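A hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; `output_dir`, `eval_strategy`, and `predict_with_generate` are assumptions the auto-generated card does not record:

```python
from transformers import Seq2SeqTrainingArguments

# With 4 GPUs and a per-device batch size of 8, the effective
# batch size is 32, matching the totals listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="df2ec487db69725dff7faeebdec684fd",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    eval_strategy="epoch",        # assumed from the per-epoch results table
    predict_with_generate=True,   # assumed: required to compute BLEU during evaluation
)
```
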
### Training results

Validation loss and BLEU improve while the data fraction ramps up, peak around epochs 7-8 (BLEU 7.7033 at epoch 7, best validation loss 2.3380 at epoch 8), and then degrade over the full-data epochs; the log ends at epoch 12 of the configured 50.

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime (s) | BLEU   |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-----------------:|:------:|
| No log        | 0     | 0    | 6.8083          | 0         | 15.4169           | 0.3904 |
| No log        | 1     | 721  | 3.2524          | 0.0078    | 17.0675           | 3.2322 |
| No log        | 2     | 1442 | 3.0505          | 0.0156    | 20.1139           | 3.7278 |
| 0.0613        | 3     | 2163 | 2.9022          | 0.0312    | 22.9839           | 4.3023 |
| 0.2014        | 4     | 2884 | 2.7807          | 0.0625    | 28.3371           | 4.9489 |
| 2.7545        | 5     | 3605 | 2.6759          | 0.125     | 39.7855           | 5.6347 |
| 2.5588        | 6     | 4326 | 2.5606          | 0.25      | 59.6311           | 6.3126 |
| 2.4027        | 7     | 5047 | 2.4370          | 0.5       | 102.1237          | 7.7033 |
| 2.1059        | 8     | 5768 | 2.3380          | 1.0       | 184.7286          | 6.2981 |
| 1.769         | 9     | 6489 | 2.3692          | 1.0       | 185.0756          | 5.9444 |
| 1.5192        | 10    | 7210 | 2.4797          | 1.0       | 184.1943          | 5.6407 |
| 1.2383        | 11    | 7931 | 2.5675          | 1.0       | 183.9788          | 5.3725 |
| 1.0197        | 12    | 8652 | 2.7366          | 1.0       | 185.3542          | 5.3451 |

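The card does not record how BLEU was computed, but a typical `Seq2SeqTrainer` setup scores decoded generations with SacreBLEU via a `compute_metrics` hook, sketched here under the assumption that the `evaluate` library was used:

```python
import numpy as np
import evaluate
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    """Decode generated ids and references, then score with SacreBLEU."""
    preds, labels = eval_preds
    # Label positions masked with -100 cannot be decoded; restore the pad token.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],  # one reference per example
    )
    return {"bleu": result["score"]}
```
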
### Framework versions

- Transformers 4.57.0
- PyTorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1