# mabart-translation_en_es
This model is a fine-tuned version of facebook/mbart-large-50 on the dataset "Thermostatic/texts_parallel_corpus_europarl_english_spanish". It achieves the following results on the evaluation set:
- Loss: 1.3080
- Bleu: 27.9907
- Gen Len: 90.0233
## Model description
This small model was developed to translate the minutes of the European Parliament proceedings from English into Spanish as part of a Master’s degree project in Natural Language Processing (NLP).
## Intended uses & limitations
The model was trained on a relatively small dataset (3,000 rows), so its performance is limited. It is intended primarily for educational and experimental purposes.
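Since the checkpoint follows the standard mBART-50 interface, it should be usable for inference like any other mBART-50 model. A minimal sketch (the example sentence is an assumption; `en_XX`/`es_XX` are the mBART-50 language codes for English and Spanish):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

MODEL_ID = "armpln/mabart-translation_en_es"
SRC_LANG = "en_XX"  # mBART-50 code for English
TGT_LANG = "es_XX"  # mBART-50 code for Spanish

def translate(text, model, tokenizer, max_length=128):
    """Translate one English sentence into Spanish with this checkpoint."""
    tokenizer.src_lang = SRC_LANG
    inputs = tokenizer(text, return_tensors="pt")
    # Force the decoder to start with the Spanish language token.
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(TGT_LANG),
        max_length=max_length,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

if __name__ == "__main__":
    tokenizer = MBart50TokenizerFast.from_pretrained(MODEL_ID)
    model = MBartForConditionalGeneration.from_pretrained(MODEL_ID)
    print(translate("The session is resumed.", model, tokenizer))
```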
## Training and evaluation data
The model was trained on this dataset, which includes a large parallel corpus of English–Spanish sentence pairs: https://huggingface.co/datasets/Thermostatic/texts_parallel_corpus_europarl_english_spanish
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
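These settings correspond roughly to a `Seq2SeqTrainingArguments` configuration such as the following sketch (assuming the standard `transformers` Seq2Seq fine-tuning setup; the output directory name is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; "mbart-en-es" is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-en-es",
    learning_rate=5.6e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adafactor",
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                   # Native AMP mixed precision
    predict_with_generate=True,  # generate during eval so BLEU can be computed
)
```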
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|---|---|---|---|---|---|
| 1.4951 | 1.0 | 2700 | 1.3531 | 26.531 | 90.0033 |
| 0.8465 | 2.0 | 5400 | 1.3080 | 27.9907 | 90.0233 |
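For reference, the Bleu column is a corpus-level BLEU score on a 0–100 scale. A minimal pure-Python sketch of the metric's core computation (modified n-gram precision up to 4-grams with a brevity penalty); the evaluation presumably used a standard implementation such as `sacrebleu`, which additionally handles tokenization and smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU with uniform weights, one reference per hypothesis.

    references / hypotheses: lists of token lists of equal length.
    """
    clipped = [0] * max_n  # clipped n-gram matches per order
    total = [0] * max_n    # total hypothesis n-grams per order
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len += len(ref)
        hyp_len += len(hyp)
        for n in range(1, max_n + 1):
            hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
            total[n - 1] += sum(hyp_ng.values())
            # Clip each n-gram count by its count in the reference.
            clipped[n - 1] += sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())
    if min(clipped) == 0:  # no smoothing: any empty order zeroes the score
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, total)) / max_n
    # Brevity penalty: punish hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_prec)
```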
### Framework versions
- Transformers 5.2.0
- Pytorch 2.10.0+cu128
- Datasets 4.6.1
- Tokenizers 0.22.2