---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: lab1_finetuning
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: kde4
      type: kde4
      config: en-fr
      split: train
      args: en-fr
    metrics:
    - name: Bleu
      type: bleu
      value: 48.8947659869222
---

# lab1_finetuning

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0255
- Model Preparation Time: 0.0056 s
- Bleu: 48.8948 (a scoring sketch follows)
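
The Bleu value above is a corpus-level score. A minimal sketch of computing one with the `evaluate` library, assuming the SacreBLEU backend that is typical for this kind of setup (the sentences below are illustrative, not taken from the evaluation set):

```python
# Minimal sketch: corpus-level BLEU via the `evaluate` library with the
# SacreBLEU backend. The sentences are illustrative placeholders.
import evaluate

metric = evaluate.load("sacrebleu")

predictions = ["Développer les fils de discussion par défaut"]
references = [["Par défaut, développer les fils de discussion"]]

result = metric.compute(predictions=predictions, references=references)
print(f"BLEU: {result['score']:.4f}")
```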

## Model description

The base checkpoint, [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr), is a MarianMT English-to-French translation model trained on OPUS data. This version fine-tunes it on English-French sentence pairs from the KDE4 localization corpus, adapting it to the technical, software-oriented language of KDE application translations.

## Intended uses & limitations

The model is intended for English-to-French translation and should perform best on text resembling its fine-tuning domain: software UI strings and technical documentation. On general-domain or conversational text, quality may be closer to that of the base checkpoint, and as with any machine translation system, outputs should be reviewed before production use. A usage sketch follows.
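
A minimal usage sketch with the `transformers` pipeline API; the repo id below is a placeholder for wherever this checkpoint is actually published (or a local checkpoint path):

```python
# Minimal usage sketch. "your-username/lab1_finetuning" is a placeholder
# repo id; substitute the real Hub id or a local checkpoint path.
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="your-username/lab1_finetuning",
)

result = translator("Unable to import the selected certificate file.")
print(result[0]["translation_text"])
```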

## Training and evaluation data

Training and evaluation used the [kde4](https://huggingface.co/datasets/kde4) dataset, English-French sentence pairs extracted from KDE4 localization files. The metadata above records the `en-fr` configuration and the `train` split; since kde4 ships only a `train` split, the evaluation set is presumably a held-out portion of it, as sketched below.
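
A sketch of loading the same pairs and holding out an evaluation split with the `datasets` library; the `lang1`/`lang2` call follows the dataset's documented usage, and the 10% test fraction and seed are illustrative assumptions, not recorded settings:

```python
# Load English-French KDE4 pairs and hold out an evaluation portion.
# The 10% fraction is an assumption; the split actually used for this
# run is not recorded in the card.
from datasets import load_dataset

raw_datasets = load_dataset("kde4", lang1="en", lang2="fr")
split_datasets = raw_datasets["train"].train_test_split(test_size=0.1, seed=42)

print(split_datasets)
print(split_datasets["train"][0]["translation"])  # {'en': '...', 'fr': '...'}
```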

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training-arguments sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (PyTorch fused implementation, `adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 5000
- mixed_precision_training: Native AMP
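
A sketch of `Seq2SeqTrainingArguments` consistent with the list above. The output directory is an assumed name, the evaluation cadence is inferred from the 500-step rhythm of the results table, and fp16 is assumed for "Native AMP" (bf16 is also possible):

```python
# Sketch of training arguments matching the hyperparameters above.
# output_dir is an assumed name; eval cadence is inferred from the
# results table; fp16 is assumed for "Native AMP".
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="lab1_finetuning",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=5000,
    fp16=True,
    eval_strategy="steps",
    eval_steps=500,
    predict_with_generate=True,  # generate translations so BLEU can be scored
)
```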

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Model Preparation Time (s) | Bleu    |
|:-------------:|:------:|:----:|:---------------:|:--------------------------:|:-------:|
| 1.4007        | 0.0476 | 500  | 1.2424          | 0.0056                     | 45.5942 |
| 1.1468        | 0.0952 | 1000 | 1.1651          | 0.0056                     | 46.9449 |
| 1.0415        | 0.1427 | 1500 | 1.1203          | 0.0056                     | 47.6958 |
| 1.1744        | 0.1903 | 2000 | 1.0877          | 0.0056                     | 44.0503 |
| 1.1876        | 0.2379 | 2500 | 1.0665          | 0.0056                     | 48.6443 |
| 1.1702        | 0.2855 | 3000 | 1.0510          | 0.0056                     | 47.1173 |
| 1.0369        | 0.3330 | 3500 | 1.0385          | 0.0056                     | 48.8846 |
| 1.1668        | 0.3806 | 4000 | 1.0325          | 0.0056                     | 49.0365 |
| 1.1351        | 0.4282 | 4500 | 1.0279          | 0.0056                     | 48.8962 |
| 1.0436        | 0.4758 | 5000 | 1.0255          | 0.0056                     | 49.0433 |

### Framework versions

- Transformers 4.57.6
- Pytorch 2.10.0+cu128
- Datasets 3.6.0
- Tokenizers 0.22.2