# m2m100-ky-en
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1425
- Chrf++: 42.9292
- Bleu: 0.1388
## Model description
More information needed
## Intended uses & limitations
More information needed
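The card gives no usage details, but the checkpoint should load like any other M2M100 model. A minimal sketch is shown below; the repository id `alinatl/m2m100-ky-en`, the source-language code `"ky"`, and the example sentence are assumptions, and the source code actually used during fine-tuning may differ if the base M2M100 tokenizer does not expose a Kyrgyz code.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Assumed repository id; adjust if the checkpoint lives elsewhere.
model_id = "alinatl/m2m100-ky-en"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

# Assumption: the tokenizer accepts "ky" as a source-language code.
tokenizer.src_lang = "ky"
inputs = tokenizer("Саламатсызбы, дүйнө!", return_tensors="pt")  # hypothetical Kyrgyz input

# Force English as the target language during generation.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("en"),
    max_new_tokens=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```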
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 6
- mixed_precision_training: Native AMP
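As an illustration only, the settings above map roughly onto the following `Seq2SeqTrainingArguments`; the output directory and anything not listed above are placeholders, not values taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the list above; "output_dir" and the
# evaluation/save cadence are assumptions not documented in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="m2m100-ky-en",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size 16 * 2 = 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=6,
    fp16=True,                       # "Native AMP" mixed precision
)
```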
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf++ | Bleu |
|---|---|---|---|---|---|
| 3.453 | 0.4639 | 90 | 1.4171 | 31.5406 | 0.0619 |
| 0.7793 | 0.9278 | 180 | 1.2463 | 36.7296 | 0.0912 |
| 0.5573 | 1.3918 | 270 | 1.2096 | 38.6167 | 0.1059 |
| 0.506 | 1.8557 | 360 | 1.1481 | 39.9577 | 0.1186 |
| 0.3964 | 2.3196 | 450 | 1.1479 | 40.8569 | 0.1258 |
| 0.3543 | 2.7835 | 540 | 1.1372 | 41.3424 | 0.1261 |
| 0.2886 | 3.2474 | 630 | 1.1387 | 42.2531 | 0.1352 |
| 0.2554 | 3.7113 | 720 | 1.1344 | 42.0227 | 0.1348 |
| 0.2321 | 4.1753 | 810 | 1.1319 | 42.2876 | 0.1360 |
| 0.1921 | 4.6392 | 900 | 1.1365 | 42.7091 | 0.1374 |
| 0.1856 | 5.1031 | 990 | 1.1437 | 42.5069 | 0.1374 |
| 0.1533 | 5.5670 | 1080 | 1.1425 | 42.9292 | 0.1388 |
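The Chrf++ column appears to be on a 0–100 scale and the Bleu column on a 0–1 scale. Assuming the scores were produced with the Hugging Face `evaluate` wrappers around chrF++ and BLEU, a scoring helper might look like the sketch below; the function name and example strings are placeholders.

```python
import evaluate

chrf = evaluate.load("chrf")
bleu = evaluate.load("bleu")

def score(predictions, references):
    # chrF++ is chrF with word n-grams included (word_order=2), reported on a 0-100 scale.
    chrf_pp = chrf.compute(predictions=predictions,
                           references=[[r] for r in references],
                           word_order=2)["score"]
    # The "bleu" metric in evaluate reports a 0-1 score.
    bleu_score = bleu.compute(predictions=predictions,
                              references=[[r] for r in references])["bleu"]
    return {"chrf++": chrf_pp, "bleu": bleu_score}

print(score(["the cat sat on the mat"], ["the cat sat on the mat"]))
```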
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
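To check that a local environment matches the versions listed above, something like the following can be used:

```python
import transformers, torch, datasets, tokenizers

# Versions reported in this card; a mismatch is not necessarily fatal,
# but the results above were produced with these.
print(transformers.__version__)  # 4.55.4
print(torch.__version__)         # 2.8.0+cu126
print(datasets.__version__)      # 4.0.0
print(tokenizers.__version__)    # 0.21.4
```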