# ilo-toki-rut5-base
This model is a fine-tuned version of [ai-forever/ruT5-base](https://huggingface.co/ai-forever/ruT5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7742
- BLEU: 31.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
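Although the card does not document usage, the checkpoint can presumably be loaded through the standard `transformers` seq2seq interface. Below is a minimal inference sketch; the example input sentence, the translation direction, and the absence of a task prefix are assumptions, since the card does not specify them.

```python
# Minimal inference sketch (assumes the standard T5 seq2seq interface).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "NetherQuartz/ilo-toki-rut5-base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Example input: a Russian sentence ("Hello, world!"). The translation
# direction and any required task prefix are assumptions, not documented here.
text = "Привет, мир!"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```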
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
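The listed values map directly onto `transformers` training arguments. A sketch of a matching configuration, assuming the usual `Seq2SeqTrainer` setup (`output_dir` and `predict_with_generate` are placeholders/assumptions not stated in the card):

```python
# Sketch of a Seq2SeqTrainingArguments configuration mirroring the listed
# hyperparameters; output_dir and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ilo-toki-rut5-base",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",        # AdamW, fused PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    predict_with_generate=True,       # needed to compute BLEU at evaluation
)
```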
### Training results
| Training Loss | Epoch | Step | Validation Loss | BLEU |
|---|---|---|---|---|
| 0.8262 | 1.0 | 31041 | 1.1476 | 20.3554 |
| 1.2097 | 2.0 | 62082 | 1.0090 | 24.1511 |
| 0.3034 | 3.0 | 93123 | 0.9302 | 26.2398 |
| 0.7045 | 4.0 | 124164 | 0.8848 | 27.3941 |
| 0.2932 | 5.0 | 155205 | 0.8512 | 28.3013 |
| 0.2401 | 6.0 | 186246 | 0.8291 | 28.9026 |
| 0.7421 | 7.0 | 217287 | 0.8129 | 29.5286 |
| 0.3411 | 8.0 | 248328 | 0.7978 | 29.9663 |
| 0.516 | 9.0 | 279369 | 0.7910 | 30.2774 |
| 0.6968 | 10.0 | 310410 | 0.7847 | 30.6053 |
| 0.2133 | 11.0 | 341451 | 0.7792 | 30.8103 |
| 0.4 | 12.0 | 372492 | 0.7770 | 30.9236 |
| 0.5006 | 13.0 | 403533 | 0.7755 | 31.0467 |
| 0.3673 | 14.0 | 434574 | 0.7742 | 31.1312 |
| 0.4823 | 15.0 | 465615 | 0.7752 | 31.1158 |
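The card does not state which BLEU implementation produced these scores; a common choice in this setup is sacreBLEU via the `evaluate` library. A hedged sketch of how such a score can be computed from model outputs (the example sentences are placeholders):

```python
# Sketch of BLEU scoring with sacreBLEU via the evaluate library
# (an assumption; the card does not name the BLEU implementation).
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["toki! mi wile moku."]      # model outputs (placeholder)
references = [["toki! mi wile e moku."]]   # gold translations (placeholder)
result = bleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))           # BLEU on a 0-100 scale
```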
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu129
- Datasets 3.6.0
- Tokenizers 0.22.0