---
library_name: transformers
base_model: oluwagbotty/CS_mms_eng_yor
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: CS_mms_eng_yor
  results: []
---

# CS_mms_eng_yor

This model is a fine-tuned version of [oluwagbotty/CS_mms_eng_yor](https://huggingface.co/oluwagbotty/CS_mms_eng_yor) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3397
- Wer: 0.3218

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005705665024630542
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.3476        | 0.2436 | 100  | 0.4135          | 0.3730 |
| 0.4631        | 0.4872 | 200  | 0.3958          | 0.3640 |
| 0.4441        | 0.7308 | 300  | 0.3826          | 0.3493 |
| 0.4367        | 0.9744 | 400  | 0.3784          | 0.3522 |
| 0.424         | 1.2168 | 500  | 0.3734          | 0.3468 |
| 0.4256        | 1.4604 | 600  | 0.3760          | 0.3516 |
| 0.4189        | 1.7040 | 700  | 0.3739          | 0.3474 |
| 0.4293        | 1.9476 | 800  | 0.3678          | 0.3448 |
| 0.4105        | 2.1900 | 900  | 0.3663          | 0.3466 |
| 0.3981        | 2.4336 | 1000 | 0.3736          | 0.3441 |
| 0.4161        | 2.6772 | 1100 | 0.3725          | 0.3487 |
| 0.4188        | 2.9208 | 1200 | 0.3628          | 0.3434 |
| 0.3984        | 3.1632 | 1300 | 0.3671          | 0.3426 |
| 0.4108        | 3.4068 | 1400 | 0.3674          | 0.3410 |
| 0.3905        | 3.6504 | 1500 | 0.3593          | 0.3388 |
| 0.3977        | 3.8940 | 1600 | 0.3626          | 0.3387 |
| 0.3961        | 4.1364 | 1700 | 0.3610          | 0.3322 |
| 0.3844        | 4.3800 | 1800 | 0.3644          | 0.3423 |
| 0.3938        | 4.6236 | 1900 | 0.3554          | 0.3355 |
| 0.3808        | 4.8672 | 2000 | 0.3579          | 0.3349 |
| 0.3822        | 5.1096 | 2100 | 0.3562          | 0.3330 |
| 0.3755        | 5.3532 | 2200 | 0.3556          | 0.3307 |
| 0.3789        | 5.5968 | 2300 | 0.3514          | 0.3303 |
| 0.3742        | 5.8404 | 2400 | 0.3472          | 0.3328 |
| 0.3608        | 6.0828 | 2500 | 0.3470          | 0.3276 |
| 0.3647        | 6.3264 | 2600 | 0.3468          | 0.3295 |
| 0.3719        | 6.5700 | 2700 | 0.3457          | 0.3260 |
| 0.3678        | 6.8136 | 2800 | 0.3423          | 0.3179 |
| 0.3575        | 7.0560 | 2900 | 0.3422          | 0.3201 |
| 0.3427        | 7.2996 | 3000 | 0.3516          | 0.3232 |
| 0.3661        | 7.5432 | 3100 | 0.3420          | 0.3216 |
| 0.3502        | 7.7868 | 3200 | 0.3430          | 0.3238 |
| 0.3681        | 8.0292 | 3300 | 0.3388          | 0.3201 |
| 0.3454        | 8.2728 | 3400 | 0.3397          | 0.3218 |

### Framework versions

- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
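
## Usage

A minimal inference sketch, assuming this checkpoint follows the standard MMS (Wav2Vec2-CTC) layout that its base model uses; the audio file name is a placeholder.

```python
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "oluwagbotty/CS_mms_eng_yor"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Wav2Vec2-style models expect 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the argmax per frame, then collapse repeats/blanks.
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids)[0])
```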
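
A sketch of how the training hyperparameters listed above map onto `transformers.TrainingArguments`; the model, data, and `Trainer` wiring are omitted, and the output directory is illustrative.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CS_mms_eng_yor",          # illustrative path
    learning_rate=0.0005705665024630542,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,        # effective train batch size: 32
    optim="adamw_torch",                  # betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                            # Native AMP mixed precision
)
```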
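
The Wer column above is word error rate. A minimal sketch of computing it with the `evaluate` library; the example strings are purely illustrative.

```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["mo fẹ́ lọ sí ibi iṣẹ́ tomorrow"]   # hypothetical model output
references = ["mo fẹ́ lọ sí ibi iṣẹ́ tomorrow"]    # hypothetical ground truth
print(wer_metric.compute(predictions=predictions, references=references))  # 0.0
```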