---
library_name: transformers
base_model: Serialtechlab/whisper-small-dhivehi-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-dhivehi-v3
  results: []
---

# whisper-small-dhivehi-v3

This model is a fine-tuned version of [Serialtechlab/whisper-small-dhivehi-v3](https://huggingface.co/Serialtechlab/whisper-small-dhivehi-v3); the training dataset was not specified.
It achieves the following results on the evaluation set:
- Loss: 0.0469
- WER: 0.4243
- CER: 0.1538

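For context, WER and CER scores like those above are typically computed with the `evaluate` library; a minimal sketch (the reference and prediction strings below are placeholders, not data from this model):

```python
import evaluate

# Word-error-rate and character-error-rate metrics from the evaluate library.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["a reference transcript"]   # hypothetical ground-truth text
predictions = ["a referense transcript"]  # hypothetical model output

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```
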
## Model description

No description was provided. Based on the model name and base checkpoint, this appears to be a Whisper small model fine-tuned for Dhivehi (Maldivian) automatic speech recognition.

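A minimal loading sketch, assuming the standard Whisper processor/model interface; the audio input below is a silent placeholder:

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_id = "Serialtechlab/whisper-small-dhivehi-v3"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Placeholder input: one second of silence; replace with real 16 kHz Dhivehi speech.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
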
## Intended uses & limitations

More information needed

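In the absence of documented usage guidance, here is a minimal transcription sketch using the standard `transformers` ASR pipeline; `"sample.wav"` is a placeholder path to a Dhivehi speech recording:

```python
from transformers import pipeline

# Build an automatic-speech-recognition pipeline around this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Serialtechlab/whisper-small-dhivehi-v3",
)

# "sample.wav" is a placeholder; supply your own audio file.
result = asr("sample.wav")
print(result["text"])
```
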
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP

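These settings correspond roughly to the following `Seq2SeqTrainingArguments`; a sketch, assuming the usual `transformers` Whisper fine-tuning setup (the output directory is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameter list above; the output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-dhivehi-v3",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    optim="adamw_torch_fused",      # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,                      # native AMP mixed precision
)
```
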
### Training results

| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.0555 | 0.1778 | 500 | 0.0611 | 3.4904 | 1.5343 |
| 0.0476 | 0.3556 | 1000 | 0.0594 | 2.6611 | 1.0895 |
| 0.0438 | 0.5333 | 1500 | 0.0573 | 2.6697 | 1.2302 |
| 0.0417 | 0.7111 | 2000 | 0.0561 | 1.6096 | 0.7055 |
| 0.0387 | 0.8889 | 2500 | 0.0538 | 0.4946 | 0.1877 |
| 0.0239 | 1.0665 | 3000 | 0.0562 | 0.6677 | 0.2525 |
| 0.0238 | 1.2443 | 3500 | 0.0539 | 0.6063 | 0.3227 |
| 0.0306 | 1.4220 | 4000 | 0.0511 | 0.4554 | 0.2011 |
| 0.029 | 1.5998 | 4500 | 0.0498 | 0.5114 | 0.2605 |
| 0.0289 | 1.7776 | 5000 | 0.0474 | 0.4729 | 0.2151 |
| 0.0287 | 1.9554 | 5500 | 0.0469 | 0.4243 | 0.1538 |
| 0.0166 | 2.1330 | 6000 | 0.0498 | 0.7129 | 0.3167 |
| 0.015 | 2.3108 | 6500 | 0.0498 | 0.7703 | 0.3581 |
| 0.0154 | 2.4885 | 7000 | 0.0494 | 0.6739 | 0.2582 |
| 0.0154 | 2.6663 | 7500 | 0.0497 | 0.9540 | 0.4275 |
| 0.0144 | 2.8441 | 8000 | 0.0493 | 1.1120 | 0.4540 |

Note that the evaluation results reported at the top of this card match the step-5500 checkpoint, which has the lowest validation loss, WER, and CER of the run; all three metrics degrade at later steps.

### Framework versions

- Transformers 4.57.3
- Pytorch 2.9.1+cu128
- Datasets 4.4.2
- Tokenizers 0.22.1