---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
  - generated_from_trainer
datasets:
  - rbcurzon/ph_dialect_asr
metrics:
  - wer
model-index:
  - name: whisper-medium-ph
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: rbcurzon/ph_dialect_asr all
          type: rbcurzon/ph_dialect_asr
          args: all
        metrics:
          - name: Wer
            type: wer
            value: 0.1146545827633379
---

# whisper-medium-ph

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the rbcurzon/ph_dialect_asr all dataset. It achieves the following results on the evaluation set:

- Loss: 0.2901
- Wer: 0.1147

## Model description

More information needed

## Intended uses & limitations

This model is primarily designed for transcribing Tagalog, Bisaya, Ilocano, Waray, Kapampangan, Pangasinense, and Bikol voice notes and performing batch automatic speech recognition (ASR) for the same languages. It is also suitable for fine-tuning or domain adaptation for these specific speech tasks.
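A minimal usage sketch (the repo id `rbcurzon/whisper-medium-ph` is inferred from this card's title, and the audio file name is illustrative):

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="rbcurzon/whisper-medium-ph",  # assumed repo id
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device=0 if torch.cuda.is_available() else -1,
)

# Whisper works on 30-second windows; chunking covers longer voice notes,
# and batch_size transcribes multiple chunks at once for batch ASR.
result = asr("voice_note.wav", chunk_length_s=30, batch_size=8)
print(result["text"])
```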

The model has several key limitations:

- It performs poorly in noisy or multi-speaker environments, leading to transcription errors.
- Accuracy is significantly reduced for heavily accented or non-standard dialectal speech.
- It is not optimized for real-time streaming.
- Like other Whisper-family models, it can produce plausible but incorrect words (hallucinations).

## Training and evaluation data

More information needed
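For reference, the `all` configuration named in this card's evaluation results would presumably be loaded as follows (a sketch, not confirmed by the author):

```python
from datasets import load_dataset

# "all" is the dataset config named in this card's model-index metadata.
dataset = load_dataset("rbcurzon/ph_dialect_asr", "all")
print(dataset)  # inspect available splits and columns
```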

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
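As a sketch, these settings map onto `transformers`' `Seq2SeqTrainingArguments` roughly as follows (not the author's actual script; `output_dir` is illustrative). The total train batch size of 16 is the per-device batch size of 8 times 2 gradient-accumulation steps:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ph",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # 8 x 2 = effective batch of 16
    seed=42,
    optim="adamw_torch_fused",         # fused AdamW; betas/epsilon at their defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                         # native AMP mixed precision
)
```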

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.1822        | 1.4818 | 1000 | 0.2656          | 0.1445 |
| 0.0706        | 2.9637 | 2000 | 0.2491          | 0.1270 |
| 0.0072        | 4.4448 | 3000 | 0.2729          | 0.1191 |
| 0.005         | 5.9266 | 4000 | 0.2810          | 0.1157 |
| 0.0009        | 7.4077 | 5000 | 0.2901          | 0.1147 |
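The Wer column is the word error rate. As a sketch, it can be computed with the `evaluate` library (tooling assumed, not taken from the author's code; the example strings are illustrative):

```python
import evaluate

wer = evaluate.load("wer")
predictions = ["magandang umaga sa inyo"]    # model transcription (illustrative)
references = ["magandang umaga po sa inyo"]  # ground-truth transcript (illustrative)

# WER = (substitutions + deletions + insertions) / number of reference words
print(wer.compute(predictions=predictions, references=references))  # 0.2 (1 deletion / 5 words)
```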

### Framework versions

- Transformers 4.56.0.dev0
- PyTorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4