---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-hindi2_test
  results: []
---

# whisper-tiny-hindi2_test

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3585
- Wer: 38.2799
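The WER above is the word error rate: the word-level edit distance (substitutions + deletions + insertions) between the predicted and reference transcripts, divided by the number of reference words, here reported as a percentage. A minimal standalone sketch of the metric (the training run itself presumably used a library such as `evaluate` or `jiwer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate between a reference and a hypothesis transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, `wer("the cat sat", "the bat sat")` gives one substitution over three reference words, i.e. 0.333 (33.3% when reported as on this card).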

## Model description

More information needed

## Intended uses & limitations

More information needed
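The card does not document usage, but the checkpoint should load like any Whisper fine-tune through the `transformers` ASR pipeline. A sketch under the assumption that the repo id matches this card's name:

```python
from transformers import pipeline

# Repo id assumed from this model card's name; adjust if the model lives elsewhere.
asr = pipeline(
    "automatic-speech-recognition",
    model="SamagraDataGov/whisper-tiny-hindi2_test",
)

# Transcribe a local audio file (Whisper expects 16 kHz mono input;
# the pipeline resamples automatically for common formats).
result = asr("sample_hindi.wav")
print(result["text"])
```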

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3.75e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 200
- mixed_precision_training: Native AMP
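These hyperparameters map onto `transformers`' `Seq2SeqTrainingArguments`. A hedged reconstruction, not the original training script (`output_dir` is an assumption; note the card pairs a `constant` scheduler with warmup steps, which in `transformers` would more typically be `constant_with_warmup`):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-hindi2_test",  # assumed
    learning_rate=3.75e-5,
    per_device_train_batch_size=16,   # train_batch_size
    per_device_eval_batch_size=1,     # eval_batch_size
    gradient_accumulation_steps=2,    # total train batch = 16 * 2 = 32
    seed=42,
    lr_scheduler_type="constant",     # card also lists 50 warmup steps
    warmup_steps=50,
    max_steps=200,                    # training_steps
    fp16=True,                        # Native AMP mixed precision
)
```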

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.312         | 1.2698 | 40   | 0.2682          | 38.6172 |
| 0.1574        | 2.5397 | 80   | 0.2760          | 37.6054 |
| 0.0914        | 3.8095 | 120  | 0.2983          | 37.6054 |
| 0.0579        | 5.0794 | 160  | 0.3087          | 37.7740 |
| 0.028         | 6.3492 | 200  | 0.3585          | 38.2799 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1