COPAS-whisper-lg-3-Nov29

This model is a fine-tuned version of openai/whisper-large-v3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1596
  • WER: 13.1363
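The WER above is reported as a percentage of word-level edit operations against the reference transcript. As a reference point for how that number is computed, here is a minimal, dependency-free sketch of the metric (the training run itself presumably used a library implementation such as `evaluate` or `jiwer`; this hand-rolled version is for illustration only):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution (free if equal)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # vs. deletion / insertion
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One substitution in a six-word reference -> 16.67% WER.
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

So a WER of 13.14 means roughly one word in eight of the hypothesis transcripts needed an insertion, deletion, or substitution to match the reference.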

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 100
  • mixed_precision_training: Native AMP
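Note that the total train batch size of 16 is derived, not set directly: it is the per-device batch size (8) times the gradient accumulation steps (2). The hyperparameters above map onto the keyword arguments of `transformers.Seq2SeqTrainingArguments`; a sketch of that mapping (the `output_dir` is a placeholder and the exact training script is not published, so treat this as an assumption about how the run was configured):

```python
# Card hyperparameters expressed as Seq2SeqTrainingArguments keywords.
# output_dir is a placeholder, not taken from the card.
training_args = dict(
    output_dir="COPAS-whisper-lg-3-Nov29",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
)

# The reported total_train_batch_size of 16 follows from these two fields:
effective_batch = (training_args["per_device_train_batch_size"]
                   * training_args["gradient_accumulation_steps"])
print(effective_batch)  # 16
```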

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|---------------|---------|------|-----------------|---------|
| 0.9253        | 2.3529  | 100  | 0.3717          | 26.4368 |
| 0.1676        | 4.7059  | 200  | 0.2410          | 21.4559 |
| 0.04          | 7.0588  | 300  | 0.2064          | 17.6245 |
| 0.0204        | 9.4118  | 400  | 0.1890          | 18.1171 |
| 0.0092        | 11.7647 | 500  | 0.1929          | 17.8982 |
| 0.0051        | 14.1176 | 600  | 0.1794          | 21.3465 |
| 0.0062        | 16.4706 | 700  | 0.1682          | 14.8331 |
| 0.0046        | 18.8235 | 800  | 0.1798          | 15.2709 |
| 0.0026        | 21.1765 | 900  | 0.1717          | 15.3257 |
| 0.0022        | 23.5294 | 1000 | 0.1805          | 14.7236 |
| 0.0019        | 25.8824 | 1100 | 0.1961          | 15.4351 |
| 0.0027        | 28.2353 | 1200 | 0.1614          | 22.1128 |
| 0.0024        | 30.5882 | 1300 | 0.1653          | 17.4603 |
| 0.0023        | 32.9412 | 1400 | 0.1850          | 17.6793 |
| 0.0023        | 35.2941 | 1500 | 0.1564          | 13.1910 |
| 0.0014        | 37.6471 | 1600 | 0.1551          | 13.8478 |
| 0.002         | 40.0    | 1700 | 0.1533          | 13.5194 |
| 0.0013        | 42.3529 | 1800 | 0.1496          | 13.4100 |
| 0.0007        | 44.7059 | 1900 | 0.1555          | 13.4100 |
| 0.0006        | 47.0588 | 2000 | 0.1524          | 13.3005 |
| 0.0001        | 49.4118 | 2100 | 0.1526          | 13.4647 |
| 0.0001        | 51.7647 | 2200 | 0.1540          | 13.3005 |
| 0.0001        | 54.1176 | 2300 | 0.1549          | 13.2458 |
| 0.0001        | 56.4706 | 2400 | 0.1555          | 13.2458 |
| 0.0001        | 58.8235 | 2500 | 0.1561          | 13.3552 |
| 0.0001        | 61.1765 | 2600 | 0.1565          | 13.3005 |
| 0.0           | 63.5294 | 2700 | 0.1569          | 13.2458 |
| 0.0           | 65.8824 | 2800 | 0.1573          | 13.1910 |
| 0.0           | 68.2353 | 2900 | 0.1576          | 13.1910 |
| 0.0           | 70.5882 | 3000 | 0.1579          | 13.1910 |
| 0.0           | 72.9412 | 3100 | 0.1581          | 13.3552 |
| 0.0           | 75.2941 | 3200 | 0.1584          | 13.4100 |
| 0.0           | 77.6471 | 3300 | 0.1586          | 13.4100 |
| 0.0           | 80.0    | 3400 | 0.1588          | 12.7531 |
| 0.0           | 82.3529 | 3500 | 0.1589          | 13.0816 |
| 0.0           | 84.7059 | 3600 | 0.1591          | 12.6984 |
| 0.0           | 87.0588 | 3700 | 0.1593          | 13.1363 |
| 0.0           | 89.4118 | 3800 | 0.1594          | 13.1363 |
| 0.0           | 91.7647 | 3900 | 0.1594          | 13.0816 |
| 0.0           | 94.1176 | 4000 | 0.1595          | 13.0816 |
| 0.0           | 96.4706 | 4100 | 0.1596          | 13.0816 |
| 0.0           | 98.8235 | 4200 | 0.1596          | 13.1363 |
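Two things stand out in these logs: validation loss bottoms out around step 1800 (0.1496) while the best WER (12.6984) arrives much later at step 3600, and training loss collapses to ~0 by epoch 50, so the final 50 epochs change little. If the trainer's log history is available as (step, eval_loss, wer) rows, checkpoint selection by either criterion is a one-liner; a sketch using a few rows copied from the table above:

```python
# A few (step, eval_loss, wer) rows taken from the training-results table.
log_history = [
    (1800, 0.1496, 13.4100),
    (2000, 0.1524, 13.3005),
    (3400, 0.1588, 12.7531),
    (3600, 0.1591, 12.6984),
    (4200, 0.1596, 13.1363),
]

# Best checkpoint differs depending on the selection metric.
best_by_wer = min(log_history, key=lambda row: row[2])
best_by_loss = min(log_history, key=lambda row: row[1])
print(best_by_wer)   # (3600, 0.1591, 12.6984)
print(best_by_loss)  # (1800, 0.1496, 13.4100)
```

For an ASR model, selecting on WER rather than loss is the more common choice, since WER is the metric end users experience.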

Framework versions

  • Transformers 4.43.4
  • Pytorch 2.4.1
  • Datasets 3.0.0
  • Tokenizers 0.19.1
Model tree for sqrk/COPAS-whisper-lg-3-Nov29

Fine-tuned from openai/whisper-large-v3