wav2vec2-base-myst-new

This model is a fine-tuned version of facebook/wav2vec2-base on an unspecified dataset; the model name suggests the MyST (My Science Tutor) children's speech corpus, though the card does not confirm this. It achieves the following results on the evaluation set:

  • Loss: 0.4262
  • WER (word error rate): 0.1249
  • CER (character error rate): 0.0583
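
Since this is a CTC-based wav2vec2 checkpoint, transcription follows the standard Wav2Vec2 interface. Below is a minimal inference sketch, assuming a 16 kHz mono recording; `sample.wav` is a placeholder path, not a file from this repository.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "HamdanXI/wav2vec2-base-myst-new"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2-base expects 16 kHz mono audio; resample on load.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)  # placeholder path

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```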

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.06
  • num_epochs: 20
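
As a rough illustration, the hyperparameters above map onto a Hugging Face TrainingArguments configuration as sketched below. This is an assumption about how the run was configured, not the author's actual script; `output_dir` and the surrounding Trainer wiring are placeholders.

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-myst-new",  # placeholder, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    seed=42,
    optim="adamw_torch_fused",      # AdamW, fused torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=20,
)
```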

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 0.5255        | 1.1615  | 2000  | 0.4068          | 0.2182 | 0.0884 |
| 0.358         | 2.3229  | 4000  | 0.3476          | 0.1657 | 0.0713 |
| 0.3055        | 3.4844  | 6000  | 0.3039          | 0.1564 | 0.0663 |
| 0.2571        | 4.6459  | 8000  | 0.2945          | 0.1472 | 0.0641 |
| 0.2373        | 5.8073  | 10000 | 0.3080          | 0.1457 | 0.0635 |
| 0.2277        | 6.9688  | 12000 | 0.3035          | 0.1370 | 0.0619 |
| 0.184         | 8.1301  | 14000 | 0.3264          | 0.1336 | 0.0603 |
| 0.155         | 9.2916  | 16000 | 0.3322          | 0.1348 | 0.0614 |
| 0.1459        | 10.4530 | 18000 | 0.3464          | 0.1340 | 0.0617 |
| 0.1396        | 11.6145 | 20000 | 0.3306          | 0.1330 | 0.0610 |
| 0.1288        | 12.7760 | 22000 | 0.3563          | 0.1294 | 0.0595 |
| 0.1123        | 13.9374 | 24000 | 0.3605          | 0.1294 | 0.0598 |
| 0.1061        | 15.0987 | 26000 | 0.3896          | 0.1287 | 0.0595 |
| 0.0938        | 16.2602 | 28000 | 0.3904          | 0.1274 | 0.0591 |
| 0.0851        | 17.4217 | 30000 | 0.4189          | 0.1252 | 0.0585 |
| 0.0858        | 18.5831 | 32000 | 0.4145          | 0.1260 | 0.0587 |
| 0.076         | 19.7446 | 34000 | 0.4262          | 0.1249 | 0.0583 |
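
For reference, WER and CER are edit distance over words or characters divided by the reference length. The sketch below uses the jiwer library on a toy pair; this is an assumption about the tooling, since the card does not state how the metrics were computed.

```python
import jiwer

# Toy example: one substitution ("a" for "the") out of six reference words.
reference = "the cat sat on the mat"
hypothesis = "the cat sat on a mat"

print(f"WER: {jiwer.wer(reference, hypothesis):.4f}")  # word error rate
print(f"CER: {jiwer.cer(reference, hypothesis):.4f}")  # character error rate
```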

Framework versions

  • Transformers 4.57.0
  • PyTorch 2.8.0+cu128
  • Datasets 4.1.1
  • Tokenizers 0.22.1

Model size

  • Parameters: 94.4M
  • Tensor type: F32 (Safetensors format)