wav2vec2-E10_freq_speed_pause2

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2939
  • Cer: 29.2538
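
The card does not include usage code, but the checkpoint can be loaded like any wav2vec 2.0 CTC model. A minimal transcription sketch is shown below; it assumes the repository ships a matching Wav2Vec2Processor, that input audio is resampled to 16 kHz (the rate XLS-R expects), and that "sample.wav" is only a placeholder path.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Repository id as shown on the model page.
model_id = "Gummybear05/wav2vec2-E10_freq_speed_pause2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLS-R expects 16 kHz mono audio; "sample.wav" is a placeholder path.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```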

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
  • mixed_precision_training: Native AMP
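
For reference, a minimal sketch of how these values map onto a transformers TrainingArguments object is shown below. The dataset, data collator, and metric function are not documented in this card, so they are left as commented placeholders; the 200-step evaluation interval is inferred from the results table that follows.

```python
from transformers import TrainingArguments, Trainer

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-E10_freq_speed_pause2",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,                  # native AMP mixed-precision training
    eval_strategy="steps",      # evaluation every 200 steps (inferred from the table)
    eval_steps=200,
    logging_steps=200,
)

# trainer = Trainer(
#     model=model,                      # fine-tuned from facebook/wav2vec2-xls-r-300m
#     args=training_args,
#     train_dataset=train_dataset,      # not documented in this card
#     eval_dataset=eval_dataset,        # not documented in this card
#     data_collator=data_collator,      # CTC padding collator (assumption)
#     compute_metrics=compute_metrics,  # CER computation (see sketch below the results table)
# )
# trainer.train()
```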

Training results

| Training Loss | Epoch  | Step | Validation Loss | Cer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 33.6216       | 0.1289 | 200  | 5.0765          | 100.0   |
| 5.0003        | 0.2579 | 400  | 4.7569          | 100.0   |
| 4.8152        | 0.3868 | 600  | 4.6179          | 97.9730 |
| 4.7689        | 0.5158 | 800  | 4.6068          | 97.8966 |
| 4.6602        | 0.6447 | 1000 | 4.5302          | 99.8237 |
| 4.4084        | 0.7737 | 1200 | 4.4900          | 97.2150 |
| 3.6156        | 0.9026 | 1400 | 3.2667          | 63.0787 |
| 2.9009        | 1.0316 | 1600 | 2.8830          | 57.7027 |
| 2.621         | 1.1605 | 1800 | 2.9525          | 52.9083 |
| 2.3029        | 1.2895 | 2000 | 2.4458          | 49.5593 |
| 2.046         | 1.4184 | 2200 | 2.0830          | 41.4982 |
| 1.8452        | 1.5474 | 2400 | 1.9670          | 40.0940 |
| 1.6929        | 1.6763 | 2600 | 1.8540          | 38.7955 |
| 1.5456        | 1.8053 | 2800 | 1.6959          | 36.3690 |
| 1.4492        | 1.9342 | 3000 | 1.6960          | 36.4219 |
| 1.3279        | 2.0632 | 3200 | 1.5604          | 34.0541 |
| 1.1872        | 2.1921 | 3400 | 1.4687          | 33.3255 |
| 1.1379        | 2.3211 | 3600 | 1.4123          | 31.0517 |
| 1.071         | 2.4500 | 3800 | 1.3426          | 29.1657 |
| 1.0709        | 2.5790 | 4000 | 1.3340          | 29.9412 |
| 1.0125        | 2.7079 | 4200 | 1.3153          | 29.4007 |
| 0.9648        | 2.8369 | 4400 | 1.3055          | 29.9706 |
| 0.9444        | 2.9658 | 4600 | 1.2939          | 29.2538 |
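
The Cer column appears to be the character error rate on the evaluation set expressed as a percentage. A minimal sketch of computing it with the Hugging Face evaluate library (an assumption; the card does not state which implementation was used) is:

```python
import evaluate  # the "cer" metric requires the jiwer backend

cer_metric = evaluate.load("cer")

# Hypothetical predictions/references for illustration only.
predictions = ["hello wold", "good morning"]
references = ["hello world", "good morning"]

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {100 * cer:.4f}")  # evaluate returns a fraction; the table reports percent
```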

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3