wav2vec2-xlsr-tunisian

This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-arabic for Tunisian Arabic speech recognition. The training dataset is not specified in the card. It achieves the following results on the evaluation set:

  • Loss: 1.3672
  • WER (raw): 54.7362
  • WER (normalized): 54.7362
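The WER values appear to be percentages; raw and normalized WER coincide here, which suggests no text normalization was applied before scoring. For reference, a minimal word error rate computation (Levenshtein distance over word sequences); this is a generic sketch, not the exact metric script used for this card:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("one two three", "one too three")` is 1/3: one substitution against three reference words.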

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 3
  • total_train_batch_size: 24
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
  • mixed_precision_training: Native AMP
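The hyperparameters above can be gathered into a plain configuration dict (field names mirror `transformers.TrainingArguments`; this is a sketch, not the original training script). Note that the total train batch size of 24 is simply the per-device batch size times the gradient accumulation steps:

```python
# Hyperparameters from the run above, as a plain dict.
# Field names follow transformers.TrainingArguments conventions (a sketch,
# not the exact script used for this card).
config = {
    "learning_rate": 3e-4,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 3,
    "lr_scheduler_type": "linear",
    "warmup_steps": 100,
    "num_train_epochs": 10,
    "fp16": True,  # "Native AMP" mixed precision
}

# Effective (total) train batch size = per-device batch size x accumulation steps.
effective_batch = (config["per_device_train_batch_size"]
                   * config["gradient_accumulation_steps"])
print(effective_batch)  # 24
```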

Training results

Training Loss   Epoch    Step   Validation Loss   WER (raw)   WER (norm)
1.8490          0.5917    100   1.7161            78.1946     78.1946
1.6327          1.1834    200   1.3272            73.8106     73.8106
1.4391          1.7751    300   1.5238            70.9070     70.9070
1.2211          2.3669    400   1.3363            66.9110     66.9110
1.2536          2.9586    500   1.1864            66.1492     66.1492
1.0124          3.5503    600   1.2131            63.1163     63.1163
0.9243          4.1420    700   1.1988            60.5146     60.5146
0.9155          4.7337    800   1.2127            58.8328     58.8328
0.7849          5.3254    900   1.1935            58.0998     58.0998
0.7695          5.9172   1000   1.2020            58.3441     58.3441
0.7065          6.5089   1100   1.2097            58.3441     58.3441
0.6705          7.1006   1200   1.2574            57.0648     57.0648
0.6303          7.6923   1300   1.3181            57.6685     57.6685
0.5513          8.2840   1400   1.3283            55.3256     55.3256
0.5949          8.8757   1500   1.3160            55.0812     55.0812
0.5202          9.4675   1600   1.3672            54.7362     54.7362
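Pending a fuller usage section, here is a minimal greedy-decoding sketch using the Transformers API. The repo id and function name are assumptions, and the model-loading imports are deferred inside the function so the snippet stays self-contained (in practice, load the model once and reuse it):

```python
def transcribe(waveform, model_id="asserr/wav2vec2-xlsr-tunisian",
               sampling_rate=16_000):
    """Greedy CTC transcription of a 16 kHz mono waveform (list or 1-D array).

    A usage sketch: downloads the checkpoint on first call; for real use,
    load the processor and model once and reuse them across calls.
    """
    import torch
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    processor = Wav2Vec2Processor.from_pretrained(model_id)
    model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

    inputs = processor(waveform, sampling_rate=sampling_rate,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits  # (batch, time, vocab)
    ids = logits.argmax(dim=-1)                     # greedy CTC decoding
    return processor.batch_decode(ids)[0]
```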

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.5.1+cu124
  • Datasets 3.6.0
  • Tokenizers 0.19.1
Model size: 0.3B parameters (F32 tensors, Safetensors format)
