ssc-bew-model

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2698
  • CER: 0.4050
  • WER: 0.9687

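Both evaluation metrics are edit-distance rates: WER counts word-level edits and CER character-level edits, each divided by the reference length. A minimal pure-Python sketch of how they are computed (the actual evaluation presumably used a library such as `evaluate` or `jiwer`; this is an illustration, not the card's evaluation code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or lists)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # distances for the empty-reference row
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

Note that both rates can exceed 1.0 when the hypothesis needs more edits than the reference has units, which is why several intermediate WER values in the table below are above 1.
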
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 30
  • mixed_precision_training: Native AMP

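The total train batch size follows from the per-device batch size times the gradient-accumulation steps (16 × 2 = 32, assuming a single device). A sketch of the hyperparameters above as a plain dict, using the standard Transformers `TrainingArguments` field names (the card does not include the actual training script):

```python
# Hyperparameters from this card, keyed by TrainingArguments field names.
hparams = {
    "learning_rate": 3e-4,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "warmup_steps": 100,
    "num_train_epochs": 30,
    "fp16": True,  # "Native AMP" mixed-precision training
}

# Effective (total) train batch size: per-device batch size times
# gradient-accumulation steps, assuming one device.
effective_batch = (hparams["per_device_train_batch_size"]
                   * hparams["gradient_accumulation_steps"])  # 32
```
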
Training results

| Training Loss | Epoch | Step | Validation Loss | CER | WER |
|---------------|-------|------|-----------------|-----|-----|
| 4.8123 | 0.4484 | 100 | 3.0133 | 0.9969 | 1.0 |
| 3.1265 | 0.8969 | 200 | 2.9396 | 0.9969 | 1.0 |
| 3.119 | 1.3453 | 300 | 2.8454 | 0.9969 | 1.0 |
| 3.0907 | 1.7937 | 400 | 2.8643 | 0.9969 | 1.0 |
| 3.0756 | 2.2422 | 500 | 2.9632 | 0.9969 | 1.0 |
| 2.9941 | 2.6906 | 600 | 2.8723 | 0.9969 | 1.0 |
| 2.8902 | 3.1390 | 700 | 2.6961 | 0.9892 | 1.0 |
| 2.7428 | 3.5874 | 800 | 2.4343 | 0.8631 | 0.9999 |
| 2.5463 | 4.0359 | 900 | 2.3476 | 0.7009 | 1.0 |
| 2.3062 | 4.4843 | 1000 | 2.1183 | 0.6930 | 1.0 |
| 2.2073 | 4.9327 | 1100 | 2.0210 | 0.7117 | 1.0 |
| 2.0765 | 5.3812 | 1200 | 2.0935 | 0.6305 | 0.9960 |
| 2.031 | 5.8296 | 1300 | 1.9194 | 0.6221 | 0.9994 |
| 1.9854 | 6.2780 | 1400 | 1.8179 | 0.5811 | 0.9999 |
| 1.8907 | 6.7265 | 1500 | 1.7837 | 0.5585 | 0.9998 |
| 1.8584 | 7.1749 | 1600 | 1.8593 | 0.5447 | 0.9931 |
| 1.7839 | 7.6233 | 1700 | 1.8705 | 0.5141 | 1.0417 |
| 1.785 | 8.0717 | 1800 | 1.7395 | 0.5033 | 0.9858 |
| 1.6759 | 8.5202 | 1900 | 1.7391 | 0.4723 | 0.9718 |
| 1.7218 | 8.9686 | 2000 | 1.7175 | 0.4940 | 0.9723 |
| 1.609 | 9.4170 | 2100 | 1.7110 | 0.4781 | 0.9772 |
| 1.5811 | 9.8655 | 2200 | 1.7124 | 0.4658 | 0.9843 |
| 1.5396 | 10.3139 | 2300 | 1.6364 | 0.4590 | 0.9560 |
| 1.5133 | 10.7623 | 2400 | 1.6455 | 0.4580 | 0.9590 |
| 1.4639 | 11.2108 | 2500 | 1.7604 | 0.4770 | 0.9661 |
| 1.4046 | 11.6592 | 2600 | 1.7846 | 0.4839 | 1.0466 |
| 1.3616 | 12.1076 | 2700 | 1.6502 | 0.4380 | 0.9452 |
| 1.3035 | 12.5561 | 2800 | 1.6649 | 0.4458 | 0.9521 |
| 1.3431 | 13.0045 | 2900 | 1.6487 | 0.4386 | 0.9449 |
| 1.2072 | 13.4529 | 3000 | 1.6000 | 0.4358 | 0.9402 |
| 1.1978 | 13.9013 | 3100 | 1.6768 | 0.4380 | 0.9613 |
| 1.1108 | 14.3498 | 3200 | 1.7206 | 0.4302 | 0.9412 |
| 1.1131 | 14.7982 | 3300 | 1.6861 | 0.4315 | 0.9715 |
| 1.0874 | 15.2466 | 3400 | 1.5880 | 0.4253 | 0.9327 |
| 1.0333 | 15.6951 | 3500 | 1.5706 | 0.4154 | 0.9273 |
| 0.9514 | 16.1435 | 3600 | 1.7136 | 0.4228 | 0.9538 |
| 0.9313 | 16.5919 | 3700 | 1.8036 | 0.4289 | 1.0255 |
| 0.9902 | 17.0404 | 3800 | 1.6053 | 0.4120 | 0.9242 |
| 0.8452 | 17.4888 | 3900 | 1.5672 | 0.4104 | 0.9327 |
| 0.872 | 17.9372 | 4000 | 1.6529 | 0.4131 | 0.9536 |
| 0.7782 | 18.3857 | 4100 | 1.8549 | 0.4384 | 1.0661 |
| 0.7984 | 18.8341 | 4200 | 1.8437 | 0.4153 | 1.0265 |
| 0.7615 | 19.2825 | 4300 | 1.7319 | 0.4127 | 0.9696 |
| 0.7246 | 19.7309 | 4400 | 1.7560 | 0.4035 | 0.9411 |
| 0.698 | 20.1794 | 4500 | 1.8200 | 0.4254 | 0.9628 |
| 0.6601 | 20.6278 | 4600 | 1.8046 | 0.4220 | 0.9428 |
| 0.6534 | 21.0762 | 4700 | 1.9306 | 0.4108 | 0.9498 |
| 0.6144 | 21.5247 | 4800 | 1.8637 | 0.4024 | 0.9563 |
| 0.6418 | 21.9731 | 4900 | 1.9459 | 0.4060 | 0.9565 |
| 0.5814 | 22.4215 | 5000 | 1.9776 | 0.4344 | 1.0098 |
| 0.577 | 22.8700 | 5100 | 2.0336 | 0.4109 | 0.9729 |
| 0.5315 | 23.3184 | 5200 | 2.1168 | 0.4078 | 0.9863 |
| 0.5363 | 23.7668 | 5300 | 2.0074 | 0.4066 | 0.9515 |
| 0.5649 | 24.2152 | 5400 | 2.0267 | 0.4149 | 0.9734 |
| 0.5181 | 24.6637 | 5500 | 1.9906 | 0.4076 | 0.9469 |
| 0.4848 | 25.1121 | 5600 | 2.1771 | 0.4014 | 0.9471 |
| 0.4804 | 25.5605 | 5700 | 2.0896 | 0.3972 | 0.9382 |
| 0.5048 | 26.0090 | 5800 | 2.1047 | 0.3946 | 0.9295 |
| 0.4581 | 26.4574 | 5900 | 2.1378 | 0.4016 | 0.9482 |
| 0.4616 | 26.9058 | 6000 | 2.1853 | 0.4071 | 0.9466 |
| 0.4075 | 27.3543 | 6100 | 2.2196 | 0.3982 | 0.9473 |
| 0.4338 | 27.8027 | 6200 | 2.1815 | 0.4027 | 0.9511 |
| 0.4475 | 28.2511 | 6300 | 2.2522 | 0.4019 | 0.9564 |
| 0.4115 | 28.6996 | 6400 | 2.2593 | 0.4024 | 0.9580 |
| 0.3882 | 29.1480 | 6500 | 2.2502 | 0.4072 | 0.9666 |
| 0.4021 | 29.5964 | 6600 | 2.2698 | 0.4050 | 0.9687 |

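The learning rate follows a linear schedule: warmup from 0 to 3e-4 over the first 100 optimizer steps, then linear decay toward 0 over the remaining steps (about 6,600 total, judging from the table; the exact total is an assumption). A sketch with the same shape as Transformers' `get_linear_schedule_with_warmup`:

```python
def linear_lr(step, base_lr=3e-4, warmup_steps=100, total_steps=6600):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # After warmup: fraction of training remaining, clamped at 0.
    remaining = max(0.0, (total_steps - step) / (total_steps - warmup_steps))
    return base_lr * remaining
```
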
Framework versions

  • Transformers 4.57.2
  • PyTorch 2.9.1+cu128
  • Datasets 3.6.0
  • Tokenizers 0.22.0