wav2vec-finedtuned-thai

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2298.1921
  • Wer: 0.9944
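A WER of 0.9944 means nearly every reference word is wrong in the hypothesis. As a reference point, word error rate is the word-level edit distance (substitutions + insertions + deletions) divided by the number of words in the reference; a minimal pure-Python sketch (not the exact metric implementation used during training, which is typically `evaluate`/`jiwer`):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                       # deletion
                       d[j - 1] + 1,                   # insertion
                       prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return d[len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains more errors than the reference has words, which is why early checkpoints in the table below report values like 1.2318.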

Model description

More information needed

Intended uses & limitations

More information needed
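Until the intended use is documented, the checkpoint can be loaded like any wav2vec2 CTC model via the Transformers ASR pipeline. The file name below is a placeholder, and given the evaluation WER of 0.9944, transcriptions from this checkpoint are unlikely to be usable without further training:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition",
               model="Katapat/wav2vec-finedtuned-thai")

# Transcribe a local audio file; wav2vec2-base expects 16 kHz mono audio.
result = asr("example_thai_speech.wav")  # placeholder path
print(result["text"])
```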

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 20
  • training_steps: 200
  • mixed_precision_training: Native AMP
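The linear scheduler with warmup ramps the learning rate from 0 to the peak of 1e-05 over the first 20 steps, then decays it linearly back to 0 by step 200. A minimal sketch of that schedule (mirroring the behavior of `get_linear_schedule_with_warmup`, assuming the hyperparameters above):

```python
def linear_warmup_lr(step: int,
                     peak_lr: float = 1e-05,
                     warmup_steps: int = 20,
                     total_steps: int = 200) -> float:
    """Linear warmup to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Note also that the total train batch size of 16 is simply train_batch_size (8) times gradient_accumulation_steps (2).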

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 2.0   | 20   | 15211.8594      | 1.2318 |
| 17347.0725    | 4.0   | 40   | 12302.1621      | 1.0916 |
| 14383.5075    | 6.0   | 60   | 4903.1660       | 0.9925 |
| 5520.3212     | 8.0   | 80   | 3211.9661       | 0.9944 |
| 3232.9334     | 10.0  | 100  | 2760.6001       | 0.9944 |
| 3232.9334     | 12.0  | 120  | 2523.9800       | 0.9944 |
| 2877.9325     | 14.0  | 140  | 2545.0522       | 0.9944 |
| 2709.8559     | 16.0  | 160  | 2380.6616       | 0.9944 |
| 2578.7244     | 18.0  | 180  | 2229.2996       | 0.9944 |
| 2568.8528     | 20.0  | 200  | 2298.1921       | 0.9944 |

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.6.0+cu126
  • Datasets 3.3.2
  • Tokenizers 0.21.0