model_syllable_onSet4

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch reproducing these metrics follows the list):

  • Loss: 0.1349
  • Accuracy: 0.9897
  • WER (word error rate): 0.2258
  • Per-class and averaged metrics:

| Class        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| 0            | 1.0       | 1.0    | 1.0      | 26      |
| 1            | 1.0       | 0.9677 | 0.9836   | 31      |
| 2            | 0.9630    | 1.0    | 0.9811   | 26      |
| 3            | 1.0       | 1.0    | 1.0      | 14      |
| Macro avg    | 0.9907    | 0.9919 | 0.9912   | 97      |
| Weighted avg | 0.9901    | 0.9897 | 0.9897   | 97      |

  • Confusion matrix (rows: true class, columns: predicted class):

| True \ Predicted | 0  | 1  | 2  | 3  |
|:-----------------|---:|---:|---:|---:|
| 0                | 26 | 0  | 0  | 0  |
| 1                | 0  | 30 | 1  | 0  |
| 2                | 0  | 0  | 26 | 0  |
| 3                | 0  | 0  | 0  | 14 |
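
The layout above matches scikit-learn's classification_report, and each confusion-matrix row sums to the corresponding class support, which is why rows are read here as true labels. As a hedged illustration (the evaluation data itself is not published), label and prediction vectors reverse-engineered from the confusion matrix reproduce the reported numbers:

```python
# Illustrative only: y_true / y_pred are reconstructed from the reported
# confusion matrix, not taken from the actual evaluation data.
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0] * 26 + [1] * 31 + [2] * 26 + [3] * 14             # supports: 26/31/26/14
y_pred = [0] * 26 + [1] * 30 + [2] * 1 + [2] * 26 + [3] * 14   # one class-1 example predicted as 2

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=4))
```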

Model description

More information needed

Intended uses & limitations

More information needed
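
Pending documentation, here is a minimal, hypothetical inference sketch. It assumes the checkpoint carries a CTC head over a small syllable-onset vocabulary (suggested, but not confirmed, by the card reporting both per-class scores and a WER) and that processor files are bundled with the checkpoint; the repo id and audio file name are placeholders.

```python
# Hypothetical usage sketch, NOT the documented interface of this model.
import librosa
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "model_syllable_onSet4"  # placeholder: actual hub repo id or local path
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2-large-xlsr-53 checkpoints expect 16 kHz mono input
speech, _ = librosa.load("example.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, time, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # decoded label sequence(s)
```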

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 70
  • mixed_precision_training: Native AMP
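
The listed values map onto Transformers' TrainingArguments as sketched below. This is a hedged reconstruction from the bullet list only; the dataset, model instantiation, data collator, and Trainer call are not documented in this card and are omitted.

```python
# Hypothetical reconstruction of the training configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model_syllable_onSet4",  # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,       # total train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=70,
    fp16=True,                           # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the library defaults,
    # so they need no explicit arguments here.
)
```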

Training results

Abbreviations: P = precision, R = recall, F1 = F1-score, Sup = support; per-class columns are prefixed with the class label (0-3). In each confusion matrix, the first sub-list is the column (predicted-class) header and each following row starts with its true-class label.

| Training Loss | Epoch | Step | Validation Loss | 0 P | 0 R | 0 F1 | 0 Sup | 1 P | 1 R | 1 F1 | 1 Sup | 2 P | 2 R | 2 F1 | 2 Sup | 3 P | 3 R | 3 F1 | 3 Sup | Accuracy | Macro P | Macro R | Macro F1 | Macro Sup | Weighted P | Weighted R | Weighted F1 | Weighted Sup | WER | Confusion matrix |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.6602 | 4.16 | 100 | 1.5639 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0,1,2,3],[0,0,0,26,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 1.616 | 8.33 | 200 | 1.4203 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0,1,2,3],[0,0,0,26,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 1.2107 | 12.49 | 300 | 1.1249 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0,1,2,3],[0,0,0,26,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 1.1283 | 16.65 | 400 | 1.0201 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0,1,2,3],[0,0,0,26,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 0.8868 | 20.82 | 500 | 0.8944 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0,1,2,3],[0,0,0,26,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 0.8863 | 24.98 | 600 | 0.9316 | 0.0 | 0.0 | 0.0 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.2584 | 0.8846 | 0.4 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.2371 | 0.0646 | 0.2212 | 0.1 | 97 | 0.0693 | 0.2371 | 0.1072 | 97 | 0.9732 | [[0,1,2,3],[0,0,0,26,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 0.9019 | 29.16 | 700 | 0.8688 | 0.7647 | 1.0 | 0.8667 | 26 | 0.0 | 0.0 | 0.0 | 31 | 0.3651 | 0.8846 | 0.5169 | 26 | 0.0 | 0.0 | 0.0 | 14 | 0.5052 | 0.2824 | 0.4712 | 0.3459 | 97 | 0.3028 | 0.5052 | 0.3708 | 97 | 0.9732 | [[0,1,2,3],[0,26,0,0,0],[1,0,0,31,0],[2,3,0,23,0],[3,5,0,9,0]] |
| 0.7977 | 33.33 | 800 | 0.8014 | 1.0 | 1.0 | 1.0 | 26 | 0.9667 | 0.9355 | 0.9508 | 31 | 0.9259 | 0.9615 | 0.9434 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9691 | 0.9731 | 0.9743 | 0.9736 | 97 | 0.9695 | 0.9691 | 0.9691 | 97 | 1.0 | [[0,1,2,3],[0,26,0,0,0],[1,0,29,2,0],[2,0,1,25,0],[3,0,0,0,14]] |
| 0.729 | 37.49 | 900 | 0.8163 | 1.0 | 1.0 | 1.0 | 26 | 0.9091 | 0.9677 | 0.9375 | 31 | 0.9583 | 0.8846 | 0.9200 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9588 | 0.9669 | 0.9631 | 0.9644 | 97 | 0.9598 | 0.9588 | 0.9586 | 97 | 1.0 | [[0,1,2,3],[0,26,0,0,0],[1,0,30,1,0],[2,0,3,23,0],[3,0,0,0,14]] |
| 0.6526 | 41.65 | 1000 | 0.6691 | 1.0 | 1.0 | 1.0 | 26 | 0.9667 | 0.9355 | 0.9508 | 31 | 0.9259 | 0.9615 | 0.9434 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9691 | 0.9731 | 0.9743 | 0.9736 | 97 | 0.9695 | 0.9691 | 0.9691 | 97 | 0.7055 | [[0,1,2,3],[0,26,0,0,0],[1,0,29,2,0],[2,0,1,25,0],[3,0,0,0,14]] |
| 0.6633 | 45.82 | 1100 | 0.3445 | 1.0 | 1.0 | 1.0 | 26 | 0.9394 | 1.0 | 0.9688 | 31 | 1.0 | 0.9231 | 0.9600 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9794 | 0.9848 | 0.9808 | 0.9822 | 97 | 0.9806 | 0.9794 | 0.9793 | 97 | 0.5017 | [[0,1,2,3],[0,26,0,0,0],[1,0,31,0,0],[2,0,2,24,0],[3,0,0,0,14]] |
| 0.1913 | 49.98 | 1200 | 0.2455 | 1.0 | 1.0 | 1.0 | 26 | 0.9677 | 0.9677 | 0.9677 | 31 | 0.96 | 0.9231 | 0.9412 | 26 | 0.9333 | 1.0 | 0.9655 | 14 | 0.9691 | 0.9653 | 0.9727 | 0.9686 | 97 | 0.9693 | 0.9691 | 0.9689 | 97 | 0.3946 | [[0,1,2,3],[0,26,0,0,0],[1,0,30,1,0],[2,0,1,24,1],[3,0,0,0,14]] |
| 0.2024 | 54.16 | 1300 | 0.1865 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9355 | 0.9667 | 31 | 0.9286 | 1.0 | 0.9630 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9794 | 0.9821 | 0.9839 | 0.9824 | 97 | 0.9809 | 0.9794 | 0.9794 | 97 | 0.3423 | [[0,1,2,3],[0,26,0,0,0],[1,0,29,2,0],[2,0,0,26,0],[3,0,0,0,14]] |
| 0.1212 | 58.33 | 1400 | 0.1485 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9897 | 0.9907 | 0.9919 | 0.9912 | 97 | 0.9901 | 0.9897 | 0.9897 | 97 | 0.2957 | [[0,1,2,3],[0,26,0,0,0],[1,0,30,1,0],[2,0,0,26,0],[3,0,0,0,14]] |
| 0.108 | 62.49 | 1500 | 0.1348 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9897 | 0.9907 | 0.9919 | 0.9912 | 97 | 0.9901 | 0.9897 | 0.9897 | 97 | 0.2433 | [[0,1,2,3],[0,26,0,0,0],[1,0,30,1,0],[2,0,0,26,0],[3,0,0,0,14]] |
| 0.1058 | 66.65 | 1600 | 0.1328 | 1.0 | 1.0 | 1.0 | 26 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9630 | 1.0 | 0.9811 | 26 | 1.0 | 1.0 | 1.0 | 14 | 0.9897 | 0.9907 | 0.9919 | 0.9912 | 97 | 0.9901 | 0.9897 | 0.9897 | 97 | 0.2224 | [[0,1,2,3],[0,26,0,0,0],[1,0,30,1,0],[2,0,0,26,0],[3,0,0,0,14]] |
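
WER falls from 0.9732 early in training to 0.2224 at the last logged step, tracking the validation loss. The card does not include the evaluation script; presumably each utterance's predicted label sequence is scored against its reference sequence as whitespace-separated tokens. A hedged sketch of that computation with made-up sequences, using the `evaluate` library:

```python
# Hedged sketch: the real reference/prediction sequences are not published;
# the strings below are invented solely to show the computation.
import evaluate

wer_metric = evaluate.load("wer")

references = ["0 1 2 3", "1 2 0"]   # whitespace-separated label tokens per utterance
predictions = ["0 1 2 3", "1 1 0"]  # one substitution among 7 reference tokens
print(wer_metric.compute(references=references, predictions=predictions))  # ≈ 0.1429
```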

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.13.0+cu116
  • Datasets 2.8.0
  • Tokenizers 0.13.2