---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
- bleu
model-index:
- name: w2v3
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_11_0
      type: common_voice_11_0
      config: ar
      split: test
      args: ar
    metrics:
    - name: Wer
      type: wer
      value: 0.14435763249060218
    - name: Bleu
      type: bleu
      value: 0.625443124553845
---

# w2v3

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the Arabic (`ar`) configuration of the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1747
- Wer: 0.1444
- Cer: 0.0349
- Bleu: 0.6254
- Bert Score F1: 0.9721
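
For illustration, the checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline. This is a minimal sketch: `"your-username/w2v3"` is a placeholder repository id, so substitute the id under which this model is actually published.

```python
# Minimal inference sketch; "your-username/w2v3" is a placeholder repo id.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="your-username/w2v3")

# w2v-bert-2.0 checkpoints expect 16 kHz mono audio.
print(asr("arabic_sample.wav")["text"])
```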

## Model description

w2v3 is [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0), a Wav2Vec2-BERT speech encoder, fine-tuned for Arabic automatic speech recognition on the Common Voice 11.0 corpus.

## Intended uses & limitations

The model is intended for transcribing Arabic speech. It was trained and evaluated only on Common Voice 11.0, so accuracy on other domains, dialects, or recording conditions has not been characterized.

## Training and evaluation data

Training and evaluation used the Arabic (`ar`) configuration of the Common Voice 11.0 dataset; the metrics reported above were computed on its test split.
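
A minimal loading sketch, assuming the Hub id `mozilla-foundation/common_voice_11_0` (the dataset is gated, so it requires accepting its terms of use on the Hub, and depending on your `datasets` version it may not load directly from the original script-based repository):

```python
from datasets import load_dataset, Audio

# Test split of the Arabic configuration of Common Voice 11.0 (assumed Hub id).
cv_ar_test = load_dataset("mozilla-foundation/common_voice_11_0", "ar", split="test")

# w2v-bert-2.0 expects 16 kHz audio, so resample the audio column accordingly.
cv_ar_test = cv_ar_test.cast_column("audio", Audio(sampling_rate=16_000))

print(cv_ar_test[0]["sentence"])
```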

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 5000
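
A hedged `TrainingArguments` sketch matching these values; the output directory, evaluation/logging intervals, and other unlisted settings are assumptions rather than values taken from the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v3",                 # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",               # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    max_steps=5000,
    eval_strategy="steps",
    eval_steps=250,                    # matches the 250-step evaluation interval below
    logging_steps=250,
)
# Pass `training_args` to `transformers.Trainer` together with the model,
# train/eval datasets, a data collator, and a compute_metrics function.
```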

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    | Cer    | Bleu   | Bert Score F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:-------------:|
| 0.3995        | 0.0357 | 250  | 0.3664          | 0.2725 | 0.0732 | 0.4414 | 0.9332        |
| 0.3065        | 0.0713 | 500  | 0.3399          | 0.2119 | 0.0593 | 0.5188 | 0.9465        |
| 0.2648        | 0.1070 | 750  | 0.3095          | 0.2327 | 0.0633 | 0.4970 | 0.9430        |
| 0.2393        | 0.1426 | 1000 | 0.2885          | 0.2134 | 0.0551 | 0.5156 | 0.9545        |
| 0.2756        | 0.1783 | 1250 | 0.2486          | 0.1817 | 0.0467 | 0.5670 | 0.9614        |
| 0.2005        | 0.2139 | 1500 | 0.2448          | 0.1935 | 0.0482 | 0.5485 | 0.9588        |
| 0.2112        | 0.2496 | 1750 | 0.2377          | 0.1823 | 0.0464 | 0.5617 | 0.9622        |
| 0.1934        | 0.2853 | 2000 | 0.2226          | 0.1674 | 0.0420 | 0.5888 | 0.9658        |
| 0.1631        | 0.3209 | 2250 | 0.2205          | 0.1660 | 0.0421 | 0.5888 | 0.9647        |
| 0.1905        | 0.3566 | 2500 | 0.2249          | 0.1679 | 0.0429 | 0.5879 | 0.9651        |
| 0.1639        | 0.3922 | 2750 | 0.2026          | 0.1625 | 0.0403 | 0.5975 | 0.9673        |
| 0.1567        | 0.4279 | 3000 | 0.1895          | 0.1516 | 0.0379 | 0.6150 | 0.9685        |
| 0.1641        | 0.4636 | 3250 | 0.1984          | 0.1555 | 0.0379 | 0.6076 | 0.9693        |
| 0.1404        | 0.4992 | 3500 | 0.1876          | 0.1528 | 0.0370 | 0.6124 | 0.9696        |
| 0.1475        | 0.5349 | 3750 | 0.1913          | 0.1568 | 0.0381 | 0.6055 | 0.9691        |
| 0.1586        | 0.5705 | 4000 | 0.1846          | 0.1510 | 0.0366 | 0.6151 | 0.9705        |
| 0.1322        | 0.6062 | 4250 | 0.1801          | 0.1475 | 0.0356 | 0.6208 | 0.9715        |
| 0.1396        | 0.6418 | 4500 | 0.1788          | 0.1454 | 0.0351 | 0.6242 | 0.9720        |
| 0.1287        | 0.6775 | 4750 | 0.1755          | 0.1455 | 0.0352 | 0.6233 | 0.9718        |
| 0.1376        | 0.7132 | 5000 | 0.1747          | 0.1444 | 0.0349 | 0.6254 | 0.9721        |
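
The reported WER, CER, BLEU, and BERTScore F1 values can be recomputed with the `evaluate` library. This is a hedged sketch, since the exact evaluation script for this run is not published; the prediction and reference lists below are placeholders.

```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")
bleu = evaluate.load("bleu")
bertscore = evaluate.load("bertscore")

predictions = ["..."]  # model transcriptions (placeholder)
references = ["..."]   # ground-truth transcripts (placeholder)

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
print("BLEU:", bleu.compute(predictions=predictions, references=references)["bleu"])

# BERTScore returns per-example F1 scores; average them for a single value.
f1 = bertscore.compute(predictions=predictions, references=references, lang="ar")["f1"]
print("BERTScore F1:", sum(f1) / len(f1))
```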


### Framework versions

- Transformers 4.50.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0