This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0150
- Wer: 13.4792
- Cer: 4.1022

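The Wer and Cer figures above are percentages. The card does not show the evaluation code (cards like this typically use the `evaluate`/`jiwer` packages), but both metrics are edit-distance ratios; a minimal self-contained sketch, with hypothetical helper names `edit_distance`, `wer`, and `cer`, might look like:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (one-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Both functions return fractions; multiplying by 100 gives values on the same scale as the Wer/Cer reported above (e.g. `wer("the cat sat", "the cat sit")` is 1/3, i.e. 33.3%).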
## Model description

The following hyperparameters were used during training:
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP

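The cosine scheduler with 500 warmup steps shapes the learning rate as a linear ramp followed by a cosine decay. A sketch of the multiplier applied to the base learning rate, mirroring the shape of `get_cosine_schedule_with_warmup` in transformers (an assumption: the card names only the schedule type, not the exact implementation):

```python
import math

def lr_multiplier(step: int, warmup_steps: int, total_steps: int) -> float:
    """Factor applied to the base learning rate at a given optimizer step:
    linear warmup to 1.0, then cosine decay to 0.0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)  # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * progress)))  # cosine decay
```

With `warmup_steps=500` and `total_steps=56800` (the final step in the table below), the multiplier ramps to 1.0 at step 500 and decays back toward 0.0 by the end of training.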
### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     | Cer    |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|:------:|
| 0.0006        | 17.6056 | 50000 | 0.0149          | 13.3552 | 4.0609 |
| 0.0009        | 17.7464 | 50400 | 0.0149          | 13.4406 | 4.0827 |
| 0.0007        | 17.8872 | 50800 | 0.0149          | 13.4434 | 4.0965 |
| 0.0011        | 18.0282 | 51200 | 0.0150          | 13.4020 | 4.0696 |
| 0.0007        | 18.1690 | 51600 | 0.0150          | 13.4516 | 4.0619 |
| 0.0008        | 18.3098 | 52000 | 0.0150          | 13.3276 | 4.0388 |
| 0.0009        | 18.4507 | 52400 | 0.0150          | 13.3414 | 4.0439 |
| 0.001         | 18.5915 | 52800 | 0.0150          | 13.4406 | 4.0930 |
| 0.0007        | 18.7323 | 53200 | 0.0150          | 13.4351 | 4.1163 |
| 0.0008        | 18.8732 | 53600 | 0.0150          | 13.4379 | 4.1186 |
| 0.001         | 19.0141 | 54000 | 0.0150          | 13.3965 | 4.0801 |
| 0.0008        | 19.1549 | 54400 | 0.0151          | 13.4379 | 4.0984 |
| 0.0008        | 19.2957 | 54800 | 0.0151          | 13.4186 | 4.1115 |
| 0.0009        | 19.4366 | 55200 | 0.0150          | 13.3579 | 4.0622 |
| 0.0009        | 19.5774 | 55600 | 0.0150          | 13.4103 | 4.0612 |
| 0.0008        | 19.7182 | 56000 | 0.0150          | 13.4627 | 4.0769 |
| 0.0009        | 19.8591 | 56400 | 0.0150          | 13.4819 | 4.1010 |
| 0.0009        | 19.9999 | 56800 | 0.0150          | 13.4792 | 4.1022 |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
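To reproduce this environment, the versions above can be pinned at install time. A sketch; the cu121 extra index URL is an assumption based on the `+cu121` build tag in the Pytorch version:

```shell
# Pin the library versions listed in "Framework versions"
pip install "transformers==4.47.0" "datasets==3.2.0" "tokenizers==0.21.0"
# CUDA 12.1 build of PyTorch from the PyTorch wheel index (assumed index URL)
pip install "torch==2.5.1" --index-url https://download.pytorch.org/whl/cu121
```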