End of training
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
-    value:
+    value: 80.51073708647708
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 17.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
-- Wer:
+- Loss: 1.1000
+- Wer: 80.5107
 
 ## Model description
 
@@ -59,18 +59,14 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps:
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch
-|
-| 0.
-| 0.0995 | 10.9890 | 2000 | 1.3888 | 50.6094 |
-| 0.0139 | 16.4835 | 3000 | 1.5189 | 48.9959 |
-| 0.0028 | 21.9780 | 4000 | 1.6030 | 46.3610 |
-| 0.0008 | 27.4725 | 5000 | 1.6241 | 46.8485 |
+| Training Loss | Epoch  | Step | Validation Loss | Wer     |
+|:-------------:|:------:|:----:|:---------------:|:-------:|
+| 0.4601        | 5.4945 | 1000 | 1.1000          | 80.5107 |
 
 
 ### Framework versions
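The Wer values in this card are word error rates reported on a 0–100 scale. As a rough sketch of what the metric measures, here is a minimal word-level WER, assuming plain whitespace tokenization and no text normalization (the `wer` function below is illustrative, not the Trainer's own implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length.

    Assumes a non-empty reference and simple whitespace tokenization.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

Multiplying the result by 100 gives the percentage form used in the table above.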