End of training
README.md CHANGED

@@ -23,7 +23,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value:
+      value: 0.3700564971751412
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,9 +33,9 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
-- Wer Ortho:
-- Wer:
+- Loss: 1.4305
+- Wer Ortho: 0.3944
+- Wer: 0.3701

 ## Model description

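The two error rates filled in here differ only in text normalization: Wer Ortho scores the raw (orthographic) transcripts, while Wer is computed after lowercasing and stripping punctuation, which is why it comes out lower (0.3701 vs. 0.3944). A minimal sketch of how such a pair can be computed, assuming the `evaluate` library and Whisper's `BasicTextNormalizer` (the transcript strings below are illustrative, not drawn from the evaluation set):

```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

# Illustrative strings, not taken from the MInDS-14 evaluation set.
references = ["I'd like to check my account balance, please."]
predictions = ["id like to check my account balance please"]

# Orthographic WER: compare the raw strings, casing and punctuation included.
wer_ortho = wer_metric.compute(references=references, predictions=predictions)

# Normalized WER: lowercase and strip punctuation first, so only genuine
# word-level substitutions, insertions, and deletions count as errors.
wer = wer_metric.compute(
    references=[normalizer(ref) for ref in references],
    predictions=[normalizer(pred) for pred in predictions],
)

print(f"wer_ortho={wer_ortho:.4f}  wer={wer:.4f}")
```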
@@ -58,8 +58,8 @@ The following hyperparameters were used during training:
 - train_batch_size: 32
 - eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps:
-- total_train_batch_size:
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 128
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 100
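The two filled-in values multiply out: with a per-device train batch of 32 and 4 gradient-accumulation steps, the effective total train batch size is 32 × 4 = 128. A hedged sketch of how the listed hyperparameters map onto `Seq2SeqTrainingArguments` (the `output_dir` is a placeholder, and values elided from this hunk, such as the learning rate, are omitted):

```python
from transformers import Seq2SeqTrainingArguments

# Only the hyperparameters shown in this hunk; anything not listed here
# (e.g. the learning rate) is left at whatever the actual run used.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds14",  # placeholder name
    per_device_train_batch_size=32,     # train_batch_size: 32
    per_device_eval_batch_size=16,      # eval_batch_size: 16
    gradient_accumulation_steps=4,      # effective batch: 32 * 4 = 128
    seed=42,
    optim="adamw_torch",                # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=100,
)
```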
@@ -68,9 +68,9 @@ The following hyperparameters were used during training:

 ### Training results

-| Training Loss | Epoch
-|
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
+|:-------------:|:--------:|:----:|:---------------:|:---------:|:------:|
+| 0.009 | 112.6154 | 450 | 1.4305 | 0.3944 | 0.3701 |


 ### Framework versions
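The fractional epoch count in the new row is consistent with the batch arithmetic above: 450 optimizer steps over 112.6154 epochs gives 450 / 112.6154 ≈ 4.0 steps per epoch, which at an effective batch size of 128 suggests a training split on the order of 4 × 128 ≈ 512 examples.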
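With this commit the card carries the final evaluation metrics, and the checkpoint can be loaded for inference like any other Hub model. A minimal sketch using the `transformers` pipeline API (the repo id and audio path are placeholders for this checkpoint's actual Hub name and a local file):

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub name of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-minds14",
)

# The pipeline decodes the file and resamples it to Whisper's 16 kHz input.
result = asr("path/to/audio.wav")
print(result["text"])
```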