Model save
README.md CHANGED

@@ -1,27 +1,25 @@
 ---
 library_name: transformers
-language:
-- ar
 license: apache-2.0
-base_model:
+base_model: Baselhany/Distilation_Whisper_base_CKP_256
 tags:
 - generated_from_trainer
 metrics:
 - wer
 model-index:
-- name:
+- name: Distilation_Whisper_base_CKP_256
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# Distilation_Whisper_base_CKP_256
 
-This model is a fine-tuned version of [
+This model is a fine-tuned version of [Baselhany/Distilation_Whisper_base_CKP_256](https://huggingface.co/Baselhany/Distilation_Whisper_base_CKP_256) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Wer: 0.
+- Loss: 0.1013
+- Wer: 0.2231
 
 ## Model description
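The hunk above names the saved checkpoint and its evaluation numbers (loss 0.1013, WER 0.2231). For reference, a minimal inference sketch using the `transformers` ASR pipeline; it assumes the repo id matches the card's model name, and `audio.wav` is only a placeholder input:

```python
from transformers import pipeline

# Assumed repo id, taken from the card's base_model / name fields.
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Distilation_Whisper_base_CKP_256",
)

# "audio.wav" is a placeholder; any audio file the pipeline can decode works.
print(asr("audio.wav")["text"])
```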
@@ -49,21 +47,29 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs:
+- num_epochs: 12
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch
-
-| 12.8034 | 0.7126
-| 6.2023 | 1.4247
-| 4.0891 | 2.1368
-| 3.8063 | 2.8495
-| 3.1293 | 3.5616
-| 2.494 | 4.2737
-| 2.5376 | 4.9863
-| 2.1642 | 5.6984
+| Training Loss | Epoch   | Step  | Validation Loss | Wer    |
+|:-------------:|:-------:|:-----:|:---------------:|:------:|
+| 12.8034       | 0.7126  | 1000  | 0.1280          | 0.3420 |
+| 6.2023        | 1.4247  | 2000  | 0.1193          | 0.2798 |
+| 4.0891        | 2.1368  | 3000  | 0.1101          | 0.2380 |
+| 3.8063        | 2.8495  | 4000  | 0.1078          | 0.2206 |
+| 3.1293        | 3.5616  | 5000  | 0.1058          | 0.2109 |
+| 2.494         | 4.2737  | 6000  | 0.1045          | 0.2238 |
+| 2.5376        | 4.9863  | 7000  | 0.1033          | 0.2100 |
+| 2.1642        | 5.6984  | 8000  | 0.1018          | 0.2193 |
+| 2.3079        | 6.4148  | 9000  | 0.1027          | 0.2106 |
+| 2.0513        | 7.1268  | 10000 | 0.1025          | 0.2243 |
+| 2.0667        | 7.8395  | 11000 | 0.1021          | 0.2231 |
+| 1.992         | 8.5516  | 12000 | 0.0993          | 0.2207 |
+| 1.6136        | 9.2637  | 13000 | 0.0985          | 0.2203 |
+| 1.6703        | 9.9763  | 14000 | 0.0979          | 0.2178 |
+| 1.5747        | 10.6884 | 15000 | 0.0973          | 0.2138 |
+| 1.5171        | 11.4005 | 16000 | 0.0972          | 0.2224 |
 
 
 ### Framework versions
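To make the hyperparameter lines above concrete, here is a hedged `TrainingArguments` sketch. Only the optimizer, betas/epsilon, scheduler, warmup, epoch count, and AMP flag come from this hunk; the learning rate, batch size, and output directory sit outside the diff and are marked as assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Distilation_Whisper_base_CKP_256",  # placeholder
    learning_rate=1e-5,              # assumption: not shown in this hunk
    per_device_train_batch_size=16,  # assumption: not shown in this hunk
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,                  # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=12,
    fp16=True,                       # "Native AMP" mixed precision
)
```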
generation_config.json CHANGED

@@ -65,20 +65,7 @@
     "1": "LABEL_1"
   },
   "init_std": 0.02,
-  "input_ids": [
-    [
-      1,
-      50272
-    ],
-    [
-      2,
-      50359
-    ],
-    [
-      3,
-      50363
-    ]
-  ],
+  "input_ids": null,
   "is_decoder": false,
   "is_encoder_decoder": true,
   "is_multilingual": true,
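The removed entry held the pairs `[[1, 50272], [2, 50359], [3, 50363]]`, which look like a Whisper decoder prompt: in the multilingual Whisper vocabulary, 50272, 50359, and 50363 correspond to the `<|ar|>`, `<|transcribe|>`, and `<|notimestamps|>` tokens, consistent with the `language: ar` tag dropped from the README. If a fixed Arabic transcription prompt is still wanted after this entry is nulled out, it can be set explicitly at load time; a sketch, assuming the same repo id:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "Baselhany/Distilation_Whisper_base_CKP_256"  # assumed repo id
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Rebuild the <|ar|> <|transcribe|> <|notimestamps|> prompt rather than
# relying on the config entry this commit sets to null.
model.generation_config.forced_decoder_ids = processor.get_decoder_prompt_ids(
    language="arabic", task="transcribe"
)
```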
runs/Jun08_07-01-42_ff9e9c151bf3/events.out.tfevents.1749397235.ff9e9c151bf3.19.1 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5992a2230a73ade4462e374d46d23a1840c4991ff2a3af6d6443700d62faf9f3
+size 412
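The three added lines are a Git LFS pointer, not the TensorBoard log itself: `oid` and `size` identify the object held in LFS storage. A sketch of fetching and inspecting the run log, again assuming the repo id matches the model name:

```python
from huggingface_hub import hf_hub_download
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# hf_hub_download resolves the LFS pointer and returns a local path
# to the actual event file.
path = hf_hub_download(
    repo_id="Baselhany/Distilation_Whisper_base_CKP_256",  # assumed repo id
    filename="runs/Jun08_07-01-42_ff9e9c151bf3/events.out.tfevents.1749397235.ff9e9c151bf3.19.1",
)

events = EventAccumulator(path)
events.Reload()
print(events.Tags()["scalars"])  # scalar series logged by the Trainer
```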