Baselhany committed
Commit 7f69051 · verified · 1 Parent(s): 765be7e

Model save

README.md CHANGED
@@ -1,27 +1,25 @@
 ---
 library_name: transformers
-language:
-- ar
 license: apache-2.0
-base_model: openai/whisper-base
+base_model: Baselhany/Distilation_Whisper_base_CKP_10k
 tags:
 - generated_from_trainer
 metrics:
 - wer
 model-index:
-- name: Whisper base AR - BA
+- name: Distilation_Whisper_base_CKP_10k
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper base AR - BA
+# Distilation_Whisper_base_CKP_10k
 
-This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
+This model is a fine-tuned version of [Baselhany/Distilation_Whisper_base_CKP_10k](https://huggingface.co/Baselhany/Distilation_Whisper_base_CKP_10k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1108
-- Wer: 0.2343
+- Loss: 0.1070
+- Wer: 0.2297
 
 ## Model description
 
@@ -49,7 +47,7 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 40
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -99,6 +97,19 @@ The following hyperparameters were used during training:
 | 1.3084 | 38.0 | 14250 | 0.1053 | 0.2412 |
 | 1.302 | 39.0 | 14625 | 0.1054 | 0.2309 |
 | 1.2152 | 40.0 | 15000 | 0.1053 | 0.2297 |
+| 3.6933 | 37.9994 | 15314 | 0.1044 | 0.2122 |
+| 2.9938 | 39.0 | 15718 | 0.1051 | 0.2193 |
+| 2.5582 | 40.0 | 16122 | 0.1041 | 0.2202 |
+| 2.1949 | 41.0 | 16526 | 0.1032 | 0.2137 |
+| 2.1428 | 42.0 | 16930 | 0.1045 | 0.2146 |
+| 2.0052 | 43.0 | 17334 | 0.1027 | 0.2146 |
+| 1.7204 | 44.0 | 17738 | 0.1031 | 0.2121 |
+| 1.7391 | 45.0 | 18142 | 0.1026 | 0.2125 |
+| 1.6544 | 46.0 | 18546 | 0.1028 | 0.2140 |
+| 1.6764 | 47.0 | 18950 | 0.1033 | 0.2121 |
+| 1.535 | 48.0 | 19354 | 0.1028 | 0.2122 |
+| 1.5344 | 49.0 | 19758 | 0.1025 | 0.2163 |
+| 1.5171 | 49.9721 | 20150 | 0.1025 | 0.2121 |
 
 
 ### Framework versions
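
For reference, a minimal sketch of how the hyperparameters listed in the updated card could be expressed as `transformers` `Seq2SeqTrainingArguments`. Only the values visible in this diff are filled in; the output directory is a placeholder, and fields the diff does not show (learning rate, batch sizes, seed) are left at their defaults rather than guessed.

```python
# Hypothetical reconstruction of the card's "Training hyperparameters" section.
# Values not shown in this diff are deliberately omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-distil",  # placeholder, not from the card
    optim="adamw_torch",                 # OptimizerNames.ADAMW_TORCH, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,                 # raised from 40 in this commit
    fp16=True,                           # "Native AMP" mixed-precision training
)
```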
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:46f26b7c7b98aea584520e7cb43e0c742a2fab82d01bbc510e14658ecd7060c0
+oid sha256:d144fad846b56a4381fcea74b3bcd86b551154a4d37803a37b131e661a34cb9c
 size 223144592
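
The changed `model.safetensors` entry is a Git LFS pointer, so only the `oid` hash changes while the file size stays identical. A small standard-library sketch of how a downloaded copy could be checked against the new pointer; the local file path is an assumption, while the oid and size come from the diff above.

```python
# Verify a downloaded model.safetensors against the LFS pointer in this commit.
import hashlib
from pathlib import Path

EXPECTED_OID = "d144fad846b56a4381fcea74b3bcd86b551154a4d37803a37b131e661a34cb9c"
EXPECTED_SIZE = 223144592

path = Path("model.safetensors")  # assumed local download location
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch vs. LFS pointer"

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch vs. LFS pointer"
```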
runs/May24_00-06-33_0a4de841d5e1/events.out.tfevents.1748065712.0a4de841d5e1.19.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28982b766ad951aafb66a28ecd0cda80174606e03c9b4e0ed8c7b83731013a69
+size 412
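
Since the card reports WER on its evaluation set, a hedged end-to-end sketch: load this checkpoint with the `transformers` ASR pipeline and score one transcription with the `evaluate` WER metric. The audio file and reference transcript are illustrative placeholders, not data from this commit.

```python
# Minimal sketch: transcribe one clip with this checkpoint and compute WER.
import evaluate
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Distilation_Whisper_base_CKP_10k",
)

prediction = asr("sample.wav")["text"]  # placeholder audio file

wer = evaluate.load("wer")
score = wer.compute(predictions=[prediction], references=["<reference transcript>"])
print(f"WER: {score:.4f}")  # the card reports 0.2297 on its full eval set
```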