Taeyeun72 committed on
Commit 6bb7717 · 1 Parent(s): 6a35e10

End of training

Files changed (3)
  1. README.md +120 -0
  2. generation_config.json +267 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,120 @@
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- arrow
metrics:
- wer
model-index:
- name: whisper-kor_noising_3
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: whisper-kor_noising_3
      type: arrow
      config: default
      split: train
      args: 'config: ko, split: valid'
    metrics:
    - name: Wer
      type: wer
      value: 19.58714884370635
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-kor_noising_3

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the whisper-kor_noising_3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
- Wer: 19.5871
- Cer: 8.9142
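
For reference, WER is the standard word error rate, $\mathrm{WER} = \frac{S + D + I}{N} \times 100$ (substitutions, deletions, and insertions over the number of reference words, reported as a percentage), and CER is the same ratio computed over characters. For Korean, CER is often the more robust indicator, since word-spacing differences alone count as word errors.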

## Model description

More information needed

## Intended uses & limitations

More information needed
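
A minimal inference sketch with 🤗 Transformers (the repo id `Taeyeun72/whisper-kor_noising_3` and the audio path are assumptions, not stated in this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Korean speech recognition.
# NOTE: the repo id is an assumption inferred from the model name.
asr = pipeline(
    "automatic-speech-recognition",
    model="Taeyeun72/whisper-kor_noising_3",
)

# Transcribe a local audio file (placeholder path). Pinning language/task
# mirrors the forced decoder ids in generation_config.json below.
result = asr(
    "sample_ko.wav",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```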

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
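
In `Seq2SeqTrainingArguments` terms, these correspond roughly to the sketch below (not the original training script; `output_dir` and the 100-step eval cadence are inferred from the results table):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-kor_noising_3",  # assumed directory name
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=100,  # matches the cadence of the results table below
)
```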

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.2751 | 0.05 | 100 | 0.3078 | 20.8754 | 10.0300 |
| 0.3055 | 0.09 | 200 | 0.2981 | 20.7046 | 9.6834 |
| 0.2684 | 0.14 | 300 | 0.2974 | 21.1703 | 9.7215 |
| 0.267 | 0.18 | 400 | 0.3019 | 21.7600 | 10.4611 |
| 0.2927 | 0.23 | 500 | 0.3014 | 20.7357 | 9.5862 |
| 0.287 | 0.28 | 600 | 0.3057 | 21.5117 | 9.9370 |
| 0.2913 | 0.32 | 700 | 0.3098 | 22.8620 | 10.9218 |
| 0.3201 | 0.37 | 800 | 0.3044 | 22.7534 | 10.9683 |
| 0.2929 | 0.42 | 900 | 0.2994 | 21.1237 | 9.7553 |
| 0.2661 | 0.46 | 1000 | 0.3023 | 22.3809 | 10.6218 |
| 0.2865 | 0.51 | 1100 | 0.3013 | 24.5538 | 11.4755 |
| 0.2668 | 0.55 | 1200 | 0.3011 | 23.4052 | 11.0951 |
| 0.2888 | 0.6 | 1300 | 0.2956 | 24.4296 | 13.4494 |
| 0.245 | 0.65 | 1400 | 0.3015 | 21.2323 | 9.8821 |
| 0.2718 | 0.69 | 1500 | 0.3009 | 21.4807 | 9.7468 |
| 0.2757 | 0.74 | 1600 | 0.2950 | 20.7357 | 9.5862 |
| 0.2943 | 0.78 | 1700 | 0.2965 | 21.1237 | 9.7510 |
| 0.2637 | 0.83 | 1800 | 0.2934 | 21.9618 | 10.5372 |
| 0.2593 | 0.88 | 1900 | 0.2911 | 21.9929 | 10.4231 |
| 0.2742 | 0.92 | 2000 | 0.2888 | 22.2257 | 11.2642 |
| 0.2682 | 0.97 | 2100 | 0.2866 | 20.9530 | 9.7806 |
| 0.172 | 1.02 | 2200 | 0.2858 | 19.8044 | 9.1044 |
| 0.165 | 1.06 | 2300 | 0.2875 | 19.6027 | 9.0452 |
| 0.1634 | 1.11 | 2400 | 0.2868 | 19.5871 | 9.0621 |
| 0.1928 | 1.15 | 2500 | 0.2853 | 22.0705 | 11.0233 |
| 0.1876 | 1.2 | 2600 | 0.2832 | 21.7911 | 10.8035 |
| 0.1795 | 1.25 | 2700 | 0.2826 | 19.7113 | 9.1255 |
| 0.1844 | 1.29 | 2800 | 0.2821 | 19.6803 | 9.0494 |
| 0.1532 | 1.34 | 2900 | 0.2797 | 19.7579 | 9.0790 |
| 0.1529 | 1.39 | 3000 | 0.2783 | 20.0683 | 9.0748 |
| 0.1334 | 1.43 | 3100 | 0.2795 | 19.7579 | 9.0579 |
| 0.1538 | 1.48 | 3200 | 0.2787 | 20.7667 | 10.0850 |
| 0.1537 | 1.52 | 3300 | 0.2785 | 19.5406 | 8.7704 |
| 0.1694 | 1.57 | 3400 | 0.2780 | 19.6492 | 8.8085 |
| 0.1811 | 1.62 | 3500 | 0.2766 | 19.5406 | 8.9015 |
| 0.163 | 1.66 | 3600 | 0.2772 | 21.2634 | 10.2033 |
| 0.1445 | 1.71 | 3700 | 0.2763 | 19.3854 | 8.8169 |
| 0.1548 | 1.75 | 3800 | 0.2750 | 19.4009 | 8.7958 |
| 0.1588 | 1.8 | 3900 | 0.2749 | 19.3854 | 8.8127 |
| 0.1575 | 1.85 | 4000 | 0.2748 | 19.5871 | 8.9142 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
generation_config.json ADDED
@@ -0,0 +1,267 @@
{
  "alignment_heads": [
    [5, 3], [5, 9], [8, 0], [8, 4], [8, 7],
    [8, 8], [9, 0], [9, 7], [9, 9], [10, 5]
  ],
  "begin_suppress_tokens": [220, 50257],
  "bos_token_id": 50257,
  "decoder_start_token_id": 50258,
  "eos_token_id": 50257,
  "forced_decoder_ids": [
    [1, 50264],
    [2, 50359],
    [3, 50363]
  ],
  "is_multilingual": true,
  "lang_to_id": {
    "<|af|>": 50327, "<|am|>": 50334, "<|ar|>": 50272, "<|as|>": 50350,
    "<|az|>": 50304, "<|ba|>": 50355, "<|be|>": 50330, "<|bg|>": 50292,
    "<|bn|>": 50302, "<|bo|>": 50347, "<|br|>": 50309, "<|bs|>": 50315,
    "<|ca|>": 50270, "<|cs|>": 50283, "<|cy|>": 50297, "<|da|>": 50285,
    "<|de|>": 50261, "<|el|>": 50281, "<|en|>": 50259, "<|es|>": 50262,
    "<|et|>": 50307, "<|eu|>": 50310, "<|fa|>": 50300, "<|fi|>": 50277,
    "<|fo|>": 50338, "<|fr|>": 50265, "<|gl|>": 50319, "<|gu|>": 50333,
    "<|haw|>": 50352, "<|ha|>": 50354, "<|he|>": 50279, "<|hi|>": 50276,
    "<|hr|>": 50291, "<|ht|>": 50339, "<|hu|>": 50286, "<|hy|>": 50312,
    "<|id|>": 50275, "<|is|>": 50311, "<|it|>": 50274, "<|ja|>": 50266,
    "<|jw|>": 50356, "<|ka|>": 50329, "<|kk|>": 50316, "<|km|>": 50323,
    "<|kn|>": 50306, "<|ko|>": 50264, "<|la|>": 50294, "<|lb|>": 50345,
    "<|ln|>": 50353, "<|lo|>": 50336, "<|lt|>": 50293, "<|lv|>": 50301,
    "<|mg|>": 50349, "<|mi|>": 50295, "<|mk|>": 50308, "<|ml|>": 50296,
    "<|mn|>": 50314, "<|mr|>": 50320, "<|ms|>": 50282, "<|mt|>": 50343,
    "<|my|>": 50346, "<|ne|>": 50313, "<|nl|>": 50271, "<|nn|>": 50342,
    "<|no|>": 50288, "<|oc|>": 50328, "<|pa|>": 50321, "<|pl|>": 50269,
    "<|ps|>": 50340, "<|pt|>": 50267, "<|ro|>": 50284, "<|ru|>": 50263,
    "<|sa|>": 50344, "<|sd|>": 50332, "<|si|>": 50322, "<|sk|>": 50298,
    "<|sl|>": 50305, "<|sn|>": 50324, "<|so|>": 50326, "<|sq|>": 50317,
    "<|sr|>": 50303, "<|su|>": 50357, "<|sv|>": 50273, "<|sw|>": 50318,
    "<|ta|>": 50287, "<|te|>": 50299, "<|tg|>": 50331, "<|th|>": 50289,
    "<|tk|>": 50341, "<|tl|>": 50348, "<|tr|>": 50268, "<|tt|>": 50351,
    "<|uk|>": 50280, "<|ur|>": 50290, "<|uz|>": 50337, "<|vi|>": 50278,
    "<|yi|>": 50335, "<|yo|>": 50325, "<|zh|>": 50260
  },
  "max_initial_timestamp_index": 1,
  "max_length": 448,
  "no_timestamps_token_id": 50363,
  "pad_token_id": 50257,
  "return_timestamps": false,
  "suppress_tokens": [
    1, 2, 7, 8, 9, 10, 14, 25, 26, 27, 28, 29, 31, 58, 59, 60, 61, 62,
    63, 90, 91, 92, 93, 359, 503, 522, 542, 873, 893, 902, 918, 922,
    931, 1350, 1853, 1982, 2460, 2627, 3246, 3253, 3268, 3536, 3846,
    3961, 4183, 4667, 6585, 6647, 7273, 9061, 9383, 10428, 10929,
    11938, 12033, 12331, 12562, 13793, 14157, 14635, 15265, 15618,
    16553, 16604, 18362, 18956, 20075, 21675, 22520, 26130, 26161,
    26435, 28279, 29464, 31650, 32302, 32470, 36865, 42863, 47425,
    49870, 50254, 50258, 50358, 50359, 50360, 50361, 50362
  ],
  "task_to_id": {
    "transcribe": 50359,
    "translate": 50358
  },
  "transformers_version": "4.35.2"
}
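
The `forced_decoder_ids` pin the decoder prompt to Korean transcription without timestamps: position 1 is `<|ko|>` (50264), position 2 is `<|transcribe|>` (50359), and position 3 is `<|notimestamps|>` (50363), consistent with `lang_to_id` and `task_to_id` above. A quick way to reproduce the mapping with the base processor (a sketch):

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Rebuild the forced decoder prompt for Korean transcription.
prompt_ids = processor.get_decoder_prompt_ids(language="korean", task="transcribe")
print(prompt_ids)  # expected: [(1, 50264), (2, 50359), (3, 50363)]
```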
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fa9eabcb3a456d83235c353f4f1f64f47cfabb09ef49558cb6e6bb11ac102ac2
+ oid sha256:ecf59b7c52a9fa41eee0108d95ff2a6e2f2c512b1ea06f6c9f2c57b6b90bc08e
  size 966995080
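
Only the pointer's oid changes: the weight blob was replaced while its size stayed the same. Since a git-lfs oid is the sha256 of the file contents, a downloaded copy can be verified like this (a sketch; the local path is a placeholder):

```python
import hashlib

# sha256 of the new weights, taken from the LFS pointer above.
EXPECTED = "ecf59b7c52a9fa41eee0108d95ff2a6e2f2c512b1ea06f6c9f2c57b6b90bc08e"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # placeholder path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == EXPECTED, "checksum mismatch"
print("model.safetensors matches the LFS pointer")
```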