Darknsu committed
Commit 521f545 · verified · 1 Parent(s): 678c6fc

Upload entire folder with structure in one commit

checkpoint-1420/README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ base_model: bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium
+ library_name: peft
+ tags:
+ - base_model:adapter:bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium
+ - lora
+ - transformers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.2.dev0
checkpoint-1420/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": {
+ "base_model_class": "WhisperForConditionalGeneration",
+ "parent_library": "transformers.models.whisper.modeling_whisper"
+ },
+ "base_model_name_or_path": "bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 64,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "lora_ga_config": null,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@f6a7e678840a3e59c8e28f105695968f0dc706d4",
+ "qalora_group_size": 16,
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "q_proj",
+ "v_proj"
+ ],
+ "target_parameters": null,
+ "task_type": null,
+ "trainable_token_indices": null,
+ "use_bdlora": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
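The adapter config above (rank 32 LoRA on `q_proj` and `v_proj`) pins down the adapter's parameter count, which is a useful sanity check against the size of `adapter_model.safetensors`. A minimal sketch, assuming Whisper-medium's published dimensions (hidden size 1024, 24 encoder layers with self-attention, 24 decoder layers with self- and cross-attention), which are not stated in this commit:

```python
# Sanity-check the LoRA adapter size implied by adapter_config.json.
# Assumed Whisper-medium dimensions: d_model=1024, 24 encoder + 24 decoder layers;
# each decoder layer has both self-attention and cross-attention.
r = 32                      # "r" in adapter_config.json
d_model = 1024              # Whisper-medium hidden size (assumption)
attn_blocks = 24 + 24 * 2   # encoder self-attn + decoder self- and cross-attn
targeted = attn_blocks * 2  # "target_modules": q_proj and v_proj per block

# Each targeted d×d projection gains LoRA factors A (r×d) and B (d×r): 2*r*d params.
params_per_module = 2 * r * d_model
total_params = targeted * params_per_module
print(total_params)            # 9437184 trainable parameters
print(total_params * 4 / 1e6)  # 37.748736 MB in fp32
```

At 4 bytes per fp32 parameter this comes to roughly 37.7 MB, consistent with the 37,789,960-byte `adapter_model.safetensors` below once the safetensors header is included.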
checkpoint-1420/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bdbf0cc677ecb2a9a91f78987cce456c31f0ff16773664e45a16753a2506e074
+ size 37789960
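The large binary files in this commit are stored as Git LFS pointers rather than as the weights themselves; the three `key value` lines above are the entire file content in the repository. A small sketch of parsing such a pointer (the `adapter_model.safetensors` pointer from this commit is inlined as sample input):

```python
# Parse a git-lfs pointer file into a dict of its fields.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:bdbf0cc677ecb2a9a91f78987cce456c31f0ff16773664e45a16753a2506e074
size 37789960
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size of the real object in bytes
    return fields

info = parse_lfs_pointer(pointer_text)
print(info["size"])  # 37789960
```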
checkpoint-1420/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa764c728b4fd4fc7bcf896491480a0cca434653272e66288c1aae1f9be9ca8f
+ size 50493579
checkpoint-1420/preprocessor_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "chunk_length": 30,
+ "feature_extractor_type": "WhisperFeatureExtractor",
+ "feature_size": 80,
+ "hop_length": 160,
+ "n_fft": 400,
+ "n_samples": 480000,
+ "nb_max_frames": 3000,
+ "padding_side": "right",
+ "padding_value": 0.0,
+ "processor_class": "WhisperProcessor",
+ "return_attention_mask": false,
+ "sampling_rate": 16000
+ }
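The feature-extractor settings above are internally consistent: `n_samples` is `chunk_length × sampling_rate`, and `nb_max_frames` is `n_samples ÷ hop_length`. A quick check of those derived fields:

```python
# Verify the derived fields in preprocessor_config.json from the base ones.
cfg = {
    "chunk_length": 30,      # seconds per audio chunk
    "sampling_rate": 16000,  # Hz
    "hop_length": 160,       # samples between STFT frames (10 ms at 16 kHz)
    "n_fft": 400,            # 25 ms analysis window
    "feature_size": 80,      # log-mel bins
}

n_samples = cfg["chunk_length"] * cfg["sampling_rate"]
nb_max_frames = n_samples // cfg["hop_length"]
print(n_samples)      # 480000, as recorded above
print(nb_max_frames)  # 3000, as recorded above
```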
checkpoint-1420/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e4311f0d6af261623eca9b8fdd4f347841ae60d813eaae06b3ad9e102cdf17a
+ size 14709
checkpoint-1420/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d7b82f2abef14e513e7458848a43a4e09efeccb390d66485d1c4e40269e9603
+ size 1465
checkpoint-1420/trainer_state.json ADDED
@@ -0,0 +1,443 @@
+ {
+ "best_metric": 37.95777749811226,
+ "best_model_checkpoint": "./whisper-lora-15k-adapters/checkpoint-710",
+ "epoch": 1.6647127784290738,
+ "eval_steps": 710,
+ "global_step": 1420,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.029308323563892145,
+ "grad_norm": 0.7977361679077148,
+ "learning_rate": 5e-05,
+ "loss": 1.1739,
+ "step": 25
+ },
+ {
+ "epoch": 0.05861664712778429,
+ "grad_norm": 0.9913051724433899,
+ "learning_rate": 0.0001,
+ "loss": 1.0586,
+ "step": 50
+ },
+ {
+ "epoch": 0.08792497069167643,
+ "grad_norm": 0.7450225353240967,
+ "learning_rate": 9.94061757719715e-05,
+ "loss": 0.8402,
+ "step": 75
+ },
+ {
+ "epoch": 0.11723329425556858,
+ "grad_norm": 0.6139466166496277,
+ "learning_rate": 9.881235154394299e-05,
+ "loss": 0.8144,
+ "step": 100
+ },
+ {
+ "epoch": 0.14654161781946073,
+ "grad_norm": 0.8228874206542969,
+ "learning_rate": 9.82185273159145e-05,
+ "loss": 0.8631,
+ "step": 125
+ },
+ {
+ "epoch": 0.17584994138335286,
+ "grad_norm": 0.6503015756607056,
+ "learning_rate": 9.762470308788599e-05,
+ "loss": 0.7633,
+ "step": 150
+ },
+ {
+ "epoch": 0.205158264947245,
+ "grad_norm": 0.747552216053009,
+ "learning_rate": 9.703087885985749e-05,
+ "loss": 0.758,
+ "step": 175
+ },
+ {
+ "epoch": 0.23446658851113716,
+ "grad_norm": 0.7009592056274414,
+ "learning_rate": 9.643705463182898e-05,
+ "loss": 0.7861,
+ "step": 200
+ },
+ {
+ "epoch": 0.2637749120750293,
+ "grad_norm": 0.5309766530990601,
+ "learning_rate": 9.584323040380047e-05,
+ "loss": 0.7396,
+ "step": 225
+ },
+ {
+ "epoch": 0.29308323563892147,
+ "grad_norm": 0.5797637104988098,
+ "learning_rate": 9.524940617577197e-05,
+ "loss": 0.7698,
+ "step": 250
+ },
+ {
+ "epoch": 0.3223915592028136,
+ "grad_norm": 0.5136704444885254,
+ "learning_rate": 9.465558194774347e-05,
+ "loss": 0.6787,
+ "step": 275
+ },
+ {
+ "epoch": 0.3516998827667057,
+ "grad_norm": 0.5696821808815002,
+ "learning_rate": 9.406175771971497e-05,
+ "loss": 0.6905,
+ "step": 300
+ },
+ {
+ "epoch": 0.3810082063305979,
+ "grad_norm": 0.6909741163253784,
+ "learning_rate": 9.346793349168646e-05,
+ "loss": 0.6872,
+ "step": 325
+ },
+ {
+ "epoch": 0.41031652989449,
+ "grad_norm": 0.684543788433075,
+ "learning_rate": 9.287410926365795e-05,
+ "loss": 0.7384,
+ "step": 350
+ },
+ {
+ "epoch": 0.4396248534583822,
+ "grad_norm": 0.5790461301803589,
+ "learning_rate": 9.228028503562945e-05,
+ "loss": 0.6926,
+ "step": 375
+ },
+ {
+ "epoch": 0.46893317702227433,
+ "grad_norm": 0.6158836483955383,
+ "learning_rate": 9.168646080760096e-05,
+ "loss": 0.6611,
+ "step": 400
+ },
+ {
+ "epoch": 0.49824150058616645,
+ "grad_norm": 0.49336278438568115,
+ "learning_rate": 9.109263657957245e-05,
+ "loss": 0.7189,
+ "step": 425
+ },
+ {
+ "epoch": 0.5275498241500586,
+ "grad_norm": 0.5663182139396667,
+ "learning_rate": 9.049881235154394e-05,
+ "loss": 0.6859,
+ "step": 450
+ },
+ {
+ "epoch": 0.5568581477139508,
+ "grad_norm": 0.5849825739860535,
+ "learning_rate": 8.990498812351545e-05,
+ "loss": 0.6784,
+ "step": 475
+ },
+ {
+ "epoch": 0.5861664712778429,
+ "grad_norm": 0.6453606486320496,
+ "learning_rate": 8.931116389548694e-05,
+ "loss": 0.7507,
+ "step": 500
+ },
+ {
+ "epoch": 0.6154747948417351,
+ "grad_norm": 0.5397807359695435,
+ "learning_rate": 8.871733966745844e-05,
+ "loss": 0.7184,
+ "step": 525
+ },
+ {
+ "epoch": 0.6447831184056272,
+ "grad_norm": 0.5113738775253296,
+ "learning_rate": 8.812351543942994e-05,
+ "loss": 0.6882,
+ "step": 550
+ },
+ {
+ "epoch": 0.6740914419695193,
+ "grad_norm": 0.44587358832359314,
+ "learning_rate": 8.752969121140144e-05,
+ "loss": 0.6624,
+ "step": 575
+ },
+ {
+ "epoch": 0.7033997655334114,
+ "grad_norm": 0.559115469455719,
+ "learning_rate": 8.693586698337293e-05,
+ "loss": 0.7167,
+ "step": 600
+ },
+ {
+ "epoch": 0.7327080890973037,
+ "grad_norm": 0.5207410454750061,
+ "learning_rate": 8.634204275534443e-05,
+ "loss": 0.6482,
+ "step": 625
+ },
+ {
+ "epoch": 0.7620164126611958,
+ "grad_norm": 0.6229726076126099,
+ "learning_rate": 8.574821852731592e-05,
+ "loss": 0.6613,
+ "step": 650
+ },
+ {
+ "epoch": 0.7913247362250879,
+ "grad_norm": 0.558819055557251,
+ "learning_rate": 8.515439429928741e-05,
+ "loss": 0.6905,
+ "step": 675
+ },
+ {
+ "epoch": 0.82063305978898,
+ "grad_norm": 0.5218378901481628,
+ "learning_rate": 8.456057007125892e-05,
+ "loss": 0.6825,
+ "step": 700
+ },
+ {
+ "epoch": 0.8323563892145369,
+ "eval_loss": 0.5370081067085266,
+ "eval_runtime": 12369.2007,
+ "eval_samples_per_second": 0.122,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 37.95777749811226,
+ "step": 710
+ },
+ {
+ "epoch": 0.8499413833528722,
+ "grad_norm": 0.6432453393936157,
+ "learning_rate": 8.396674584323041e-05,
+ "loss": 0.6864,
+ "step": 725
+ },
+ {
+ "epoch": 0.8792497069167644,
+ "grad_norm": 0.47664546966552734,
+ "learning_rate": 8.33729216152019e-05,
+ "loss": 0.6571,
+ "step": 750
+ },
+ {
+ "epoch": 0.9085580304806565,
+ "grad_norm": 0.8838010430335999,
+ "learning_rate": 8.27790973871734e-05,
+ "loss": 0.6636,
+ "step": 775
+ },
+ {
+ "epoch": 0.9378663540445487,
+ "grad_norm": 0.5111401677131653,
+ "learning_rate": 8.21852731591449e-05,
+ "loss": 0.5934,
+ "step": 800
+ },
+ {
+ "epoch": 0.9671746776084408,
+ "grad_norm": 0.5476071238517761,
+ "learning_rate": 8.15914489311164e-05,
+ "loss": 0.6751,
+ "step": 825
+ },
+ {
+ "epoch": 0.9964830011723329,
+ "grad_norm": 0.5263524651527405,
+ "learning_rate": 8.09976247030879e-05,
+ "loss": 0.691,
+ "step": 850
+ },
+ {
+ "epoch": 1.0257913247362251,
+ "grad_norm": 0.5008183121681213,
+ "learning_rate": 8.040380047505939e-05,
+ "loss": 0.6445,
+ "step": 875
+ },
+ {
+ "epoch": 1.0550996483001172,
+ "grad_norm": 0.5410601496696472,
+ "learning_rate": 7.980997624703088e-05,
+ "loss": 0.6595,
+ "step": 900
+ },
+ {
+ "epoch": 1.0844079718640094,
+ "grad_norm": 0.40574726462364197,
+ "learning_rate": 7.921615201900238e-05,
+ "loss": 0.6498,
+ "step": 925
+ },
+ {
+ "epoch": 1.1137162954279016,
+ "grad_norm": 0.5469894409179688,
+ "learning_rate": 7.862232779097387e-05,
+ "loss": 0.6366,
+ "step": 950
+ },
+ {
+ "epoch": 1.1430246189917936,
+ "grad_norm": 0.4871784448623657,
+ "learning_rate": 7.802850356294538e-05,
+ "loss": 0.5661,
+ "step": 975
+ },
+ {
+ "epoch": 1.1723329425556859,
+ "grad_norm": 0.589715301990509,
+ "learning_rate": 7.743467933491687e-05,
+ "loss": 0.65,
+ "step": 1000
+ },
+ {
+ "epoch": 1.2016412661195779,
+ "grad_norm": 0.5508498549461365,
+ "learning_rate": 7.684085510688836e-05,
+ "loss": 0.7024,
+ "step": 1025
+ },
+ {
+ "epoch": 1.2309495896834701,
+ "grad_norm": 0.6247432231903076,
+ "learning_rate": 7.624703087885986e-05,
+ "loss": 0.7014,
+ "step": 1050
+ },
+ {
+ "epoch": 1.2602579132473624,
+ "grad_norm": 0.5116705298423767,
+ "learning_rate": 7.565320665083135e-05,
+ "loss": 0.6342,
+ "step": 1075
+ },
+ {
+ "epoch": 1.2895662368112544,
+ "grad_norm": 0.760197639465332,
+ "learning_rate": 7.505938242280284e-05,
+ "loss": 0.6412,
+ "step": 1100
+ },
+ {
+ "epoch": 1.3188745603751466,
+ "grad_norm": 0.565066933631897,
+ "learning_rate": 7.446555819477435e-05,
+ "loss": 0.6729,
+ "step": 1125
+ },
+ {
+ "epoch": 1.3481828839390386,
+ "grad_norm": 0.5542116761207581,
+ "learning_rate": 7.387173396674584e-05,
+ "loss": 0.6489,
+ "step": 1150
+ },
+ {
+ "epoch": 1.3774912075029309,
+ "grad_norm": 0.604937732219696,
+ "learning_rate": 7.327790973871734e-05,
+ "loss": 0.643,
+ "step": 1175
+ },
+ {
+ "epoch": 1.4067995310668229,
+ "grad_norm": 0.5015918612480164,
+ "learning_rate": 7.268408551068883e-05,
+ "loss": 0.6287,
+ "step": 1200
+ },
+ {
+ "epoch": 1.436107854630715,
+ "grad_norm": 0.4906526207923889,
+ "learning_rate": 7.209026128266033e-05,
+ "loss": 0.6248,
+ "step": 1225
+ },
+ {
+ "epoch": 1.4654161781946073,
+ "grad_norm": 0.751358687877655,
+ "learning_rate": 7.149643705463183e-05,
+ "loss": 0.5974,
+ "step": 1250
+ },
+ {
+ "epoch": 1.4947245017584994,
+ "grad_norm": 0.6856566667556763,
+ "learning_rate": 7.090261282660333e-05,
+ "loss": 0.6734,
+ "step": 1275
+ },
+ {
+ "epoch": 1.5240328253223916,
+ "grad_norm": 0.6584367752075195,
+ "learning_rate": 7.030878859857482e-05,
+ "loss": 0.6485,
+ "step": 1300
+ },
+ {
+ "epoch": 1.5533411488862838,
+ "grad_norm": 0.5148582458496094,
+ "learning_rate": 6.971496437054633e-05,
+ "loss": 0.6787,
+ "step": 1325
+ },
+ {
+ "epoch": 1.5826494724501758,
+ "grad_norm": 0.5255956053733826,
+ "learning_rate": 6.912114014251782e-05,
+ "loss": 0.6549,
+ "step": 1350
+ },
+ {
+ "epoch": 1.6119577960140679,
+ "grad_norm": 0.5913639068603516,
+ "learning_rate": 6.852731591448931e-05,
+ "loss": 0.6161,
+ "step": 1375
+ },
+ {
+ "epoch": 1.64126611957796,
+ "grad_norm": 0.509831964969635,
+ "learning_rate": 6.793349168646082e-05,
+ "loss": 0.6236,
+ "step": 1400
+ },
+ {
+ "epoch": 1.6647127784290738,
+ "eval_loss": 0.5116328001022339,
+ "eval_runtime": 12915.4161,
+ "eval_samples_per_second": 0.117,
+ "eval_steps_per_second": 0.007,
+ "eval_wer": 40.11137679335515,
+ "step": 1420
+ }
+ ],
+ "logging_steps": 25,
+ "max_steps": 4260,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 5,
+ "save_steps": 710,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 2.34813850435584e+19,
+ "train_batch_size": 16,
+ "trial_name": null,
+ "trial_params": null
+ }
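`trainer_state.json` records an `eval_wer` entry at each `eval_steps` boundary (710 and 1420), and `best_metric` / `best_model_checkpoint` are simply the minimum over those entries. A minimal sketch of that selection, using the two eval entries from the log above; note that WER regressed at step 1420 even though `eval_loss` improved, which is why checkpoint-710 remains the best checkpoint:

```python
# Recompute best_metric / best_model_checkpoint from trainer_state eval entries.
log_history = [
    {"step": 710, "eval_wer": 37.95777749811226, "eval_loss": 0.5370081067085266},
    {"step": 1420, "eval_wer": 40.11137679335515, "eval_loss": 0.5116328001022339},
]

# Lower WER is better, so the best checkpoint is the argmin over eval_wer.
best = min((e for e in log_history if "eval_wer" in e), key=lambda e: e["eval_wer"])
print(best["step"])  # 710, matching "best_model_checkpoint" above
```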
checkpoint-1420/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adab8b1d68a80f524b99a807e79576b23a9eb8fc309875ebf6261e16a641e65f
+ size 5777
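The `learning_rate` values in the trainer log follow a linear schedule with warmup: a ramp to 1e-4 over the first steps, then linear decay to zero at `max_steps` = 4260 (from `trainer_state.json`). A sketch reproducing the logged values; the 50-step warmup is inferred from the log itself (5e-05 at step 25, 0.0001 at step 50), not read from `training_args.bin`:

```python
# Reproduce the logged learning rates under linear warmup + linear decay.
# WARMUP=50 is an inference from the log; MAX_STEPS comes from trainer_state.json.
PEAK_LR, WARMUP, MAX_STEPS = 1e-4, 50, 4260

def lr_at(step: int) -> float:
    if step < WARMUP:
        return PEAK_LR * step / WARMUP          # linear warmup to the peak
    return PEAK_LR * (MAX_STEPS - step) / (MAX_STEPS - WARMUP)  # linear decay to 0

print(lr_at(25))  # ≈ 5e-05, as logged at step 25
print(lr_at(75))  # ≈ 9.94061757719715e-05, as logged at step 75
```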
checkpoint-710/README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ base_model: bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium
+ library_name: peft
+ tags:
+ - base_model:adapter:bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium
+ - lora
+ - transformers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.2.dev0
checkpoint-710/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": {
+ "base_model_class": "WhisperForConditionalGeneration",
+ "parent_library": "transformers.models.whisper.modeling_whisper"
+ },
+ "base_model_name_or_path": "bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 64,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "lora_ga_config": null,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@f6a7e678840a3e59c8e28f105695968f0dc706d4",
+ "qalora_group_size": 16,
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "q_proj",
+ "v_proj"
+ ],
+ "target_parameters": null,
+ "task_type": null,
+ "trainable_token_indices": null,
+ "use_bdlora": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
checkpoint-710/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbac03d24d0d020be82a80a91f8adabe020e6af1a87aef919e27b7eda1a97151
+ size 37789960
checkpoint-710/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55a720dba9d6abb964f50003b4f3c817ad8020b20fbf4b728f161d176e76d8c8
+ size 50493579
checkpoint-710/preprocessor_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "chunk_length": 30,
+ "feature_extractor_type": "WhisperFeatureExtractor",
+ "feature_size": 80,
+ "hop_length": 160,
+ "n_fft": 400,
+ "n_samples": 480000,
+ "nb_max_frames": 3000,
+ "padding_side": "right",
+ "padding_value": 0.0,
+ "processor_class": "WhisperProcessor",
+ "return_attention_mask": false,
+ "sampling_rate": 16000
+ }
checkpoint-710/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56ba09af5e40d75b2c1feada441aabf672654767a1fbb70391220de2dfe8a649
+ size 14709
checkpoint-710/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2866d5e5b10d7c2bc6fd8c2b13ccb8200b95c027b3b09e74346f7af3f6e0e76c
+ size 1465
checkpoint-710/trainer_state.json ADDED
@@ -0,0 +1,238 @@
1
+ {
2
+ "best_metric": 37.95777749811226,
3
+ "best_model_checkpoint": "./whisper-lora-15k-adapters/checkpoint-710",
4
+ "epoch": 0.8323563892145369,
5
+ "eval_steps": 710,
6
+ "global_step": 710,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.029308323563892145,
13
+ "grad_norm": 0.7977361679077148,
14
+ "learning_rate": 5e-05,
15
+ "loss": 1.1739,
16
+ "step": 25
17
+ },
18
+ {
19
+ "epoch": 0.05861664712778429,
20
+ "grad_norm": 0.9913051724433899,
21
+ "learning_rate": 0.0001,
22
+ "loss": 1.0586,
23
+ "step": 50
24
+ },
25
+ {
26
+ "epoch": 0.08792497069167643,
27
+ "grad_norm": 0.7450225353240967,
28
+ "learning_rate": 9.94061757719715e-05,
29
+ "loss": 0.8402,
30
+ "step": 75
31
+ },
32
+ {
33
+ "epoch": 0.11723329425556858,
34
+ "grad_norm": 0.6139466166496277,
35
+ "learning_rate": 9.881235154394299e-05,
36
+ "loss": 0.8144,
37
+ "step": 100
38
+ },
39
+ {
40
+ "epoch": 0.14654161781946073,
41
+ "grad_norm": 0.8228874206542969,
42
+ "learning_rate": 9.82185273159145e-05,
43
+ "loss": 0.8631,
44
+ "step": 125
45
+ },
46
+ {
47
+ "epoch": 0.17584994138335286,
48
+ "grad_norm": 0.6503015756607056,
49
+ "learning_rate": 9.762470308788599e-05,
50
+ "loss": 0.7633,
51
+ "step": 150
52
+ },
53
+ {
54
+ "epoch": 0.205158264947245,
55
+ "grad_norm": 0.747552216053009,
56
+ "learning_rate": 9.703087885985749e-05,
57
+ "loss": 0.758,
58
+ "step": 175
59
+ },
60
+ {
61
+ "epoch": 0.23446658851113716,
62
+ "grad_norm": 0.7009592056274414,
63
+ "learning_rate": 9.643705463182898e-05,
64
+ "loss": 0.7861,
65
+ "step": 200
66
+ },
67
+ {
68
+ "epoch": 0.2637749120750293,
69
+ "grad_norm": 0.5309766530990601,
70
+ "learning_rate": 9.584323040380047e-05,
71
+ "loss": 0.7396,
72
+ "step": 225
73
+ },
74
+ {
75
+ "epoch": 0.29308323563892147,
76
+ "grad_norm": 0.5797637104988098,
77
+ "learning_rate": 9.524940617577197e-05,
78
+ "loss": 0.7698,
79
+ "step": 250
80
+ },
81
+ {
82
+ "epoch": 0.3223915592028136,
83
+ "grad_norm": 0.5136704444885254,
84
+ "learning_rate": 9.465558194774347e-05,
85
+ "loss": 0.6787,
86
+ "step": 275
87
+ },
88
+ {
89
+ "epoch": 0.3516998827667057,
90
+ "grad_norm": 0.5696821808815002,
91
+ "learning_rate": 9.406175771971497e-05,
92
+ "loss": 0.6905,
93
+ "step": 300
94
+ },
95
+ {
96
+ "epoch": 0.3810082063305979,
97
+ "grad_norm": 0.6909741163253784,
98
+ "learning_rate": 9.346793349168646e-05,
99
+ "loss": 0.6872,
100
+ "step": 325
101
+ },
102
+ {
103
+ "epoch": 0.41031652989449,
104
+ "grad_norm": 0.684543788433075,
105
+ "learning_rate": 9.287410926365795e-05,
106
+ "loss": 0.7384,
107
+ "step": 350
108
+ },
109
+ {
110
+ "epoch": 0.4396248534583822,
111
+ "grad_norm": 0.5790461301803589,
112
+ "learning_rate": 9.228028503562945e-05,
113
+ "loss": 0.6926,
114
+ "step": 375
115
+ },
116
+ {
117
+ "epoch": 0.46893317702227433,
118
+ "grad_norm": 0.6158836483955383,
119
+ "learning_rate": 9.168646080760096e-05,
120
+ "loss": 0.6611,
121
+ "step": 400
122
+ },
123
+ {
124
+ "epoch": 0.49824150058616645,
125
+ "grad_norm": 0.49336278438568115,
126
+ "learning_rate": 9.109263657957245e-05,
127
+ "loss": 0.7189,
128
+ "step": 425
129
+ },
130
+ {
131
+ "epoch": 0.5275498241500586,
132
+ "grad_norm": 0.5663182139396667,
133
+ "learning_rate": 9.049881235154394e-05,
134
+ "loss": 0.6859,
135
+ "step": 450
136
+ },
137
+ {
138
+ "epoch": 0.5568581477139508,
139
+ "grad_norm": 0.5849825739860535,
140
+ "learning_rate": 8.990498812351545e-05,
141
+ "loss": 0.6784,
142
+ "step": 475
143
+ },
144
+ {
145
+ "epoch": 0.5861664712778429,
146
+ "grad_norm": 0.6453606486320496,
147
+ "learning_rate": 8.931116389548694e-05,
148
+ "loss": 0.7507,
149
+ "step": 500
150
+ },
151
+ {
152
+ "epoch": 0.6154747948417351,
153
+ "grad_norm": 0.5397807359695435,
154
+ "learning_rate": 8.871733966745844e-05,
155
+ "loss": 0.7184,
156
+ "step": 525
157
+ },
158
+ {
159
+ "epoch": 0.6447831184056272,
160
+ "grad_norm": 0.5113738775253296,
161
+ "learning_rate": 8.812351543942994e-05,
162
+ "loss": 0.6882,
163
+ "step": 550
164
+ },
165
+ {
166
+ "epoch": 0.6740914419695193,
167
+ "grad_norm": 0.44587358832359314,
168
+ "learning_rate": 8.752969121140144e-05,
169
+ "loss": 0.6624,
170
+ "step": 575
171
+ },
172
+ {
173
+ "epoch": 0.7033997655334114,
174
+ "grad_norm": 0.559115469455719,
175
+ "learning_rate": 8.693586698337293e-05,
176
+ "loss": 0.7167,
177
+ "step": 600
178
+ },
179
+ {
180
+ "epoch": 0.7327080890973037,
181
+ "grad_norm": 0.5207410454750061,
182
+ "learning_rate": 8.634204275534443e-05,
183
+ "loss": 0.6482,
184
+ "step": 625
185
+ },
186
+ {
187
+ "epoch": 0.7620164126611958,
188
+ "grad_norm": 0.6229726076126099,
189
+ "learning_rate": 8.574821852731592e-05,
190
+ "loss": 0.6613,
191
+ "step": 650
192
+ },
193
+ {
194
+ "epoch": 0.7913247362250879,
195
+ "grad_norm": 0.558819055557251,
196
+ "learning_rate": 8.515439429928741e-05,
197
+ "loss": 0.6905,
198
+ "step": 675
199
+ },
200
+ {
201
+ "epoch": 0.82063305978898,
202
+ "grad_norm": 0.5218378901481628,
203
+ "learning_rate": 8.456057007125892e-05,
204
+ "loss": 0.6825,
205
+ "step": 700
206
+ },
207
+ {
208
+ "epoch": 0.8323563892145369,
209
+ "eval_loss": 0.5370081067085266,
210
+ "eval_runtime": 12369.2007,
211
+ "eval_samples_per_second": 0.122,
212
+ "eval_steps_per_second": 0.008,
213
+ "eval_wer": 37.95777749811226,
214
+ "step": 710
215
+ }
216
+ ],
217
+ "logging_steps": 25,
218
+ "max_steps": 4260,
219
+ "num_input_tokens_seen": 0,
220
+ "num_train_epochs": 5,
221
+ "save_steps": 710,
222
+ "stateful_callbacks": {
223
+ "TrainerControl": {
224
+ "args": {
225
+ "should_epoch_stop": false,
226
+ "should_evaluate": false,
227
+ "should_log": false,
228
+ "should_save": true,
229
+ "should_training_stop": false
230
+ },
231
+ "attributes": {}
232
+ }
233
+ },
234
+ "total_flos": 1.17484489801728e+19,
235
+ "train_batch_size": 16,
236
+ "trial_name": null,
237
+ "trial_params": null
238
+ }
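The `learning_rate` column in `log_history` is consistent with a linear schedule: warmup to a peak of 1e-4 at step 50, then linear decay to zero at `max_steps = 4260`. The warmup length is inferred from the log (the rate first hits the peak at step 50), not stated explicitly in the diff. A sketch reproducing the logged values under that assumption:

```python
# Assumed schedule parameters: peak and max_steps come from the
# trainer state above; WARMUP_STEPS = 50 is an inference from the log.
PEAK_LR = 1e-4
MAX_STEPS = 4260
WARMUP_STEPS = 50

def lr_at(step: int) -> float:
    """Linear warmup to PEAK_LR, then linear decay to zero at MAX_STEPS."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (MAX_STEPS - step) / (MAX_STEPS - WARMUP_STEPS)

# Matches the logged rates, e.g. step 75 -> ~9.94061757719715e-05
print(lr_at(75), lr_at(700))
```

This is the shape produced by the `linear` scheduler in `transformers` (as in `get_linear_schedule_with_warmup`), which is the default for `Seq2SeqTrainer`.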
checkpoint-710/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adab8b1d68a80f524b99a807e79576b23a9eb8fc309875ebf6261e16a641e65f
+ size 5777