bitwisemind committed on
Commit 112af2d · verified · 1 Parent(s): bd344d4

Checkpoint at step 4266

checkpoint-4266/README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ base_model: bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium
+ library_name: peft
+ tags:
+ - base_model:adapter:bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium
+ - lora
+ - transformers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.2.dev0
checkpoint-4266/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": {
+ "base_model_class": "WhisperForConditionalGeneration",
+ "parent_library": "transformers.models.whisper.modeling_whisper"
+ },
+ "base_model_name_or_path": "bengaliAI/tugstugi_bengaliai-regional-asr_whisper-medium",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 64,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "lora_ga_config": null,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@2cd96ed041620f74d239a0ce3f16207153c43413",
+ "qalora_group_size": 16,
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "v_proj",
+ "q_proj"
+ ],
+ "target_parameters": null,
+ "task_type": null,
+ "trainable_token_indices": null,
+ "use_bdlora": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
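A rough sanity check of the adapter size implied by this config. The figures below are assumptions not stated in the diff: Whisper-medium uses d_model=1024 with 24 encoder and 24 decoder layers, and PEFT's `target_modules: ["v_proj", "q_proj"]` matches those projections in encoder self-attention, decoder self-attention, and decoder cross-attention.

```python
# Sketch: estimate the number of trainable LoRA parameters for r=32 adapters
# on every q_proj/v_proj in Whisper-medium (dimensions assumed, see lead-in).
d_model = 1024
r = 32

# 24 encoder self-attn blocks + 24 decoder blocks with self- and cross-attn each.
n_attn_blocks = 24 + 24 * 2
n_target_modules = n_attn_blocks * 2          # q_proj and v_proj per block

# Each adapted module adds lora_A (r x d_model) and lora_B (d_model x r).
params_per_module = r * d_model + d_model * r
total = n_target_modules * params_per_module

print(total)  # 9437184
```

At fp32 that is about 37.7 MB of weights, consistent with the ~37.8 MB `adapter_model.safetensors` committed in this checkpoint (the remainder being file metadata), so the config and the stored adapter appear to agree.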
checkpoint-4266/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c61851b594070fe34d743640e7b590a85595fd820f5f8e4fd7028a7c1d06b5ce
+ size 37789960
checkpoint-4266/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd330c73e092e7f9b1c710b9a029dc3084b215aa290f564618bb07751a86e923
+ size 50493579
checkpoint-4266/preprocessor_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "chunk_length": 30,
+ "feature_extractor_type": "WhisperFeatureExtractor",
+ "feature_size": 80,
+ "hop_length": 160,
+ "n_fft": 400,
+ "n_samples": 480000,
+ "nb_max_frames": 3000,
+ "padding_side": "right",
+ "padding_value": 0.0,
+ "processor_class": "WhisperProcessor",
+ "return_attention_mask": false,
+ "sampling_rate": 16000
+ }
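The derived fields in this preprocessor config follow from the basic ones, which is a quick way to verify it wasn't corrupted. A minimal check of how Whisper's feature extractor relates them:

```python
# The preprocessor values are internally consistent: n_samples is the number
# of audio samples in one 30 s window, and nb_max_frames is how many STFT
# hops fit into that window.
chunk_length = 30        # seconds per window
sampling_rate = 16000    # Hz
hop_length = 160         # samples between successive frames

n_samples = chunk_length * sampling_rate   # samples per window
nb_max_frames = n_samples // hop_length    # log-mel frames per window

print(n_samples, nb_max_frames)  # 480000 3000
```

Both values match the `n_samples` and `nb_max_frames` entries stored above.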
checkpoint-4266/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84218c0013963bfb2caeb9603ceeaf3d24a4ad562158bcac33b4f26c23bec7b0
+ size 14709
checkpoint-4266/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea4579275563816b4e80033f28eaa3ca1877be0754d1432e1a7dd6951220b00d
+ size 1465
checkpoint-4266/trainer_state.json ADDED
@@ -0,0 +1,1385 @@
+ {
+ "best_metric": 29.441848022785567,
+ "best_model_checkpoint": "./whisper-lora-15k-adapters/checkpoint-4029",
+ "epoch": 4.995316159250585,
+ "eval_steps": 237,
+ "global_step": 4266,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.02927400468384075,
+ "grad_norm": 0.578233540058136,
+ "learning_rate": 0.0005,
+ "loss": 0.99,
+ "step": 25
+ },
+ {
+ "epoch": 0.0585480093676815,
+ "grad_norm": 0.3136115074157715,
+ "learning_rate": 0.001,
+ "loss": 0.7869,
+ "step": 50
+ },
+ {
+ "epoch": 0.08782201405152225,
+ "grad_norm": 0.3556165397167206,
+ "learning_rate": 0.0009940758293838863,
+ "loss": 0.6559,
+ "step": 75
+ },
+ {
+ "epoch": 0.117096018735363,
+ "grad_norm": 0.43047836422920227,
+ "learning_rate": 0.0009881516587677726,
+ "loss": 0.6799,
+ "step": 100
+ },
+ {
+ "epoch": 0.14637002341920374,
+ "grad_norm": 0.41161859035491943,
+ "learning_rate": 0.0009822274881516586,
+ "loss": 0.6179,
+ "step": 125
+ },
+ {
+ "epoch": 0.1756440281030445,
+ "grad_norm": 0.3486640453338623,
+ "learning_rate": 0.000976303317535545,
+ "loss": 0.6218,
+ "step": 150
+ },
+ {
+ "epoch": 0.20491803278688525,
+ "grad_norm": 0.33961209654808044,
+ "learning_rate": 0.0009703791469194313,
+ "loss": 0.5623,
+ "step": 175
+ },
+ {
+ "epoch": 0.234192037470726,
+ "grad_norm": 0.4211539328098297,
+ "learning_rate": 0.0009644549763033176,
+ "loss": 0.6293,
+ "step": 200
+ },
+ {
+ "epoch": 0.26346604215456676,
+ "grad_norm": 0.4401342272758484,
+ "learning_rate": 0.0009585308056872039,
+ "loss": 0.65,
+ "step": 225
+ },
+ {
+ "epoch": 0.2775175644028103,
+ "eval_loss": 0.6182317733764648,
+ "eval_runtime": 12305.8898,
+ "eval_samples_per_second": 0.123,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 42.11536326582399,
+ "step": 237
+ },
+ {
+ "epoch": 0.2927400468384075,
+ "grad_norm": 0.5222854614257812,
+ "learning_rate": 0.0009526066350710901,
+ "loss": 0.6494,
+ "step": 250
+ },
+ {
+ "epoch": 0.32201405152224827,
+ "grad_norm": 0.5739536285400391,
+ "learning_rate": 0.0009466824644549763,
+ "loss": 0.5656,
+ "step": 275
+ },
+ {
+ "epoch": 0.351288056206089,
+ "grad_norm": 0.4213266968727112,
+ "learning_rate": 0.0009407582938388626,
+ "loss": 0.5781,
+ "step": 300
+ },
+ {
+ "epoch": 0.3805620608899297,
+ "grad_norm": 0.5185717344284058,
+ "learning_rate": 0.0009348341232227489,
+ "loss": 0.5993,
+ "step": 325
+ },
+ {
+ "epoch": 0.4098360655737705,
+ "grad_norm": 0.41156110167503357,
+ "learning_rate": 0.0009289099526066352,
+ "loss": 0.5504,
+ "step": 350
+ },
+ {
+ "epoch": 0.43911007025761123,
+ "grad_norm": 0.44983068108558655,
+ "learning_rate": 0.0009229857819905212,
+ "loss": 0.642,
+ "step": 375
+ },
+ {
+ "epoch": 0.468384074941452,
+ "grad_norm": 0.7018289566040039,
+ "learning_rate": 0.0009170616113744075,
+ "loss": 0.6313,
+ "step": 400
+ },
+ {
+ "epoch": 0.49765807962529274,
+ "grad_norm": 0.41570019721984863,
+ "learning_rate": 0.0009111374407582938,
+ "loss": 0.642,
+ "step": 425
+ },
+ {
+ "epoch": 0.5269320843091335,
+ "grad_norm": 0.2906375229358673,
+ "learning_rate": 0.0009052132701421801,
+ "loss": 0.5501,
+ "step": 450
+ },
+ {
+ "epoch": 0.5550351288056206,
+ "eval_loss": 0.5893104076385498,
+ "eval_runtime": 12217.9829,
+ "eval_samples_per_second": 0.124,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 39.71420868158533,
+ "step": 474
+ },
+ {
+ "epoch": 0.5562060889929742,
+ "grad_norm": 0.42602404952049255,
+ "learning_rate": 0.0008992890995260664,
+ "loss": 0.6419,
+ "step": 475
+ },
+ {
+ "epoch": 0.585480093676815,
+ "grad_norm": 0.45508912205696106,
+ "learning_rate": 0.0008933649289099525,
+ "loss": 0.5816,
+ "step": 500
+ },
+ {
+ "epoch": 0.6147540983606558,
+ "grad_norm": 0.5000929236412048,
+ "learning_rate": 0.0008874407582938388,
+ "loss": 0.6941,
+ "step": 525
+ },
+ {
+ "epoch": 0.6440281030444965,
+ "grad_norm": 0.4415169656276703,
+ "learning_rate": 0.0008815165876777251,
+ "loss": 0.5615,
+ "step": 550
+ },
+ {
+ "epoch": 0.6733021077283372,
+ "grad_norm": 0.5120753049850464,
+ "learning_rate": 0.0008755924170616114,
+ "loss": 0.559,
+ "step": 575
+ },
+ {
+ "epoch": 0.702576112412178,
+ "grad_norm": 0.3653784990310669,
+ "learning_rate": 0.0008696682464454977,
+ "loss": 0.5836,
+ "step": 600
+ },
+ {
+ "epoch": 0.7318501170960188,
+ "grad_norm": 0.5504665374755859,
+ "learning_rate": 0.0008637440758293838,
+ "loss": 0.6163,
+ "step": 625
+ },
+ {
+ "epoch": 0.7611241217798594,
+ "grad_norm": 0.49855440855026245,
+ "learning_rate": 0.0008578199052132701,
+ "loss": 0.5482,
+ "step": 650
+ },
+ {
+ "epoch": 0.7903981264637002,
+ "grad_norm": 0.3784034848213196,
+ "learning_rate": 0.0008518957345971564,
+ "loss": 0.5572,
+ "step": 675
+ },
+ {
+ "epoch": 0.819672131147541,
+ "grad_norm": 0.5111596584320068,
+ "learning_rate": 0.0008459715639810427,
+ "loss": 0.565,
+ "step": 700
+ },
+ {
+ "epoch": 0.832552693208431,
+ "eval_loss": 0.5823442339897156,
+ "eval_runtime": 12225.5714,
+ "eval_samples_per_second": 0.124,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 37.68099852505035,
+ "step": 711
+ },
+ {
+ "epoch": 0.8489461358313818,
+ "grad_norm": 0.5943437218666077,
+ "learning_rate": 0.000840047393364929,
+ "loss": 0.5146,
+ "step": 725
+ },
+ {
+ "epoch": 0.8782201405152225,
+ "grad_norm": 0.5228826403617859,
+ "learning_rate": 0.0008341232227488151,
+ "loss": 0.5338,
+ "step": 750
+ },
+ {
+ "epoch": 0.9074941451990632,
+ "grad_norm": 0.44550982117652893,
+ "learning_rate": 0.0008281990521327014,
+ "loss": 0.5631,
+ "step": 775
+ },
+ {
+ "epoch": 0.936768149882904,
+ "grad_norm": 0.5326892733573914,
+ "learning_rate": 0.0008222748815165877,
+ "loss": 0.5489,
+ "step": 800
+ },
+ {
+ "epoch": 0.9660421545667447,
+ "grad_norm": 0.5083812475204468,
+ "learning_rate": 0.000816350710900474,
+ "loss": 0.5336,
+ "step": 825
+ },
+ {
+ "epoch": 0.9953161592505855,
+ "grad_norm": 0.4346718192100525,
+ "learning_rate": 0.0008104265402843603,
+ "loss": 0.6155,
+ "step": 850
+ },
+ {
+ "epoch": 1.0245901639344261,
+ "grad_norm": 0.4419436454772949,
+ "learning_rate": 0.0008045023696682464,
+ "loss": 0.5506,
+ "step": 875
+ },
+ {
+ "epoch": 1.053864168618267,
+ "grad_norm": 0.5935924649238586,
+ "learning_rate": 0.0007985781990521327,
+ "loss": 0.5407,
+ "step": 900
+ },
+ {
+ "epoch": 1.0831381733021077,
+ "grad_norm": 0.4228830635547638,
+ "learning_rate": 0.000792654028436019,
+ "loss": 0.5527,
+ "step": 925
+ },
+ {
+ "epoch": 1.1100702576112411,
+ "eval_loss": 0.5060898065567017,
+ "eval_runtime": 12332.7416,
+ "eval_samples_per_second": 0.123,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 37.76065419091052,
+ "step": 948
+ },
+ {
+ "epoch": 1.1124121779859484,
+ "grad_norm": 0.37129494547843933,
+ "learning_rate": 0.0007867298578199053,
+ "loss": 0.5191,
+ "step": 950
+ },
+ {
+ "epoch": 1.1416861826697893,
+ "grad_norm": 0.7254778146743774,
+ "learning_rate": 0.0007808056872037916,
+ "loss": 0.5537,
+ "step": 975
+ },
+ {
+ "epoch": 1.17096018735363,
+ "grad_norm": 0.4878183603286743,
+ "learning_rate": 0.0007748815165876777,
+ "loss": 0.5281,
+ "step": 1000
+ },
+ {
+ "epoch": 1.2002341920374708,
+ "grad_norm": 0.35084593296051025,
+ "learning_rate": 0.000768957345971564,
+ "loss": 0.5166,
+ "step": 1025
+ },
+ {
+ "epoch": 1.2295081967213115,
+ "grad_norm": 0.5030648708343506,
+ "learning_rate": 0.0007630331753554502,
+ "loss": 0.5284,
+ "step": 1050
+ },
+ {
+ "epoch": 1.2587822014051522,
+ "grad_norm": 0.5004339218139648,
+ "learning_rate": 0.0007571090047393365,
+ "loss": 0.5695,
+ "step": 1075
+ },
+ {
+ "epoch": 1.288056206088993,
+ "grad_norm": 0.5789551734924316,
+ "learning_rate": 0.0007511848341232228,
+ "loss": 0.5511,
+ "step": 1100
+ },
+ {
+ "epoch": 1.3173302107728337,
+ "grad_norm": 0.389371782541275,
+ "learning_rate": 0.0007452606635071089,
+ "loss": 0.5661,
+ "step": 1125
+ },
+ {
+ "epoch": 1.3466042154566744,
+ "grad_norm": 0.38161447644233704,
+ "learning_rate": 0.0007393364928909952,
+ "loss": 0.5087,
+ "step": 1150
+ },
+ {
+ "epoch": 1.3758782201405153,
+ "grad_norm": 0.40263721346855164,
+ "learning_rate": 0.0007334123222748815,
+ "loss": 0.5147,
+ "step": 1175
+ },
+ {
+ "epoch": 1.3875878220140514,
+ "eval_loss": 0.5079419016838074,
+ "eval_runtime": 12322.9428,
+ "eval_samples_per_second": 0.123,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 38.37238559521937,
+ "step": 1185
+ },
+ {
+ "epoch": 1.405152224824356,
+ "grad_norm": 0.5195249319076538,
+ "learning_rate": 0.0007274881516587678,
+ "loss": 0.5409,
+ "step": 1200
+ },
+ {
+ "epoch": 1.4344262295081966,
+ "grad_norm": 0.40098896622657776,
+ "learning_rate": 0.0007215639810426541,
+ "loss": 0.5121,
+ "step": 1225
+ },
+ {
+ "epoch": 1.4637002341920375,
+ "grad_norm": 0.42950162291526794,
+ "learning_rate": 0.0007156398104265402,
+ "loss": 0.5155,
+ "step": 1250
+ },
+ {
+ "epoch": 1.4929742388758782,
+ "grad_norm": 0.38044580817222595,
+ "learning_rate": 0.0007097156398104265,
+ "loss": 0.4727,
+ "step": 1275
+ },
+ {
+ "epoch": 1.5222482435597189,
+ "grad_norm": 0.38700923323631287,
+ "learning_rate": 0.0007037914691943128,
+ "loss": 0.498,
+ "step": 1300
+ },
+ {
+ "epoch": 1.5515222482435598,
+ "grad_norm": 0.4633864760398865,
+ "learning_rate": 0.0006978672985781991,
+ "loss": 0.5297,
+ "step": 1325
+ },
+ {
+ "epoch": 1.5807962529274004,
+ "grad_norm": 0.48980265855789185,
+ "learning_rate": 0.0006919431279620854,
+ "loss": 0.4732,
+ "step": 1350
+ },
+ {
+ "epoch": 1.6100702576112411,
+ "grad_norm": 0.3389205038547516,
+ "learning_rate": 0.0006860189573459715,
+ "loss": 0.5461,
+ "step": 1375
+ },
+ {
+ "epoch": 1.639344262295082,
+ "grad_norm": 0.3686542510986328,
+ "learning_rate": 0.0006800947867298578,
+ "loss": 0.5033,
+ "step": 1400
+ },
+ {
+ "epoch": 1.6651053864168617,
+ "eval_loss": 0.5103082060813904,
+ "eval_runtime": 12252.9597,
+ "eval_samples_per_second": 0.124,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 39.455228053358894,
+ "step": 1422
+ },
+ {
+ "epoch": 1.6686182669789227,
+ "grad_norm": 0.46823108196258545,
+ "learning_rate": 0.0006741706161137441,
+ "loss": 0.5392,
+ "step": 1425
+ },
+ {
+ "epoch": 1.6978922716627634,
+ "grad_norm": 0.5638931393623352,
+ "learning_rate": 0.0006682464454976304,
+ "loss": 0.5253,
+ "step": 1450
+ },
+ {
+ "epoch": 1.7271662763466042,
+ "grad_norm": 0.5234322547912598,
+ "learning_rate": 0.0006623222748815167,
+ "loss": 0.51,
+ "step": 1475
+ },
+ {
+ "epoch": 1.756440281030445,
+ "grad_norm": 0.5467631816864014,
+ "learning_rate": 0.0006563981042654028,
+ "loss": 0.5436,
+ "step": 1500
+ },
+ {
+ "epoch": 1.7857142857142856,
+ "grad_norm": 0.3867318034172058,
+ "learning_rate": 0.0006504739336492891,
+ "loss": 0.5142,
+ "step": 1525
+ },
+ {
+ "epoch": 1.8149882903981265,
+ "grad_norm": 0.4091216027736664,
+ "learning_rate": 0.0006445497630331754,
+ "loss": 0.5345,
+ "step": 1550
+ },
+ {
+ "epoch": 1.8442622950819674,
+ "grad_norm": 0.44898247718811035,
+ "learning_rate": 0.0006386255924170617,
+ "loss": 0.4937,
+ "step": 1575
+ },
+ {
+ "epoch": 1.8735362997658078,
+ "grad_norm": 0.3484508991241455,
+ "learning_rate": 0.000632701421800948,
+ "loss": 0.489,
+ "step": 1600
+ },
+ {
+ "epoch": 1.9028103044496487,
+ "grad_norm": 0.5735388398170471,
+ "learning_rate": 0.0006267772511848341,
+ "loss": 0.4742,
+ "step": 1625
+ },
+ {
+ "epoch": 1.9320843091334896,
+ "grad_norm": 0.7618733048439026,
+ "learning_rate": 0.0006208530805687204,
+ "loss": 0.5559,
+ "step": 1650
+ },
+ {
+ "epoch": 1.9426229508196722,
+ "eval_loss": 0.5032439827919006,
+ "eval_runtime": 12233.7713,
+ "eval_samples_per_second": 0.124,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 39.20600686955827,
+ "step": 1659
+ },
+ {
+ "epoch": 1.96135831381733,
+ "grad_norm": 0.3670201003551483,
+ "learning_rate": 0.0006149289099526067,
+ "loss": 0.4868,
+ "step": 1675
+ },
+ {
+ "epoch": 1.990632318501171,
+ "grad_norm": 0.4840170443058014,
+ "learning_rate": 0.000609004739336493,
+ "loss": 0.5458,
+ "step": 1700
+ },
+ {
+ "epoch": 2.019906323185012,
+ "grad_norm": 0.30357852578163147,
+ "learning_rate": 0.0006030805687203791,
+ "loss": 0.4845,
+ "step": 1725
+ },
+ {
+ "epoch": 2.0491803278688523,
+ "grad_norm": 0.43158742785453796,
+ "learning_rate": 0.0005971563981042653,
+ "loss": 0.5007,
+ "step": 1750
+ },
+ {
+ "epoch": 2.078454332552693,
+ "grad_norm": 0.46644917130470276,
+ "learning_rate": 0.0005912322274881516,
+ "loss": 0.4558,
+ "step": 1775
+ },
+ {
+ "epoch": 2.107728337236534,
+ "grad_norm": 0.42779088020324707,
+ "learning_rate": 0.0005853080568720379,
+ "loss": 0.4736,
+ "step": 1800
+ },
+ {
+ "epoch": 2.1370023419203745,
+ "grad_norm": 0.4596354067325592,
+ "learning_rate": 0.0005793838862559242,
+ "loss": 0.4338,
+ "step": 1825
+ },
+ {
+ "epoch": 2.1662763466042154,
+ "grad_norm": 0.5213513970375061,
+ "learning_rate": 0.0005734597156398104,
+ "loss": 0.4657,
+ "step": 1850
+ },
+ {
+ "epoch": 2.1955503512880563,
+ "grad_norm": 0.30604368448257446,
+ "learning_rate": 0.0005675355450236966,
+ "loss": 0.4697,
+ "step": 1875
+ },
+ {
+ "epoch": 2.2201405152224822,
+ "eval_loss": 0.49964743852615356,
+ "eval_runtime": 12221.7958,
+ "eval_samples_per_second": 0.124,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 35.9517533349309,
+ "step": 1896
+ },
+ {
+ "epoch": 2.2248243559718968,
+ "grad_norm": 0.38965240120887756,
+ "learning_rate": 0.0005616113744075829,
+ "loss": 0.4203,
+ "step": 1900
+ },
+ {
+ "epoch": 2.2540983606557377,
+ "grad_norm": 0.5568481087684631,
+ "learning_rate": 0.0005556872037914692,
+ "loss": 0.4664,
+ "step": 1925
+ },
+ {
+ "epoch": 2.2833723653395785,
+ "grad_norm": 0.6357014179229736,
+ "learning_rate": 0.0005497630331753555,
+ "loss": 0.5052,
+ "step": 1950
+ },
+ {
+ "epoch": 2.312646370023419,
+ "grad_norm": 0.49635082483291626,
+ "learning_rate": 0.0005438388625592417,
+ "loss": 0.4611,
+ "step": 1975
+ },
+ {
+ "epoch": 2.34192037470726,
+ "grad_norm": 0.5938425660133362,
+ "learning_rate": 0.0005379146919431279,
+ "loss": 0.4882,
+ "step": 2000
+ },
+ {
+ "epoch": 2.371194379391101,
+ "grad_norm": 0.38019558787345886,
+ "learning_rate": 0.0005319905213270142,
+ "loss": 0.4349,
+ "step": 2025
+ },
+ {
+ "epoch": 2.4004683840749417,
+ "grad_norm": 0.3761730492115021,
+ "learning_rate": 0.0005260663507109005,
+ "loss": 0.4384,
+ "step": 2050
+ },
+ {
+ "epoch": 2.429742388758782,
+ "grad_norm": 0.3853297233581543,
+ "learning_rate": 0.0005201421800947868,
+ "loss": 0.4731,
+ "step": 2075
+ },
+ {
+ "epoch": 2.459016393442623,
+ "grad_norm": 0.45702874660491943,
+ "learning_rate": 0.000514218009478673,
+ "loss": 0.5495,
+ "step": 2100
+ },
+ {
+ "epoch": 2.4882903981264635,
+ "grad_norm": 0.4612327814102173,
+ "learning_rate": 0.0005082938388625592,
+ "loss": 0.5017,
+ "step": 2125
+ },
+ {
+ "epoch": 2.4976580796252925,
+ "eval_loss": 0.4605013132095337,
+ "eval_runtime": 12477.8005,
+ "eval_samples_per_second": 0.122,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 37.35402284018324,
+ "step": 2133
+ },
+ {
+ "epoch": 2.5175644028103044,
+ "grad_norm": 0.5223235487937927,
+ "learning_rate": 0.0005023696682464455,
+ "loss": 0.5064,
+ "step": 2150
+ },
+ {
+ "epoch": 2.5468384074941453,
+ "grad_norm": 0.531912624835968,
+ "learning_rate": 0.0004964454976303318,
+ "loss": 0.4461,
+ "step": 2175
+ },
+ {
+ "epoch": 2.576112412177986,
+ "grad_norm": 0.3550543487071991,
+ "learning_rate": 0.0004905213270142181,
+ "loss": 0.475,
+ "step": 2200
+ },
+ {
+ "epoch": 2.6053864168618266,
+ "grad_norm": 0.5227505564689636,
+ "learning_rate": 0.00048459715639810423,
+ "loss": 0.5369,
+ "step": 2225
+ },
+ {
+ "epoch": 2.6346604215456675,
+ "grad_norm": 0.4097291827201843,
+ "learning_rate": 0.0004786729857819905,
+ "loss": 0.4568,
+ "step": 2250
+ },
+ {
+ "epoch": 2.663934426229508,
+ "grad_norm": 0.3728583753108978,
+ "learning_rate": 0.0004727488151658768,
+ "loss": 0.4736,
+ "step": 2275
+ },
+ {
+ "epoch": 2.693208430913349,
+ "grad_norm": 0.5369985699653625,
+ "learning_rate": 0.000466824644549763,
+ "loss": 0.4443,
+ "step": 2300
+ },
+ {
+ "epoch": 2.7224824355971897,
+ "grad_norm": 0.5444718599319458,
+ "learning_rate": 0.0004609004739336493,
+ "loss": 0.485,
+ "step": 2325
+ },
+ {
+ "epoch": 2.7517564402810306,
+ "grad_norm": 0.36326608061790466,
+ "learning_rate": 0.00045497630331753553,
+ "loss": 0.4285,
+ "step": 2350
+ },
+ {
+ "epoch": 2.775175644028103,
+ "eval_loss": 0.4545816481113434,
+ "eval_runtime": 12525.3573,
+ "eval_samples_per_second": 0.121,
+ "eval_steps_per_second": 0.008,
+ "eval_wer": 37.59597393380218,
+ "step": 2370
+ },
+ {
+ "epoch": 2.781030444964871,
+ "grad_norm": 0.3528901934623718,
+ "learning_rate": 0.0004490521327014218,
+ "loss": 0.4291,
+ "step": 2375
+ },
+ {
+ "epoch": 2.810304449648712,
+ "grad_norm": 0.25132936239242554,
+ "learning_rate": 0.0004431279620853081,
+ "loss": 0.4559,
+ "step": 2400
+ },
+ {
+ "epoch": 2.839578454332553,
+ "grad_norm": 0.3684830367565155,
+ "learning_rate": 0.0004372037914691943,
+ "loss": 0.3786,
+ "step": 2425
+ },
+ {
+ "epoch": 2.8688524590163933,
+ "grad_norm": 0.4008779227733612,
+ "learning_rate": 0.0004312796208530806,
+ "loss": 0.398,
+ "step": 2450
+ },
+ {
+ "epoch": 2.898126463700234,
+ "grad_norm": 0.4344983696937561,
+ "learning_rate": 0.00042535545023696683,
+ "loss": 0.4301,
+ "step": 2475
+ },
+ {
+ "epoch": 2.927400468384075,
+ "grad_norm": 0.6371504664421082,
+ "learning_rate": 0.0004194312796208531,
+ "loss": 0.4736,
+ "step": 2500
+ },
+ {
+ "epoch": 2.9566744730679155,
+ "grad_norm": 0.44420474767684937,
+ "learning_rate": 0.0004135071090047394,
+ "loss": 0.4088,
+ "step": 2525
+ },
+ {
+ "epoch": 2.9859484777517564,
+ "grad_norm": 0.3830738067626953,
+ "learning_rate": 0.00040758293838862557,
+ "loss": 0.423,
+ "step": 2550
+ },
+ {
+ "epoch": 3.0152224824355973,
+ "grad_norm": 0.4638129770755768,
+ "learning_rate": 0.00040165876777251185,
+ "loss": 0.3978,
+ "step": 2575
+ },
+ {
+ "epoch": 3.0444964871194378,
+ "grad_norm": 0.43738076090812683,
+ "learning_rate": 0.0003957345971563981,
+ "loss": 0.4102,
+ "step": 2600
+ },
+ {
+ "epoch": 3.0526932084309135,
+ "eval_loss": 0.45073771476745605,
+ "eval_runtime": 10045.043,
+ "eval_samples_per_second": 0.151,
+ "eval_steps_per_second": 0.009,
+ "eval_wer": 34.176755639038404,
+ "step": 2607
+ },
+ {
+ "epoch": 3.0737704918032787,
840
+ "grad_norm": 0.5374141335487366,
841
+ "learning_rate": 0.00038981042654028436,
842
+ "loss": 0.4432,
843
+ "step": 2625
844
+ },
845
+ {
846
+ "epoch": 3.1030444964871196,
847
+ "grad_norm": 0.4544264078140259,
848
+ "learning_rate": 0.00038388625592417064,
849
+ "loss": 0.4027,
850
+ "step": 2650
851
+ },
852
+ {
853
+ "epoch": 3.13231850117096,
854
+ "grad_norm": 0.37046387791633606,
855
+ "learning_rate": 0.00037796208530805687,
856
+ "loss": 0.3846,
857
+ "step": 2675
858
+ },
859
+ {
860
+ "epoch": 3.161592505854801,
861
+ "grad_norm": 0.44554048776626587,
862
+ "learning_rate": 0.00037203791469194315,
863
+ "loss": 0.4104,
864
+ "step": 2700
865
+ },
866
+ {
867
+ "epoch": 3.190866510538642,
868
+ "grad_norm": 0.551729679107666,
869
+ "learning_rate": 0.0003661137440758294,
870
+ "loss": 0.3826,
871
+ "step": 2725
872
+ },
873
+ {
874
+ "epoch": 3.2201405152224822,
875
+ "grad_norm": 0.4279089868068695,
876
+ "learning_rate": 0.00036018957345971566,
877
+ "loss": 0.3984,
878
+ "step": 2750
879
+ },
880
+ {
881
+ "epoch": 3.249414519906323,
882
+ "grad_norm": 0.505104660987854,
883
+ "learning_rate": 0.00035426540284360194,
884
+ "loss": 0.3973,
885
+ "step": 2775
886
+ },
887
+ {
888
+ "epoch": 3.278688524590164,
889
+ "grad_norm": 0.4596370458602905,
890
+ "learning_rate": 0.00034834123222748817,
891
+ "loss": 0.4953,
892
+ "step": 2800
893
+ },
894
+ {
895
+ "epoch": 3.307962529274005,
896
+ "grad_norm": 0.40555649995803833,
897
+ "learning_rate": 0.00034241706161137445,
898
+ "loss": 0.4195,
899
+ "step": 2825
900
+ },
901
+ {
902
+ "epoch": 3.330210772833724,
903
+ "eval_loss": 0.45748990774154663,
904
+ "eval_runtime": 10069.353,
905
+ "eval_samples_per_second": 0.151,
906
+ "eval_steps_per_second": 0.009,
907
+ "eval_wer": 36.828563761779236,
908
+ "step": 2844
909
+ },
910
+ {
911
+ "epoch": 3.3372365339578454,
912
+ "grad_norm": 0.44486892223358154,
913
+ "learning_rate": 0.0003364928909952606,
914
+ "loss": 0.4271,
915
+ "step": 2850
916
+ },
917
+ {
918
+ "epoch": 3.3665105386416863,
919
+ "grad_norm": 0.5062658786773682,
920
+ "learning_rate": 0.0003305687203791469,
921
+ "loss": 0.4694,
922
+ "step": 2875
923
+ },
924
+ {
925
+ "epoch": 3.3957845433255267,
926
+ "grad_norm": 0.4567612409591675,
927
+ "learning_rate": 0.0003246445497630332,
928
+ "loss": 0.4907,
929
+ "step": 2900
930
+ },
931
+ {
932
+ "epoch": 3.4250585480093676,
933
+ "grad_norm": 0.2982103228569031,
934
+ "learning_rate": 0.0003187203791469194,
935
+ "loss": 0.4649,
936
+ "step": 2925
937
+ },
938
+ {
939
+ "epoch": 3.4543325526932085,
940
+ "grad_norm": 0.39093852043151855,
941
+ "learning_rate": 0.0003127962085308057,
942
+ "loss": 0.3721,
943
+ "step": 2950
944
+ },
945
+ {
946
+ "epoch": 3.4836065573770494,
947
+ "grad_norm": 0.49913489818573,
948
+ "learning_rate": 0.0003068720379146919,
949
+ "loss": 0.3926,
950
+ "step": 2975
951
+ },
952
+ {
953
+ "epoch": 3.51288056206089,
954
+ "grad_norm": 0.4530438482761383,
955
+ "learning_rate": 0.0003009478672985782,
956
+ "loss": 0.4016,
957
+ "step": 3000
958
+ },
959
+ {
960
+ "epoch": 3.5421545667447307,
961
+ "grad_norm": 0.5188443064689636,
962
+ "learning_rate": 0.0002950236966824645,
963
+ "loss": 0.4079,
964
+ "step": 3025
965
+ },
966
+ {
967
+ "epoch": 3.571428571428571,
968
+ "grad_norm": 0.632804274559021,
969
+ "learning_rate": 0.0002890995260663507,
970
+ "loss": 0.4173,
971
+ "step": 3050
972
+ },
973
+ {
974
+ "epoch": 3.600702576112412,
975
+ "grad_norm": 0.38659924268722534,
976
+ "learning_rate": 0.000283175355450237,
977
+ "loss": 0.4724,
978
+ "step": 3075
979
+ },
980
+ {
981
+ "epoch": 3.607728337236534,
982
+ "eval_loss": 0.45248714089393616,
983
+ "eval_runtime": 9993.3519,
984
+ "eval_samples_per_second": 0.152,
985
+ "eval_steps_per_second": 0.01,
986
+ "eval_wer": 33.128660047669406,
987
+ "step": 3081
988
+ },
989
+ {
990
+ "epoch": 3.629976580796253,
991
+ "grad_norm": 0.4062461256980896,
992
+ "learning_rate": 0.0002772511848341232,
993
+ "loss": 0.3773,
994
+ "step": 3100
995
+ },
996
+ {
997
+ "epoch": 3.659250585480094,
998
+ "grad_norm": 0.3994309604167938,
999
+ "learning_rate": 0.0002713270142180095,
1000
+ "loss": 0.3728,
1001
+ "step": 3125
1002
+ },
1003
+ {
1004
+ "epoch": 3.6885245901639343,
1005
+ "grad_norm": 0.49005889892578125,
1006
+ "learning_rate": 0.0002654028436018958,
1007
+ "loss": 0.3888,
1008
+ "step": 3150
1009
+ },
1010
+ {
1011
+ "epoch": 3.717798594847775,
1012
+ "grad_norm": 0.5987063050270081,
1013
+ "learning_rate": 0.000259478672985782,
1014
+ "loss": 0.3343,
1015
+ "step": 3175
1016
+ },
1017
+ {
1018
+ "epoch": 3.747072599531616,
1019
+ "grad_norm": 0.4161163568496704,
1020
+ "learning_rate": 0.00025355450236966824,
1021
+ "loss": 0.3598,
1022
+ "step": 3200
1023
+ },
1024
+ {
1025
+ "epoch": 3.7763466042154565,
1026
+ "grad_norm": 0.49151715636253357,
1027
+ "learning_rate": 0.0002476303317535545,
1028
+ "loss": 0.3479,
1029
+ "step": 3225
1030
+ },
1031
+ {
1032
+ "epoch": 3.8056206088992974,
1033
+ "grad_norm": 0.48634546995162964,
1034
+ "learning_rate": 0.00024170616113744077,
1035
+ "loss": 0.418,
1036
+ "step": 3250
1037
+ },
1038
+ {
1039
+ "epoch": 3.8348946135831383,
1040
+ "grad_norm": 0.6226612329483032,
1041
+ "learning_rate": 0.000235781990521327,
1042
+ "loss": 0.4501,
1043
+ "step": 3275
1044
+ },
1045
+ {
1046
+ "epoch": 3.8641686182669788,
1047
+ "grad_norm": 0.5559201240539551,
1048
+ "learning_rate": 0.00022985781990521325,
1049
+ "loss": 0.3354,
1050
+ "step": 3300
1051
+ },
1052
+ {
1053
+ "epoch": 3.8852459016393444,
1054
+ "eval_loss": 0.38408055901527405,
1055
+ "eval_runtime": 10005.3823,
1056
+ "eval_samples_per_second": 0.152,
1057
+ "eval_steps_per_second": 0.009,
1058
+ "eval_wer": 30.916382850049335,
1059
+ "step": 3318
1060
+ },
1061
+ {
1062
+ "epoch": 3.8934426229508197,
1063
+ "grad_norm": 0.48179224133491516,
1064
+ "learning_rate": 0.00022393364928909954,
1065
+ "loss": 0.3966,
1066
+ "step": 3325
1067
+ },
1068
+ {
1069
+ "epoch": 3.9227166276346606,
1070
+ "grad_norm": 0.6447755098342896,
1071
+ "learning_rate": 0.0002180094786729858,
1072
+ "loss": 0.4289,
1073
+ "step": 3350
1074
+ },
1075
+ {
1076
+ "epoch": 3.951990632318501,
1077
+ "grad_norm": 0.5592084527015686,
1078
+ "learning_rate": 0.00021208530805687204,
1079
+ "loss": 0.4374,
1080
+ "step": 3375
1081
+ },
1082
+ {
1083
+ "epoch": 3.981264637002342,
1084
+ "grad_norm": 0.8099021315574646,
1085
+ "learning_rate": 0.0002061611374407583,
1086
+ "loss": 0.4103,
1087
+ "step": 3400
1088
+ },
1089
+ {
1090
+ "epoch": 4.010538641686183,
1091
+ "grad_norm": 0.4467124938964844,
1092
+ "learning_rate": 0.00020023696682464458,
1093
+ "loss": 0.3623,
1094
+ "step": 3425
1095
+ },
1096
+ {
1097
+ "epoch": 4.039812646370024,
1098
+ "grad_norm": 0.7924293279647827,
1099
+ "learning_rate": 0.0001943127962085308,
1100
+ "loss": 0.4256,
1101
+ "step": 3450
1102
+ },
1103
+ {
1104
+ "epoch": 4.069086651053865,
1105
+ "grad_norm": 0.5706892013549805,
1106
+ "learning_rate": 0.00018838862559241706,
1107
+ "loss": 0.4215,
1108
+ "step": 3475
1109
+ },
1110
+ {
1111
+ "epoch": 4.098360655737705,
1112
+ "grad_norm": 0.5217132568359375,
1113
+ "learning_rate": 0.00018246445497630332,
1114
+ "loss": 0.3815,
1115
+ "step": 3500
1116
+ },
1117
+ {
1118
+ "epoch": 4.1276346604215455,
1119
+ "grad_norm": 0.6682723164558411,
1120
+ "learning_rate": 0.00017654028436018957,
1121
+ "loss": 0.4053,
1122
+ "step": 3525
1123
+ },
1124
+ {
1125
+ "epoch": 4.156908665105386,
1126
+ "grad_norm": 0.34864214062690735,
1127
+ "learning_rate": 0.00017061611374407585,
1128
+ "loss": 0.341,
1129
+ "step": 3550
1130
+ },
1131
+ {
1132
+ "epoch": 4.162763466042154,
1133
+ "eval_loss": 0.3835831880569458,
1134
+ "eval_runtime": 9981.995,
1135
+ "eval_samples_per_second": 0.152,
1136
+ "eval_steps_per_second": 0.01,
1137
+ "eval_wer": 31.677117484164626,
1138
+ "step": 3555
1139
+ },
1140
+ {
1141
+ "epoch": 4.186182669789227,
1142
+ "grad_norm": 0.44574442505836487,
1143
+ "learning_rate": 0.0001646919431279621,
1144
+ "loss": 0.3327,
1145
+ "step": 3575
1146
+ },
1147
+ {
1148
+ "epoch": 4.215456674473068,
1149
+ "grad_norm": 0.3936574459075928,
1150
+ "learning_rate": 0.00015876777251184833,
1151
+ "loss": 0.3432,
1152
+ "step": 3600
1153
+ },
1154
+ {
1155
+ "epoch": 4.244730679156909,
1156
+ "grad_norm": 0.4089759290218353,
1157
+ "learning_rate": 0.0001528436018957346,
1158
+ "loss": 0.4012,
1159
+ "step": 3625
1160
+ },
1161
+ {
1162
+ "epoch": 4.274004683840749,
1163
+ "grad_norm": 0.2764008641242981,
1164
+ "learning_rate": 0.00014691943127962084,
1165
+ "loss": 0.3951,
1166
+ "step": 3650
1167
+ },
1168
+ {
1169
+ "epoch": 4.30327868852459,
1170
+ "grad_norm": 0.5552195310592651,
1171
+ "learning_rate": 0.00014099526066350712,
1172
+ "loss": 0.3913,
1173
+ "step": 3675
1174
+ },
1175
+ {
1176
+ "epoch": 4.332552693208431,
1177
+ "grad_norm": 0.5924739241600037,
1178
+ "learning_rate": 0.00013507109004739338,
1179
+ "loss": 0.4114,
1180
+ "step": 3700
1181
+ },
1182
+ {
1183
+ "epoch": 4.361826697892272,
1184
+ "grad_norm": 0.40245234966278076,
1185
+ "learning_rate": 0.00012914691943127963,
1186
+ "loss": 0.4296,
1187
+ "step": 3725
1188
+ },
1189
+ {
1190
+ "epoch": 4.391100702576113,
1191
+ "grad_norm": 0.47809380292892456,
1192
+ "learning_rate": 0.0001232227488151659,
1193
+ "loss": 0.3637,
1194
+ "step": 3750
1195
+ },
1196
+ {
1197
+ "epoch": 4.4203747072599535,
1198
+ "grad_norm": 0.5671224594116211,
1199
+ "learning_rate": 0.00011729857819905214,
1200
+ "loss": 0.3418,
1201
+ "step": 3775
1202
+ },
1203
+ {
1204
+ "epoch": 4.4402810304449645,
1205
+ "eval_loss": 0.3801835775375366,
1206
+ "eval_runtime": 9999.3716,
1207
+ "eval_samples_per_second": 0.152,
1208
+ "eval_steps_per_second": 0.01,
1209
+ "eval_wer": 30.77792278066015,
1210
+ "step": 3792
1211
+ },
1212
+ {
1213
+ "epoch": 4.4496487119437935,
1214
+ "grad_norm": 0.2669212818145752,
1215
+ "learning_rate": 0.00011137440758293838,
1216
+ "loss": 0.281,
1217
+ "step": 3800
1218
+ },
1219
+ {
1220
+ "epoch": 4.478922716627634,
1221
+ "grad_norm": 0.4983394742012024,
1222
+ "learning_rate": 0.00010545023696682465,
1223
+ "loss": 0.361,
1224
+ "step": 3825
1225
+ },
1226
+ {
1227
+ "epoch": 4.508196721311475,
1228
+ "grad_norm": 0.38378050923347473,
1229
+ "learning_rate": 9.95260663507109e-05,
1230
+ "loss": 0.3639,
1231
+ "step": 3850
1232
+ },
1233
+ {
1234
+ "epoch": 4.537470725995316,
1235
+ "grad_norm": 0.5017076134681702,
1236
+ "learning_rate": 9.360189573459716e-05,
1237
+ "loss": 0.3279,
1238
+ "step": 3875
1239
+ },
1240
+ {
1241
+ "epoch": 4.566744730679157,
1242
+ "grad_norm": 0.4742709696292877,
1243
+ "learning_rate": 8.767772511848341e-05,
1244
+ "loss": 0.4007,
1245
+ "step": 3900
1246
+ },
1247
+ {
1248
+ "epoch": 4.596018735362998,
1249
+ "grad_norm": 0.34730374813079834,
1250
+ "learning_rate": 8.175355450236967e-05,
1251
+ "loss": 0.3295,
1252
+ "step": 3925
1253
+ },
1254
+ {
1255
+ "epoch": 4.625292740046838,
1256
+ "grad_norm": 0.7860919237136841,
1257
+ "learning_rate": 7.582938388625594e-05,
1258
+ "loss": 0.3522,
1259
+ "step": 3950
1260
+ },
1261
+ {
1262
+ "epoch": 4.654566744730679,
1263
+ "grad_norm": 0.6666613221168518,
1264
+ "learning_rate": 6.990521327014218e-05,
1265
+ "loss": 0.3032,
1266
+ "step": 3975
1267
+ },
1268
+ {
1269
+ "epoch": 4.68384074941452,
1270
+ "grad_norm": 0.6350705623626709,
1271
+ "learning_rate": 6.398104265402843e-05,
1272
+ "loss": 0.3547,
1273
+ "step": 4000
1274
+ },
1275
+ {
1276
+ "epoch": 4.713114754098361,
1277
+ "grad_norm": 0.6553702354431152,
1278
+ "learning_rate": 5.80568720379147e-05,
1279
+ "loss": 0.3615,
1280
+ "step": 4025
1281
+ },
1282
+ {
1283
+ "epoch": 4.717798594847775,
1284
+ "eval_loss": 0.33885863423347473,
1285
+ "eval_runtime": 12318.746,
1286
+ "eval_samples_per_second": 0.123,
1287
+ "eval_steps_per_second": 0.008,
1288
+ "eval_wer": 29.441848022785567,
1289
+ "step": 4029
1290
+ },
1291
+ {
1292
+ "epoch": 4.742388758782202,
1293
+ "grad_norm": 0.5741427540779114,
1294
+ "learning_rate": 5.2132701421800946e-05,
1295
+ "loss": 0.307,
1296
+ "step": 4050
1297
+ },
1298
+ {
1299
+ "epoch": 4.7716627634660425,
1300
+ "grad_norm": 0.6011261343955994,
1301
+ "learning_rate": 4.62085308056872e-05,
1302
+ "loss": 0.3338,
1303
+ "step": 4075
1304
+ },
1305
+ {
1306
+ "epoch": 4.800936768149883,
1307
+ "grad_norm": 0.5157088041305542,
1308
+ "learning_rate": 4.028436018957346e-05,
1309
+ "loss": 0.3553,
1310
+ "step": 4100
1311
+ },
1312
+ {
1313
+ "epoch": 4.830210772833723,
1314
+ "grad_norm": 0.4962690472602844,
1315
+ "learning_rate": 3.4360189573459716e-05,
1316
+ "loss": 0.3092,
1317
+ "step": 4125
1318
+ },
1319
+ {
1320
+ "epoch": 4.859484777517564,
1321
+ "grad_norm": 0.5557773113250732,
1322
+ "learning_rate": 2.843601895734597e-05,
1323
+ "loss": 0.3786,
1324
+ "step": 4150
1325
+ },
1326
+ {
1327
+ "epoch": 4.888758782201405,
1328
+ "grad_norm": 0.44363346695899963,
1329
+ "learning_rate": 2.251184834123223e-05,
1330
+ "loss": 0.3852,
1331
+ "step": 4175
1332
+ },
1333
+ {
1334
+ "epoch": 4.918032786885246,
1335
+ "grad_norm": 0.6363089680671692,
1336
+ "learning_rate": 1.6587677725118486e-05,
1337
+ "loss": 0.4154,
1338
+ "step": 4200
1339
+ },
1340
+ {
1341
+ "epoch": 4.947306791569087,
1342
+ "grad_norm": 0.6191068291664124,
1343
+ "learning_rate": 1.066350710900474e-05,
1344
+ "loss": 0.383,
1345
+ "step": 4225
1346
+ },
1347
+ {
1348
+ "epoch": 4.976580796252927,
1349
+ "grad_norm": 0.5701784491539001,
1350
+ "learning_rate": 4.739336492890996e-06,
1351
+ "loss": 0.3457,
1352
+ "step": 4250
1353
+ },
1354
+ {
1355
+ "epoch": 4.995316159250585,
1356
+ "eval_loss": 0.337829053401947,
1357
+ "eval_runtime": 12275.7529,
1358
+ "eval_samples_per_second": 0.124,
1359
+ "eval_steps_per_second": 0.008,
1360
+ "eval_wer": 29.493776455963115,
1361
+ "step": 4266
1362
+ }
1363
+ ],
1364
+ "logging_steps": 25,
1365
+ "max_steps": 4270,
1366
+ "num_input_tokens_seen": 0,
1367
+ "num_train_epochs": 5,
1368
+ "save_steps": 237,
1369
+ "stateful_callbacks": {
1370
+ "TrainerControl": {
1371
+ "args": {
1372
+ "should_epoch_stop": false,
1373
+ "should_evaluate": false,
1374
+ "should_log": false,
1375
+ "should_save": true,
1376
+ "should_training_stop": false
1377
+ },
1378
+ "attributes": {}
1379
+ }
1380
+ },
1381
+ "total_flos": 7.058997654847488e+19,
1382
+ "train_batch_size": 16,
1383
+ "trial_name": null,
1384
+ "trial_params": null
1385
+ }
checkpoint-4266/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa5582a99028a63593693ed5fc6ef4b593e05c0b8394dc57197a6a05d422e4a4
+ size 5841