zhouzypaul committed on
Commit 8b49384 · verified · 1 Parent(s): af1f5b2

Upload folder using huggingface_hub
drawer-checkpoint-600/README.md ADDED
@@ -0,0 +1,202 @@
1 + ---
2 + base_model: google/paligemma-3b-pt-224
3 + library_name: peft
4 + ---
5 +
6 + # Model Card for Model ID
7 +
8 + <!-- Provide a quick summary of what the model is/does. -->
9 +
10 +
11 +
12 + ## Model Details
13 +
14 + ### Model Description
15 +
16 + <!-- Provide a longer summary of what this model is. -->
17 +
18 +
19 +
20 + - **Developed by:** [More Information Needed]
21 + - **Funded by [optional]:** [More Information Needed]
22 + - **Shared by [optional]:** [More Information Needed]
23 + - **Model type:** [More Information Needed]
24 + - **Language(s) (NLP):** [More Information Needed]
25 + - **License:** [More Information Needed]
26 + - **Finetuned from model [optional]:** [More Information Needed]
27 +
28 + ### Model Sources [optional]
29 +
30 + <!-- Provide the basic links for the model. -->
31 +
32 + - **Repository:** [More Information Needed]
33 + - **Paper [optional]:** [More Information Needed]
34 + - **Demo [optional]:** [More Information Needed]
35 +
36 + ## Uses
37 +
38 + <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39 +
40 + ### Direct Use
41 +
42 + <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43 +
44 + [More Information Needed]
45 +
46 + ### Downstream Use [optional]
47 +
48 + <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49 +
50 + [More Information Needed]
51 +
52 + ### Out-of-Scope Use
53 +
54 + <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55 +
56 + [More Information Needed]
57 +
58 + ## Bias, Risks, and Limitations
59 +
60 + <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61 +
62 + [More Information Needed]
63 +
64 + ### Recommendations
65 +
66 + <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67 +
68 + Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69 +
70 + ## How to Get Started with the Model
71 +
72 + Use the code below to get started with the model.
73 +
74 + [More Information Needed]
75 +
76 + ## Training Details
77 +
78 + ### Training Data
79 +
80 + <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81 +
82 + [More Information Needed]
83 +
84 + ### Training Procedure
85 +
86 + <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87 +
88 + #### Preprocessing [optional]
89 +
90 + [More Information Needed]
91 +
92 +
93 + #### Training Hyperparameters
94 +
95 + - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96 +
97 + #### Speeds, Sizes, Times [optional]
98 +
99 + <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100 +
101 + [More Information Needed]
102 +
103 + ## Evaluation
104 +
105 + <!-- This section describes the evaluation protocols and provides the results. -->
106 +
107 + ### Testing Data, Factors & Metrics
108 +
109 + #### Testing Data
110 +
111 + <!-- This should link to a Dataset Card if possible. -->
112 +
113 + [More Information Needed]
114 +
115 + #### Factors
116 +
117 + <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118 +
119 + [More Information Needed]
120 +
121 + #### Metrics
122 +
123 + <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124 +
125 + [More Information Needed]
126 +
127 + ### Results
128 +
129 + [More Information Needed]
130 +
131 + #### Summary
132 +
133 +
134 +
135 + ## Model Examination [optional]
136 +
137 + <!-- Relevant interpretability work for the model goes here -->
138 +
139 + [More Information Needed]
140 +
141 + ## Environmental Impact
142 +
143 + <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144 +
145 + Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146 +
147 + - **Hardware Type:** [More Information Needed]
148 + - **Hours used:** [More Information Needed]
149 + - **Cloud Provider:** [More Information Needed]
150 + - **Compute Region:** [More Information Needed]
151 + - **Carbon Emitted:** [More Information Needed]
152 +
153 + ## Technical Specifications [optional]
154 +
155 + ### Model Architecture and Objective
156 +
157 + [More Information Needed]
158 +
159 + ### Compute Infrastructure
160 +
161 + [More Information Needed]
162 +
163 + #### Hardware
164 +
165 + [More Information Needed]
166 +
167 + #### Software
168 +
169 + [More Information Needed]
170 +
171 + ## Citation [optional]
172 +
173 + <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174 +
175 + **BibTeX:**
176 +
177 + [More Information Needed]
178 +
179 + **APA:**
180 +
181 + [More Information Needed]
182 +
183 + ## Glossary [optional]
184 +
185 + <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186 +
187 + [More Information Needed]
188 +
189 + ## More Information [optional]
190 +
191 + [More Information Needed]
192 +
193 + ## Model Card Authors [optional]
194 +
195 + [More Information Needed]
196 +
197 + ## Model Card Contact
198 +
199 + [More Information Needed]
200 + ### Framework versions
201 +
202 + - PEFT 0.11.1
drawer-checkpoint-600/adapter_config.json ADDED
@@ -0,0 +1,34 @@
1 + {
2 + "alpha_pattern": {},
3 + "auto_mapping": null,
4 + "base_model_name_or_path": "google/paligemma-3b-pt-224",
5 + "bias": "none",
6 + "fan_in_fan_out": false,
7 + "inference_mode": true,
8 + "init_lora_weights": true,
9 + "layer_replication": null,
10 + "layers_pattern": null,
11 + "layers_to_transform": null,
12 + "loftq_config": {},
13 + "lora_alpha": 8,
14 + "lora_dropout": 0.0,
15 + "megatron_config": null,
16 + "megatron_core": "megatron.core",
17 + "modules_to_save": null,
18 + "peft_type": "LORA",
19 + "r": 8,
20 + "rank_pattern": {},
21 + "revision": null,
22 + "target_modules": [
23 + "o_proj",
24 + "q_proj",
25 + "v_proj",
26 + "down_proj",
27 + "gate_proj",
28 + "k_proj",
29 + "up_proj"
30 + ],
31 + "task_type": "CAUSAL_LM",
32 + "use_dora": false,
33 + "use_rslora": false
34 + }
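The adapter config above describes a standard LoRA setup: rank `r=8` and `lora_alpha=8`, so the adapter update is scaled by `alpha / r = 1.0`, and with `init_lora_weights=true` the "up" matrix starts at zero, making the adapter an identity at initialization. A minimal NumPy sketch of that forward rule, using the config's `r` and `lora_alpha` but made-up matrix sizes (the checkpoint's real weights and dimensions are not shown here):

```python
import numpy as np

# Values from adapter_config.json
r, lora_alpha = 8, 8
scaling = lora_alpha / r  # 8 / 8 = 1.0

# Hypothetical small dimensions for illustration only
d_out, d_in = 32, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen base weight (e.g. a q_proj)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable LoRA "down" matrix
B = np.zeros((d_out, r))               # trainable LoRA "up" matrix, zero-init

x = rng.normal(size=(d_in,))

# LoRA forward pass: y = W x + (alpha / r) * B (A x)
y = W @ x + scaling * (B @ (A @ x))

# Because B is zero-initialized, the adapted output equals the base output
# until training updates B.
assert np.allclose(y, W @ x)
```

This same low-rank update is applied to each module named in `target_modules` (the attention projections and the MLP gate/up/down projections), which is why the full adapter stays small relative to the 3B base model.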
drawer-checkpoint-600/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:e5eb438e0def193f7fe22309008770906a59086aa1d016780692a8117a0e045f
3 + size 45258384
drawer-checkpoint-600/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:f96e2d89b51ff8b5ed0f24e9ac18f91766977b1948e60f8d12358166cc1c4699
3 + size 23852612
drawer-checkpoint-600/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:b06889f5df6769d6be8de04e7fceacf2ec736f49de8429e7149d177ce3c94aa1
3 + size 14244
drawer-checkpoint-600/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:384cdd841f626f948844d8aaab4650f27e6eb851cb16b6453adb9abcf44fb95b
3 + size 1064
drawer-checkpoint-600/trainer_state.json ADDED
@@ -0,0 +1,2373 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 46.14545454545455,
5
+ "eval_steps": 20,
6
+ "global_step": 600,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.14545454545454545,
13
+ "grad_norm": 6.82509708404541,
14
+ "learning_rate": 2e-05,
15
+ "loss": 2.8785,
16
+ "step": 2
17
+ },
18
+ {
19
+ "epoch": 0.2909090909090909,
20
+ "grad_norm": 15.459501266479492,
21
+ "learning_rate": 1.9955947136563878e-05,
22
+ "loss": 2.7812,
23
+ "step": 4
24
+ },
25
+ {
26
+ "epoch": 0.43636363636363634,
27
+ "grad_norm": 3.881777763366699,
28
+ "learning_rate": 1.9911894273127754e-05,
29
+ "loss": 2.6871,
30
+ "step": 6
31
+ },
32
+ {
33
+ "epoch": 0.5818181818181818,
34
+ "grad_norm": 35.793495178222656,
35
+ "learning_rate": 1.986784140969163e-05,
36
+ "loss": 2.4961,
37
+ "step": 8
38
+ },
39
+ {
40
+ "epoch": 0.7272727272727273,
41
+ "grad_norm": 2.7190792560577393,
42
+ "learning_rate": 1.982378854625551e-05,
43
+ "loss": 2.4283,
44
+ "step": 10
45
+ },
46
+ {
47
+ "epoch": 0.8727272727272727,
48
+ "grad_norm": 4.550004959106445,
49
+ "learning_rate": 1.9779735682819387e-05,
50
+ "loss": 2.2243,
51
+ "step": 12
52
+ },
53
+ {
54
+ "epoch": 1.0727272727272728,
55
+ "grad_norm": 5.233625411987305,
56
+ "learning_rate": 1.9735682819383263e-05,
57
+ "loss": 2.5864,
58
+ "step": 14
59
+ },
60
+ {
61
+ "epoch": 1.2181818181818183,
62
+ "grad_norm": 4.761728763580322,
63
+ "learning_rate": 1.969162995594714e-05,
64
+ "loss": 2.2414,
65
+ "step": 16
66
+ },
67
+ {
68
+ "epoch": 1.3636363636363638,
69
+ "grad_norm": 3.82110857963562,
70
+ "learning_rate": 1.9647577092511016e-05,
71
+ "loss": 2.3064,
72
+ "step": 18
73
+ },
74
+ {
75
+ "epoch": 1.509090909090909,
76
+ "grad_norm": 3.762476682662964,
77
+ "learning_rate": 1.9603524229074892e-05,
78
+ "loss": 2.2103,
79
+ "step": 20
80
+ },
81
+ {
82
+ "epoch": 1.509090909090909,
83
+ "eval_loss": 0.6365886330604553,
84
+ "eval_runtime": 2.3807,
85
+ "eval_samples_per_second": 28.983,
86
+ "eval_steps_per_second": 3.78,
87
+ "step": 20
88
+ },
89
+ {
90
+ "epoch": 1.6545454545454545,
91
+ "grad_norm": 11.473240852355957,
92
+ "learning_rate": 1.955947136563877e-05,
93
+ "loss": 1.9572,
94
+ "step": 22
95
+ },
96
+ {
97
+ "epoch": 1.8,
98
+ "grad_norm": 2.828364372253418,
99
+ "learning_rate": 1.9515418502202645e-05,
100
+ "loss": 2.0745,
101
+ "step": 24
102
+ },
103
+ {
104
+ "epoch": 1.9454545454545453,
105
+ "grad_norm": 5.521190166473389,
106
+ "learning_rate": 1.947136563876652e-05,
107
+ "loss": 1.996,
108
+ "step": 26
109
+ },
110
+ {
111
+ "epoch": 2.1454545454545455,
112
+ "grad_norm": 2.7916009426116943,
113
+ "learning_rate": 1.9427312775330398e-05,
114
+ "loss": 2.2481,
115
+ "step": 28
116
+ },
117
+ {
118
+ "epoch": 2.290909090909091,
119
+ "grad_norm": 3.057888984680176,
120
+ "learning_rate": 1.9383259911894274e-05,
121
+ "loss": 1.7869,
122
+ "step": 30
123
+ },
124
+ {
125
+ "epoch": 2.4363636363636365,
126
+ "grad_norm": 4.96619176864624,
127
+ "learning_rate": 1.933920704845815e-05,
128
+ "loss": 2.0987,
129
+ "step": 32
130
+ },
131
+ {
132
+ "epoch": 2.581818181818182,
133
+ "grad_norm": 12.586703300476074,
134
+ "learning_rate": 1.9295154185022027e-05,
135
+ "loss": 1.6259,
136
+ "step": 34
137
+ },
138
+ {
139
+ "epoch": 2.7272727272727275,
140
+ "grad_norm": 3.439871311187744,
141
+ "learning_rate": 1.9251101321585906e-05,
142
+ "loss": 1.8023,
143
+ "step": 36
144
+ },
145
+ {
146
+ "epoch": 2.8727272727272726,
147
+ "grad_norm": 3.031771183013916,
148
+ "learning_rate": 1.9207048458149783e-05,
149
+ "loss": 1.9268,
150
+ "step": 38
151
+ },
152
+ {
153
+ "epoch": 3.0727272727272728,
154
+ "grad_norm": 3.8170132637023926,
155
+ "learning_rate": 1.916299559471366e-05,
156
+ "loss": 1.9151,
157
+ "step": 40
158
+ },
159
+ {
160
+ "epoch": 3.0727272727272728,
161
+ "eval_loss": 0.4257761538028717,
162
+ "eval_runtime": 2.4761,
163
+ "eval_samples_per_second": 27.867,
164
+ "eval_steps_per_second": 3.635,
165
+ "step": 40
166
+ },
167
+ {
168
+ "epoch": 3.2181818181818183,
169
+ "grad_norm": 3.2115018367767334,
170
+ "learning_rate": 1.9118942731277536e-05,
171
+ "loss": 1.7152,
172
+ "step": 42
173
+ },
174
+ {
175
+ "epoch": 3.3636363636363638,
176
+ "grad_norm": 6.737055778503418,
177
+ "learning_rate": 1.9074889867841412e-05,
178
+ "loss": 1.6732,
179
+ "step": 44
180
+ },
181
+ {
182
+ "epoch": 3.509090909090909,
183
+ "grad_norm": 2.8172948360443115,
184
+ "learning_rate": 1.9030837004405288e-05,
185
+ "loss": 1.4994,
186
+ "step": 46
187
+ },
188
+ {
189
+ "epoch": 3.6545454545454543,
190
+ "grad_norm": 3.6704299449920654,
191
+ "learning_rate": 1.8986784140969165e-05,
192
+ "loss": 1.463,
193
+ "step": 48
194
+ },
195
+ {
196
+ "epoch": 3.8,
197
+ "grad_norm": 7.444520473480225,
198
+ "learning_rate": 1.894273127753304e-05,
199
+ "loss": 1.3427,
200
+ "step": 50
201
+ },
202
+ {
203
+ "epoch": 3.9454545454545453,
204
+ "grad_norm": 4.045940399169922,
205
+ "learning_rate": 1.8898678414096917e-05,
206
+ "loss": 1.4332,
207
+ "step": 52
208
+ },
209
+ {
210
+ "epoch": 4.1454545454545455,
211
+ "grad_norm": 37.547218322753906,
212
+ "learning_rate": 1.8854625550660794e-05,
213
+ "loss": 1.6301,
214
+ "step": 54
215
+ },
216
+ {
217
+ "epoch": 4.290909090909091,
218
+ "grad_norm": 4.190737247467041,
219
+ "learning_rate": 1.881057268722467e-05,
220
+ "loss": 1.1717,
221
+ "step": 56
222
+ },
223
+ {
224
+ "epoch": 4.4363636363636365,
225
+ "grad_norm": 3.416926622390747,
226
+ "learning_rate": 1.8766519823788546e-05,
227
+ "loss": 1.4066,
228
+ "step": 58
229
+ },
230
+ {
231
+ "epoch": 4.581818181818182,
232
+ "grad_norm": 7.9970622062683105,
233
+ "learning_rate": 1.8722466960352423e-05,
234
+ "loss": 1.0832,
235
+ "step": 60
236
+ },
237
+ {
238
+ "epoch": 4.581818181818182,
239
+ "eval_loss": 0.2970159649848938,
240
+ "eval_runtime": 2.4051,
241
+ "eval_samples_per_second": 28.689,
242
+ "eval_steps_per_second": 3.742,
243
+ "step": 60
244
+ },
245
+ {
246
+ "epoch": 4.7272727272727275,
247
+ "grad_norm": 4.826462268829346,
248
+ "learning_rate": 1.8678414096916303e-05,
249
+ "loss": 1.1102,
250
+ "step": 62
251
+ },
252
+ {
253
+ "epoch": 4.872727272727273,
254
+ "grad_norm": 5.472512245178223,
255
+ "learning_rate": 1.863436123348018e-05,
256
+ "loss": 0.9379,
257
+ "step": 64
258
+ },
259
+ {
260
+ "epoch": 5.072727272727272,
261
+ "grad_norm": 14.63734245300293,
262
+ "learning_rate": 1.8590308370044055e-05,
263
+ "loss": 1.3147,
264
+ "step": 66
265
+ },
266
+ {
267
+ "epoch": 5.218181818181818,
268
+ "grad_norm": 5.167418479919434,
269
+ "learning_rate": 1.854625550660793e-05,
270
+ "loss": 1.3131,
271
+ "step": 68
272
+ },
273
+ {
274
+ "epoch": 5.363636363636363,
275
+ "grad_norm": 7.9379706382751465,
276
+ "learning_rate": 1.8502202643171808e-05,
277
+ "loss": 1.4852,
278
+ "step": 70
279
+ },
280
+ {
281
+ "epoch": 5.509090909090909,
282
+ "grad_norm": 9.374183654785156,
283
+ "learning_rate": 1.8458149779735684e-05,
284
+ "loss": 0.9271,
285
+ "step": 72
286
+ },
287
+ {
288
+ "epoch": 5.654545454545454,
289
+ "grad_norm": 3.153024673461914,
290
+ "learning_rate": 1.841409691629956e-05,
291
+ "loss": 1.1132,
292
+ "step": 74
293
+ },
294
+ {
295
+ "epoch": 5.8,
296
+ "grad_norm": 9.691145896911621,
297
+ "learning_rate": 1.8370044052863437e-05,
298
+ "loss": 0.8964,
299
+ "step": 76
300
+ },
301
+ {
302
+ "epoch": 5.945454545454545,
303
+ "grad_norm": 10.270374298095703,
304
+ "learning_rate": 1.8325991189427313e-05,
305
+ "loss": 0.7245,
306
+ "step": 78
307
+ },
308
+ {
309
+ "epoch": 6.1454545454545455,
310
+ "grad_norm": 5.091557025909424,
311
+ "learning_rate": 1.828193832599119e-05,
312
+ "loss": 1.0971,
313
+ "step": 80
314
+ },
315
+ {
316
+ "epoch": 6.1454545454545455,
317
+ "eval_loss": 0.22357036173343658,
318
+ "eval_runtime": 2.4761,
319
+ "eval_samples_per_second": 27.867,
320
+ "eval_steps_per_second": 3.635,
321
+ "step": 80
322
+ },
323
+ {
324
+ "epoch": 6.290909090909091,
325
+ "grad_norm": 3.592783212661743,
326
+ "learning_rate": 1.8237885462555066e-05,
327
+ "loss": 1.2351,
328
+ "step": 82
329
+ },
330
+ {
331
+ "epoch": 6.4363636363636365,
332
+ "grad_norm": 5.458250999450684,
333
+ "learning_rate": 1.8193832599118942e-05,
334
+ "loss": 1.1667,
335
+ "step": 84
336
+ },
337
+ {
338
+ "epoch": 6.581818181818182,
339
+ "grad_norm": 7.37303352355957,
340
+ "learning_rate": 1.814977973568282e-05,
341
+ "loss": 1.2361,
342
+ "step": 86
343
+ },
344
+ {
345
+ "epoch": 6.7272727272727275,
346
+ "grad_norm": 11.211993217468262,
347
+ "learning_rate": 1.81057268722467e-05,
348
+ "loss": 0.721,
349
+ "step": 88
350
+ },
351
+ {
352
+ "epoch": 6.872727272727273,
353
+ "grad_norm": 7.392971038818359,
354
+ "learning_rate": 1.8061674008810575e-05,
355
+ "loss": 0.8399,
356
+ "step": 90
357
+ },
358
+ {
359
+ "epoch": 7.072727272727272,
360
+ "grad_norm": 4.306834697723389,
361
+ "learning_rate": 1.801762114537445e-05,
362
+ "loss": 1.1074,
363
+ "step": 92
364
+ },
365
+ {
366
+ "epoch": 7.218181818181818,
367
+ "grad_norm": 8.093742370605469,
368
+ "learning_rate": 1.7973568281938328e-05,
369
+ "loss": 0.9737,
370
+ "step": 94
371
+ },
372
+ {
373
+ "epoch": 7.363636363636363,
374
+ "grad_norm": 6.201341152191162,
375
+ "learning_rate": 1.7929515418502204e-05,
376
+ "loss": 0.7175,
377
+ "step": 96
378
+ },
379
+ {
380
+ "epoch": 7.509090909090909,
381
+ "grad_norm": 13.637624740600586,
382
+ "learning_rate": 1.788546255506608e-05,
383
+ "loss": 0.89,
384
+ "step": 98
385
+ },
386
+ {
387
+ "epoch": 7.654545454545454,
388
+ "grad_norm": 9.778822898864746,
389
+ "learning_rate": 1.784140969162996e-05,
390
+ "loss": 1.3698,
391
+ "step": 100
392
+ },
393
+ {
394
+ "epoch": 7.654545454545454,
395
+ "eval_loss": 0.16181005537509918,
396
+ "eval_runtime": 2.2757,
397
+ "eval_samples_per_second": 30.32,
398
+ "eval_steps_per_second": 3.955,
399
+ "step": 100
400
+ },
401
+ {
402
+ "epoch": 7.8,
403
+ "grad_norm": 7.181942939758301,
404
+ "learning_rate": 1.7797356828193833e-05,
405
+ "loss": 1.0689,
406
+ "step": 102
407
+ },
408
+ {
409
+ "epoch": 7.945454545454545,
410
+ "grad_norm": 6.704788684844971,
411
+ "learning_rate": 1.775330396475771e-05,
412
+ "loss": 0.8812,
413
+ "step": 104
414
+ },
415
+ {
416
+ "epoch": 8.145454545454545,
417
+ "grad_norm": 5.125433921813965,
418
+ "learning_rate": 1.7709251101321586e-05,
419
+ "loss": 1.3281,
420
+ "step": 106
421
+ },
422
+ {
423
+ "epoch": 8.290909090909091,
424
+ "grad_norm": 5.149466037750244,
425
+ "learning_rate": 1.7665198237885462e-05,
426
+ "loss": 0.7437,
427
+ "step": 108
428
+ },
429
+ {
430
+ "epoch": 8.436363636363636,
431
+ "grad_norm": 3.999161958694458,
432
+ "learning_rate": 1.762114537444934e-05,
433
+ "loss": 0.8021,
434
+ "step": 110
435
+ },
436
+ {
437
+ "epoch": 8.581818181818182,
438
+ "grad_norm": 7.346274375915527,
439
+ "learning_rate": 1.7577092511013215e-05,
440
+ "loss": 0.9234,
441
+ "step": 112
442
+ },
443
+ {
444
+ "epoch": 8.727272727272727,
445
+ "grad_norm": 5.139860153198242,
446
+ "learning_rate": 1.7533039647577095e-05,
447
+ "loss": 0.7378,
448
+ "step": 114
449
+ },
450
+ {
451
+ "epoch": 8.872727272727273,
452
+ "grad_norm": 6.312108993530273,
453
+ "learning_rate": 1.748898678414097e-05,
454
+ "loss": 0.9809,
455
+ "step": 116
456
+ },
457
+ {
458
+ "epoch": 9.072727272727272,
459
+ "grad_norm": 6.904297351837158,
460
+ "learning_rate": 1.7444933920704847e-05,
461
+ "loss": 0.8955,
462
+ "step": 118
463
+ },
464
+ {
465
+ "epoch": 9.218181818181819,
466
+ "grad_norm": 3.7636210918426514,
467
+ "learning_rate": 1.7400881057268724e-05,
468
+ "loss": 0.8585,
469
+ "step": 120
470
+ },
471
+ {
472
+ "epoch": 9.218181818181819,
473
+ "eval_loss": 0.1344047635793686,
474
+ "eval_runtime": 2.3846,
475
+ "eval_samples_per_second": 28.935,
476
+ "eval_steps_per_second": 3.774,
477
+ "step": 120
478
+ },
479
+ {
480
+ "epoch": 9.363636363636363,
481
+ "grad_norm": 5.598763465881348,
482
+ "learning_rate": 1.73568281938326e-05,
483
+ "loss": 0.7672,
484
+ "step": 122
485
+ },
486
+ {
487
+ "epoch": 9.50909090909091,
488
+ "grad_norm": 6.591607570648193,
489
+ "learning_rate": 1.7312775330396476e-05,
490
+ "loss": 0.8267,
491
+ "step": 124
492
+ },
493
+ {
494
+ "epoch": 9.654545454545454,
495
+ "grad_norm": 5.553727626800537,
496
+ "learning_rate": 1.7268722466960356e-05,
497
+ "loss": 0.8901,
498
+ "step": 126
499
+ },
500
+ {
501
+ "epoch": 9.8,
502
+ "grad_norm": 4.232337474822998,
503
+ "learning_rate": 1.7224669603524232e-05,
504
+ "loss": 0.537,
505
+ "step": 128
506
+ },
507
+ {
508
+ "epoch": 9.945454545454545,
509
+ "grad_norm": 4.437870979309082,
510
+ "learning_rate": 1.718061674008811e-05,
511
+ "loss": 0.8119,
512
+ "step": 130
513
+ },
514
+ {
515
+ "epoch": 10.145454545454545,
516
+ "grad_norm": 4.003602981567383,
517
+ "learning_rate": 1.7136563876651985e-05,
518
+ "loss": 0.7023,
519
+ "step": 132
520
+ },
521
+ {
522
+ "epoch": 10.290909090909091,
523
+ "grad_norm": 7.0120530128479,
524
+ "learning_rate": 1.709251101321586e-05,
525
+ "loss": 0.9343,
526
+ "step": 134
527
+ },
528
+ {
529
+ "epoch": 10.436363636363636,
530
+ "grad_norm": 4.784329891204834,
531
+ "learning_rate": 1.7048458149779738e-05,
532
+ "loss": 0.8812,
533
+ "step": 136
534
+ },
535
+ {
536
+ "epoch": 10.581818181818182,
537
+ "grad_norm": 5.506158351898193,
538
+ "learning_rate": 1.7004405286343614e-05,
539
+ "loss": 0.733,
540
+ "step": 138
541
+ },
542
+ {
543
+ "epoch": 10.727272727272727,
544
+ "grad_norm": 6.800877571105957,
545
+ "learning_rate": 1.696035242290749e-05,
546
+ "loss": 0.6621,
547
+ "step": 140
548
+ },
549
+ {
550
+ "epoch": 10.727272727272727,
551
+ "eval_loss": 0.12524102628231049,
552
+ "eval_runtime": 2.3673,
553
+ "eval_samples_per_second": 29.147,
554
+ "eval_steps_per_second": 3.802,
555
+ "step": 140
556
+ },
557
+ {
558
+ "epoch": 10.872727272727273,
559
+ "grad_norm": 5.377838134765625,
560
+ "learning_rate": 1.6916299559471367e-05,
561
+ "loss": 0.4691,
562
+ "step": 142
563
+ },
564
+ {
565
+ "epoch": 11.072727272727272,
566
+ "grad_norm": 26.6933650970459,
567
+ "learning_rate": 1.6872246696035243e-05,
568
+ "loss": 0.5977,
569
+ "step": 144
570
+ },
571
+ {
572
+ "epoch": 11.218181818181819,
573
+ "grad_norm": 10.951379776000977,
574
+ "learning_rate": 1.682819383259912e-05,
575
+ "loss": 0.6514,
576
+ "step": 146
577
+ },
578
+ {
579
+ "epoch": 11.363636363636363,
580
+ "grad_norm": 8.1903657913208,
581
+ "learning_rate": 1.6784140969162996e-05,
582
+ "loss": 0.7592,
583
+ "step": 148
584
+ },
585
+ {
586
+ "epoch": 11.50909090909091,
587
+ "grad_norm": 5.440346717834473,
588
+ "learning_rate": 1.6740088105726872e-05,
589
+ "loss": 0.6327,
590
+ "step": 150
591
+ },
592
+ {
593
+ "epoch": 11.654545454545454,
594
+ "grad_norm": 6.1671929359436035,
595
+ "learning_rate": 1.6696035242290752e-05,
596
+ "loss": 0.688,
597
+ "step": 152
598
+ },
599
+ {
600
+ "epoch": 11.8,
601
+ "grad_norm": 7.2087321281433105,
602
+ "learning_rate": 1.665198237885463e-05,
603
+ "loss": 0.713,
604
+ "step": 154
605
+ },
606
+ {
607
+ "epoch": 11.945454545454545,
608
+ "grad_norm": 6.372963905334473,
609
+ "learning_rate": 1.6607929515418505e-05,
610
+ "loss": 0.6853,
611
+ "step": 156
612
+ },
613
+ {
614
+ "epoch": 12.145454545454545,
615
+ "grad_norm": 8.625984191894531,
616
+ "learning_rate": 1.656387665198238e-05,
617
+ "loss": 0.6097,
618
+ "step": 158
619
+ },
620
+ {
621
+ "epoch": 12.290909090909091,
622
+ "grad_norm": 5.289595127105713,
623
+ "learning_rate": 1.6519823788546258e-05,
624
+ "loss": 0.6303,
625
+ "step": 160
626
+ },
627
+ {
628
+ "epoch": 12.290909090909091,
629
+ "eval_loss": 0.1394082009792328,
630
+ "eval_runtime": 2.6171,
631
+ "eval_samples_per_second": 26.365,
632
+ "eval_steps_per_second": 3.439,
633
+ "step": 160
634
+ },
635
+ {
636
+ "epoch": 12.436363636363636,
637
+ "grad_norm": 6.001311302185059,
638
+ "learning_rate": 1.6475770925110134e-05,
639
+ "loss": 0.5984,
640
+ "step": 162
641
+ },
642
+ {
643
+ "epoch": 12.581818181818182,
644
+ "grad_norm": 5.441368579864502,
645
+ "learning_rate": 1.643171806167401e-05,
646
+ "loss": 0.6642,
647
+ "step": 164
648
+ },
649
+ {
650
+ "epoch": 12.727272727272727,
651
+ "grad_norm": 6.692780494689941,
652
+ "learning_rate": 1.6387665198237887e-05,
653
+ "loss": 0.8171,
654
+ "step": 166
655
+ },
656
+ {
657
+ "epoch": 12.872727272727273,
658
+ "grad_norm": 5.548882961273193,
659
+ "learning_rate": 1.6343612334801763e-05,
660
+ "loss": 0.5259,
661
+ "step": 168
662
+ },
663
+ {
664
+ "epoch": 13.072727272727272,
665
+ "grad_norm": 5.942930221557617,
666
+ "learning_rate": 1.629955947136564e-05,
667
+ "loss": 0.7613,
+ "step": 170
+ },
+ {
+ "epoch": 13.218181818181819,
+ "grad_norm": 6.03953218460083,
+ "learning_rate": 1.6255506607929516e-05,
+ "loss": 0.6904,
+ "step": 172
+ },
+ {
+ "epoch": 13.363636363636363,
+ "grad_norm": 6.526908874511719,
+ "learning_rate": 1.6211453744493392e-05,
+ "loss": 0.8909,
+ "step": 174
+ },
+ {
+ "epoch": 13.50909090909091,
+ "grad_norm": 8.379182815551758,
+ "learning_rate": 1.616740088105727e-05,
+ "loss": 0.482,
+ "step": 176
+ },
+ {
+ "epoch": 13.654545454545454,
+ "grad_norm": 7.846014022827148,
+ "learning_rate": 1.6123348017621148e-05,
+ "loss": 0.7162,
+ "step": 178
+ },
+ {
+ "epoch": 13.8,
+ "grad_norm": 6.574687957763672,
+ "learning_rate": 1.6079295154185025e-05,
+ "loss": 0.7616,
+ "step": 180
+ },
+ {
+ "epoch": 13.8,
+ "eval_loss": 0.12109341472387314,
+ "eval_runtime": 2.3562,
+ "eval_samples_per_second": 29.285,
+ "eval_steps_per_second": 3.82,
+ "step": 180
+ },
+ {
+ "epoch": 13.945454545454545,
+ "grad_norm": 7.361100673675537,
+ "learning_rate": 1.60352422907489e-05,
+ "loss": 0.4337,
+ "step": 182
+ },
+ {
+ "epoch": 14.145454545454545,
+ "grad_norm": 6.376481533050537,
+ "learning_rate": 1.5991189427312777e-05,
+ "loss": 0.8238,
+ "step": 184
+ },
+ {
+ "epoch": 14.290909090909091,
+ "grad_norm": 10.678642272949219,
+ "learning_rate": 1.5947136563876654e-05,
+ "loss": 0.8049,
+ "step": 186
+ },
+ {
+ "epoch": 14.436363636363636,
+ "grad_norm": 4.040198802947998,
+ "learning_rate": 1.590308370044053e-05,
+ "loss": 0.2872,
+ "step": 188
+ },
+ {
+ "epoch": 14.581818181818182,
+ "grad_norm": 5.497896194458008,
+ "learning_rate": 1.5859030837004406e-05,
+ "loss": 0.7336,
+ "step": 190
+ },
+ {
+ "epoch": 14.727272727272727,
+ "grad_norm": 5.201958656311035,
+ "learning_rate": 1.5814977973568283e-05,
+ "loss": 0.5283,
+ "step": 192
+ },
+ {
+ "epoch": 14.872727272727273,
+ "grad_norm": 7.142544269561768,
+ "learning_rate": 1.577092511013216e-05,
+ "loss": 0.5861,
+ "step": 194
+ },
+ {
+ "epoch": 15.072727272727272,
+ "grad_norm": 8.23885440826416,
+ "learning_rate": 1.5726872246696035e-05,
+ "loss": 0.6023,
+ "step": 196
+ },
+ {
+ "epoch": 15.218181818181819,
+ "grad_norm": 8.388402938842773,
+ "learning_rate": 1.5682819383259912e-05,
+ "loss": 0.5845,
+ "step": 198
+ },
+ {
+ "epoch": 15.363636363636363,
+ "grad_norm": 4.7502546310424805,
+ "learning_rate": 1.5638766519823788e-05,
+ "loss": 0.5452,
+ "step": 200
+ },
+ {
+ "epoch": 15.363636363636363,
+ "eval_loss": 0.10003522038459778,
+ "eval_runtime": 2.4461,
+ "eval_samples_per_second": 28.208,
+ "eval_steps_per_second": 3.679,
+ "step": 200
+ },
+ {
+ "epoch": 15.50909090909091,
+ "grad_norm": 5.821452617645264,
+ "learning_rate": 1.5594713656387664e-05,
+ "loss": 0.4954,
+ "step": 202
+ },
+ {
+ "epoch": 15.654545454545454,
+ "grad_norm": 5.848851203918457,
+ "learning_rate": 1.5550660792951544e-05,
+ "loss": 0.4935,
+ "step": 204
+ },
+ {
+ "epoch": 15.8,
+ "grad_norm": 21.797243118286133,
+ "learning_rate": 1.550660792951542e-05,
+ "loss": 0.536,
+ "step": 206
+ },
+ {
+ "epoch": 15.945454545454545,
+ "grad_norm": 6.6740336418151855,
+ "learning_rate": 1.5462555066079297e-05,
+ "loss": 0.6374,
+ "step": 208
+ },
+ {
+ "epoch": 16.145454545454545,
+ "grad_norm": 7.30694055557251,
+ "learning_rate": 1.5418502202643173e-05,
+ "loss": 0.5413,
+ "step": 210
+ },
+ {
+ "epoch": 16.29090909090909,
+ "grad_norm": 4.78403902053833,
+ "learning_rate": 1.537444933920705e-05,
+ "loss": 0.6679,
+ "step": 212
+ },
+ {
+ "epoch": 16.436363636363637,
+ "grad_norm": 3.9313669204711914,
+ "learning_rate": 1.5330396475770926e-05,
+ "loss": 0.3608,
+ "step": 214
+ },
+ {
+ "epoch": 16.581818181818182,
+ "grad_norm": 3.9479591846466064,
+ "learning_rate": 1.5286343612334802e-05,
+ "loss": 0.5477,
+ "step": 216
+ },
+ {
+ "epoch": 16.727272727272727,
+ "grad_norm": 8.678921699523926,
+ "learning_rate": 1.524229074889868e-05,
+ "loss": 0.4708,
+ "step": 218
+ },
+ {
+ "epoch": 16.87272727272727,
+ "grad_norm": 12.411300659179688,
+ "learning_rate": 1.5198237885462557e-05,
+ "loss": 0.6094,
+ "step": 220
+ },
+ {
+ "epoch": 16.87272727272727,
+ "eval_loss": 0.12052173912525177,
+ "eval_runtime": 2.4108,
+ "eval_samples_per_second": 28.622,
+ "eval_steps_per_second": 3.733,
+ "step": 220
+ },
+ {
+ "epoch": 17.072727272727274,
+ "grad_norm": 7.06217098236084,
+ "learning_rate": 1.5154185022026433e-05,
+ "loss": 0.4908,
+ "step": 222
+ },
+ {
+ "epoch": 17.21818181818182,
+ "grad_norm": 9.333539962768555,
+ "learning_rate": 1.511013215859031e-05,
+ "loss": 0.4848,
+ "step": 224
+ },
+ {
+ "epoch": 17.363636363636363,
+ "grad_norm": 9.074432373046875,
+ "learning_rate": 1.5066079295154186e-05,
+ "loss": 0.5138,
+ "step": 226
+ },
+ {
+ "epoch": 17.509090909090908,
+ "grad_norm": 7.402597427368164,
+ "learning_rate": 1.5022026431718062e-05,
+ "loss": 0.5642,
+ "step": 228
+ },
+ {
+ "epoch": 17.654545454545456,
+ "grad_norm": 8.179388046264648,
+ "learning_rate": 1.497797356828194e-05,
+ "loss": 0.6335,
+ "step": 230
+ },
+ {
+ "epoch": 17.8,
+ "grad_norm": 4.337785720825195,
+ "learning_rate": 1.4933920704845817e-05,
+ "loss": 0.507,
+ "step": 232
+ },
+ {
+ "epoch": 17.945454545454545,
+ "grad_norm": 6.7688212394714355,
+ "learning_rate": 1.4889867841409693e-05,
+ "loss": 0.2764,
+ "step": 234
+ },
+ {
+ "epoch": 18.145454545454545,
+ "grad_norm": 10.437609672546387,
+ "learning_rate": 1.484581497797357e-05,
+ "loss": 0.4297,
+ "step": 236
+ },
+ {
+ "epoch": 18.29090909090909,
+ "grad_norm": 4.999331474304199,
+ "learning_rate": 1.4801762114537446e-05,
+ "loss": 0.2667,
+ "step": 238
+ },
+ {
+ "epoch": 18.436363636363637,
+ "grad_norm": 6.584486484527588,
+ "learning_rate": 1.4757709251101322e-05,
+ "loss": 0.7202,
+ "step": 240
+ },
+ {
+ "epoch": 18.436363636363637,
+ "eval_loss": 0.10028823465108871,
+ "eval_runtime": 2.4195,
+ "eval_samples_per_second": 28.518,
+ "eval_steps_per_second": 3.72,
+ "step": 240
+ },
+ {
+ "epoch": 18.581818181818182,
+ "grad_norm": 6.803129196166992,
+ "learning_rate": 1.47136563876652e-05,
+ "loss": 0.5936,
+ "step": 242
+ },
+ {
+ "epoch": 18.727272727272727,
+ "grad_norm": 6.420366287231445,
+ "learning_rate": 1.4669603524229076e-05,
+ "loss": 0.3529,
+ "step": 244
+ },
+ {
+ "epoch": 18.87272727272727,
+ "grad_norm": 11.502973556518555,
+ "learning_rate": 1.4625550660792953e-05,
+ "loss": 0.468,
+ "step": 246
+ },
+ {
+ "epoch": 19.072727272727274,
+ "grad_norm": 8.540968894958496,
+ "learning_rate": 1.458149779735683e-05,
+ "loss": 0.4884,
+ "step": 248
+ },
+ {
+ "epoch": 19.21818181818182,
+ "grad_norm": 7.930655479431152,
+ "learning_rate": 1.4537444933920706e-05,
+ "loss": 0.4504,
+ "step": 250
+ },
+ {
+ "epoch": 19.363636363636363,
+ "grad_norm": 6.510903835296631,
+ "learning_rate": 1.4493392070484582e-05,
+ "loss": 0.2514,
+ "step": 252
+ },
+ {
+ "epoch": 19.509090909090908,
+ "grad_norm": 15.958013534545898,
+ "learning_rate": 1.4449339207048458e-05,
+ "loss": 0.4331,
+ "step": 254
+ },
+ {
+ "epoch": 19.654545454545456,
+ "grad_norm": 6.7750468254089355,
+ "learning_rate": 1.4405286343612336e-05,
+ "loss": 0.303,
+ "step": 256
+ },
+ {
+ "epoch": 19.8,
+ "grad_norm": 5.576696872711182,
+ "learning_rate": 1.4361233480176213e-05,
+ "loss": 0.3676,
+ "step": 258
+ },
+ {
+ "epoch": 19.945454545454545,
+ "grad_norm": 5.2856340408325195,
+ "learning_rate": 1.4317180616740089e-05,
+ "loss": 0.4044,
+ "step": 260
+ },
+ {
+ "epoch": 19.945454545454545,
+ "eval_loss": 0.04984922334551811,
+ "eval_runtime": 2.1717,
+ "eval_samples_per_second": 31.773,
+ "eval_steps_per_second": 4.144,
+ "step": 260
+ },
+ {
+ "epoch": 20.145454545454545,
+ "grad_norm": 8.463933944702148,
+ "learning_rate": 1.4273127753303965e-05,
+ "loss": 0.3656,
+ "step": 262
+ },
+ {
+ "epoch": 20.29090909090909,
+ "grad_norm": 7.155218124389648,
+ "learning_rate": 1.4229074889867842e-05,
+ "loss": 0.3901,
+ "step": 264
+ },
+ {
+ "epoch": 20.436363636363637,
+ "grad_norm": 6.042890548706055,
+ "learning_rate": 1.4185022026431718e-05,
+ "loss": 0.345,
+ "step": 266
+ },
+ {
+ "epoch": 20.581818181818182,
+ "grad_norm": 6.153244495391846,
+ "learning_rate": 1.4140969162995596e-05,
+ "loss": 0.3879,
+ "step": 268
+ },
+ {
+ "epoch": 20.727272727272727,
+ "grad_norm": 8.221923828125,
+ "learning_rate": 1.4096916299559472e-05,
+ "loss": 0.2643,
+ "step": 270
+ },
+ {
+ "epoch": 20.87272727272727,
+ "grad_norm": 8.140462875366211,
+ "learning_rate": 1.4052863436123349e-05,
+ "loss": 0.2997,
+ "step": 272
+ },
+ {
+ "epoch": 21.072727272727274,
+ "grad_norm": 7.747637748718262,
+ "learning_rate": 1.4008810572687225e-05,
+ "loss": 0.3217,
+ "step": 274
+ },
+ {
+ "epoch": 21.21818181818182,
+ "grad_norm": 7.08354377746582,
+ "learning_rate": 1.3964757709251102e-05,
+ "loss": 0.4,
+ "step": 276
+ },
+ {
+ "epoch": 21.363636363636363,
+ "grad_norm": 9.22011947631836,
+ "learning_rate": 1.3920704845814978e-05,
+ "loss": 0.4179,
+ "step": 278
+ },
+ {
+ "epoch": 21.509090909090908,
+ "grad_norm": 5.6526570320129395,
+ "learning_rate": 1.3876651982378854e-05,
+ "loss": 0.3362,
+ "step": 280
+ },
+ {
+ "epoch": 21.509090909090908,
+ "eval_loss": 0.08825496584177017,
+ "eval_runtime": 2.4845,
+ "eval_samples_per_second": 27.772,
+ "eval_steps_per_second": 3.622,
+ "step": 280
+ },
+ {
+ "epoch": 21.654545454545456,
+ "grad_norm": 7.663400173187256,
+ "learning_rate": 1.3832599118942734e-05,
+ "loss": 0.3132,
+ "step": 282
+ },
+ {
+ "epoch": 21.8,
+ "grad_norm": 21.708505630493164,
+ "learning_rate": 1.378854625550661e-05,
+ "loss": 0.3784,
+ "step": 284
+ },
+ {
+ "epoch": 21.945454545454545,
+ "grad_norm": 14.356860160827637,
+ "learning_rate": 1.3744493392070487e-05,
+ "loss": 0.4086,
+ "step": 286
+ },
+ {
+ "epoch": 22.145454545454545,
+ "grad_norm": 7.74119234085083,
+ "learning_rate": 1.3700440528634363e-05,
+ "loss": 0.4617,
+ "step": 288
+ },
+ {
+ "epoch": 22.29090909090909,
+ "grad_norm": 6.34710168838501,
+ "learning_rate": 1.3656387665198238e-05,
+ "loss": 0.4295,
+ "step": 290
+ },
+ {
+ "epoch": 22.436363636363637,
+ "grad_norm": 4.869462966918945,
+ "learning_rate": 1.3612334801762114e-05,
+ "loss": 0.189,
+ "step": 292
+ },
+ {
+ "epoch": 22.581818181818182,
+ "grad_norm": 7.1519060134887695,
+ "learning_rate": 1.3568281938325994e-05,
+ "loss": 0.3282,
+ "step": 294
+ },
+ {
+ "epoch": 22.727272727272727,
+ "grad_norm": 6.199375152587891,
+ "learning_rate": 1.352422907488987e-05,
+ "loss": 0.2359,
+ "step": 296
+ },
+ {
+ "epoch": 22.87272727272727,
+ "grad_norm": 6.23310661315918,
+ "learning_rate": 1.3480176211453747e-05,
+ "loss": 0.2866,
+ "step": 298
+ },
+ {
+ "epoch": 23.072727272727274,
+ "grad_norm": 8.448976516723633,
+ "learning_rate": 1.3436123348017623e-05,
+ "loss": 0.3896,
+ "step": 300
+ },
+ {
+ "epoch": 23.072727272727274,
+ "eval_loss": 0.11301498115062714,
+ "eval_runtime": 2.4354,
+ "eval_samples_per_second": 28.332,
+ "eval_steps_per_second": 3.695,
+ "step": 300
+ },
+ {
+ "epoch": 23.21818181818182,
+ "grad_norm": 9.333931922912598,
+ "learning_rate": 1.33920704845815e-05,
+ "loss": 0.2415,
+ "step": 302
+ },
+ {
+ "epoch": 23.363636363636363,
+ "grad_norm": 5.8387131690979,
+ "learning_rate": 1.3348017621145376e-05,
+ "loss": 0.2947,
+ "step": 304
+ },
+ {
+ "epoch": 23.509090909090908,
+ "grad_norm": 5.108311176300049,
+ "learning_rate": 1.3303964757709252e-05,
+ "loss": 0.3266,
+ "step": 306
+ },
+ {
+ "epoch": 23.654545454545456,
+ "grad_norm": 7.220591068267822,
+ "learning_rate": 1.325991189427313e-05,
+ "loss": 0.3936,
+ "step": 308
+ },
+ {
+ "epoch": 23.8,
+ "grad_norm": 8.200072288513184,
+ "learning_rate": 1.3215859030837006e-05,
+ "loss": 0.2774,
+ "step": 310
+ },
+ {
+ "epoch": 23.945454545454545,
+ "grad_norm": 5.814062595367432,
+ "learning_rate": 1.3171806167400883e-05,
+ "loss": 0.2768,
+ "step": 312
+ },
+ {
+ "epoch": 24.145454545454545,
+ "grad_norm": 6.153317928314209,
+ "learning_rate": 1.3127753303964759e-05,
+ "loss": 0.2619,
+ "step": 314
+ },
+ {
+ "epoch": 24.29090909090909,
+ "grad_norm": 5.712184906005859,
+ "learning_rate": 1.3083700440528635e-05,
+ "loss": 0.1983,
+ "step": 316
+ },
+ {
+ "epoch": 24.436363636363637,
+ "grad_norm": 5.369919776916504,
+ "learning_rate": 1.3039647577092512e-05,
+ "loss": 0.2346,
+ "step": 318
+ },
+ {
+ "epoch": 24.581818181818182,
+ "grad_norm": 4.122873783111572,
+ "learning_rate": 1.299559471365639e-05,
+ "loss": 0.3465,
+ "step": 320
+ },
+ {
+ "epoch": 24.581818181818182,
+ "eval_loss": 0.04590336233377457,
+ "eval_runtime": 2.292,
+ "eval_samples_per_second": 30.105,
+ "eval_steps_per_second": 3.927,
+ "step": 320
+ },
+ {
+ "epoch": 24.727272727272727,
+ "grad_norm": 7.60134744644165,
+ "learning_rate": 1.2951541850220266e-05,
+ "loss": 0.2854,
+ "step": 322
+ },
+ {
+ "epoch": 24.87272727272727,
+ "grad_norm": 10.638797760009766,
+ "learning_rate": 1.2907488986784143e-05,
+ "loss": 0.5049,
+ "step": 324
+ },
+ {
+ "epoch": 25.072727272727274,
+ "grad_norm": 10.093364715576172,
+ "learning_rate": 1.2863436123348019e-05,
+ "loss": 0.5965,
+ "step": 326
+ },
+ {
+ "epoch": 25.21818181818182,
+ "grad_norm": 17.12324333190918,
+ "learning_rate": 1.2819383259911895e-05,
+ "loss": 0.2992,
+ "step": 328
+ },
+ {
+ "epoch": 25.363636363636363,
+ "grad_norm": 8.28325366973877,
+ "learning_rate": 1.2775330396475772e-05,
+ "loss": 0.2581,
+ "step": 330
+ },
+ {
+ "epoch": 25.509090909090908,
+ "grad_norm": 6.659326076507568,
+ "learning_rate": 1.2731277533039648e-05,
+ "loss": 0.2056,
+ "step": 332
+ },
+ {
+ "epoch": 25.654545454545456,
+ "grad_norm": 55.338016510009766,
+ "learning_rate": 1.2687224669603526e-05,
+ "loss": 0.2271,
+ "step": 334
+ },
+ {
+ "epoch": 25.8,
+ "grad_norm": 80.13277435302734,
+ "learning_rate": 1.2643171806167402e-05,
+ "loss": 0.2678,
+ "step": 336
+ },
+ {
+ "epoch": 25.945454545454545,
+ "grad_norm": 12.583796501159668,
+ "learning_rate": 1.2599118942731279e-05,
+ "loss": 0.4055,
+ "step": 338
+ },
+ {
+ "epoch": 26.145454545454545,
+ "grad_norm": 11.221192359924316,
+ "learning_rate": 1.2555066079295155e-05,
+ "loss": 0.2387,
+ "step": 340
+ },
+ {
+ "epoch": 26.145454545454545,
+ "eval_loss": 0.13301870226860046,
+ "eval_runtime": 2.4483,
+ "eval_samples_per_second": 28.183,
+ "eval_steps_per_second": 3.676,
+ "step": 340
+ },
+ {
+ "epoch": 26.29090909090909,
+ "grad_norm": 8.863375663757324,
+ "learning_rate": 1.2511013215859032e-05,
+ "loss": 0.3256,
+ "step": 342
+ },
+ {
+ "epoch": 26.436363636363637,
+ "grad_norm": 9.474263191223145,
+ "learning_rate": 1.2466960352422908e-05,
+ "loss": 0.2353,
+ "step": 344
+ },
+ {
+ "epoch": 26.581818181818182,
+ "grad_norm": 10.095996856689453,
+ "learning_rate": 1.2422907488986786e-05,
+ "loss": 0.2644,
+ "step": 346
+ },
+ {
+ "epoch": 26.727272727272727,
+ "grad_norm": 11.57005500793457,
+ "learning_rate": 1.2378854625550662e-05,
+ "loss": 0.1684,
+ "step": 348
+ },
+ {
+ "epoch": 26.87272727272727,
+ "grad_norm": 11.982728958129883,
+ "learning_rate": 1.2334801762114539e-05,
+ "loss": 0.2843,
+ "step": 350
+ },
+ {
+ "epoch": 27.072727272727274,
+ "grad_norm": 13.845200538635254,
+ "learning_rate": 1.2290748898678415e-05,
+ "loss": 0.395,
+ "step": 352
+ },
+ {
+ "epoch": 27.21818181818182,
+ "grad_norm": 6.1033830642700195,
+ "learning_rate": 1.2246696035242291e-05,
+ "loss": 0.2963,
+ "step": 354
+ },
+ {
+ "epoch": 27.363636363636363,
+ "grad_norm": 21.648548126220703,
+ "learning_rate": 1.2202643171806168e-05,
+ "loss": 0.2149,
+ "step": 356
+ },
+ {
+ "epoch": 27.509090909090908,
+ "grad_norm": 7.1174821853637695,
+ "learning_rate": 1.2158590308370044e-05,
+ "loss": 0.2935,
+ "step": 358
+ },
+ {
+ "epoch": 27.654545454545456,
+ "grad_norm": 7.000458240509033,
+ "learning_rate": 1.2114537444933922e-05,
+ "loss": 0.1398,
+ "step": 360
+ },
+ {
+ "epoch": 27.654545454545456,
+ "eval_loss": 0.14708828926086426,
+ "eval_runtime": 2.4203,
+ "eval_samples_per_second": 28.509,
+ "eval_steps_per_second": 3.719,
+ "step": 360
+ },
+ {
+ "epoch": 27.8,
+ "grad_norm": 10.330643653869629,
+ "learning_rate": 1.2070484581497798e-05,
+ "loss": 0.2579,
+ "step": 362
+ },
+ {
+ "epoch": 27.945454545454545,
+ "grad_norm": 11.92882251739502,
+ "learning_rate": 1.2026431718061675e-05,
+ "loss": 0.2156,
+ "step": 364
+ },
+ {
+ "epoch": 28.145454545454545,
+ "grad_norm": 15.744492530822754,
+ "learning_rate": 1.1982378854625551e-05,
+ "loss": 0.2926,
+ "step": 366
+ },
+ {
+ "epoch": 28.29090909090909,
+ "grad_norm": 2.5392167568206787,
+ "learning_rate": 1.1938325991189428e-05,
+ "loss": 0.1505,
+ "step": 368
+ },
+ {
+ "epoch": 28.436363636363637,
+ "grad_norm": 30.72704315185547,
+ "learning_rate": 1.1894273127753304e-05,
+ "loss": 0.6061,
+ "step": 370
+ },
+ {
+ "epoch": 28.581818181818182,
+ "grad_norm": 10.176592826843262,
+ "learning_rate": 1.1850220264317182e-05,
+ "loss": 0.2487,
+ "step": 372
+ },
+ {
+ "epoch": 28.727272727272727,
+ "grad_norm": 10.259997367858887,
+ "learning_rate": 1.1806167400881058e-05,
+ "loss": 0.1321,
+ "step": 374
+ },
+ {
+ "epoch": 28.87272727272727,
+ "grad_norm": 6.5083088874816895,
+ "learning_rate": 1.1762114537444935e-05,
+ "loss": 0.2465,
+ "step": 376
+ },
+ {
+ "epoch": 29.072727272727274,
+ "grad_norm": 12.744314193725586,
+ "learning_rate": 1.1718061674008811e-05,
+ "loss": 0.4127,
+ "step": 378
+ },
+ {
+ "epoch": 29.21818181818182,
+ "grad_norm": 10.478876113891602,
+ "learning_rate": 1.1674008810572687e-05,
+ "loss": 0.3401,
+ "step": 380
+ },
+ {
+ "epoch": 29.21818181818182,
+ "eval_loss": 0.13561537861824036,
+ "eval_runtime": 2.3932,
+ "eval_samples_per_second": 28.831,
+ "eval_steps_per_second": 3.761,
+ "step": 380
+ },
+ {
+ "epoch": 29.363636363636363,
+ "grad_norm": 6.906543731689453,
+ "learning_rate": 1.1629955947136564e-05,
+ "loss": 0.3319,
+ "step": 382
+ },
+ {
+ "epoch": 29.509090909090908,
+ "grad_norm": 20.187089920043945,
+ "learning_rate": 1.158590308370044e-05,
+ "loss": 0.3399,
+ "step": 384
+ },
+ {
+ "epoch": 29.654545454545456,
+ "grad_norm": 6.2685770988464355,
+ "learning_rate": 1.154185022026432e-05,
+ "loss": 0.0765,
+ "step": 386
+ },
+ {
+ "epoch": 29.8,
+ "grad_norm": 9.634342193603516,
+ "learning_rate": 1.1497797356828195e-05,
+ "loss": 0.0637,
+ "step": 388
+ },
+ {
+ "epoch": 29.945454545454545,
+ "grad_norm": 7.522281646728516,
+ "learning_rate": 1.1453744493392071e-05,
+ "loss": 0.1269,
+ "step": 390
+ },
+ {
+ "epoch": 30.145454545454545,
+ "grad_norm": 4.608687877655029,
+ "learning_rate": 1.1409691629955947e-05,
+ "loss": 0.3527,
+ "step": 392
+ },
+ {
+ "epoch": 30.29090909090909,
+ "grad_norm": 7.271747589111328,
+ "learning_rate": 1.1365638766519824e-05,
+ "loss": 0.2316,
+ "step": 394
+ },
+ {
+ "epoch": 30.436363636363637,
+ "grad_norm": 5.001755237579346,
+ "learning_rate": 1.13215859030837e-05,
+ "loss": 0.2571,
+ "step": 396
+ },
+ {
+ "epoch": 30.581818181818182,
+ "grad_norm": 8.896844863891602,
+ "learning_rate": 1.127753303964758e-05,
+ "loss": 0.254,
+ "step": 398
+ },
+ {
+ "epoch": 30.727272727272727,
+ "grad_norm": 25.82440185546875,
+ "learning_rate": 1.1233480176211456e-05,
+ "loss": 0.4802,
+ "step": 400
+ },
+ {
+ "epoch": 30.727272727272727,
+ "eval_loss": 0.12111978977918625,
+ "eval_runtime": 2.3933,
+ "eval_samples_per_second": 28.83,
+ "eval_steps_per_second": 3.76,
+ "step": 400
+ },
+ {
+ "epoch": 30.87272727272727,
+ "grad_norm": 20.979629516601562,
+ "learning_rate": 1.1189427312775332e-05,
+ "loss": 0.3713,
+ "step": 402
+ },
+ {
+ "epoch": 31.072727272727274,
+ "grad_norm": 7.909449577331543,
+ "learning_rate": 1.1145374449339209e-05,
+ "loss": 0.1726,
+ "step": 404
+ },
+ {
+ "epoch": 31.21818181818182,
+ "grad_norm": 17.605968475341797,
+ "learning_rate": 1.1101321585903085e-05,
+ "loss": 0.3414,
+ "step": 406
+ },
+ {
+ "epoch": 31.363636363636363,
+ "grad_norm": 8.59572696685791,
+ "learning_rate": 1.105726872246696e-05,
+ "loss": 0.1712,
+ "step": 408
+ },
+ {
+ "epoch": 31.509090909090908,
+ "grad_norm": 8.62060832977295,
+ "learning_rate": 1.1013215859030836e-05,
+ "loss": 0.1353,
+ "step": 410
+ },
+ {
+ "epoch": 31.654545454545456,
+ "grad_norm": 17.99728775024414,
+ "learning_rate": 1.0969162995594716e-05,
+ "loss": 0.5617,
+ "step": 412
+ },
+ {
+ "epoch": 31.8,
+ "grad_norm": 5.9628472328186035,
+ "learning_rate": 1.0925110132158592e-05,
+ "loss": 0.1698,
+ "step": 414
+ },
+ {
+ "epoch": 31.945454545454545,
+ "grad_norm": 25.31252670288086,
+ "learning_rate": 1.0881057268722469e-05,
+ "loss": 0.4276,
+ "step": 416
+ },
+ {
+ "epoch": 32.14545454545455,
+ "grad_norm": 10.135091781616211,
+ "learning_rate": 1.0837004405286345e-05,
+ "loss": 0.3815,
+ "step": 418
+ },
+ {
+ "epoch": 32.29090909090909,
+ "grad_norm": 17.551605224609375,
+ "learning_rate": 1.0792951541850221e-05,
+ "loss": 0.2944,
+ "step": 420
+ },
+ {
+ "epoch": 32.29090909090909,
+ "eval_loss": 0.14291267096996307,
+ "eval_runtime": 2.3598,
+ "eval_samples_per_second": 29.24,
+ "eval_steps_per_second": 3.814,
+ "step": 420
+ },
+ {
+ "epoch": 32.43636363636364,
+ "grad_norm": 7.138656139373779,
+ "learning_rate": 1.0748898678414098e-05,
+ "loss": 0.2004,
+ "step": 422
+ },
+ {
+ "epoch": 32.58181818181818,
+ "grad_norm": 4.1204633712768555,
+ "learning_rate": 1.0704845814977976e-05,
+ "loss": 0.1863,
+ "step": 424
+ },
+ {
+ "epoch": 32.72727272727273,
+ "grad_norm": 20.72953224182129,
+ "learning_rate": 1.0660792951541852e-05,
+ "loss": 0.4327,
+ "step": 426
+ },
+ {
+ "epoch": 32.872727272727275,
+ "grad_norm": 5.564031600952148,
+ "learning_rate": 1.0616740088105728e-05,
+ "loss": 0.2392,
+ "step": 428
+ },
+ {
+ "epoch": 33.07272727272727,
+ "grad_norm": 11.07070541381836,
+ "learning_rate": 1.0572687224669605e-05,
+ "loss": 0.4252,
+ "step": 430
+ },
+ {
+ "epoch": 33.21818181818182,
+ "grad_norm": 7.520185947418213,
+ "learning_rate": 1.0528634361233481e-05,
+ "loss": 0.2946,
+ "step": 432
+ },
+ {
+ "epoch": 33.36363636363637,
+ "grad_norm": 6.980128765106201,
+ "learning_rate": 1.0484581497797357e-05,
+ "loss": 0.189,
+ "step": 434
+ },
+ {
+ "epoch": 33.50909090909091,
+ "grad_norm": 11.715187072753906,
+ "learning_rate": 1.0440528634361234e-05,
+ "loss": 0.1778,
+ "step": 436
+ },
+ {
+ "epoch": 33.654545454545456,
+ "grad_norm": 13.902486801147461,
+ "learning_rate": 1.0396475770925112e-05,
+ "loss": 0.2654,
+ "step": 438
+ },
+ {
+ "epoch": 33.8,
+ "grad_norm": 13.725213050842285,
+ "learning_rate": 1.0352422907488988e-05,
+ "loss": 0.2106,
+ "step": 440
+ },
+ {
+ "epoch": 33.8,
+ "eval_loss": 0.1386052370071411,
+ "eval_runtime": 2.4719,
+ "eval_samples_per_second": 27.914,
+ "eval_steps_per_second": 3.641,
+ "step": 440
+ },
+ {
+ "epoch": 33.945454545454545,
+ "grad_norm": 10.430094718933105,
+ "learning_rate": 1.0308370044052865e-05,
+ "loss": 0.3899,
+ "step": 442
+ },
+ {
+ "epoch": 34.14545454545455,
+ "grad_norm": 19.070362091064453,
+ "learning_rate": 1.0264317180616741e-05,
+ "loss": 0.4314,
+ "step": 444
+ },
+ {
+ "epoch": 34.29090909090909,
+ "grad_norm": 12.896262168884277,
+ "learning_rate": 1.0220264317180617e-05,
+ "loss": 0.3634,
+ "step": 446
+ },
+ {
+ "epoch": 34.43636363636364,
+ "grad_norm": 4.983848571777344,
+ "learning_rate": 1.0176211453744494e-05,
+ "loss": 0.1384,
+ "step": 448
+ },
+ {
+ "epoch": 34.58181818181818,
+ "grad_norm": 8.473978996276855,
+ "learning_rate": 1.0132158590308372e-05,
+ "loss": 0.2409,
+ "step": 450
+ },
+ {
+ "epoch": 34.72727272727273,
+ "grad_norm": 6.038484573364258,
+ "learning_rate": 1.0088105726872248e-05,
+ "loss": 0.2472,
+ "step": 452
+ },
+ {
+ "epoch": 34.872727272727275,
+ "grad_norm": 11.81058120727539,
+ "learning_rate": 1.0044052863436124e-05,
+ "loss": 0.353,
+ "step": 454
+ },
+ {
+ "epoch": 35.07272727272727,
+ "grad_norm": 12.163418769836426,
+ "learning_rate": 1e-05,
+ "loss": 0.294,
+ "step": 456
+ },
+ {
+ "epoch": 35.21818181818182,
+ "grad_norm": 5.494348049163818,
+ "learning_rate": 9.955947136563877e-06,
+ "loss": 0.301,
+ "step": 458
+ },
+ {
+ "epoch": 35.36363636363637,
+ "grad_norm": 7.566185474395752,
+ "learning_rate": 9.911894273127755e-06,
+ "loss": 0.1619,
+ "step": 460
+ },
+ {
+ "epoch": 35.36363636363637,
+ "eval_loss": 0.1423145830631256,
+ "eval_runtime": 2.4618,
+ "eval_samples_per_second": 28.028,
+ "eval_steps_per_second": 3.656,
+ "step": 460
+ },
+ {
+ "epoch": 35.50909090909091,
+ "grad_norm": 10.549810409545898,
+ "learning_rate": 9.867841409691632e-06,
+ "loss": 0.2093,
+ "step": 462
+ },
+ {
+ "epoch": 35.654545454545456,
+ "grad_norm": 13.12122917175293,
+ "learning_rate": 9.823788546255508e-06,
+ "loss": 0.2033,
+ "step": 464
+ },
+ {
+ "epoch": 35.8,
+ "grad_norm": 5.5263543128967285,
+ "learning_rate": 9.779735682819384e-06,
+ "loss": 0.2627,
+ "step": 466
+ },
+ {
+ "epoch": 35.945454545454545,
+ "grad_norm": 8.017375946044922,
+ "learning_rate": 9.73568281938326e-06,
+ "loss": 0.3121,
+ "step": 468
+ },
+ {
+ "epoch": 36.14545454545455,
+ "grad_norm": 4.360472202301025,
+ "learning_rate": 9.691629955947137e-06,
+ "loss": 0.2122,
+ "step": 470
+ },
+ {
+ "epoch": 36.29090909090909,
+ "grad_norm": 13.291132926940918,
+ "learning_rate": 9.647577092511013e-06,
+ "loss": 0.2821,
+ "step": 472
+ },
+ {
+ "epoch": 36.43636363636364,
+ "grad_norm": 11.430693626403809,
+ "learning_rate": 9.603524229074891e-06,
+ "loss": 0.1907,
+ "step": 474
+ },
+ {
+ "epoch": 36.58181818181818,
+ "grad_norm": 13.506272315979004,
+ "learning_rate": 9.559471365638768e-06,
+ "loss": 0.3229,
+ "step": 476
+ },
+ {
+ "epoch": 36.72727272727273,
+ "grad_norm": 21.470775604248047,
+ "learning_rate": 9.515418502202644e-06,
+ "loss": 0.3678,
+ "step": 478
+ },
+ {
+ "epoch": 36.872727272727275,
+ "grad_norm": 13.642948150634766,
+ "learning_rate": 9.47136563876652e-06,
+ "loss": 0.227,
+ "step": 480
+ },
+ {
+ "epoch": 36.872727272727275,
+ "eval_loss": 0.1406387984752655,
+ "eval_runtime": 2.3855,
+ "eval_samples_per_second": 28.925,
+ "eval_steps_per_second": 3.773,
+ "step": 480
+ },
+ {
+ "epoch": 37.07272727272727,
+ "grad_norm": 10.579504013061523,
+ "learning_rate": 9.427312775330397e-06,
+ "loss": 0.2049,
+ "step": 482
+ },
+ {
+ "epoch": 37.21818181818182,
+ "grad_norm": 12.022968292236328,
+ "learning_rate": 9.383259911894273e-06,
+ "loss": 0.1632,
+ "step": 484
+ },
+ {
+ "epoch": 37.36363636363637,
+ "grad_norm": 5.538464546203613,
+ "learning_rate": 9.339207048458151e-06,
+ "loss": 0.2758,
+ "step": 486
+ },
+ {
+ "epoch": 37.50909090909091,
+ "grad_norm": 11.991792678833008,
+ "learning_rate": 9.295154185022028e-06,
+ "loss": 0.3123,
+ "step": 488
+ },
+ {
+ "epoch": 37.654545454545456,
+ "grad_norm": 7.3717803955078125,
+ "learning_rate": 9.251101321585904e-06,
+ "loss": 0.1472,
+ "step": 490
+ },
+ {
+ "epoch": 37.8,
+ "grad_norm": 6.4648356437683105,
+ "learning_rate": 9.20704845814978e-06,
+ "loss": 0.3114,
+ "step": 492
+ },
+ {
+ "epoch": 37.945454545454545,
+ "grad_norm": 7.207763195037842,
+ "learning_rate": 9.162995594713657e-06,
+ "loss": 0.1862,
+ "step": 494
+ },
+ {
+ "epoch": 38.14545454545455,
+ "grad_norm": 18.45302963256836,
+ "learning_rate": 9.118942731277533e-06,
+ "loss": 0.147,
+ "step": 496
+ },
+ {
+ "epoch": 38.29090909090909,
+ "grad_norm": 3.806180238723755,
+ "learning_rate": 9.07488986784141e-06,
+ "loss": 0.1729,
+ "step": 498
+ },
+ {
+ "epoch": 38.43636363636364,
+ "grad_norm": 85.68894958496094,
+ "learning_rate": 9.030837004405287e-06,
+ "loss": 0.3548,
+ "step": 500
+ },
+ {
+ "epoch": 38.43636363636364,
+ "eval_loss": 0.13241176307201385,
+ "eval_runtime": 2.4482,
+ "eval_samples_per_second": 28.184,
+ "eval_steps_per_second": 3.676,
+ "step": 500
+ },
+ {
+ "epoch": 38.58181818181818,
+ "grad_norm": 6.820540904998779,
+ "learning_rate": 8.986784140969164e-06,
+ "loss": 0.3516,
+ "step": 502
+ },
+ {
+ "epoch": 38.72727272727273,
+ "grad_norm": 20.636869430541992,
+ "learning_rate": 8.94273127753304e-06,
+ "loss": 0.5911,
+ "step": 504
+ },
+ {
+ "epoch": 38.872727272727275,
+ "grad_norm": 3.5924813747406006,
+ "learning_rate": 8.898678414096917e-06,
+ "loss": 0.086,
+ "step": 506
+ },
+ {
+ "epoch": 39.07272727272727,
+ "grad_norm": 6.470989227294922,
+ "learning_rate": 8.854625550660793e-06,
+ "loss": 0.1793,
+ "step": 508
+ },
+ {
+ "epoch": 39.21818181818182,
+ "grad_norm": 7.134296417236328,
+ "learning_rate": 8.81057268722467e-06,
+ "loss": 0.4018,
+ "step": 510
+ },
+ {
+ "epoch": 39.36363636363637,
+ "grad_norm": 11.029175758361816,
+ "learning_rate": 8.766519823788547e-06,
+ "loss": 0.2334,
+ "step": 512
+ },
+ {
+ "epoch": 39.50909090909091,
+ "grad_norm": 13.326338768005371,
+ "learning_rate": 8.722466960352424e-06,
+ "loss": 0.2861,
+ "step": 514
+ },
+ {
+ "epoch": 39.654545454545456,
+ "grad_norm": 3.306131362915039,
+ "learning_rate": 8.6784140969163e-06,
+ "loss": 0.106,
+ "step": 516
+ },
+ {
+ "epoch": 39.8,
+ "grad_norm": 5.459674835205078,
+ "learning_rate": 8.634361233480178e-06,
+ "loss": 0.1152,
+ "step": 518
+ },
+ {
+ "epoch": 39.945454545454545,
+ "grad_norm": 8.895302772521973,
+ "learning_rate": 8.590308370044054e-06,
+ "loss": 0.2219,
+ "step": 520
+ },
+ {
+ "epoch": 39.945454545454545,
+ "eval_loss": 0.15922874212265015,
+ "eval_runtime": 2.4505,
+ "eval_samples_per_second": 28.158,
+ "eval_steps_per_second": 3.673,
+ "step": 520
+ },
+ {
+ "epoch": 40.14545454545455,
+ "grad_norm": 6.200043201446533,
+ "learning_rate": 8.54625550660793e-06,
+ "loss": 0.2572,
+ "step": 522
+ },
+ {
+ "epoch": 40.29090909090909,
+ "grad_norm": 7.694340705871582,
+ "learning_rate": 8.502202643171807e-06,
+ "loss": 0.2623,
+ "step": 524
+ },
+ {
2054
+ "epoch": 40.43636363636364,
2055
+ "grad_norm": 9.543501853942871,
2056
+ "learning_rate": 8.458149779735683e-06,
2057
+ "loss": 0.1868,
2058
+ "step": 526
2059
+ },
2060
+ {
2061
+ "epoch": 40.58181818181818,
2062
+ "grad_norm": 7.275038242340088,
2063
+ "learning_rate": 8.41409691629956e-06,
2064
+ "loss": 0.1552,
2065
+ "step": 528
2066
+ },
2067
+ {
2068
+ "epoch": 40.72727272727273,
2069
+ "grad_norm": 7.618786811828613,
2070
+ "learning_rate": 8.370044052863436e-06,
2071
+ "loss": 0.2407,
2072
+ "step": 530
2073
+ },
2074
+ {
2075
+ "epoch": 40.872727272727275,
2076
+ "grad_norm": 6.1815643310546875,
2077
+ "learning_rate": 8.325991189427314e-06,
2078
+ "loss": 0.2044,
2079
+ "step": 532
2080
+ },
2081
+ {
2082
+ "epoch": 41.07272727272727,
2083
+ "grad_norm": 92.19898986816406,
2084
+ "learning_rate": 8.28193832599119e-06,
2085
+ "loss": 0.5422,
2086
+ "step": 534
2087
+ },
2088
+ {
2089
+ "epoch": 41.21818181818182,
2090
+ "grad_norm": 9.73451042175293,
2091
+ "learning_rate": 8.237885462555067e-06,
2092
+ "loss": 0.0852,
2093
+ "step": 536
2094
+ },
2095
+ {
2096
+ "epoch": 41.36363636363637,
2097
+ "grad_norm": 11.83582878112793,
2098
+ "learning_rate": 8.193832599118943e-06,
2099
+ "loss": 0.5382,
2100
+ "step": 538
2101
+ },
2102
+ {
2103
+ "epoch": 41.50909090909091,
2104
+ "grad_norm": 25.238807678222656,
2105
+ "learning_rate": 8.14977973568282e-06,
2106
+ "loss": 0.1972,
2107
+ "step": 540
2108
+ },
2109
+ {
2110
+ "epoch": 41.50909090909091,
2111
+ "eval_loss": 0.15016484260559082,
2112
+ "eval_runtime": 2.3545,
2113
+ "eval_samples_per_second": 29.306,
2114
+ "eval_steps_per_second": 3.823,
2115
+ "step": 540
2116
+ },
2117
+ {
2118
+ "epoch": 41.654545454545456,
2119
+ "grad_norm": 22.146385192871094,
2120
+ "learning_rate": 8.105726872246696e-06,
2121
+ "loss": 0.1851,
2122
+ "step": 542
2123
+ },
2124
+ {
2125
+ "epoch": 41.8,
2126
+ "grad_norm": 26.442724227905273,
2127
+ "learning_rate": 8.061674008810574e-06,
2128
+ "loss": 0.1823,
2129
+ "step": 544
2130
+ },
2131
+ {
2132
+ "epoch": 41.945454545454545,
2133
+ "grad_norm": 8.326465606689453,
2134
+ "learning_rate": 8.01762114537445e-06,
2135
+ "loss": 0.1316,
2136
+ "step": 546
2137
+ },
2138
+ {
2139
+ "epoch": 42.14545454545455,
2140
+ "grad_norm": 4.691376686096191,
2141
+ "learning_rate": 7.973568281938327e-06,
2142
+ "loss": 0.2861,
2143
+ "step": 548
2144
+ },
2145
+ {
2146
+ "epoch": 42.29090909090909,
2147
+ "grad_norm": 6.158398628234863,
2148
+ "learning_rate": 7.929515418502203e-06,
2149
+ "loss": 0.1283,
2150
+ "step": 550
2151
+ },
2152
+ {
2153
+ "epoch": 42.43636363636364,
2154
+ "grad_norm": 15.966384887695312,
2155
+ "learning_rate": 7.88546255506608e-06,
2156
+ "loss": 0.3838,
2157
+ "step": 552
2158
+ },
2159
+ {
2160
+ "epoch": 42.58181818181818,
2161
+ "grad_norm": 8.7086763381958,
2162
+ "learning_rate": 7.841409691629956e-06,
2163
+ "loss": 0.3816,
2164
+ "step": 554
2165
+ },
2166
+ {
2167
+ "epoch": 42.72727272727273,
2168
+ "grad_norm": 7.7044148445129395,
2169
+ "learning_rate": 7.797356828193832e-06,
2170
+ "loss": 0.178,
2171
+ "step": 556
2172
+ },
2173
+ {
2174
+ "epoch": 42.872727272727275,
2175
+ "grad_norm": 15.754891395568848,
2176
+ "learning_rate": 7.75330396475771e-06,
2177
+ "loss": 0.1347,
2178
+ "step": 558
2179
+ },
2180
+ {
2181
+ "epoch": 43.07272727272727,
2182
+ "grad_norm": 9.64274787902832,
2183
+ "learning_rate": 7.709251101321587e-06,
2184
+ "loss": 0.2613,
2185
+ "step": 560
2186
+ },
2187
+ {
2188
+ "epoch": 43.07272727272727,
2189
+ "eval_loss": 0.1366824060678482,
2190
+ "eval_runtime": 2.3942,
2191
+ "eval_samples_per_second": 28.819,
2192
+ "eval_steps_per_second": 3.759,
2193
+ "step": 560
2194
+ },
2195
+ {
2196
+ "epoch": 43.21818181818182,
2197
+ "grad_norm": 11.09427547454834,
2198
+ "learning_rate": 7.665198237885463e-06,
2199
+ "loss": 0.1816,
2200
+ "step": 562
2201
+ },
2202
+ {
2203
+ "epoch": 43.36363636363637,
2204
+ "grad_norm": 6.398634910583496,
2205
+ "learning_rate": 7.62114537444934e-06,
2206
+ "loss": 0.2137,
2207
+ "step": 564
2208
+ },
2209
+ {
2210
+ "epoch": 43.50909090909091,
2211
+ "grad_norm": 16.895645141601562,
2212
+ "learning_rate": 7.5770925110132166e-06,
2213
+ "loss": 0.4384,
2214
+ "step": 566
2215
+ },
2216
+ {
2217
+ "epoch": 43.654545454545456,
2218
+ "grad_norm": 32.78514099121094,
2219
+ "learning_rate": 7.533039647577093e-06,
2220
+ "loss": 0.2552,
2221
+ "step": 568
2222
+ },
2223
+ {
2224
+ "epoch": 43.8,
2225
+ "grad_norm": 7.516735076904297,
2226
+ "learning_rate": 7.48898678414097e-06,
2227
+ "loss": 0.1258,
2228
+ "step": 570
2229
+ },
2230
+ {
2231
+ "epoch": 43.945454545454545,
2232
+ "grad_norm": 9.726518630981445,
2233
+ "learning_rate": 7.4449339207048465e-06,
2234
+ "loss": 0.2467,
2235
+ "step": 572
2236
+ },
2237
+ {
2238
+ "epoch": 44.14545454545455,
2239
+ "grad_norm": 19.73747444152832,
2240
+ "learning_rate": 7.400881057268723e-06,
2241
+ "loss": 0.3987,
2242
+ "step": 574
2243
+ },
2244
+ {
2245
+ "epoch": 44.29090909090909,
2246
+ "grad_norm": 9.251556396484375,
2247
+ "learning_rate": 7.3568281938326e-06,
2248
+ "loss": 0.2826,
2249
+ "step": 576
2250
+ },
2251
+ {
2252
+ "epoch": 44.43636363636364,
2253
+ "grad_norm": 10.289827346801758,
2254
+ "learning_rate": 7.312775330396476e-06,
2255
+ "loss": 0.2497,
2256
+ "step": 578
2257
+ },
2258
+ {
2259
+ "epoch": 44.58181818181818,
2260
+ "grad_norm": 7.645063400268555,
2261
+ "learning_rate": 7.268722466960353e-06,
2262
+ "loss": 0.1532,
2263
+ "step": 580
2264
+ },
2265
+ {
2266
+ "epoch": 44.58181818181818,
2267
+ "eval_loss": 0.11615127325057983,
2268
+ "eval_runtime": 2.4126,
2269
+ "eval_samples_per_second": 28.6,
2270
+ "eval_steps_per_second": 3.73,
2271
+ "step": 580
2272
+ },
2273
+ {
2274
+ "epoch": 44.72727272727273,
2275
+ "grad_norm": 14.68101692199707,
2276
+ "learning_rate": 7.224669603524229e-06,
2277
+ "loss": 0.3414,
2278
+ "step": 582
2279
+ },
2280
+ {
2281
+ "epoch": 44.872727272727275,
2282
+ "grad_norm": 8.249561309814453,
2283
+ "learning_rate": 7.180616740088106e-06,
2284
+ "loss": 0.299,
2285
+ "step": 584
2286
+ },
2287
+ {
2288
+ "epoch": 45.07272727272727,
2289
+ "grad_norm": 10.156097412109375,
2290
+ "learning_rate": 7.136563876651983e-06,
2291
+ "loss": 0.3253,
2292
+ "step": 586
2293
+ },
2294
+ {
2295
+ "epoch": 45.21818181818182,
2296
+ "grad_norm": 15.027548789978027,
2297
+ "learning_rate": 7.092511013215859e-06,
2298
+ "loss": 0.2906,
2299
+ "step": 588
2300
+ },
2301
+ {
2302
+ "epoch": 45.36363636363637,
2303
+ "grad_norm": 6.069249153137207,
2304
+ "learning_rate": 7.048458149779736e-06,
2305
+ "loss": 0.1639,
2306
+ "step": 590
2307
+ },
2308
+ {
2309
+ "epoch": 45.50909090909091,
2310
+ "grad_norm": 12.984724998474121,
2311
+ "learning_rate": 7.004405286343613e-06,
2312
+ "loss": 0.4149,
2313
+ "step": 592
2314
+ },
2315
+ {
2316
+ "epoch": 45.654545454545456,
2317
+ "grad_norm": 11.390819549560547,
2318
+ "learning_rate": 6.960352422907489e-06,
2319
+ "loss": 0.3058,
2320
+ "step": 594
2321
+ },
2322
+ {
2323
+ "epoch": 45.8,
2324
+ "grad_norm": 5.197844505310059,
2325
+ "learning_rate": 6.916299559471367e-06,
2326
+ "loss": 0.2393,
2327
+ "step": 596
2328
+ },
2329
+ {
2330
+ "epoch": 45.945454545454545,
2331
+ "grad_norm": 21.49470329284668,
2332
+ "learning_rate": 6.872246696035243e-06,
2333
+ "loss": 0.3496,
2334
+ "step": 598
2335
+ },
2336
+ {
2337
+ "epoch": 46.14545454545455,
2338
+ "grad_norm": 16.046436309814453,
2339
+ "learning_rate": 6.828193832599119e-06,
2340
+ "loss": 0.2507,
2341
+ "step": 600
2342
+ },
2343
+ {
2344
+ "epoch": 46.14545454545455,
2345
+ "eval_loss": 0.023349598050117493,
2346
+ "eval_runtime": 2.4143,
2347
+ "eval_samples_per_second": 28.579,
2348
+ "eval_steps_per_second": 3.728,
2349
+ "step": 600
2350
+ }
2351
+ ],
2352
+ "logging_steps": 2,
2353
+ "max_steps": 910,
2354
+ "num_input_tokens_seen": 0,
2355
+ "num_train_epochs": 70,
2356
+ "save_steps": 20,
2357
+ "stateful_callbacks": {
2358
+ "TrainerControl": {
2359
+ "args": {
2360
+ "should_epoch_stop": false,
2361
+ "should_evaluate": false,
2362
+ "should_log": false,
2363
+ "should_save": true,
2364
+ "should_training_stop": false
2365
+ },
2366
+ "attributes": {}
2367
+ }
2368
+ },
2369
+ "total_flos": 2.272765725884805e+17,
2370
+ "train_batch_size": 24,
2371
+ "trial_name": null,
2372
+ "trial_params": null
2373
+ }
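The `learning_rate` values logged above are consistent with a plain linear decay from the initial 2e-05 down to zero at `max_steps` (910). The helper below is a reconstruction for illustration only — the function name and the two-step logging offset are assumptions, not code from this repository:

```python
def linear_lr(step: int, base_lr: float = 2e-05, max_steps: int = 910, offset: int = 2) -> float:
    # Assumed linear schedule: base_lr at the first logged step (step == offset),
    # decaying to 0.0 at max_steps.
    return base_lr * (max_steps - step) / (max_steps - offset)

# Reproduces the logged value at step 486 (9.339207048458151e-06) to float precision.
```

Under this assumption, each logged entry drops the rate by `2 * base_lr / (max_steps - offset)` because losses are logged every 2 steps.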
drawer-checkpoint-600/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee10c0e6112d8c1c5d9ce11c643ba0f1e97be5da1d6ec96aa2d9ad06d54d01b1
+ size 5432
sink-checkpoint-880/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: google/paligemma-3b-pt-224
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.14.0
sink-checkpoint-880/adapter_config.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "google/paligemma-3b-pt-224",
+ "bias": "none",
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 8,
+ "lora_bias": false,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "gate_proj",
+ "o_proj",
+ "k_proj",
+ "q_proj",
+ "up_proj",
+ "v_proj",
+ "down_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
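The adapter_config.json above describes a rank-8 LoRA adapter (`r` = 8, `lora_alpha` = 8, no dropout) applied to the seven attention and MLP projections. As a back-of-the-envelope sketch of the trainable-parameter cost per targeted matrix — the helper function and the example dimensions below are illustrative assumptions, not values taken from this repository:

```python
def lora_param_count(d_in: int, d_out: int, r: int = 8) -> int:
    # LoRA factorizes the weight update as B @ A with A: (r, d_in) and
    # B: (d_out, r), scaled by lora_alpha / r, so the trainable cost per
    # targeted matrix is r * (d_in + d_out).
    return r * (d_in + d_out)

# e.g. a hypothetical 2048x2048 projection at r=8 trains 32768 extra weights
print(lora_param_count(2048, 2048))
```

Summed over all targeted matrices in every layer, this accounting is what keeps the adapter_model.safetensors file small (~45 MB) relative to the 3B-parameter base model.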
sink-checkpoint-880/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd7ba6222e50fa460ef0027b165e98ac004dae27fe1695c7261f4f77f7c8e205
+ size 45258384
sink-checkpoint-880/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e5e2290d006b29d1311740874207d56ee0f079fb4e5290ef4990cf9170ccf1a
+ size 23534340
sink-checkpoint-880/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12c2e7f0907323e0fe87853a5364fe04cd02c48dcb144a16fed2ac61a163c4e1
+ size 14244
sink-checkpoint-880/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fde2e2dfada897deb5238ccacd748135c79cc0d7faff648a6808b4aaaae770d
+ size 1064
sink-checkpoint-880/trainer_state.json ADDED
@@ -0,0 +1,3465 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 58.67796610169491,
+ "eval_steps": 20,
+ "global_step": 880,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.13559322033898305,
+ "grad_norm": 2.9112112522125244,
+ "learning_rate": 2e-05,
+ "loss": 1.4916,
+ "step": 2
+ },
+ {
+ "epoch": 0.2711864406779661,
+ "grad_norm": 2.9749155044555664,
+ "learning_rate": 1.9977653631284916e-05,
+ "loss": 1.4191,
+ "step": 4
+ },
+ {
+ "epoch": 0.4067796610169492,
+ "grad_norm": 2.969093084335327,
+ "learning_rate": 1.9955307262569833e-05,
+ "loss": 1.4008,
+ "step": 6
+ },
+ {
+ "epoch": 0.5423728813559322,
+ "grad_norm": 2.5384202003479004,
+ "learning_rate": 1.9932960893854748e-05,
+ "loss": 1.2799,
+ "step": 8
+ },
+ {
+ "epoch": 0.6779661016949152,
+ "grad_norm": 2.5124058723449707,
+ "learning_rate": 1.9910614525139665e-05,
+ "loss": 1.196,
+ "step": 10
+ },
+ {
+ "epoch": 0.8135593220338984,
+ "grad_norm": 2.1421890258789062,
+ "learning_rate": 1.9888268156424583e-05,
+ "loss": 1.056,
+ "step": 12
+ },
+ {
+ "epoch": 0.9491525423728814,
+ "grad_norm": 1.9405546188354492,
+ "learning_rate": 1.98659217877095e-05,
+ "loss": 0.964,
+ "step": 14
+ },
+ {
+ "epoch": 1.0677966101694916,
+ "grad_norm": 1.989654779434204,
+ "learning_rate": 1.9843575418994415e-05,
+ "loss": 0.775,
+ "step": 16
+ },
+ {
+ "epoch": 1.2033898305084745,
+ "grad_norm": 1.751224398612976,
+ "learning_rate": 1.9821229050279332e-05,
+ "loss": 0.8305,
+ "step": 18
+ },
+ {
+ "epoch": 1.3389830508474576,
+ "grad_norm": 1.6789581775665283,
+ "learning_rate": 1.9798882681564246e-05,
+ "loss": 0.7356,
+ "step": 20
+ },
+ {
+ "epoch": 1.3389830508474576,
+ "eval_loss": 0.7151809930801392,
+ "eval_runtime": 2.5969,
+ "eval_samples_per_second": 28.495,
+ "eval_steps_per_second": 3.851,
+ "step": 20
+ },
+ {
+ "epoch": 1.4745762711864407,
+ "grad_norm": 1.2105047702789307,
+ "learning_rate": 1.9776536312849164e-05,
+ "loss": 0.6815,
+ "step": 22
+ },
+ {
+ "epoch": 1.6101694915254239,
+ "grad_norm": 1.2142481803894043,
+ "learning_rate": 1.9754189944134078e-05,
+ "loss": 0.6242,
+ "step": 24
+ },
+ {
+ "epoch": 1.7457627118644068,
+ "grad_norm": 1.3069604635238647,
+ "learning_rate": 1.9731843575418996e-05,
+ "loss": 0.5879,
+ "step": 26
+ },
+ {
+ "epoch": 1.8813559322033897,
+ "grad_norm": 1.1408629417419434,
+ "learning_rate": 1.970949720670391e-05,
+ "loss": 0.5539,
+ "step": 28
+ },
+ {
+ "epoch": 2.0,
+ "grad_norm": 0.9239047765731812,
+ "learning_rate": 1.9687150837988828e-05,
+ "loss": 0.4186,
+ "step": 30
+ },
+ {
+ "epoch": 2.135593220338983,
+ "grad_norm": 1.3483924865722656,
+ "learning_rate": 1.9664804469273745e-05,
+ "loss": 0.4998,
+ "step": 32
+ },
+ {
+ "epoch": 2.2711864406779663,
+ "grad_norm": 1.6858242750167847,
+ "learning_rate": 1.9642458100558663e-05,
+ "loss": 0.5158,
+ "step": 34
+ },
+ {
+ "epoch": 2.406779661016949,
+ "grad_norm": 0.8516944050788879,
+ "learning_rate": 1.9620111731843577e-05,
+ "loss": 0.4635,
+ "step": 36
+ },
+ {
+ "epoch": 2.542372881355932,
+ "grad_norm": 1.3875209093093872,
+ "learning_rate": 1.9597765363128495e-05,
+ "loss": 0.4618,
+ "step": 38
+ },
+ {
+ "epoch": 2.6779661016949152,
+ "grad_norm": 1.7358266115188599,
+ "learning_rate": 1.957541899441341e-05,
+ "loss": 0.4104,
+ "step": 40
+ },
+ {
+ "epoch": 2.6779661016949152,
+ "eval_loss": 0.4399190843105316,
+ "eval_runtime": 2.572,
+ "eval_samples_per_second": 28.771,
+ "eval_steps_per_second": 3.888,
+ "step": 40
+ },
+ {
+ "epoch": 2.8135593220338984,
+ "grad_norm": 1.1381796598434448,
+ "learning_rate": 1.9553072625698326e-05,
+ "loss": 0.4133,
+ "step": 42
+ },
+ {
+ "epoch": 2.9491525423728815,
+ "grad_norm": 0.7984223365783691,
+ "learning_rate": 1.953072625698324e-05,
+ "loss": 0.3676,
+ "step": 44
+ },
+ {
+ "epoch": 3.0677966101694913,
+ "grad_norm": 1.2213139533996582,
+ "learning_rate": 1.9508379888268158e-05,
+ "loss": 0.369,
+ "step": 46
+ },
+ {
+ "epoch": 3.2033898305084745,
+ "grad_norm": 0.7633634209632874,
+ "learning_rate": 1.9486033519553072e-05,
+ "loss": 0.4063,
+ "step": 48
+ },
+ {
+ "epoch": 3.3389830508474576,
+ "grad_norm": 2.0809831619262695,
+ "learning_rate": 1.946368715083799e-05,
+ "loss": 0.4234,
+ "step": 50
+ },
+ {
+ "epoch": 3.4745762711864407,
+ "grad_norm": 0.8528281450271606,
+ "learning_rate": 1.9441340782122907e-05,
+ "loss": 0.3576,
+ "step": 52
+ },
+ {
+ "epoch": 3.610169491525424,
+ "grad_norm": 0.7061654329299927,
+ "learning_rate": 1.9418994413407825e-05,
+ "loss": 0.3745,
+ "step": 54
+ },
+ {
+ "epoch": 3.7457627118644066,
+ "grad_norm": 1.4746936559677124,
+ "learning_rate": 1.939664804469274e-05,
+ "loss": 0.3695,
+ "step": 56
+ },
+ {
+ "epoch": 3.8813559322033897,
+ "grad_norm": 0.6227794289588928,
+ "learning_rate": 1.9374301675977657e-05,
+ "loss": 0.3367,
+ "step": 58
+ },
+ {
+ "epoch": 4.0,
+ "grad_norm": 0.5637996196746826,
+ "learning_rate": 1.935195530726257e-05,
+ "loss": 0.3265,
+ "step": 60
+ },
+ {
+ "epoch": 4.0,
+ "eval_loss": 0.3942866325378418,
+ "eval_runtime": 2.5619,
+ "eval_samples_per_second": 28.885,
+ "eval_steps_per_second": 3.903,
+ "step": 60
+ },
+ {
+ "epoch": 4.135593220338983,
+ "grad_norm": 1.8247381448745728,
+ "learning_rate": 1.932960893854749e-05,
+ "loss": 0.3546,
+ "step": 62
+ },
+ {
+ "epoch": 4.271186440677966,
+ "grad_norm": 1.5979743003845215,
+ "learning_rate": 1.9307262569832403e-05,
+ "loss": 0.3514,
+ "step": 64
+ },
+ {
+ "epoch": 4.406779661016949,
+ "grad_norm": 1.3399819135665894,
+ "learning_rate": 1.928491620111732e-05,
+ "loss": 0.3569,
+ "step": 66
+ },
+ {
+ "epoch": 4.5423728813559325,
+ "grad_norm": 1.8462806940078735,
+ "learning_rate": 1.9262569832402235e-05,
+ "loss": 0.3359,
+ "step": 68
+ },
+ {
+ "epoch": 4.677966101694915,
+ "grad_norm": 0.8700501918792725,
+ "learning_rate": 1.9240223463687152e-05,
+ "loss": 0.316,
+ "step": 70
+ },
+ {
+ "epoch": 4.813559322033898,
+ "grad_norm": 1.181127905845642,
+ "learning_rate": 1.921787709497207e-05,
+ "loss": 0.3232,
+ "step": 72
+ },
+ {
+ "epoch": 4.9491525423728815,
+ "grad_norm": 2.346749782562256,
+ "learning_rate": 1.9195530726256984e-05,
+ "loss": 0.298,
+ "step": 74
+ },
+ {
+ "epoch": 5.067796610169491,
+ "grad_norm": 0.9776425957679749,
+ "learning_rate": 1.91731843575419e-05,
+ "loss": 0.2711,
+ "step": 76
+ },
+ {
+ "epoch": 5.203389830508475,
+ "grad_norm": 1.002541184425354,
+ "learning_rate": 1.915083798882682e-05,
+ "loss": 0.2998,
+ "step": 78
+ },
+ {
+ "epoch": 5.338983050847458,
+ "grad_norm": 1.0936061143875122,
+ "learning_rate": 1.9128491620111733e-05,
+ "loss": 0.3303,
+ "step": 80
+ },
+ {
+ "epoch": 5.338983050847458,
+ "eval_loss": 0.3295021951198578,
+ "eval_runtime": 2.5712,
+ "eval_samples_per_second": 28.781,
+ "eval_steps_per_second": 3.889,
+ "step": 80
+ },
+ {
324
+ "epoch": 5.47457627118644,
325
+ "grad_norm": 2.263033390045166,
326
+ "learning_rate": 1.910614525139665e-05,
327
+ "loss": 0.2711,
328
+ "step": 82
329
+ },
330
+ {
331
+ "epoch": 5.610169491525424,
332
+ "grad_norm": 1.7781164646148682,
333
+ "learning_rate": 1.9083798882681565e-05,
334
+ "loss": 0.2874,
335
+ "step": 84
336
+ },
337
+ {
338
+ "epoch": 5.745762711864407,
339
+ "grad_norm": 2.106416702270508,
340
+ "learning_rate": 1.9061452513966483e-05,
341
+ "loss": 0.2999,
342
+ "step": 86
343
+ },
344
+ {
345
+ "epoch": 5.88135593220339,
346
+ "grad_norm": 3.282771348953247,
347
+ "learning_rate": 1.9039106145251397e-05,
348
+ "loss": 0.3286,
349
+ "step": 88
350
+ },
351
+ {
352
+ "epoch": 6.0,
353
+ "grad_norm": 3.5657804012298584,
354
+ "learning_rate": 1.9016759776536315e-05,
355
+ "loss": 0.2677,
356
+ "step": 90
357
+ },
358
+ {
359
+ "epoch": 6.135593220338983,
360
+ "grad_norm": 1.501582384109497,
361
+ "learning_rate": 1.8994413407821232e-05,
362
+ "loss": 0.2794,
363
+ "step": 92
364
+ },
365
+ {
366
+ "epoch": 6.271186440677966,
367
+ "grad_norm": 3.8603317737579346,
368
+ "learning_rate": 1.8972067039106146e-05,
369
+ "loss": 0.252,
370
+ "step": 94
371
+ },
372
+ {
373
+ "epoch": 6.406779661016949,
374
+ "grad_norm": 3.8966710567474365,
375
+ "learning_rate": 1.8949720670391064e-05,
376
+ "loss": 0.3099,
377
+ "step": 96
378
+ },
379
+ {
380
+ "epoch": 6.5423728813559325,
381
+ "grad_norm": 3.0203518867492676,
382
+ "learning_rate": 1.8927374301675978e-05,
383
+ "loss": 0.2637,
384
+ "step": 98
385
+ },
386
+ {
387
+ "epoch": 6.677966101694915,
388
+ "grad_norm": 2.759808301925659,
389
+ "learning_rate": 1.8905027932960896e-05,
390
+ "loss": 0.2842,
391
+ "step": 100
392
+ },
393
+ {
394
+ "epoch": 6.677966101694915,
395
+ "eval_loss": 0.31398913264274597,
396
+ "eval_runtime": 2.5467,
397
+ "eval_samples_per_second": 29.058,
398
+ "eval_steps_per_second": 3.927,
399
+ "step": 100
400
+ },
401
+ {
402
+ "epoch": 6.813559322033898,
403
+ "grad_norm": 2.1833386421203613,
404
+ "learning_rate": 1.888268156424581e-05,
405
+ "loss": 0.255,
406
+ "step": 102
407
+ },
408
+ {
409
+ "epoch": 6.9491525423728815,
410
+ "grad_norm": 2.4655394554138184,
411
+ "learning_rate": 1.8860335195530728e-05,
412
+ "loss": 0.2781,
413
+ "step": 104
414
+ },
415
+ {
416
+ "epoch": 7.067796610169491,
417
+ "grad_norm": 3.762148141860962,
418
+ "learning_rate": 1.8837988826815642e-05,
419
+ "loss": 0.2399,
420
+ "step": 106
421
+ },
422
+ {
423
+ "epoch": 7.203389830508475,
424
+ "grad_norm": 3.290569543838501,
425
+ "learning_rate": 1.881564245810056e-05,
426
+ "loss": 0.2865,
427
+ "step": 108
428
+ },
429
+ {
430
+ "epoch": 7.338983050847458,
431
+ "grad_norm": 1.6182972192764282,
432
+ "learning_rate": 1.8793296089385477e-05,
433
+ "loss": 0.2416,
434
+ "step": 110
435
+ },
436
+ {
437
+ "epoch": 7.47457627118644,
438
+ "grad_norm": 2.6292684078216553,
439
+ "learning_rate": 1.8770949720670394e-05,
440
+ "loss": 0.2322,
441
+ "step": 112
442
+ },
443
+ {
444
+ "epoch": 7.610169491525424,
445
+ "grad_norm": 1.8571443557739258,
446
+ "learning_rate": 1.874860335195531e-05,
447
+ "loss": 0.2222,
448
+ "step": 114
449
+ },
450
+ {
451
+ "epoch": 7.745762711864407,
452
+ "grad_norm": 2.225163221359253,
453
+ "learning_rate": 1.8726256983240226e-05,
454
+ "loss": 0.2364,
455
+ "step": 116
456
+ },
457
+ {
458
+ "epoch": 7.88135593220339,
459
+ "grad_norm": 1.7137680053710938,
460
+ "learning_rate": 1.870391061452514e-05,
461
+ "loss": 0.2326,
462
+ "step": 118
463
+ },
464
+ {
465
+ "epoch": 8.0,
466
+ "grad_norm": 2.072951316833496,
467
+ "learning_rate": 1.8681564245810058e-05,
468
+ "loss": 0.1936,
469
+ "step": 120
470
+ },
471
+ {
472
+ "epoch": 8.0,
473
+ "eval_loss": 0.2349766343832016,
474
+ "eval_runtime": 2.5546,
475
+ "eval_samples_per_second": 28.968,
476
+ "eval_steps_per_second": 3.915,
477
+ "step": 120
478
+ },
479
+ {
480
+ "epoch": 8.135593220338983,
481
+ "grad_norm": 3.6892073154449463,
482
+ "learning_rate": 1.8659217877094972e-05,
483
+ "loss": 0.2627,
484
+ "step": 122
485
+ },
486
+ {
487
+ "epoch": 8.271186440677965,
488
+ "grad_norm": 1.3742704391479492,
489
+ "learning_rate": 1.863687150837989e-05,
490
+ "loss": 0.2074,
491
+ "step": 124
492
+ },
493
+ {
494
+ "epoch": 8.40677966101695,
495
+ "grad_norm": 2.050481081008911,
496
+ "learning_rate": 1.8614525139664804e-05,
497
+ "loss": 0.2415,
498
+ "step": 126
499
+ },
500
+ {
501
+ "epoch": 8.542372881355933,
502
+ "grad_norm": 0.8493655920028687,
503
+ "learning_rate": 1.859217877094972e-05,
504
+ "loss": 0.2314,
505
+ "step": 128
506
+ },
507
+ {
508
+ "epoch": 8.677966101694915,
509
+ "grad_norm": 2.1460635662078857,
510
+ "learning_rate": 1.856983240223464e-05,
511
+ "loss": 0.1926,
512
+ "step": 130
513
+ },
514
+ {
515
+ "epoch": 8.813559322033898,
516
+ "grad_norm": 1.617407202720642,
517
+ "learning_rate": 1.8547486033519553e-05,
518
+ "loss": 0.1655,
519
+ "step": 132
520
+ },
521
+ {
522
+ "epoch": 8.94915254237288,
523
+ "grad_norm": 1.4597758054733276,
524
+ "learning_rate": 1.852513966480447e-05,
525
+ "loss": 0.1948,
526
+ "step": 134
527
+ },
528
+ {
529
+ "epoch": 9.067796610169491,
530
+ "grad_norm": 1.5142747163772583,
531
+ "learning_rate": 1.850279329608939e-05,
532
+ "loss": 0.1672,
533
+ "step": 136
534
+ },
535
+ {
536
+ "epoch": 9.203389830508474,
537
+ "grad_norm": 2.3377439975738525,
538
+ "learning_rate": 1.8480446927374303e-05,
539
+ "loss": 0.2106,
540
+ "step": 138
541
+ },
542
+ {
543
+ "epoch": 9.338983050847457,
544
+ "grad_norm": 3.232727527618408,
545
+ "learning_rate": 1.845810055865922e-05,
546
+ "loss": 0.2048,
547
+ "step": 140
548
+ },
549
+ {
550
+ "epoch": 9.338983050847457,
551
+ "eval_loss": 0.2491840422153473,
552
+ "eval_runtime": 2.5565,
553
+ "eval_samples_per_second": 28.946,
554
+ "eval_steps_per_second": 3.912,
555
+ "step": 140
556
+ },
557
+ {
558
+ "epoch": 9.474576271186441,
559
+ "grad_norm": 2.2882869243621826,
560
+ "learning_rate": 1.8435754189944135e-05,
561
+ "loss": 0.1978,
562
+ "step": 142
563
+ },
564
+ {
565
+ "epoch": 9.610169491525424,
566
+ "grad_norm": 3.388127565383911,
567
+ "learning_rate": 1.8413407821229052e-05,
568
+ "loss": 0.1776,
569
+ "step": 144
570
+ },
571
+ {
572
+ "epoch": 9.745762711864407,
573
+ "grad_norm": 3.8429529666900635,
574
+ "learning_rate": 1.8391061452513966e-05,
575
+ "loss": 0.1905,
576
+ "step": 146
577
+ },
578
+ {
579
+ "epoch": 9.88135593220339,
580
+ "grad_norm": 2.7679781913757324,
581
+ "learning_rate": 1.8368715083798884e-05,
582
+ "loss": 0.1573,
583
+ "step": 148
584
+ },
585
+ {
586
+ "epoch": 10.0,
587
+ "grad_norm": 2.399533748626709,
588
+ "learning_rate": 1.83463687150838e-05,
589
+ "loss": 0.1606,
590
+ "step": 150
591
+ },
592
+ {
593
+ "epoch": 10.135593220338983,
594
+ "grad_norm": 1.8182557821273804,
595
+ "learning_rate": 1.8324022346368716e-05,
596
+ "loss": 0.1882,
597
+ "step": 152
598
+ },
599
+ {
600
+ "epoch": 10.271186440677965,
601
+ "grad_norm": 4.531972408294678,
602
+ "learning_rate": 1.8301675977653633e-05,
603
+ "loss": 0.1603,
604
+ "step": 154
605
+ },
606
+ {
607
+ "epoch": 10.40677966101695,
608
+ "grad_norm": 2.173290252685547,
609
+ "learning_rate": 1.827932960893855e-05,
610
+ "loss": 0.143,
611
+ "step": 156
612
+ },
613
+ {
614
+ "epoch": 10.542372881355933,
615
+ "grad_norm": 1.7528904676437378,
616
+ "learning_rate": 1.8256983240223465e-05,
617
+ "loss": 0.1425,
618
+ "step": 158
619
+ },
620
+ {
621
+ "epoch": 10.677966101694915,
622
+ "grad_norm": 3.755601644515991,
623
+ "learning_rate": 1.8234636871508383e-05,
624
+ "loss": 0.2182,
625
+ "step": 160
626
+ },
627
+ {
628
+ "epoch": 10.677966101694915,
629
+ "eval_loss": 0.1529647409915924,
630
+ "eval_runtime": 2.5594,
631
+ "eval_samples_per_second": 28.913,
632
+ "eval_steps_per_second": 3.907,
633
+ "step": 160
634
+ },
635
+ {
636
+ "epoch": 10.813559322033898,
637
+ "grad_norm": 2.2113025188446045,
638
+ "learning_rate": 1.8212290502793297e-05,
639
+ "loss": 0.1192,
640
+ "step": 162
641
+ },
642
+ {
643
+ "epoch": 10.94915254237288,
644
+ "grad_norm": 3.4620025157928467,
645
+ "learning_rate": 1.8189944134078215e-05,
646
+ "loss": 0.1341,
647
+ "step": 164
648
+ },
649
+ {
650
+ "epoch": 11.067796610169491,
651
+ "grad_norm": 4.0926361083984375,
652
+ "learning_rate": 1.816759776536313e-05,
653
+ "loss": 0.1312,
654
+ "step": 166
655
+ },
656
+ {
657
+ "epoch": 11.203389830508474,
658
+ "grad_norm": 4.601075649261475,
659
+ "learning_rate": 1.8145251396648046e-05,
660
+ "loss": 0.1556,
661
+ "step": 168
662
+ },
663
+ {
664
+ "epoch": 11.338983050847457,
665
+ "grad_norm": 2.2051029205322266,
666
+ "learning_rate": 1.812290502793296e-05,
667
+ "loss": 0.1507,
668
+ "step": 170
669
+ },
670
+ {
671
+ "epoch": 11.474576271186441,
672
+ "grad_norm": 3.648322820663452,
673
+ "learning_rate": 1.8100558659217878e-05,
674
+ "loss": 0.1527,
675
+ "step": 172
676
+ },
677
+ {
678
+ "epoch": 11.610169491525424,
679
+ "grad_norm": 2.4474399089813232,
680
+ "learning_rate": 1.8078212290502796e-05,
681
+ "loss": 0.1408,
682
+ "step": 174
683
+ },
684
+ {
685
+ "epoch": 11.745762711864407,
686
+ "grad_norm": 3.7187936305999756,
687
+ "learning_rate": 1.8055865921787713e-05,
688
+ "loss": 0.1516,
689
+ "step": 176
690
+ },
691
+ {
692
+ "epoch": 11.88135593220339,
693
+ "grad_norm": 2.468073606491089,
694
+ "learning_rate": 1.8033519553072627e-05,
695
+ "loss": 0.1054,
696
+ "step": 178
697
+ },
698
+ {
699
+ "epoch": 12.0,
700
+ "grad_norm": 2.842026948928833,
701
+ "learning_rate": 1.8011173184357545e-05,
702
+ "loss": 0.1329,
703
+ "step": 180
704
+ },
705
+ {
706
+ "epoch": 12.0,
707
+ "eval_loss": 0.1553211659193039,
708
+ "eval_runtime": 2.5574,
709
+ "eval_samples_per_second": 28.935,
710
+ "eval_steps_per_second": 3.91,
711
+ "step": 180
712
+ },
713
+ {
714
+ "epoch": 12.135593220338983,
715
+ "grad_norm": 2.264756441116333,
716
+ "learning_rate": 1.798882681564246e-05,
717
+ "loss": 0.1426,
718
+ "step": 182
719
+ },
720
+ {
721
+ "epoch": 12.271186440677965,
722
+ "grad_norm": 2.491142511367798,
723
+ "learning_rate": 1.7966480446927377e-05,
724
+ "loss": 0.1656,
725
+ "step": 184
726
+ },
727
+ {
728
+ "epoch": 12.40677966101695,
729
+ "grad_norm": 4.568375587463379,
730
+ "learning_rate": 1.794413407821229e-05,
731
+ "loss": 0.1257,
732
+ "step": 186
733
+ },
734
+ {
735
+ "epoch": 12.542372881355933,
736
+ "grad_norm": 3.9673123359680176,
737
+ "learning_rate": 1.792178770949721e-05,
738
+ "loss": 0.1062,
739
+ "step": 188
740
+ },
741
+ {
742
+ "epoch": 12.677966101694915,
743
+ "grad_norm": 6.7387237548828125,
744
+ "learning_rate": 1.7899441340782123e-05,
745
+ "loss": 0.1287,
746
+ "step": 190
747
+ },
748
+ {
749
+ "epoch": 12.813559322033898,
750
+ "grad_norm": 1.935915231704712,
751
+ "learning_rate": 1.787709497206704e-05,
752
+ "loss": 0.1214,
753
+ "step": 192
754
+ },
755
+ {
756
+ "epoch": 12.94915254237288,
757
+ "grad_norm": 4.558732509613037,
758
+ "learning_rate": 1.7854748603351958e-05,
759
+ "loss": 0.1171,
760
+ "step": 194
761
+ },
762
+ {
763
+ "epoch": 13.067796610169491,
764
+ "grad_norm": 5.550953388214111,
765
+ "learning_rate": 1.7832402234636872e-05,
766
+ "loss": 0.1369,
767
+ "step": 196
768
+ },
769
+ {
770
+ "epoch": 13.203389830508474,
771
+ "grad_norm": 5.132591247558594,
772
+ "learning_rate": 1.781005586592179e-05,
773
+ "loss": 0.1445,
774
+ "step": 198
775
+ },
776
+ {
777
+ "epoch": 13.338983050847457,
778
+ "grad_norm": 2.593233823776245,
779
+ "learning_rate": 1.7787709497206704e-05,
780
+ "loss": 0.1073,
781
+ "step": 200
782
+ },
783
+ {
784
+ "epoch": 13.338983050847457,
785
+ "eval_loss": 0.12146170437335968,
786
+ "eval_runtime": 2.6084,
787
+ "eval_samples_per_second": 28.37,
788
+ "eval_steps_per_second": 3.834,
789
+ "step": 200
790
+ },
791
+ {
792
+ "epoch": 13.474576271186441,
793
+ "grad_norm": 4.441246509552002,
794
+ "learning_rate": 1.776536312849162e-05,
795
+ "loss": 0.1389,
796
+ "step": 202
797
+ },
798
+ {
799
+ "epoch": 13.610169491525424,
800
+ "grad_norm": 5.079741954803467,
801
+ "learning_rate": 1.7743016759776536e-05,
802
+ "loss": 0.0813,
803
+ "step": 204
804
+ },
805
+ {
806
+ "epoch": 13.745762711864407,
807
+ "grad_norm": 8.572298049926758,
808
+ "learning_rate": 1.7720670391061453e-05,
809
+ "loss": 0.1685,
810
+ "step": 206
811
+ },
812
+ {
813
+ "epoch": 13.88135593220339,
814
+ "grad_norm": 4.782704830169678,
815
+ "learning_rate": 1.7698324022346368e-05,
816
+ "loss": 0.0785,
817
+ "step": 208
818
+ },
819
+ {
820
+ "epoch": 14.0,
821
+ "grad_norm": 4.0041327476501465,
822
+ "learning_rate": 1.7675977653631285e-05,
823
+ "loss": 0.0974,
824
+ "step": 210
825
+ },
826
+ {
827
+ "epoch": 14.135593220338983,
828
+ "grad_norm": 4.304680824279785,
829
+ "learning_rate": 1.7653631284916203e-05,
830
+ "loss": 0.0984,
831
+ "step": 212
832
+ },
833
+ {
834
+ "epoch": 14.271186440677965,
835
+ "grad_norm": 4.6230974197387695,
836
+ "learning_rate": 1.763128491620112e-05,
837
+ "loss": 0.1162,
838
+ "step": 214
839
+ },
840
+ {
841
+ "epoch": 14.40677966101695,
842
+ "grad_norm": 3.000336170196533,
843
+ "learning_rate": 1.7608938547486035e-05,
844
+ "loss": 0.1195,
845
+ "step": 216
846
+ },
847
+ {
848
+ "epoch": 14.542372881355933,
849
+ "grad_norm": 3.35532808303833,
850
+ "learning_rate": 1.7586592178770952e-05,
851
+ "loss": 0.0873,
852
+ "step": 218
853
+ },
854
+ {
855
+ "epoch": 14.677966101694915,
856
+ "grad_norm": 5.4840617179870605,
857
+ "learning_rate": 1.7564245810055866e-05,
858
+ "loss": 0.0831,
859
+ "step": 220
860
+ },
861
+ {
862
+ "epoch": 14.677966101694915,
863
+ "eval_loss": 0.10071565955877304,
864
+ "eval_runtime": 2.6542,
865
+ "eval_samples_per_second": 27.881,
866
+ "eval_steps_per_second": 3.768,
867
+ "step": 220
868
+ },
869
+ {
870
+ "epoch": 14.813559322033898,
871
+ "grad_norm": 6.936678886413574,
872
+ "learning_rate": 1.7541899441340784e-05,
873
+ "loss": 0.1007,
874
+ "step": 222
875
+ },
876
+ {
877
+ "epoch": 14.94915254237288,
878
+ "grad_norm": 3.4845359325408936,
879
+ "learning_rate": 1.7519553072625698e-05,
880
+ "loss": 0.1388,
881
+ "step": 224
882
+ },
883
+ {
884
+ "epoch": 15.067796610169491,
885
+ "grad_norm": 3.1045992374420166,
886
+ "learning_rate": 1.7497206703910616e-05,
887
+ "loss": 0.0689,
888
+ "step": 226
889
+ },
890
+ {
891
+ "epoch": 15.203389830508474,
892
+ "grad_norm": 8.256555557250977,
893
+ "learning_rate": 1.747486033519553e-05,
894
+ "loss": 0.1174,
895
+ "step": 228
896
+ },
897
+ {
898
+ "epoch": 15.338983050847457,
899
+ "grad_norm": 7.423788070678711,
900
+ "learning_rate": 1.7452513966480447e-05,
901
+ "loss": 0.1408,
902
+ "step": 230
903
+ },
904
+ {
905
+ "epoch": 15.474576271186441,
906
+ "grad_norm": 6.530277729034424,
907
+ "learning_rate": 1.7430167597765365e-05,
908
+ "loss": 0.1173,
909
+ "step": 232
910
+ },
911
+ {
912
+ "epoch": 15.610169491525424,
913
+ "grad_norm": 2.70352840423584,
914
+ "learning_rate": 1.7407821229050283e-05,
915
+ "loss": 0.0734,
916
+ "step": 234
917
+ },
918
+ {
919
+ "epoch": 15.745762711864407,
920
+ "grad_norm": 5.371959686279297,
921
+ "learning_rate": 1.7385474860335197e-05,
922
+ "loss": 0.1002,
923
+ "step": 236
924
+ },
925
+ {
926
+ "epoch": 15.88135593220339,
927
+ "grad_norm": 5.175782680511475,
928
+ "learning_rate": 1.7363128491620114e-05,
929
+ "loss": 0.0868,
930
+ "step": 238
931
+ },
932
+ {
933
+ "epoch": 16.0,
934
+ "grad_norm": 5.210105895996094,
935
+ "learning_rate": 1.734078212290503e-05,
936
+ "loss": 0.0819,
937
+ "step": 240
938
+ },
939
+ {
940
+ "epoch": 16.0,
941
+ "eval_loss": 0.12177320569753647,
942
+ "eval_runtime": 2.5867,
943
+ "eval_samples_per_second": 28.608,
944
+ "eval_steps_per_second": 3.866,
945
+ "step": 240
946
+ },
947
+ {
948
+ "epoch": 16.135593220338983,
949
+ "grad_norm": 8.700615882873535,
950
+ "learning_rate": 1.7318435754189946e-05,
951
+ "loss": 0.1277,
952
+ "step": 242
953
+ },
954
+ {
955
+ "epoch": 16.271186440677965,
956
+ "grad_norm": 3.056018829345703,
957
+ "learning_rate": 1.729608938547486e-05,
958
+ "loss": 0.0916,
959
+ "step": 244
960
+ },
961
+ {
962
+ "epoch": 16.406779661016948,
963
+ "grad_norm": 7.251372337341309,
964
+ "learning_rate": 1.7273743016759778e-05,
965
+ "loss": 0.0979,
966
+ "step": 246
967
+ },
968
+ {
969
+ "epoch": 16.54237288135593,
970
+ "grad_norm": 6.464871883392334,
971
+ "learning_rate": 1.7251396648044692e-05,
972
+ "loss": 0.12,
973
+ "step": 248
974
+ },
975
+ {
976
+ "epoch": 16.677966101694913,
977
+ "grad_norm": 2.8786838054656982,
978
+ "learning_rate": 1.722905027932961e-05,
979
+ "loss": 0.06,
980
+ "step": 250
981
+ },
982
+ {
983
+ "epoch": 16.8135593220339,
984
+ "grad_norm": 9.285773277282715,
985
+ "learning_rate": 1.7206703910614527e-05,
986
+ "loss": 0.1193,
987
+ "step": 252
988
+ },
989
+ {
990
+ "epoch": 16.949152542372882,
991
+ "grad_norm": 2.4323410987854004,
992
+ "learning_rate": 1.7184357541899445e-05,
993
+ "loss": 0.0925,
994
+ "step": 254
995
+ },
996
+ {
997
+ "epoch": 17.06779661016949,
998
+ "grad_norm": 5.3646345138549805,
999
+ "learning_rate": 1.716201117318436e-05,
1000
+ "loss": 0.0899,
1001
+ "step": 256
1002
+ },
1003
+ {
1004
+ "epoch": 17.203389830508474,
1005
+ "grad_norm": 4.193625450134277,
1006
+ "learning_rate": 1.7139664804469277e-05,
1007
+ "loss": 0.0729,
1008
+ "step": 258
1009
+ },
1010
+ {
1011
+ "epoch": 17.338983050847457,
1012
+ "grad_norm": 3.745112895965576,
1013
+ "learning_rate": 1.711731843575419e-05,
1014
+ "loss": 0.135,
1015
+ "step": 260
1016
+ },
1017
+ {
1018
+ "epoch": 17.338983050847457,
1019
+ "eval_loss": 0.06021345034241676,
1020
+ "eval_runtime": 2.5566,
1021
+ "eval_samples_per_second": 28.945,
1022
+ "eval_steps_per_second": 3.911,
1023
+ "step": 260
1024
+ },
1025
+ {
1026
+ "epoch": 17.47457627118644,
1027
+ "grad_norm": 4.401156425476074,
1028
+ "learning_rate": 1.709497206703911e-05,
1029
+ "loss": 0.0871,
1030
+ "step": 262
1031
+ },
1032
+ {
1033
+ "epoch": 17.610169491525422,
1034
+ "grad_norm": 4.245051383972168,
1035
+ "learning_rate": 1.7072625698324023e-05,
1036
+ "loss": 0.1012,
1037
+ "step": 264
1038
+ },
1039
+ {
1040
+ "epoch": 17.74576271186441,
1041
+ "grad_norm": 5.859309673309326,
1042
+ "learning_rate": 1.705027932960894e-05,
1043
+ "loss": 0.0625,
1044
+ "step": 266
1045
+ },
1046
+ {
1047
+ "epoch": 17.88135593220339,
1048
+ "grad_norm": 2.995872735977173,
1049
+ "learning_rate": 1.7027932960893855e-05,
1050
+ "loss": 0.0993,
1051
+ "step": 268
1052
+ },
1053
+ {
1054
+ "epoch": 18.0,
1055
+ "grad_norm": 3.555217981338501,
1056
+ "learning_rate": 1.7005586592178772e-05,
1057
+ "loss": 0.0898,
1058
+ "step": 270
1059
+ },
1060
+ {
1061
+ "epoch": 18.135593220338983,
1062
+ "grad_norm": 3.3912792205810547,
1063
+ "learning_rate": 1.698324022346369e-05,
1064
+ "loss": 0.1194,
1065
+ "step": 272
1066
+ },
1067
+ {
1068
+ "epoch": 18.271186440677965,
1069
+ "grad_norm": 4.934084892272949,
1070
+ "learning_rate": 1.6960893854748607e-05,
1071
+ "loss": 0.0804,
1072
+ "step": 274
1073
+ },
1074
+ {
1075
+ "epoch": 18.406779661016948,
1076
+ "grad_norm": 5.4431047439575195,
1077
+ "learning_rate": 1.693854748603352e-05,
1078
+ "loss": 0.0815,
1079
+ "step": 276
1080
+ },
1081
+ {
1082
+ "epoch": 18.54237288135593,
1083
+ "grad_norm": 2.607501745223999,
1084
+ "learning_rate": 1.691620111731844e-05,
1085
+ "loss": 0.0997,
1086
+ "step": 278
1087
+ },
1088
+ {
1089
+ "epoch": 18.677966101694913,
1090
+ "grad_norm": 4.231921672821045,
1091
+ "learning_rate": 1.6893854748603353e-05,
1092
+ "loss": 0.0772,
1093
+ "step": 280
1094
+ },
1095
+ {
1096
+ "epoch": 18.677966101694913,
1097
+ "eval_loss": 0.051648449152708054,
1098
+ "eval_runtime": 2.6325,
1099
+ "eval_samples_per_second": 28.11,
1100
+ "eval_steps_per_second": 3.799,
1101
+ "step": 280
1102
+ },
1103
+ {
1104
+ "epoch": 18.8135593220339,
1105
+ "grad_norm": 1.9475698471069336,
1106
+ "learning_rate": 1.687150837988827e-05,
1107
+ "loss": 0.0754,
1108
+ "step": 282
1109
+ },
1110
+ {
1111
+ "epoch": 18.949152542372882,
1112
+ "grad_norm": 5.1709418296813965,
1113
+ "learning_rate": 1.6849162011173185e-05,
1114
+ "loss": 0.1151,
1115
+ "step": 284
1116
+ },
1117
+ {
1118
+ "epoch": 19.06779661016949,
1119
+ "grad_norm": 3.932926893234253,
1120
+ "learning_rate": 1.68268156424581e-05,
1121
+ "loss": 0.0615,
1122
+ "step": 286
1123
+ },
1124
+ {
1125
+ "epoch": 19.203389830508474,
1126
+ "grad_norm": 2.0830068588256836,
1127
+ "learning_rate": 1.6804469273743017e-05,
1128
+ "loss": 0.0857,
1129
+ "step": 288
1130
+ },
1131
+ {
1132
+ "epoch": 19.338983050847457,
1133
+ "grad_norm": 3.0773911476135254,
1134
+ "learning_rate": 1.6782122905027934e-05,
1135
+ "loss": 0.0505,
1136
+ "step": 290
1137
+ },
1138
+ {
1139
+ "epoch": 19.47457627118644,
1140
+ "grad_norm": 3.452148914337158,
1141
+ "learning_rate": 1.6759776536312852e-05,
1142
+ "loss": 0.0925,
1143
+ "step": 292
1144
+ },
1145
+ {
1146
+ "epoch": 19.610169491525422,
1147
+ "grad_norm": 4.790304660797119,
1148
+ "learning_rate": 1.6737430167597766e-05,
1149
+ "loss": 0.1019,
1150
+ "step": 294
1151
+ },
1152
+ {
1153
+ "epoch": 19.74576271186441,
1154
+ "grad_norm": 3.481476068496704,
1155
+ "learning_rate": 1.6715083798882684e-05,
1156
+ "loss": 0.115,
1157
+ "step": 296
1158
+ },
1159
+ {
1160
+ "epoch": 19.88135593220339,
1161
+ "grad_norm": 2.273287057876587,
1162
+ "learning_rate": 1.6692737430167598e-05,
1163
+ "loss": 0.0724,
1164
+ "step": 298
1165
+ },
1166
+ {
1167
+ "epoch": 20.0,
1168
+ "grad_norm": 6.9051032066345215,
1169
+ "learning_rate": 1.6670391061452516e-05,
1170
+ "loss": 0.0945,
1171
+ "step": 300
1172
+ },
1173
+ {
1174
+ "epoch": 20.0,
1175
+ "eval_loss": 0.05212599039077759,
1176
+ "eval_runtime": 2.5523,
1177
+ "eval_samples_per_second": 28.994,
1178
+ "eval_steps_per_second": 3.918,
1179
+ "step": 300
1180
+ },
1181
+ {
1182
+ "epoch": 20.135593220338983,
1183
+ "grad_norm": 5.481380939483643,
1184
+ "learning_rate": 1.664804469273743e-05,
1185
+ "loss": 0.1007,
1186
+ "step": 302
1187
+ },
1188
+ {
1189
+ "epoch": 20.271186440677965,
1190
+ "grad_norm": 3.626795530319214,
1191
+ "learning_rate": 1.6625698324022347e-05,
1192
+ "loss": 0.0892,
1193
+ "step": 304
1194
+ },
1195
+ {
1196
+ "epoch": 20.406779661016948,
1197
+ "grad_norm": 10.506450653076172,
1198
+ "learning_rate": 1.660335195530726e-05,
1199
+ "loss": 0.0812,
1200
+ "step": 306
1201
+ },
1202
+ {
1203
+ "epoch": 20.54237288135593,
1204
+ "grad_norm": 3.4885365962982178,
1205
+ "learning_rate": 1.658100558659218e-05,
1206
+ "loss": 0.0848,
1207
+ "step": 308
1208
+ },
1209
+ {
1210
+ "epoch": 20.677966101694913,
1211
+ "grad_norm": 4.149766445159912,
1212
+ "learning_rate": 1.6558659217877097e-05,
1213
+ "loss": 0.0656,
1214
+ "step": 310
1215
+ },
1216
+ {
1217
+ "epoch": 20.8135593220339,
1218
+ "grad_norm": 3.2135589122772217,
1219
+ "learning_rate": 1.6536312849162014e-05,
1220
+ "loss": 0.0821,
1221
+ "step": 312
1222
+ },
1223
+ {
1224
+ "epoch": 20.949152542372882,
1225
+ "grad_norm": 1.4754184484481812,
1226
+ "learning_rate": 1.651396648044693e-05,
1227
+ "loss": 0.0536,
1228
+ "step": 314
1229
+ },
1230
+ {
1231
+ "epoch": 21.06779661016949,
1232
+ "grad_norm": 4.384027481079102,
1233
+ "learning_rate": 1.6491620111731846e-05,
1234
+ "loss": 0.0478,
1235
+ "step": 316
1236
+ },
1237
+ {
1238
+ "epoch": 21.203389830508474,
1239
+ "grad_norm": 1.7271068096160889,
1240
+ "learning_rate": 1.646927374301676e-05,
1241
+ "loss": 0.0542,
1242
+ "step": 318
1243
+ },
1244
+ {
1245
+ "epoch": 21.338983050847457,
1246
+ "grad_norm": 2.160224199295044,
1247
+ "learning_rate": 1.6446927374301678e-05,
1248
+ "loss": 0.0787,
1249
+ "step": 320
1250
+ },
1251
+ {
1252
+ "epoch": 21.338983050847457,
1253
+ "eval_loss": 0.03173355758190155,
1254
+ "eval_runtime": 2.5476,
1255
+ "eval_samples_per_second": 29.047,
1256
+ "eval_steps_per_second": 3.925,
1257
+ "step": 320
1258
+ },
1259
+ {
1260
+ "epoch": 21.47457627118644,
1261
+ "grad_norm": 1.8456004858016968,
1262
+ "learning_rate": 1.6424581005586592e-05,
1263
+ "loss": 0.0979,
1264
+ "step": 322
1265
+ },
1266
+ {
1267
+ "epoch": 21.610169491525422,
1268
+ "grad_norm": 3.751704216003418,
1269
+ "learning_rate": 1.640223463687151e-05,
1270
+ "loss": 0.0882,
1271
+ "step": 324
1272
+ },
1273
+ {
1274
+ "epoch": 21.74576271186441,
1275
+ "grad_norm": 4.201952934265137,
1276
+ "learning_rate": 1.6379888268156424e-05,
1277
+ "loss": 0.0523,
1278
+ "step": 326
1279
+ },
1280
+ {
1281
+ "epoch": 21.88135593220339,
1282
+ "grad_norm": 2.5336902141571045,
1283
+ "learning_rate": 1.635754189944134e-05,
1284
+ "loss": 0.0594,
1285
+ "step": 328
1286
+ },
1287
+ {
1288
+ "epoch": 22.0,
1289
+ "grad_norm": 1.527018666267395,
1290
+ "learning_rate": 1.633519553072626e-05,
1291
+ "loss": 0.0558,
1292
+ "step": 330
1293
+ },
1294
+ {
1295
+ "epoch": 22.135593220338983,
1296
+ "grad_norm": 6.322342395782471,
1297
+ "learning_rate": 1.6312849162011177e-05,
1298
+ "loss": 0.0708,
1299
+ "step": 332
1300
+ },
1301
+ {
1302
+ "epoch": 22.271186440677965,
1303
+ "grad_norm": 1.7524330615997314,
1304
+ "learning_rate": 1.629050279329609e-05,
1305
+ "loss": 0.0895,
1306
+ "step": 334
1307
+ },
1308
+ {
1309
+ "epoch": 22.406779661016948,
1310
+ "grad_norm": 3.92496395111084,
1311
+ "learning_rate": 1.626815642458101e-05,
1312
+ "loss": 0.0474,
1313
+ "step": 336
1314
+ },
1315
+ {
1316
+ "epoch": 22.54237288135593,
1317
+ "grad_norm": 4.856224536895752,
1318
+ "learning_rate": 1.6245810055865923e-05,
1319
+ "loss": 0.1158,
1320
+ "step": 338
1321
+ },
1322
+ {
1323
+ "epoch": 22.677966101694913,
1324
+ "grad_norm": 8.387490272521973,
1325
+ "learning_rate": 1.622346368715084e-05,
1326
+ "loss": 0.106,
1327
+ "step": 340
1328
+ },
1329
+ {
1330
+ "epoch": 22.677966101694913,
1331
+ "eval_loss": 0.06160757318139076,
1332
+ "eval_runtime": 2.5632,
1333
+ "eval_samples_per_second": 28.87,
1334
+ "eval_steps_per_second": 3.901,
1335
+ "step": 340
1336
+ },
1337
+ {
1338
+ "epoch": 22.8135593220339,
1339
+ "grad_norm": 9.437094688415527,
1340
+ "learning_rate": 1.6201117318435755e-05,
1341
+ "loss": 0.0991,
1342
+ "step": 342
1343
+ },
1344
+ {
1345
+ "epoch": 22.949152542372882,
1346
+ "grad_norm": 2.586557149887085,
1347
+ "learning_rate": 1.6178770949720672e-05,
1348
+ "loss": 0.0508,
1349
+ "step": 344
1350
+ },
1351
+ {
1352
+ "epoch": 23.06779661016949,
1353
+ "grad_norm": 5.563693046569824,
1354
+ "learning_rate": 1.6156424581005586e-05,
1355
+ "loss": 0.0476,
1356
+ "step": 346
1357
+ },
1358
+ {
1359
+ "epoch": 23.203389830508474,
1360
+ "grad_norm": 9.477078437805176,
1361
+ "learning_rate": 1.6134078212290504e-05,
1362
+ "loss": 0.0879,
1363
+ "step": 348
1364
+ },
1365
+ {
1366
+ "epoch": 23.338983050847457,
1367
+ "grad_norm": 8.087209701538086,
1368
+ "learning_rate": 1.611173184357542e-05,
1369
+ "loss": 0.0926,
1370
+ "step": 350
1371
+ },
1372
+ {
1373
+ "epoch": 23.47457627118644,
1374
+ "grad_norm": 5.049620628356934,
1375
+ "learning_rate": 1.6089385474860336e-05,
1376
+ "loss": 0.0984,
1377
+ "step": 352
1378
+ },
1379
+ {
1380
+ "epoch": 23.610169491525422,
1381
+ "grad_norm": 8.416902542114258,
1382
+ "learning_rate": 1.6067039106145253e-05,
1383
+ "loss": 0.0866,
1384
+ "step": 354
1385
+ },
1386
+ {
1387
+ "epoch": 23.74576271186441,
1388
+ "grad_norm": 10.671903610229492,
1389
+ "learning_rate": 1.604469273743017e-05,
1390
+ "loss": 0.0687,
1391
+ "step": 356
1392
+ },
1393
+ {
1394
+ "epoch": 23.88135593220339,
1395
+ "grad_norm": 1.6531108617782593,
1396
+ "learning_rate": 1.6022346368715085e-05,
1397
+ "loss": 0.0441,
1398
+ "step": 358
1399
+ },
1400
+ {
1401
+ "epoch": 24.0,
1402
+ "grad_norm": 7.7953081130981445,
1403
+ "learning_rate": 1.6000000000000003e-05,
1404
+ "loss": 0.1062,
1405
+ "step": 360
1406
+ },
1407
+ {
1408
+ "epoch": 24.0,
1409
+ "eval_loss": 0.053918685764074326,
1410
+ "eval_runtime": 2.5535,
1411
+ "eval_samples_per_second": 28.98,
1412
+ "eval_steps_per_second": 3.916,
1413
+ "step": 360
1414
+ },
1415
+ {
1416
+ "epoch": 24.135593220338983,
1417
+ "grad_norm": 2.7345736026763916,
+ "learning_rate": 1.5977653631284917e-05,
+ "loss": 0.0885,
+ "step": 362
+ },
+ {
+ "epoch": 24.271186440677965,
+ "grad_norm": 3.899789810180664,
+ "learning_rate": 1.5955307262569834e-05,
+ "loss": 0.0293,
+ "step": 364
+ },
+ {
+ "epoch": 24.406779661016948,
+ "grad_norm": 4.242441654205322,
+ "learning_rate": 1.593296089385475e-05,
+ "loss": 0.093,
+ "step": 366
+ },
+ {
+ "epoch": 24.54237288135593,
+ "grad_norm": 2.2398180961608887,
+ "learning_rate": 1.5910614525139666e-05,
+ "loss": 0.0757,
+ "step": 368
+ },
+ {
+ "epoch": 24.677966101694913,
+ "grad_norm": 2.8534092903137207,
+ "learning_rate": 1.5888268156424584e-05,
+ "loss": 0.0666,
+ "step": 370
+ },
+ {
+ "epoch": 24.8135593220339,
+ "grad_norm": 1.8129491806030273,
+ "learning_rate": 1.5865921787709498e-05,
+ "loss": 0.0385,
+ "step": 372
+ },
+ {
+ "epoch": 24.949152542372882,
+ "grad_norm": 5.007038593292236,
+ "learning_rate": 1.5843575418994416e-05,
+ "loss": 0.059,
+ "step": 374
+ },
+ {
+ "epoch": 25.06779661016949,
+ "grad_norm": 2.3928046226501465,
+ "learning_rate": 1.5821229050279333e-05,
+ "loss": 0.035,
+ "step": 376
+ },
+ {
+ "epoch": 25.203389830508474,
+ "grad_norm": 3.3850510120391846,
+ "learning_rate": 1.5798882681564247e-05,
+ "loss": 0.0513,
+ "step": 378
+ },
+ {
+ "epoch": 25.338983050847457,
+ "grad_norm": 6.255110740661621,
+ "learning_rate": 1.577653631284916e-05,
+ "loss": 0.0606,
+ "step": 380
+ },
+ {
+ "epoch": 25.338983050847457,
+ "eval_loss": 0.053671400994062424,
+ "eval_runtime": 2.5639,
+ "eval_samples_per_second": 28.862,
+ "eval_steps_per_second": 3.9,
+ "step": 380
+ },
+ {
+ "epoch": 25.47457627118644,
+ "grad_norm": 5.570713996887207,
+ "learning_rate": 1.575418994413408e-05,
+ "loss": 0.0859,
+ "step": 382
+ },
+ {
+ "epoch": 25.610169491525422,
+ "grad_norm": 2.1054348945617676,
+ "learning_rate": 1.5731843575418993e-05,
+ "loss": 0.0705,
+ "step": 384
+ },
+ {
+ "epoch": 25.74576271186441,
+ "grad_norm": 8.123078346252441,
+ "learning_rate": 1.570949720670391e-05,
+ "loss": 0.0753,
+ "step": 386
+ },
+ {
+ "epoch": 25.88135593220339,
+ "grad_norm": 6.04182243347168,
+ "learning_rate": 1.568715083798883e-05,
+ "loss": 0.0773,
+ "step": 388
+ },
+ {
+ "epoch": 26.0,
+ "grad_norm": 7.443014144897461,
+ "learning_rate": 1.5664804469273743e-05,
+ "loss": 0.0568,
+ "step": 390
+ },
+ {
+ "epoch": 26.135593220338983,
+ "grad_norm": 3.3453972339630127,
+ "learning_rate": 1.564245810055866e-05,
+ "loss": 0.0514,
+ "step": 392
+ },
+ {
+ "epoch": 26.271186440677965,
+ "grad_norm": 4.5492119789123535,
+ "learning_rate": 1.5620111731843578e-05,
+ "loss": 0.0444,
+ "step": 394
+ },
+ {
+ "epoch": 26.406779661016948,
+ "grad_norm": 3.651944637298584,
+ "learning_rate": 1.5597765363128492e-05,
+ "loss": 0.0318,
+ "step": 396
+ },
+ {
+ "epoch": 26.54237288135593,
+ "grad_norm": 3.770928382873535,
+ "learning_rate": 1.557541899441341e-05,
+ "loss": 0.0767,
+ "step": 398
+ },
+ {
+ "epoch": 26.677966101694913,
+ "grad_norm": 2.9299845695495605,
+ "learning_rate": 1.5553072625698324e-05,
+ "loss": 0.0634,
+ "step": 400
+ },
+ {
+ "epoch": 26.677966101694913,
+ "eval_loss": 0.026073100045323372,
+ "eval_runtime": 2.5551,
+ "eval_samples_per_second": 28.961,
+ "eval_steps_per_second": 3.914,
+ "step": 400
+ },
+ {
+ "epoch": 26.8135593220339,
+ "grad_norm": 3.504227876663208,
+ "learning_rate": 1.553072625698324e-05,
+ "loss": 0.0635,
+ "step": 402
+ },
+ {
+ "epoch": 26.949152542372882,
+ "grad_norm": 1.415451169013977,
+ "learning_rate": 1.5508379888268156e-05,
+ "loss": 0.0341,
+ "step": 404
+ },
+ {
+ "epoch": 27.06779661016949,
+ "grad_norm": 3.344877004623413,
+ "learning_rate": 1.5486033519553073e-05,
+ "loss": 0.0748,
+ "step": 406
+ },
+ {
+ "epoch": 27.203389830508474,
+ "grad_norm": 4.419982433319092,
+ "learning_rate": 1.546368715083799e-05,
+ "loss": 0.1079,
+ "step": 408
+ },
+ {
+ "epoch": 27.338983050847457,
+ "grad_norm": 3.3656415939331055,
+ "learning_rate": 1.5441340782122905e-05,
+ "loss": 0.0236,
+ "step": 410
+ },
+ {
+ "epoch": 27.47457627118644,
+ "grad_norm": 5.571065902709961,
+ "learning_rate": 1.5418994413407823e-05,
+ "loss": 0.0633,
+ "step": 412
+ },
+ {
+ "epoch": 27.610169491525422,
+ "grad_norm": 3.140500068664551,
+ "learning_rate": 1.539664804469274e-05,
+ "loss": 0.0278,
+ "step": 414
+ },
+ {
+ "epoch": 27.74576271186441,
+ "grad_norm": 3.674678087234497,
+ "learning_rate": 1.5374301675977654e-05,
+ "loss": 0.0349,
+ "step": 416
+ },
+ {
+ "epoch": 27.88135593220339,
+ "grad_norm": 3.510533332824707,
+ "learning_rate": 1.5351955307262572e-05,
+ "loss": 0.0682,
+ "step": 418
+ },
+ {
+ "epoch": 28.0,
+ "grad_norm": 4.30226469039917,
+ "learning_rate": 1.5329608938547486e-05,
+ "loss": 0.0483,
+ "step": 420
+ },
+ {
+ "epoch": 28.0,
+ "eval_loss": 0.028232913464307785,
+ "eval_runtime": 2.5426,
+ "eval_samples_per_second": 29.104,
+ "eval_steps_per_second": 3.933,
+ "step": 420
+ },
+ {
+ "epoch": 28.135593220338983,
+ "grad_norm": 1.8113818168640137,
+ "learning_rate": 1.5307262569832404e-05,
+ "loss": 0.073,
+ "step": 422
+ },
+ {
+ "epoch": 28.271186440677965,
+ "grad_norm": 2.436624526977539,
+ "learning_rate": 1.5284916201117318e-05,
+ "loss": 0.0324,
+ "step": 424
+ },
+ {
+ "epoch": 28.406779661016948,
+ "grad_norm": 2.096217155456543,
+ "learning_rate": 1.5262569832402236e-05,
+ "loss": 0.0719,
+ "step": 426
+ },
+ {
+ "epoch": 28.54237288135593,
+ "grad_norm": 1.7644083499908447,
+ "learning_rate": 1.5240223463687152e-05,
+ "loss": 0.0361,
+ "step": 428
+ },
+ {
+ "epoch": 28.677966101694913,
+ "grad_norm": 1.688166618347168,
+ "learning_rate": 1.5217877094972069e-05,
+ "loss": 0.051,
+ "step": 430
+ },
+ {
+ "epoch": 28.8135593220339,
+ "grad_norm": 4.347079753875732,
+ "learning_rate": 1.5195530726256983e-05,
+ "loss": 0.0404,
+ "step": 432
+ },
+ {
+ "epoch": 28.949152542372882,
+ "grad_norm": 1.5720326900482178,
+ "learning_rate": 1.5173184357541901e-05,
+ "loss": 0.0234,
+ "step": 434
+ },
+ {
+ "epoch": 29.06779661016949,
+ "grad_norm": 2.3207366466522217,
+ "learning_rate": 1.5150837988826817e-05,
+ "loss": 0.0349,
+ "step": 436
+ },
+ {
+ "epoch": 29.203389830508474,
+ "grad_norm": 5.7105937004089355,
+ "learning_rate": 1.5128491620111734e-05,
+ "loss": 0.0624,
+ "step": 438
+ },
+ {
+ "epoch": 29.338983050847457,
+ "grad_norm": 3.4320545196533203,
+ "learning_rate": 1.5106145251396649e-05,
+ "loss": 0.0275,
+ "step": 440
+ },
+ {
+ "epoch": 29.338983050847457,
+ "eval_loss": 0.018391674384474754,
+ "eval_runtime": 2.5807,
+ "eval_samples_per_second": 28.674,
+ "eval_steps_per_second": 3.875,
+ "step": 440
+ },
+ {
+ "epoch": 29.47457627118644,
+ "grad_norm": 2.861673593521118,
+ "learning_rate": 1.5083798882681566e-05,
+ "loss": 0.0441,
+ "step": 442
+ },
+ {
+ "epoch": 29.610169491525422,
+ "grad_norm": 4.7570343017578125,
+ "learning_rate": 1.5061452513966482e-05,
+ "loss": 0.0461,
+ "step": 444
+ },
+ {
+ "epoch": 29.74576271186441,
+ "grad_norm": 2.971254825592041,
+ "learning_rate": 1.5039106145251398e-05,
+ "loss": 0.0781,
+ "step": 446
+ },
+ {
+ "epoch": 29.88135593220339,
+ "grad_norm": 4.213547706604004,
+ "learning_rate": 1.5016759776536314e-05,
+ "loss": 0.0458,
+ "step": 448
+ },
+ {
+ "epoch": 30.0,
+ "grad_norm": 1.9377760887145996,
+ "learning_rate": 1.4994413407821231e-05,
+ "loss": 0.0237,
+ "step": 450
+ },
+ {
+ "epoch": 30.135593220338983,
+ "grad_norm": 5.132991790771484,
+ "learning_rate": 1.4972067039106146e-05,
+ "loss": 0.0363,
+ "step": 452
+ },
+ {
+ "epoch": 30.271186440677965,
+ "grad_norm": 4.021152496337891,
+ "learning_rate": 1.4949720670391063e-05,
+ "loss": 0.0468,
+ "step": 454
+ },
+ {
+ "epoch": 30.406779661016948,
+ "grad_norm": 2.108609199523926,
+ "learning_rate": 1.492737430167598e-05,
+ "loss": 0.0511,
+ "step": 456
+ },
+ {
+ "epoch": 30.54237288135593,
+ "grad_norm": 7.607340335845947,
+ "learning_rate": 1.4905027932960897e-05,
+ "loss": 0.0496,
+ "step": 458
+ },
+ {
+ "epoch": 30.677966101694913,
+ "grad_norm": 5.113883972167969,
+ "learning_rate": 1.4882681564245811e-05,
+ "loss": 0.0437,
+ "step": 460
+ },
+ {
+ "epoch": 30.677966101694913,
+ "eval_loss": 0.019765684381127357,
+ "eval_runtime": 2.5379,
+ "eval_samples_per_second": 29.158,
+ "eval_steps_per_second": 3.94,
+ "step": 460
+ },
+ {
+ "epoch": 30.8135593220339,
+ "grad_norm": 7.709514617919922,
+ "learning_rate": 1.4860335195530729e-05,
+ "loss": 0.0684,
+ "step": 462
+ },
+ {
+ "epoch": 30.949152542372882,
+ "grad_norm": 10.068818092346191,
+ "learning_rate": 1.4837988826815643e-05,
+ "loss": 0.0848,
+ "step": 464
+ },
+ {
+ "epoch": 31.06779661016949,
+ "grad_norm": 3.7379937171936035,
+ "learning_rate": 1.481564245810056e-05,
+ "loss": 0.0317,
+ "step": 466
+ },
+ {
+ "epoch": 31.203389830508474,
+ "grad_norm": 5.47053337097168,
+ "learning_rate": 1.4793296089385476e-05,
+ "loss": 0.0377,
+ "step": 468
+ },
+ {
+ "epoch": 31.338983050847457,
+ "grad_norm": 10.049771308898926,
+ "learning_rate": 1.4770949720670394e-05,
+ "loss": 0.0693,
+ "step": 470
+ },
+ {
+ "epoch": 31.47457627118644,
+ "grad_norm": 4.554798126220703,
+ "learning_rate": 1.4748603351955308e-05,
+ "loss": 0.0511,
+ "step": 472
+ },
+ {
+ "epoch": 31.610169491525422,
+ "grad_norm": 5.097829341888428,
+ "learning_rate": 1.4726256983240224e-05,
+ "loss": 0.0855,
+ "step": 474
+ },
+ {
+ "epoch": 31.74576271186441,
+ "grad_norm": 5.5600199699401855,
+ "learning_rate": 1.4703910614525141e-05,
+ "loss": 0.0413,
+ "step": 476
+ },
+ {
+ "epoch": 31.88135593220339,
+ "grad_norm": 4.647425174713135,
+ "learning_rate": 1.4681564245810056e-05,
+ "loss": 0.0464,
+ "step": 478
+ },
+ {
+ "epoch": 32.0,
+ "grad_norm": 2.4656901359558105,
+ "learning_rate": 1.4659217877094973e-05,
+ "loss": 0.0301,
+ "step": 480
+ },
+ {
+ "epoch": 32.0,
+ "eval_loss": 0.019590254873037338,
+ "eval_runtime": 2.5488,
+ "eval_samples_per_second": 29.034,
+ "eval_steps_per_second": 3.923,
+ "step": 480
+ },
+ {
+ "epoch": 32.13559322033898,
+ "grad_norm": 3.361614942550659,
+ "learning_rate": 1.463687150837989e-05,
+ "loss": 0.0444,
+ "step": 482
+ },
+ {
+ "epoch": 32.271186440677965,
+ "grad_norm": 2.201188802719116,
+ "learning_rate": 1.4614525139664805e-05,
+ "loss": 0.0607,
+ "step": 484
+ },
+ {
+ "epoch": 32.40677966101695,
+ "grad_norm": 4.552088737487793,
+ "learning_rate": 1.4592178770949721e-05,
+ "loss": 0.0291,
+ "step": 486
+ },
+ {
+ "epoch": 32.54237288135593,
+ "grad_norm": 4.290890216827393,
+ "learning_rate": 1.4569832402234639e-05,
+ "loss": 0.072,
+ "step": 488
+ },
+ {
+ "epoch": 32.67796610169491,
+ "grad_norm": 3.6115856170654297,
+ "learning_rate": 1.4547486033519553e-05,
+ "loss": 0.0493,
+ "step": 490
+ },
+ {
+ "epoch": 32.813559322033896,
+ "grad_norm": 8.543648719787598,
+ "learning_rate": 1.452513966480447e-05,
+ "loss": 0.0608,
+ "step": 492
+ },
+ {
+ "epoch": 32.94915254237288,
+ "grad_norm": 2.269991159439087,
+ "learning_rate": 1.4502793296089386e-05,
+ "loss": 0.0473,
+ "step": 494
+ },
+ {
+ "epoch": 33.067796610169495,
+ "grad_norm": 1.4337422847747803,
+ "learning_rate": 1.4480446927374304e-05,
+ "loss": 0.03,
+ "step": 496
+ },
+ {
+ "epoch": 33.20338983050848,
+ "grad_norm": 8.269601821899414,
+ "learning_rate": 1.4458100558659218e-05,
+ "loss": 0.0571,
+ "step": 498
+ },
+ {
+ "epoch": 33.33898305084746,
+ "grad_norm": 2.0852158069610596,
+ "learning_rate": 1.4435754189944136e-05,
+ "loss": 0.0324,
+ "step": 500
+ },
+ {
+ "epoch": 33.33898305084746,
+ "eval_loss": 0.014804137870669365,
+ "eval_runtime": 2.5555,
+ "eval_samples_per_second": 28.957,
+ "eval_steps_per_second": 3.913,
+ "step": 500
+ },
+ {
+ "epoch": 33.47457627118644,
+ "grad_norm": 1.6408460140228271,
+ "learning_rate": 1.4413407821229052e-05,
+ "loss": 0.0351,
+ "step": 502
+ },
+ {
+ "epoch": 33.610169491525426,
+ "grad_norm": 5.137746810913086,
+ "learning_rate": 1.4391061452513967e-05,
+ "loss": 0.046,
+ "step": 504
+ },
+ {
+ "epoch": 33.74576271186441,
+ "grad_norm": 5.1391777992248535,
+ "learning_rate": 1.4368715083798883e-05,
+ "loss": 0.051,
+ "step": 506
+ },
+ {
+ "epoch": 33.88135593220339,
+ "grad_norm": 1.245812177658081,
+ "learning_rate": 1.4346368715083801e-05,
+ "loss": 0.0436,
+ "step": 508
+ },
+ {
+ "epoch": 34.0,
+ "grad_norm": 9.421274185180664,
+ "learning_rate": 1.4324022346368715e-05,
+ "loss": 0.0351,
+ "step": 510
+ },
+ {
+ "epoch": 34.13559322033898,
+ "grad_norm": 3.474705696105957,
+ "learning_rate": 1.4301675977653633e-05,
+ "loss": 0.0337,
+ "step": 512
+ },
+ {
+ "epoch": 34.271186440677965,
+ "grad_norm": 2.859117031097412,
+ "learning_rate": 1.4279329608938549e-05,
+ "loss": 0.03,
+ "step": 514
+ },
+ {
+ "epoch": 34.40677966101695,
+ "grad_norm": 2.916255235671997,
+ "learning_rate": 1.4256983240223466e-05,
+ "loss": 0.0497,
+ "step": 516
+ },
+ {
+ "epoch": 34.54237288135593,
+ "grad_norm": 4.541175842285156,
+ "learning_rate": 1.423463687150838e-05,
+ "loss": 0.0301,
+ "step": 518
+ },
+ {
+ "epoch": 34.67796610169491,
+ "grad_norm": 3.8101017475128174,
+ "learning_rate": 1.4212290502793298e-05,
+ "loss": 0.0438,
+ "step": 520
+ },
+ {
+ "epoch": 34.67796610169491,
+ "eval_loss": 0.013752175495028496,
+ "eval_runtime": 2.5546,
+ "eval_samples_per_second": 28.968,
+ "eval_steps_per_second": 3.915,
+ "step": 520
+ },
+ {
+ "epoch": 34.813559322033896,
+ "grad_norm": 4.509979724884033,
+ "learning_rate": 1.4189944134078212e-05,
+ "loss": 0.0606,
+ "step": 522
+ },
+ {
+ "epoch": 34.94915254237288,
+ "grad_norm": 4.143325328826904,
+ "learning_rate": 1.416759776536313e-05,
+ "loss": 0.0488,
+ "step": 524
+ },
+ {
+ "epoch": 35.067796610169495,
+ "grad_norm": 3.6379213333129883,
+ "learning_rate": 1.4145251396648046e-05,
+ "loss": 0.0524,
+ "step": 526
+ },
+ {
+ "epoch": 35.20338983050848,
+ "grad_norm": 6.871606826782227,
+ "learning_rate": 1.4122905027932963e-05,
+ "loss": 0.0413,
+ "step": 528
+ },
+ {
+ "epoch": 35.33898305084746,
+ "grad_norm": 3.910445213317871,
+ "learning_rate": 1.4100558659217877e-05,
+ "loss": 0.0514,
+ "step": 530
+ },
+ {
+ "epoch": 35.47457627118644,
+ "grad_norm": 3.3859481811523438,
+ "learning_rate": 1.4078212290502795e-05,
+ "loss": 0.0579,
+ "step": 532
+ },
+ {
+ "epoch": 35.610169491525426,
+ "grad_norm": 1.2161208391189575,
+ "learning_rate": 1.4055865921787711e-05,
+ "loss": 0.0453,
+ "step": 534
+ },
+ {
+ "epoch": 35.74576271186441,
+ "grad_norm": 3.3081114292144775,
+ "learning_rate": 1.4033519553072627e-05,
+ "loss": 0.0435,
+ "step": 536
+ },
+ {
+ "epoch": 35.88135593220339,
+ "grad_norm": 1.9852931499481201,
+ "learning_rate": 1.4011173184357543e-05,
+ "loss": 0.0373,
+ "step": 538
+ },
+ {
+ "epoch": 36.0,
+ "grad_norm": 1.9561396837234497,
+ "learning_rate": 1.398882681564246e-05,
+ "loss": 0.0135,
+ "step": 540
+ },
+ {
+ "epoch": 36.0,
+ "eval_loss": 0.026296433061361313,
+ "eval_runtime": 2.5529,
+ "eval_samples_per_second": 28.986,
+ "eval_steps_per_second": 3.917,
+ "step": 540
+ },
+ {
+ "epoch": 36.13559322033898,
+ "grad_norm": 7.186518669128418,
+ "learning_rate": 1.3966480446927374e-05,
+ "loss": 0.0676,
+ "step": 542
+ },
+ {
+ "epoch": 36.271186440677965,
+ "grad_norm": 4.682290077209473,
+ "learning_rate": 1.3944134078212292e-05,
+ "loss": 0.0873,
+ "step": 544
+ },
+ {
+ "epoch": 36.40677966101695,
+ "grad_norm": 2.860055923461914,
+ "learning_rate": 1.3921787709497208e-05,
+ "loss": 0.0337,
+ "step": 546
+ },
+ {
+ "epoch": 36.54237288135593,
+ "grad_norm": 3.043217658996582,
+ "learning_rate": 1.3899441340782126e-05,
+ "loss": 0.0416,
+ "step": 548
+ },
+ {
+ "epoch": 36.67796610169491,
+ "grad_norm": 3.08353328704834,
+ "learning_rate": 1.387709497206704e-05,
+ "loss": 0.0424,
+ "step": 550
+ },
+ {
+ "epoch": 36.813559322033896,
+ "grad_norm": 2.2683467864990234,
+ "learning_rate": 1.3854748603351957e-05,
+ "loss": 0.0312,
+ "step": 552
+ },
+ {
+ "epoch": 36.94915254237288,
+ "grad_norm": 6.045701503753662,
+ "learning_rate": 1.3832402234636873e-05,
+ "loss": 0.0351,
+ "step": 554
+ },
+ {
+ "epoch": 37.067796610169495,
+ "grad_norm": 2.3876640796661377,
+ "learning_rate": 1.3810055865921789e-05,
+ "loss": 0.0319,
+ "step": 556
+ },
+ {
+ "epoch": 37.20338983050848,
+ "grad_norm": 2.451681613922119,
+ "learning_rate": 1.3787709497206705e-05,
+ "loss": 0.0582,
+ "step": 558
+ },
+ {
+ "epoch": 37.33898305084746,
+ "grad_norm": 8.542646408081055,
+ "learning_rate": 1.3765363128491623e-05,
+ "loss": 0.1046,
+ "step": 560
+ },
+ {
+ "epoch": 37.33898305084746,
+ "eval_loss": 0.06252755224704742,
+ "eval_runtime": 2.5598,
+ "eval_samples_per_second": 28.909,
+ "eval_steps_per_second": 3.907,
+ "step": 560
+ },
+ {
+ "epoch": 37.47457627118644,
+ "grad_norm": 9.273839950561523,
+ "learning_rate": 1.3743016759776537e-05,
+ "loss": 0.1107,
+ "step": 562
+ },
+ {
+ "epoch": 37.610169491525426,
+ "grad_norm": 5.178836345672607,
+ "learning_rate": 1.3720670391061454e-05,
+ "loss": 0.0468,
+ "step": 564
+ },
+ {
+ "epoch": 37.74576271186441,
+ "grad_norm": 3.0488035678863525,
+ "learning_rate": 1.369832402234637e-05,
+ "loss": 0.0246,
+ "step": 566
+ },
+ {
+ "epoch": 37.88135593220339,
+ "grad_norm": 20.631141662597656,
+ "learning_rate": 1.3675977653631284e-05,
+ "loss": 0.0696,
+ "step": 568
+ },
+ {
+ "epoch": 38.0,
+ "grad_norm": 1.3084431886672974,
+ "learning_rate": 1.3653631284916202e-05,
+ "loss": 0.0248,
+ "step": 570
+ },
+ {
+ "epoch": 38.13559322033898,
+ "grad_norm": 5.100032329559326,
+ "learning_rate": 1.3631284916201118e-05,
+ "loss": 0.0595,
+ "step": 572
+ },
+ {
+ "epoch": 38.271186440677965,
+ "grad_norm": 3.8755807876586914,
+ "learning_rate": 1.3608938547486034e-05,
+ "loss": 0.0406,
+ "step": 574
+ },
+ {
+ "epoch": 38.40677966101695,
+ "grad_norm": 4.9192585945129395,
+ "learning_rate": 1.358659217877095e-05,
+ "loss": 0.0485,
+ "step": 576
+ },
+ {
+ "epoch": 38.54237288135593,
+ "grad_norm": 3.2827744483947754,
+ "learning_rate": 1.3564245810055867e-05,
+ "loss": 0.0596,
+ "step": 578
+ },
+ {
+ "epoch": 38.67796610169491,
+ "grad_norm": 10.84189224243164,
+ "learning_rate": 1.3541899441340782e-05,
+ "loss": 0.0576,
+ "step": 580
+ },
+ {
+ "epoch": 38.67796610169491,
+ "eval_loss": 0.042270395904779434,
+ "eval_runtime": 2.549,
+ "eval_samples_per_second": 29.031,
+ "eval_steps_per_second": 3.923,
+ "step": 580
+ },
+ {
+ "epoch": 38.813559322033896,
+ "grad_norm": 7.526885986328125,
+ "learning_rate": 1.3519553072625699e-05,
+ "loss": 0.0663,
+ "step": 582
+ },
+ {
+ "epoch": 38.94915254237288,
+ "grad_norm": 7.868960380554199,
+ "learning_rate": 1.3497206703910615e-05,
+ "loss": 0.0476,
+ "step": 584
+ },
+ {
+ "epoch": 39.067796610169495,
+ "grad_norm": 0.8654645085334778,
+ "learning_rate": 1.3474860335195533e-05,
+ "loss": 0.0265,
+ "step": 586
+ },
+ {
+ "epoch": 39.20338983050848,
+ "grad_norm": 5.453415393829346,
+ "learning_rate": 1.3452513966480447e-05,
+ "loss": 0.0443,
+ "step": 588
+ },
+ {
+ "epoch": 39.33898305084746,
+ "grad_norm": 3.8741185665130615,
+ "learning_rate": 1.3430167597765364e-05,
+ "loss": 0.052,
+ "step": 590
+ },
+ {
+ "epoch": 39.47457627118644,
+ "grad_norm": 6.330420017242432,
+ "learning_rate": 1.340782122905028e-05,
+ "loss": 0.0562,
+ "step": 592
+ },
+ {
+ "epoch": 39.610169491525426,
+ "grad_norm": 7.795945167541504,
+ "learning_rate": 1.3385474860335196e-05,
+ "loss": 0.0588,
+ "step": 594
+ },
+ {
+ "epoch": 39.74576271186441,
+ "grad_norm": 5.596314430236816,
+ "learning_rate": 1.3363128491620112e-05,
+ "loss": 0.0683,
+ "step": 596
+ },
+ {
+ "epoch": 39.88135593220339,
+ "grad_norm": 5.971983909606934,
+ "learning_rate": 1.334078212290503e-05,
+ "loss": 0.068,
+ "step": 598
+ },
+ {
+ "epoch": 40.0,
+ "grad_norm": 2.094987154006958,
+ "learning_rate": 1.3318435754189944e-05,
+ "loss": 0.0148,
+ "step": 600
+ },
+ {
+ "epoch": 40.0,
+ "eval_loss": 0.014860520139336586,
+ "eval_runtime": 2.5519,
+ "eval_samples_per_second": 28.998,
+ "eval_steps_per_second": 3.919,
+ "step": 600
+ },
+ {
+ "epoch": 40.13559322033898,
+ "grad_norm": 6.481432914733887,
+ "learning_rate": 1.3296089385474861e-05,
+ "loss": 0.0723,
+ "step": 602
+ },
+ {
+ "epoch": 40.271186440677965,
+ "grad_norm": 7.206397533416748,
+ "learning_rate": 1.3273743016759777e-05,
+ "loss": 0.0458,
+ "step": 604
+ },
+ {
+ "epoch": 40.40677966101695,
+ "grad_norm": 4.1349382400512695,
+ "learning_rate": 1.3251396648044695e-05,
+ "loss": 0.0648,
+ "step": 606
+ },
+ {
+ "epoch": 40.54237288135593,
+ "grad_norm": 4.141824245452881,
+ "learning_rate": 1.322905027932961e-05,
+ "loss": 0.0387,
+ "step": 608
+ },
+ {
+ "epoch": 40.67796610169491,
+ "grad_norm": 6.174028396606445,
+ "learning_rate": 1.3206703910614527e-05,
+ "loss": 0.0609,
+ "step": 610
+ },
+ {
+ "epoch": 40.813559322033896,
+ "grad_norm": 4.508706092834473,
+ "learning_rate": 1.3184357541899443e-05,
+ "loss": 0.0532,
+ "step": 612
+ },
+ {
+ "epoch": 40.94915254237288,
+ "grad_norm": 3.444021224975586,
+ "learning_rate": 1.3162011173184359e-05,
+ "loss": 0.0555,
+ "step": 614
+ },
+ {
+ "epoch": 41.067796610169495,
+ "grad_norm": 6.984348773956299,
+ "learning_rate": 1.3139664804469274e-05,
+ "loss": 0.0613,
+ "step": 616
+ },
+ {
+ "epoch": 41.20338983050848,
+ "grad_norm": 9.282891273498535,
+ "learning_rate": 1.3117318435754192e-05,
+ "loss": 0.089,
+ "step": 618
+ },
+ {
+ "epoch": 41.33898305084746,
+ "grad_norm": 1.716286540031433,
+ "learning_rate": 1.3094972067039106e-05,
+ "loss": 0.0597,
+ "step": 620
+ },
+ {
+ "epoch": 41.33898305084746,
+ "eval_loss": 0.037027593702077866,
+ "eval_runtime": 2.55,
+ "eval_samples_per_second": 29.02,
+ "eval_steps_per_second": 3.922,
+ "step": 620
+ },
+ {
+ "epoch": 41.47457627118644,
+ "grad_norm": 7.117356300354004,
+ "learning_rate": 1.3072625698324024e-05,
+ "loss": 0.0321,
+ "step": 622
+ },
+ {
+ "epoch": 41.610169491525426,
+ "grad_norm": 10.987210273742676,
+ "learning_rate": 1.305027932960894e-05,
+ "loss": 0.0517,
+ "step": 624
+ },
+ {
+ "epoch": 41.74576271186441,
+ "grad_norm": 4.5963969230651855,
+ "learning_rate": 1.3027932960893857e-05,
+ "loss": 0.0711,
+ "step": 626
+ },
+ {
+ "epoch": 41.88135593220339,
+ "grad_norm": 3.3372626304626465,
+ "learning_rate": 1.3005586592178771e-05,
+ "loss": 0.0478,
+ "step": 628
+ },
+ {
+ "epoch": 42.0,
+ "grad_norm": 7.7206501960754395,
+ "learning_rate": 1.2983240223463689e-05,
+ "loss": 0.0666,
+ "step": 630
+ },
+ {
+ "epoch": 42.13559322033898,
+ "grad_norm": 5.320857048034668,
+ "learning_rate": 1.2960893854748603e-05,
+ "loss": 0.0642,
+ "step": 632
+ },
+ {
+ "epoch": 42.271186440677965,
+ "grad_norm": 1.0182682275772095,
+ "learning_rate": 1.2938547486033521e-05,
+ "loss": 0.0136,
+ "step": 634
+ },
+ {
+ "epoch": 42.40677966101695,
+ "grad_norm": 12.313161849975586,
+ "learning_rate": 1.2916201117318437e-05,
+ "loss": 0.0592,
+ "step": 636
+ },
+ {
+ "epoch": 42.54237288135593,
+ "grad_norm": 10.050963401794434,
+ "learning_rate": 1.2893854748603354e-05,
+ "loss": 0.0676,
+ "step": 638
+ },
+ {
+ "epoch": 42.67796610169491,
+ "grad_norm": 4.61830472946167,
+ "learning_rate": 1.2871508379888269e-05,
+ "loss": 0.0609,
+ "step": 640
+ },
+ {
+ "epoch": 42.67796610169491,
+ "eval_loss": 0.007570674177259207,
+ "eval_runtime": 2.5703,
+ "eval_samples_per_second": 28.79,
+ "eval_steps_per_second": 3.891,
+ "step": 640
+ },
+ {
+ "epoch": 42.813559322033896,
+ "grad_norm": 6.831172943115234,
+ "learning_rate": 1.2849162011173186e-05,
+ "loss": 0.0619,
+ "step": 642
+ },
+ {
+ "epoch": 42.94915254237288,
+ "grad_norm": 6.174488067626953,
+ "learning_rate": 1.2826815642458102e-05,
+ "loss": 0.0574,
+ "step": 644
+ },
+ {
+ "epoch": 43.067796610169495,
+ "grad_norm": 3.5727338790893555,
+ "learning_rate": 1.2804469273743018e-05,
+ "loss": 0.0631,
+ "step": 646
+ },
+ {
+ "epoch": 43.20338983050848,
+ "grad_norm": 8.358489036560059,
+ "learning_rate": 1.2782122905027934e-05,
+ "loss": 0.0511,
+ "step": 648
+ },
+ {
+ "epoch": 43.33898305084746,
+ "grad_norm": 9.393823623657227,
+ "learning_rate": 1.2759776536312851e-05,
+ "loss": 0.0464,
+ "step": 650
+ },
+ {
+ "epoch": 43.47457627118644,
+ "grad_norm": 3.770444631576538,
+ "learning_rate": 1.2737430167597766e-05,
+ "loss": 0.0639,
+ "step": 652
+ },
+ {
+ "epoch": 43.610169491525426,
+ "grad_norm": 6.276406764984131,
+ "learning_rate": 1.2715083798882683e-05,
+ "loss": 0.0387,
+ "step": 654
+ },
+ {
+ "epoch": 43.74576271186441,
+ "grad_norm": 10.378983497619629,
+ "learning_rate": 1.2692737430167599e-05,
+ "loss": 0.0651,
+ "step": 656
+ },
+ {
+ "epoch": 43.88135593220339,
+ "grad_norm": 4.60952091217041,
+ "learning_rate": 1.2670391061452517e-05,
+ "loss": 0.0934,
+ "step": 658
+ },
+ {
+ "epoch": 44.0,
+ "grad_norm": 4.878607749938965,
+ "learning_rate": 1.2648044692737431e-05,
+ "loss": 0.0248,
+ "step": 660
+ },
+ {
+ "epoch": 44.0,
+ "eval_loss": 0.04451555758714676,
+ "eval_runtime": 2.5401,
+ "eval_samples_per_second": 29.133,
+ "eval_steps_per_second": 3.937,
+ "step": 660
+ },
+ {
+ "epoch": 44.13559322033898,
+ "grad_norm": 6.286678314208984,
+ "learning_rate": 1.2625698324022347e-05,
+ "loss": 0.072,
+ "step": 662
+ },
+ {
+ "epoch": 44.271186440677965,
+ "grad_norm": 13.918813705444336,
+ "learning_rate": 1.2603351955307264e-05,
+ "loss": 0.1053,
+ "step": 664
+ },
+ {
+ "epoch": 44.40677966101695,
+ "grad_norm": 2.6223111152648926,
+ "learning_rate": 1.2581005586592179e-05,
+ "loss": 0.0576,
+ "step": 666
+ },
+ {
+ "epoch": 44.54237288135593,
+ "grad_norm": 5.414268493652344,
+ "learning_rate": 1.2558659217877096e-05,
+ "loss": 0.0953,
+ "step": 668
+ },
+ {
+ "epoch": 44.67796610169491,
+ "grad_norm": 13.318530082702637,
+ "learning_rate": 1.253631284916201e-05,
+ "loss": 0.0835,
+ "step": 670
+ },
+ {
+ "epoch": 44.813559322033896,
+ "grad_norm": 9.664336204528809,
+ "learning_rate": 1.2513966480446928e-05,
+ "loss": 0.07,
+ "step": 672
+ },
+ {
+ "epoch": 44.94915254237288,
+ "grad_norm": 2.9403374195098877,
+ "learning_rate": 1.2491620111731844e-05,
+ "loss": 0.032,
+ "step": 674
+ },
+ {
+ "epoch": 45.067796610169495,
+ "grad_norm": 4.393988609313965,
+ "learning_rate": 1.2469273743016761e-05,
+ "loss": 0.0497,
+ "step": 676
+ },
+ {
+ "epoch": 45.20338983050848,
+ "grad_norm": 3.4139366149902344,
+ "learning_rate": 1.2446927374301676e-05,
+ "loss": 0.0284,
+ "step": 678
+ },
+ {
+ "epoch": 45.33898305084746,
+ "grad_norm": 4.1256937980651855,
+ "learning_rate": 1.2424581005586593e-05,
+ "loss": 0.0507,
+ "step": 680
+ },
+ {
+ "epoch": 45.33898305084746,
+ "eval_loss": 0.01075062993913889,
+ "eval_runtime": 2.5345,
+ "eval_samples_per_second": 29.197,
+ "eval_steps_per_second": 3.946,
+ "step": 680
+ },
+ {
+ "epoch": 45.47457627118644,
+ "grad_norm": 4.633230686187744,
+ "learning_rate": 1.2402234636871509e-05,
+ "loss": 0.0698,
+ "step": 682
+ },
+ {
+ "epoch": 45.610169491525426,
+ "grad_norm": 1.5590040683746338,
+ "learning_rate": 1.2379888268156425e-05,
+ "loss": 0.0303,
+ "step": 684
+ },
+ {
+ "epoch": 45.74576271186441,
+ "grad_norm": 4.279468536376953,
+ "learning_rate": 1.2357541899441341e-05,
+ "loss": 0.049,
+ "step": 686
+ },
+ {
+ "epoch": 45.88135593220339,
+ "grad_norm": 8.833124160766602,
+ "learning_rate": 1.2335195530726258e-05,
+ "loss": 0.0702,
+ "step": 688
+ },
+ {
+ "epoch": 46.0,
+ "grad_norm": 4.080725193023682,
+ "learning_rate": 1.2312849162011173e-05,
+ "loss": 0.0499,
+ "step": 690
+ },
+ {
+ "epoch": 46.13559322033898,
+ "grad_norm": 2.91790771484375,
+ "learning_rate": 1.229050279329609e-05,
+ "loss": 0.0527,
+ "step": 692
+ },
+ {
+ "epoch": 46.271186440677965,
+ "grad_norm": 5.005844593048096,
+ "learning_rate": 1.2268156424581006e-05,
+ "loss": 0.0464,
+ "step": 694
+ },
+ {
+ "epoch": 46.40677966101695,
+ "grad_norm": 1.9500732421875,
+ "learning_rate": 1.2245810055865924e-05,
+ "loss": 0.084,
+ "step": 696
+ },
+ {
+ "epoch": 46.54237288135593,
+ "grad_norm": 2.756706476211548,
+ "learning_rate": 1.2223463687150838e-05,
+ "loss": 0.0337,
+ "step": 698
+ },
+ {
+ "epoch": 46.67796610169491,
+ "grad_norm": 4.091094017028809,
+ "learning_rate": 1.2201117318435756e-05,
+ "loss": 0.0591,
+ "step": 700
+ },
+ {
+ "epoch": 46.67796610169491,
+ "eval_loss": 0.016965001821517944,
+ "eval_runtime": 2.5344,
+ "eval_samples_per_second": 29.199,
+ "eval_steps_per_second": 3.946,
+ "step": 700
+ },
+ {
+ "epoch": 46.813559322033896,
+ "grad_norm": 8.62890625,
+ "learning_rate": 1.2178770949720671e-05,
+ "loss": 0.0932,
+ "step": 702
+ },
+ {
+ "epoch": 46.94915254237288,
+ "grad_norm": 5.938720703125,
+ "learning_rate": 1.2156424581005587e-05,
+ "loss": 0.0534,
+ "step": 704
+ },
+ {
+ "epoch": 47.067796610169495,
+ "grad_norm": 7.693348407745361,
+ "learning_rate": 1.2134078212290503e-05,
+ "loss": 0.0807,
+ "step": 706
+ },
+ {
+ "epoch": 47.20338983050848,
+ "grad_norm": 7.366421222686768,
+ "learning_rate": 1.211173184357542e-05,
+ "loss": 0.0733,
+ "step": 708
+ },
+ {
+ "epoch": 47.33898305084746,
+ "grad_norm": 4.7935662269592285,
+ "learning_rate": 1.2089385474860335e-05,
+ "loss": 0.0419,
+ "step": 710
+ },
+ {
+ "epoch": 47.47457627118644,
+ "grad_norm": 5.062975883483887,
+ "learning_rate": 1.2067039106145253e-05,
+ "loss": 0.0824,
+ "step": 712
+ },
+ {
+ "epoch": 47.610169491525426,
+ "grad_norm": 1.7458360195159912,
+ "learning_rate": 1.2044692737430169e-05,
+ "loss": 0.0395,
2788
+ "step": 714
2789
+ },
2790
+ {
2791
+ "epoch": 47.74576271186441,
2792
+ "grad_norm": 7.176196575164795,
2793
+ "learning_rate": 1.2022346368715086e-05,
2794
+ "loss": 0.064,
2795
+ "step": 716
2796
+ },
2797
+ {
2798
+ "epoch": 47.88135593220339,
2799
+ "grad_norm": 2.629854679107666,
2800
+ "learning_rate": 1.2e-05,
2801
+ "loss": 0.0467,
2802
+ "step": 718
2803
+ },
2804
+ {
2805
+ "epoch": 48.0,
2806
+ "grad_norm": 2.657071352005005,
2807
+ "learning_rate": 1.1977653631284918e-05,
2808
+ "loss": 0.0508,
2809
+ "step": 720
2810
+ },
2811
+ {
2812
+ "epoch": 48.0,
2813
+ "eval_loss": 0.031947944313287735,
2814
+ "eval_runtime": 2.5581,
2815
+ "eval_samples_per_second": 28.928,
2816
+ "eval_steps_per_second": 3.909,
2817
+ "step": 720
2818
+ },
2819
+ {
+ "epoch": 48.13559322033898,
+ "grad_norm": 5.297333240509033,
+ "learning_rate": 1.1955307262569834e-05,
+ "loss": 0.0621,
+ "step": 722
+ },
+ {
+ "epoch": 48.271186440677965,
+ "grad_norm": 1.733160138130188,
+ "learning_rate": 1.193296089385475e-05,
+ "loss": 0.0377,
+ "step": 724
+ },
+ {
+ "epoch": 48.40677966101695,
+ "grad_norm": 5.1290364265441895,
+ "learning_rate": 1.1910614525139666e-05,
+ "loss": 0.045,
+ "step": 726
+ },
+ {
+ "epoch": 48.54237288135593,
+ "grad_norm": 6.375577926635742,
+ "learning_rate": 1.1888268156424583e-05,
+ "loss": 0.0711,
+ "step": 728
+ },
+ {
+ "epoch": 48.67796610169491,
+ "grad_norm": 6.196961879730225,
+ "learning_rate": 1.1865921787709497e-05,
+ "loss": 0.0617,
+ "step": 730
+ },
+ {
+ "epoch": 48.813559322033896,
+ "grad_norm": 10.0493803024292,
+ "learning_rate": 1.1843575418994415e-05,
+ "loss": 0.0939,
+ "step": 732
+ },
+ {
+ "epoch": 48.94915254237288,
+ "grad_norm": 2.9289793968200684,
+ "learning_rate": 1.1821229050279331e-05,
+ "loss": 0.0391,
+ "step": 734
+ },
+ {
+ "epoch": 49.067796610169495,
+ "grad_norm": 6.385377883911133,
+ "learning_rate": 1.1798882681564248e-05,
+ "loss": 0.1002,
+ "step": 736
+ },
+ {
+ "epoch": 49.20338983050848,
+ "grad_norm": 4.037416458129883,
+ "learning_rate": 1.1776536312849163e-05,
+ "loss": 0.057,
+ "step": 738
+ },
+ {
+ "epoch": 49.33898305084746,
+ "grad_norm": 3.304706335067749,
+ "learning_rate": 1.175418994413408e-05,
+ "loss": 0.0541,
+ "step": 740
+ },
+ {
+ "epoch": 49.33898305084746,
+ "eval_loss": 0.01124641951173544,
+ "eval_runtime": 2.5488,
+ "eval_samples_per_second": 29.033,
+ "eval_steps_per_second": 3.923,
+ "step": 740
+ },
+ {
+ "epoch": 49.47457627118644,
+ "grad_norm": 4.556395053863525,
+ "learning_rate": 1.1731843575418994e-05,
+ "loss": 0.0832,
+ "step": 742
+ },
+ {
+ "epoch": 49.610169491525426,
+ "grad_norm": 2.6618998050689697,
+ "learning_rate": 1.1709497206703912e-05,
+ "loss": 0.0424,
+ "step": 744
+ },
+ {
+ "epoch": 49.74576271186441,
+ "grad_norm": 3.9990556240081787,
+ "learning_rate": 1.1687150837988828e-05,
+ "loss": 0.0402,
+ "step": 746
+ },
+ {
+ "epoch": 49.88135593220339,
+ "grad_norm": 6.526775360107422,
+ "learning_rate": 1.1664804469273745e-05,
+ "loss": 0.0657,
+ "step": 748
+ },
+ {
+ "epoch": 50.0,
+ "grad_norm": 5.300907135009766,
+ "learning_rate": 1.164245810055866e-05,
+ "loss": 0.1074,
+ "step": 750
+ },
+ {
+ "epoch": 50.13559322033898,
+ "grad_norm": 4.9466872215271,
+ "learning_rate": 1.1620111731843577e-05,
+ "loss": 0.0555,
+ "step": 752
+ },
+ {
+ "epoch": 50.271186440677965,
+ "grad_norm": 10.109463691711426,
+ "learning_rate": 1.1597765363128493e-05,
+ "loss": 0.1021,
+ "step": 754
+ },
+ {
+ "epoch": 50.40677966101695,
+ "grad_norm": 4.380556583404541,
+ "learning_rate": 1.1575418994413407e-05,
+ "loss": 0.0554,
+ "step": 756
+ },
+ {
+ "epoch": 50.54237288135593,
+ "grad_norm": 9.943109512329102,
+ "learning_rate": 1.1553072625698325e-05,
+ "loss": 0.0853,
+ "step": 758
+ },
+ {
+ "epoch": 50.67796610169491,
+ "grad_norm": 5.02907657623291,
+ "learning_rate": 1.1530726256983241e-05,
+ "loss": 0.0654,
+ "step": 760
+ },
+ {
+ "epoch": 50.67796610169491,
+ "eval_loss": 0.01798735000193119,
+ "eval_runtime": 2.5967,
+ "eval_samples_per_second": 28.498,
+ "eval_steps_per_second": 3.851,
+ "step": 760
+ },
+ {
+ "epoch": 50.813559322033896,
+ "grad_norm": 3.1131908893585205,
+ "learning_rate": 1.1508379888268157e-05,
+ "loss": 0.0909,
+ "step": 762
+ },
+ {
+ "epoch": 50.94915254237288,
+ "grad_norm": 9.560342788696289,
+ "learning_rate": 1.1486033519553073e-05,
+ "loss": 0.0705,
+ "step": 764
+ },
+ {
+ "epoch": 51.067796610169495,
+ "grad_norm": 6.878483772277832,
+ "learning_rate": 1.146368715083799e-05,
+ "loss": 0.0432,
+ "step": 766
+ },
+ {
+ "epoch": 51.20338983050848,
+ "grad_norm": 5.553246974945068,
+ "learning_rate": 1.1441340782122904e-05,
+ "loss": 0.0632,
+ "step": 768
+ },
+ {
+ "epoch": 51.33898305084746,
+ "grad_norm": 3.580183267593384,
+ "learning_rate": 1.1418994413407822e-05,
+ "loss": 0.0991,
+ "step": 770
+ },
+ {
+ "epoch": 51.47457627118644,
+ "grad_norm": 3.993243455886841,
+ "learning_rate": 1.1396648044692738e-05,
+ "loss": 0.0537,
+ "step": 772
+ },
+ {
+ "epoch": 51.610169491525426,
+ "grad_norm": 4.740231037139893,
+ "learning_rate": 1.1374301675977656e-05,
+ "loss": 0.0993,
+ "step": 774
+ },
+ {
+ "epoch": 51.74576271186441,
+ "grad_norm": 5.158156394958496,
+ "learning_rate": 1.135195530726257e-05,
+ "loss": 0.0674,
+ "step": 776
+ },
+ {
+ "epoch": 51.88135593220339,
+ "grad_norm": 1.6429027318954468,
+ "learning_rate": 1.1329608938547487e-05,
+ "loss": 0.0522,
+ "step": 778
+ },
+ {
+ "epoch": 52.0,
+ "grad_norm": 2.0988657474517822,
+ "learning_rate": 1.1307262569832402e-05,
+ "loss": 0.0566,
+ "step": 780
+ },
+ {
+ "epoch": 52.0,
+ "eval_loss": 0.013120992109179497,
+ "eval_runtime": 2.5626,
+ "eval_samples_per_second": 28.877,
+ "eval_steps_per_second": 3.902,
+ "step": 780
+ },
+ {
+ "epoch": 52.13559322033898,
+ "grad_norm": 2.3040082454681396,
+ "learning_rate": 1.1284916201117319e-05,
+ "loss": 0.0665,
+ "step": 782
+ },
+ {
+ "epoch": 52.271186440677965,
+ "grad_norm": 4.124909400939941,
+ "learning_rate": 1.1262569832402235e-05,
+ "loss": 0.0676,
+ "step": 784
+ },
+ {
+ "epoch": 52.40677966101695,
+ "grad_norm": 2.7234182357788086,
+ "learning_rate": 1.1240223463687153e-05,
+ "loss": 0.0707,
+ "step": 786
+ },
+ {
+ "epoch": 52.54237288135593,
+ "grad_norm": 1.9680346250534058,
+ "learning_rate": 1.1217877094972067e-05,
+ "loss": 0.0756,
+ "step": 788
+ },
+ {
+ "epoch": 52.67796610169491,
+ "grad_norm": 3.0284550189971924,
+ "learning_rate": 1.1195530726256984e-05,
+ "loss": 0.0762,
+ "step": 790
+ },
+ {
+ "epoch": 52.813559322033896,
+ "grad_norm": 4.520724296569824,
+ "learning_rate": 1.11731843575419e-05,
+ "loss": 0.0467,
+ "step": 792
+ },
+ {
+ "epoch": 52.94915254237288,
+ "grad_norm": 3.205146312713623,
+ "learning_rate": 1.1150837988826818e-05,
+ "loss": 0.0612,
+ "step": 794
+ },
+ {
+ "epoch": 53.067796610169495,
+ "grad_norm": 4.275065898895264,
+ "learning_rate": 1.1128491620111732e-05,
+ "loss": 0.0743,
+ "step": 796
+ },
+ {
+ "epoch": 53.20338983050848,
+ "grad_norm": 4.333241939544678,
+ "learning_rate": 1.110614525139665e-05,
+ "loss": 0.0873,
+ "step": 798
+ },
+ {
+ "epoch": 53.33898305084746,
+ "grad_norm": 5.192393779754639,
+ "learning_rate": 1.1083798882681564e-05,
+ "loss": 0.0542,
+ "step": 800
+ },
+ {
+ "epoch": 53.33898305084746,
+ "eval_loss": 0.008098521269857883,
+ "eval_runtime": 2.5376,
+ "eval_samples_per_second": 29.162,
+ "eval_steps_per_second": 3.941,
+ "step": 800
+ },
+ {
+ "epoch": 53.47457627118644,
+ "grad_norm": 7.420374870300293,
+ "learning_rate": 1.1061452513966481e-05,
+ "loss": 0.0784,
+ "step": 802
+ },
+ {
+ "epoch": 53.610169491525426,
+ "grad_norm": 2.5742266178131104,
+ "learning_rate": 1.1039106145251397e-05,
+ "loss": 0.0539,
+ "step": 804
+ },
+ {
+ "epoch": 53.74576271186441,
+ "grad_norm": 4.041259765625,
+ "learning_rate": 1.1016759776536315e-05,
+ "loss": 0.0642,
+ "step": 806
+ },
+ {
+ "epoch": 53.88135593220339,
+ "grad_norm": 4.20781135559082,
+ "learning_rate": 1.0994413407821229e-05,
+ "loss": 0.0756,
+ "step": 808
+ },
+ {
+ "epoch": 54.0,
+ "grad_norm": 2.9392924308776855,
+ "learning_rate": 1.0972067039106147e-05,
+ "loss": 0.0348,
+ "step": 810
+ },
+ {
+ "epoch": 54.13559322033898,
+ "grad_norm": 1.5523197650909424,
+ "learning_rate": 1.0949720670391063e-05,
+ "loss": 0.0614,
+ "step": 812
+ },
+ {
+ "epoch": 54.271186440677965,
+ "grad_norm": 6.238807678222656,
+ "learning_rate": 1.0927374301675978e-05,
+ "loss": 0.1051,
+ "step": 814
+ },
+ {
+ "epoch": 54.40677966101695,
+ "grad_norm": 8.669645309448242,
+ "learning_rate": 1.0905027932960894e-05,
+ "loss": 0.0783,
+ "step": 816
+ },
+ {
+ "epoch": 54.54237288135593,
+ "grad_norm": 3.28987717628479,
+ "learning_rate": 1.0882681564245812e-05,
+ "loss": 0.0533,
+ "step": 818
+ },
+ {
+ "epoch": 54.67796610169491,
+ "grad_norm": 5.65046501159668,
+ "learning_rate": 1.0860335195530726e-05,
+ "loss": 0.0987,
+ "step": 820
+ },
+ {
+ "epoch": 54.67796610169491,
+ "eval_loss": 0.010310381650924683,
+ "eval_runtime": 2.5279,
+ "eval_samples_per_second": 29.273,
+ "eval_steps_per_second": 3.956,
+ "step": 820
+ },
+ {
+ "epoch": 54.813559322033896,
+ "grad_norm": 3.2011706829071045,
+ "learning_rate": 1.0837988826815644e-05,
+ "loss": 0.0651,
+ "step": 822
+ },
+ {
+ "epoch": 54.94915254237288,
+ "grad_norm": 3.0242323875427246,
+ "learning_rate": 1.081564245810056e-05,
+ "loss": 0.0694,
+ "step": 824
+ },
+ {
+ "epoch": 55.067796610169495,
+ "grad_norm": 2.575657367706299,
+ "learning_rate": 1.0793296089385477e-05,
+ "loss": 0.0618,
+ "step": 826
+ },
+ {
+ "epoch": 55.20338983050848,
+ "grad_norm": 5.550198554992676,
+ "learning_rate": 1.0770949720670391e-05,
+ "loss": 0.1067,
+ "step": 828
+ },
+ {
+ "epoch": 55.33898305084746,
+ "grad_norm": 12.178544044494629,
+ "learning_rate": 1.0748603351955309e-05,
+ "loss": 0.1135,
+ "step": 830
+ },
+ {
+ "epoch": 55.47457627118644,
+ "grad_norm": 5.148195743560791,
+ "learning_rate": 1.0726256983240225e-05,
+ "loss": 0.0764,
+ "step": 832
+ },
+ {
+ "epoch": 55.610169491525426,
+ "grad_norm": 2.915388822555542,
+ "learning_rate": 1.070391061452514e-05,
+ "loss": 0.0802,
+ "step": 834
+ },
+ {
+ "epoch": 55.74576271186441,
+ "grad_norm": 8.565984725952148,
+ "learning_rate": 1.0681564245810057e-05,
+ "loss": 0.085,
+ "step": 836
+ },
+ {
+ "epoch": 55.88135593220339,
+ "grad_norm": 4.304515838623047,
+ "learning_rate": 1.0659217877094974e-05,
+ "loss": 0.0653,
+ "step": 838
+ },
+ {
+ "epoch": 56.0,
+ "grad_norm": 7.9517412185668945,
+ "learning_rate": 1.0636871508379889e-05,
+ "loss": 0.0811,
+ "step": 840
+ },
+ {
+ "epoch": 56.0,
+ "eval_loss": 0.0406964011490345,
+ "eval_runtime": 2.5558,
+ "eval_samples_per_second": 28.954,
+ "eval_steps_per_second": 3.913,
+ "step": 840
+ },
+ {
+ "epoch": 56.13559322033898,
+ "grad_norm": 10.264988899230957,
+ "learning_rate": 1.0614525139664806e-05,
+ "loss": 0.1184,
+ "step": 842
+ },
+ {
+ "epoch": 56.271186440677965,
+ "grad_norm": 3.1228268146514893,
+ "learning_rate": 1.0592178770949722e-05,
+ "loss": 0.0628,
+ "step": 844
+ },
+ {
+ "epoch": 56.40677966101695,
+ "grad_norm": 6.7178053855896,
+ "learning_rate": 1.056983240223464e-05,
+ "loss": 0.1161,
+ "step": 846
+ },
+ {
+ "epoch": 56.54237288135593,
+ "grad_norm": 8.054555892944336,
+ "learning_rate": 1.0547486033519554e-05,
+ "loss": 0.0777,
+ "step": 848
+ },
+ {
+ "epoch": 56.67796610169491,
+ "grad_norm": 6.216050148010254,
+ "learning_rate": 1.052513966480447e-05,
+ "loss": 0.0832,
+ "step": 850
+ },
+ {
+ "epoch": 56.813559322033896,
+ "grad_norm": 2.1706607341766357,
+ "learning_rate": 1.0502793296089386e-05,
+ "loss": 0.0866,
+ "step": 852
+ },
+ {
+ "epoch": 56.94915254237288,
+ "grad_norm": 7.902822494506836,
+ "learning_rate": 1.0480446927374301e-05,
+ "loss": 0.0811,
+ "step": 854
+ },
+ {
+ "epoch": 57.067796610169495,
+ "grad_norm": 7.259898662567139,
+ "learning_rate": 1.0458100558659219e-05,
+ "loss": 0.0913,
+ "step": 856
+ },
+ {
+ "epoch": 57.20338983050848,
+ "grad_norm": 3.974984884262085,
+ "learning_rate": 1.0435754189944133e-05,
+ "loss": 0.1195,
+ "step": 858
+ },
+ {
+ "epoch": 57.33898305084746,
+ "grad_norm": 2.8770463466644287,
+ "learning_rate": 1.041340782122905e-05,
+ "loss": 0.0474,
+ "step": 860
+ },
+ {
+ "epoch": 57.33898305084746,
+ "eval_loss": 0.009472488425672054,
+ "eval_runtime": 2.5134,
+ "eval_samples_per_second": 29.442,
+ "eval_steps_per_second": 3.979,
+ "step": 860
+ },
+ {
+ "epoch": 57.47457627118644,
+ "grad_norm": 5.433828353881836,
+ "learning_rate": 1.0391061452513967e-05,
+ "loss": 0.0983,
+ "step": 862
+ },
+ {
+ "epoch": 57.610169491525426,
+ "grad_norm": 4.964000701904297,
+ "learning_rate": 1.0368715083798884e-05,
+ "loss": 0.0645,
+ "step": 864
+ },
+ {
+ "epoch": 57.74576271186441,
+ "grad_norm": 8.073065757751465,
+ "learning_rate": 1.0346368715083799e-05,
+ "loss": 0.0813,
+ "step": 866
+ },
+ {
+ "epoch": 57.88135593220339,
+ "grad_norm": 5.719815731048584,
+ "learning_rate": 1.0324022346368716e-05,
+ "loss": 0.0965,
+ "step": 868
+ },
+ {
+ "epoch": 58.0,
+ "grad_norm": 5.597076416015625,
+ "learning_rate": 1.0301675977653632e-05,
+ "loss": 0.0864,
+ "step": 870
+ },
+ {
+ "epoch": 58.13559322033898,
+ "grad_norm": 2.6213345527648926,
+ "learning_rate": 1.0279329608938548e-05,
+ "loss": 0.0782,
+ "step": 872
+ },
+ {
+ "epoch": 58.271186440677965,
+ "grad_norm": 2.7207729816436768,
+ "learning_rate": 1.0256983240223464e-05,
+ "loss": 0.0807,
+ "step": 874
+ },
+ {
+ "epoch": 58.40677966101695,
+ "grad_norm": 5.042091369628906,
+ "learning_rate": 1.0234636871508381e-05,
+ "loss": 0.0833,
+ "step": 876
+ },
+ {
+ "epoch": 58.54237288135593,
+ "grad_norm": 2.6447582244873047,
+ "learning_rate": 1.0212290502793296e-05,
+ "loss": 0.087,
+ "step": 878
+ },
+ {
+ "epoch": 58.67796610169491,
+ "grad_norm": 2.3219821453094482,
+ "learning_rate": 1.0189944134078213e-05,
+ "loss": 0.0787,
+ "step": 880
+ },
+ {
+ "epoch": 58.67796610169491,
+ "eval_loss": 0.007559705525636673,
+ "eval_runtime": 2.537,
+ "eval_samples_per_second": 29.169,
+ "eval_steps_per_second": 3.942,
+ "step": 880
+ }
+ ],
+ "logging_steps": 2,
+ "max_steps": 1792,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 128,
+ "save_steps": 20,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 3.2969851968773606e+17,
+ "train_batch_size": 24,
+ "trial_name": null,
+ "trial_params": null
+ }
sink-checkpoint-880/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd5c266343fa85498a9c84d6a5356c472665bda5c4c6ceac5db9ba3000d4c66d
+ size 5304