olisval committed · Commit a652ef2 · verified · 1 parent: 074eace

Update LoRA weights
.gitattributes ADDED
@@ -0,0 +1,3 @@
+ adapter_model.safetensors filter=lfs diff=lfs merge=lfs -text
+ optimizer.pt filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+ ### Framework versions
+
+ - PEFT 0.12.0
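The "How to Get Started" section above is still a placeholder. A minimal sketch of loading this adapter with PEFT (the README lists PEFT 0.12.0) might look like the following. Only the base-model id comes from the YAML header; the adapter path is a placeholder for wherever this repository is downloaded, and the heavy imports are deferred into the function so the file can be read without `transformers`/`peft` installed.

```python
# Sketch: attach this LoRA adapter to its base model with PEFT.
BASE_MODEL = "Qwen/Qwen2.5-1.5B-Instruct"   # from the model card's YAML header
ADAPTER_PATH = "path/to/this-adapter"        # placeholder, not a real repo id


def load_adapter_model(adapter_path: str = ADAPTER_PATH,
                       base_model: str = BASE_MODEL):
    """Return (model, tokenizer) with the LoRA adapter applied."""
    # Deferred imports: transformers/peft are only needed when actually loading.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)
    model = PeftModel.from_pretrained(model, adapter_path)  # adds LoRA weights
    model.eval()
    return model, tokenizer
```

Calling `load_adapter_model()` downloads the base model, so it is not run here; for merged single-model inference, `model.merge_and_unload()` is the usual follow-up in PEFT.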
adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "o_proj",
+     "q_proj",
+     "v_proj",
+     "up_proj",
+     "down_proj",
+     "gate_proj",
+     "k_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
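A quick sanity check of what this config implies for adapter size: LoRA adds `r * (d_in + d_out)` parameters per adapted linear layer. The Qwen2.5-1.5B dimensions below are assumptions taken from the base model's published config (hidden size 1536, 28 layers, GQA with 2 KV heads of head dim 128, MLP intermediate 8960), not from this commit.

```python
# Estimate the trainable-parameter count implied by adapter_config.json above.
R = 8                                   # "r" in adapter_config.json
HIDDEN, LAYERS = 1536, 28               # assumed Qwen2.5-1.5B dimensions
KV_DIM, INTERMEDIATE = 2 * 128, 8960    # 2 KV heads * head_dim 128; MLP width

# (d_in, d_out) for each entry in "target_modules"
shapes = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTERMEDIATE),
    "up_proj": (HIDDEN, INTERMEDIATE),
    "down_proj": (INTERMEDIATE, HIDDEN),
}

# LoRA adds an r x d_in matrix and a d_out x r matrix per adapted linear.
lora_params = LAYERS * sum(R * (d_in + d_out) for d_in, d_out in shapes.values())
print(lora_params)       # roughly 9.2M trainable parameters
print(lora_params * 4)   # roughly 37 MB if saved in fp32
```

Stored in fp32 this comes to about 36.9 MB, which lines up with the ~37 MB `adapter_model.safetensors` added in this commit (the small remainder being the safetensors header).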
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3a45b576c916db1ac517ecd4c12ee2c81ba071efb5a4bdac47d6c754cc349f5
+ size 36981072
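The three lines above are a Git LFS pointer, not the weights themselves: `oid sha256:` is the SHA-256 digest of the real file and `size` its byte length. An illustrative sketch of how such a pointer is derived from a blob (the dummy bytes stand in for `adapter_model.safetensors`):

```python
import hashlib


def lfs_pointer(blob: bytes) -> str:
    """Build a Git LFS v1 pointer file for the given blob."""
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
        f"size {len(blob)}\n"
    )


blob = b"dummy adapter bytes"   # stand-in for the real safetensors payload
print(lfs_pointer(blob))
```

Comparing `sha256sum adapter_model.safetensors` against the pointer's `oid` is a cheap way to verify a download completed intact.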
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ce601a7a25aca7987d3d8228a02d4b8e6b497aa1629d4ccebe0b00285a04864
+ size 74188650
rng_state.pth ADDED
Binary file (14.2 kB).
 
scheduler.pt ADDED
Binary file (1.06 kB).
 
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+ size 11421896
tokenizer_config.json ADDED
@@ -0,0 +1,208 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": { "content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151644": { "content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151645": { "content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151646": { "content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151647": { "content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151648": { "content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151649": { "content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151650": { "content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151651": { "content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151652": { "content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151653": { "content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151654": { "content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151655": { "content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151656": { "content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
+     "151657": { "content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151658": { "content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151659": { "content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151660": { "content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151661": { "content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151662": { "content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151663": { "content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false },
+     "151664": { "content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- messages[0]['content'] }}\n    {%- else %}\n        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n    {%- endif %}\n    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n    {%- else %}\n        {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {{- '<|im_start|>' + message.role }}\n        {%- if message.content %}\n            {{- '\\n' + message.content }}\n        {%- endif %}\n        {%- for tool_call in message.tool_calls %}\n            {%- if tool_call.function is defined %}\n                {%- set tool_call = tool_call.function %}\n            {%- endif %}\n            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n            {{- tool_call.name }}\n            {{- '\", \"arguments\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- '}\\n</tool_call>' }}\n        {%- endfor %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
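The `chat_template` above is a Jinja template; for plain (non-tool) chats it renders the ChatML layout with Qwen's default system prompt. The following is a minimal pure-Python sketch of that branch only, for illustration — real inference should go through `tokenizer.apply_chat_template`, which evaluates the actual Jinja template including the tool-calling paths.

```python
# Illustrative re-implementation of the non-tool branch of the chat template.
DEFAULT_SYSTEM = "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."


def render_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as ChatML text."""
    if messages and messages[0]["role"] == "system":
        out = f"<|im_start|>system\n{messages[0]['content']}<|im_end|>\n"
        messages = messages[1:]
    else:
        # The template injects Qwen's default system prompt when none is given.
        out = f"<|im_start|>system\n{DEFAULT_SYSTEM}<|im_end|>\n"
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"  # cue the model to respond
    return out


print(render_chatml([{"role": "user", "content": "Hello"}]))
```

Note how `<|im_end|>` doubles as the `eos_token`, which is why generation stops cleanly at the end of the assistant turn.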
trainer_state.json ADDED
@@ -0,0 +1,3073 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.6776034236804565,
+   "eval_steps": 500,
+   "global_step": 1900,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     { "epoch": 0.001783166904422254, "grad_norm": 2.0930111408233643, "learning_rate": 4.999995641358869e-05, "loss": 0.7967, "num_input_tokens_seen": 63024, "step": 5 },
+     { "epoch": 0.003566333808844508, "grad_norm": 1.2970882654190063, "learning_rate": 4.999982565450674e-05, "loss": 0.7382, "num_input_tokens_seen": 126336, "step": 10 },
+     { "epoch": 0.005349500713266762, "grad_norm": 0.8319762349128723, "learning_rate": 4.999960772321009e-05, "loss": 0.6823, "num_input_tokens_seen": 184688, "step": 15 },
+     { "epoch": 0.007132667617689016, "grad_norm": 0.9985227584838867, "learning_rate": 4.999930262045865e-05, "loss": 0.6836, "num_input_tokens_seen": 245808, "step": 20 },
+     { "epoch": 0.00891583452211127, "grad_norm": 1.065556287765503, "learning_rate": 4.9998910347316286e-05, "loss": 0.7561, "num_input_tokens_seen": 306944, "step": 25 },
+     { "epoch": 0.010699001426533523, "grad_norm": 1.066805362701416, "learning_rate": 4.9998430905150826e-05, "loss": 0.7299, "num_input_tokens_seen": 371616, "step": 30 },
+     { "epoch": 0.012482168330955777, "grad_norm": 1.2590147256851196, "learning_rate": 4.999786429563404e-05, "loss": 0.6834, "num_input_tokens_seen": 435536, "step": 35 },
+     { "epoch": 0.014265335235378032, "grad_norm": 1.0066215991973877, "learning_rate": 4.999721052074164e-05, "loss": 0.6511, "num_input_tokens_seen": 499328, "step": 40 },
+     { "epoch": 0.016048502139800285, "grad_norm": 1.0162546634674072, "learning_rate": 4.99964695827533e-05, "loss": 0.5992, "num_input_tokens_seen": 557504, "step": 45 },
+     { "epoch": 0.01783166904422254, "grad_norm": 0.9829245209693909, "learning_rate": 4.999564148425258e-05, "loss": 0.6245, "num_input_tokens_seen": 621440, "step": 50 },
+     { "epoch": 0.019614835948644792, "grad_norm": 0.9447645545005798, "learning_rate": 4.999472622812701e-05, "loss": 0.6444, "num_input_tokens_seen": 685856, "step": 55 },
+     { "epoch": 0.021398002853067047, "grad_norm": 1.0958608388900757, "learning_rate": 4.9993723817567996e-05, "loss": 0.5194, "num_input_tokens_seen": 748112, "step": 60 },
+     { "epoch": 0.023181169757489302, "grad_norm": 0.9865729808807373, "learning_rate": 4.999263425607086e-05, "loss": 0.5021, "num_input_tokens_seen": 811008, "step": 65 },
+     { "epoch": 0.024964336661911554, "grad_norm": 1.2535978555679321, "learning_rate": 4.9991457547434805e-05, "loss": 0.6641, "num_input_tokens_seen": 878272, "step": 70 },
+     { "epoch": 0.02674750356633381, "grad_norm": 1.6020156145095825, "learning_rate": 4.9990193695762914e-05, "loss": 0.5479, "num_input_tokens_seen": 942608, "step": 75 },
+     { "epoch": 0.028530670470756064, "grad_norm": 1.1668367385864258, "learning_rate": 4.998884270546214e-05, "loss": 0.6181, "num_input_tokens_seen": 1005776, "step": 80 },
+     { "epoch": 0.030313837375178315, "grad_norm": 1.1580744981765747, "learning_rate": 4.998740458124324e-05, "loss": 0.6266, "num_input_tokens_seen": 1068192, "step": 85 },
+     { "epoch": 0.03209700427960057, "grad_norm": 0.9773775339126587, "learning_rate": 4.9985879328120846e-05, "loss": 0.5088, "num_input_tokens_seen": 1128592, "step": 90 },
+     { "epoch": 0.033880171184022825, "grad_norm": 1.4142199754714966, "learning_rate": 4.9984266951413396e-05, "loss": 0.5199, "num_input_tokens_seen": 1194592, "step": 95 },
+     { "epoch": 0.03566333808844508, "grad_norm": 1.459350347518921, "learning_rate": 4.998256745674308e-05, "loss": 0.5855, "num_input_tokens_seen": 1257744, "step": 100 },
+     { "epoch": 0.037446504992867335, "grad_norm": 1.118642807006836, "learning_rate": 4.99807808500359e-05, "loss": 0.6148, "num_input_tokens_seen": 1320944, "step": 105 },
+     { "epoch": 0.039229671897289584, "grad_norm": 1.1180983781814575, "learning_rate": 4.99789071375216e-05, "loss": 0.5517, "num_input_tokens_seen": 1382928, "step": 110 },
+     { "epoch": 0.04101283880171184, "grad_norm": 1.2651177644729614, "learning_rate": 4.9976946325733654e-05, "loss": 0.5959, "num_input_tokens_seen": 1449408, "step": 115 },
+     { "epoch": 0.042796005706134094, "grad_norm": 0.9860583543777466, "learning_rate": 4.997489842150924e-05, "loss": 0.4779, "num_input_tokens_seen": 1510752, "step": 120 },
+     { "epoch": 0.04457917261055635, "grad_norm": 1.0358836650848389, "learning_rate": 4.997276343198922e-05, "loss": 0.5474, "num_input_tokens_seen": 1568928, "step": 125 },
+     { "epoch": 0.046362339514978604, "grad_norm": 1.3108216524124146, "learning_rate": 4.997054136461811e-05, "loss": 0.4624, "num_input_tokens_seen": 1631872, "step": 130 },
+     { "epoch": 0.04814550641940086, "grad_norm": 1.0577709674835205, "learning_rate": 4.996823222714408e-05, "loss": 0.558, "num_input_tokens_seen": 1694000, "step": 135 },
+     { "epoch": 0.04992867332382311, "grad_norm": 0.9583589434623718, "learning_rate": 4.996583602761887e-05, "loss": 0.535, "num_input_tokens_seen": 1752208, "step": 140 },
+     { "epoch": 0.05171184022824536, "grad_norm": 1.1273239850997925, "learning_rate": 4.9963352774397845e-05, "loss": 0.581, "num_input_tokens_seen": 1809968, "step": 145 },
+     { "epoch": 0.05349500713266762, "grad_norm": 0.9180589914321899, "learning_rate": 4.9960782476139875e-05, "loss": 0.5853, "num_input_tokens_seen": 1875584, "step": 150 },
+     { "epoch": 0.05527817403708987, "grad_norm": 0.9368972778320312, "learning_rate": 4.9958125141807376e-05, "loss": 0.5655, "num_input_tokens_seen": 1936544, "step": 155 },
+     { "epoch": 0.05706134094151213, "grad_norm": 1.093083143234253, "learning_rate": 4.9955380780666233e-05, "loss": 0.5248, "num_input_tokens_seen": 1997312, "step": 160 },
+     { "epoch": 0.05884450784593438, "grad_norm": 1.0452104806900024, "learning_rate": 4.99525494022858e-05, "loss": 0.5912, "num_input_tokens_seen": 2058400, "step": 165 },
+     { "epoch": 0.06062767475035663, "grad_norm": 1.655479073524475, "learning_rate": 4.9949631016538845e-05, "loss": 0.5465, "num_input_tokens_seen": 2123584, "step": 170 },
+     { "epoch": 0.062410841654778886, "grad_norm": 1.295340895652771, "learning_rate": 4.994662563360152e-05, "loss": 0.6319, "num_input_tokens_seen": 2187776, "step": 175 },
+     { "epoch": 0.06419400855920114, "grad_norm": 1.1385325193405151, "learning_rate": 4.994353326395334e-05, "loss": 0.6121, "num_input_tokens_seen": 2248592, "step": 180 },
+     { "epoch": 0.06597717546362339, "grad_norm": 1.2202588319778442, "learning_rate": 4.994035391837713e-05, "loss": 0.5926, "num_input_tokens_seen": 2311472, "step": 185 },
+     { "epoch": 0.06776034236804565, "grad_norm": 1.1300709247589111, "learning_rate": 4.9937087607958987e-05, "loss": 0.5075, "num_input_tokens_seen": 2374240, "step": 190 },
+     { "epoch": 0.0695435092724679, "grad_norm": 1.0753881931304932, "learning_rate": 4.993373434408825e-05, "loss": 0.5187, "num_input_tokens_seen": 2434864, "step": 195 },
+     { "epoch": 0.07132667617689016, "grad_norm": 1.0271146297454834, "learning_rate": 4.993029413845746e-05, "loss": 0.5777, "num_input_tokens_seen": 2495712, "step": 200 },
+     { "epoch": 0.07310984308131241, "grad_norm": 1.7475312948226929, "learning_rate": 4.9926767003062316e-05, "loss": 0.5091, "num_input_tokens_seen": 2555184, "step": 205 },
+     { "epoch": 0.07489300998573467, "grad_norm": 1.1732685565948486, "learning_rate": 4.992315295020163e-05, "loss": 0.5594, "num_input_tokens_seen": 2616736, "step": 210 },
+     { "epoch": 0.07667617689015692, "grad_norm": 1.1418745517730713, "learning_rate": 4.991945199247728e-05, "loss": 0.633, "num_input_tokens_seen": 2679568, "step": 215 },
+     { "epoch": 0.07845934379457917, "grad_norm": 1.5812561511993408, "learning_rate": 4.991566414279421e-05, "loss": 0.5361, "num_input_tokens_seen": 2741888, "step": 220 },
+     { "epoch": 0.08024251069900143, "grad_norm": 1.2565455436706543, "learning_rate": 4.99117894143603e-05, "loss": 0.5128, "num_input_tokens_seen": 2804736, "step": 225 },
+     { "epoch": 0.08202567760342368, "grad_norm": 1.081152081489563, "learning_rate": 4.990782782068639e-05, "loss": 0.4925, "num_input_tokens_seen": 2864768,
377
+ "step": 230
378
+ },
379
+ {
380
+ "epoch": 0.08380884450784594,
381
+ "grad_norm": 1.157086730003357,
382
+ "learning_rate": 4.9903779375586224e-05,
383
+ "loss": 0.5091,
384
+ "num_input_tokens_seen": 2925776,
385
+ "step": 235
386
+ },
387
+ {
388
+ "epoch": 0.08559201141226819,
389
+ "grad_norm": 1.496232032775879,
390
+ "learning_rate": 4.989964409317637e-05,
391
+ "loss": 0.5611,
392
+ "num_input_tokens_seen": 2984032,
393
+ "step": 240
394
+ },
395
+ {
396
+ "epoch": 0.08737517831669044,
397
+ "grad_norm": 1.5581008195877075,
398
+ "learning_rate": 4.989542198787619e-05,
399
+ "loss": 0.4574,
400
+ "num_input_tokens_seen": 3047024,
401
+ "step": 245
402
+ },
403
+ {
404
+ "epoch": 0.0891583452211127,
405
+ "grad_norm": 1.1673293113708496,
406
+ "learning_rate": 4.9891113074407816e-05,
407
+ "loss": 0.4982,
408
+ "num_input_tokens_seen": 3105552,
409
+ "step": 250
410
+ },
411
+ {
412
+ "epoch": 0.09094151212553495,
413
+ "grad_norm": 1.1178501844406128,
414
+ "learning_rate": 4.988671736779604e-05,
415
+ "loss": 0.5412,
416
+ "num_input_tokens_seen": 3165632,
417
+ "step": 255
418
+ },
419
+ {
420
+ "epoch": 0.09272467902995721,
421
+ "grad_norm": 1.1773957014083862,
422
+ "learning_rate": 4.988223488336832e-05,
423
+ "loss": 0.5028,
424
+ "num_input_tokens_seen": 3229392,
425
+ "step": 260
426
+ },
427
+ {
428
+ "epoch": 0.09450784593437946,
429
+ "grad_norm": 1.1285181045532227,
430
+ "learning_rate": 4.987766563675467e-05,
431
+ "loss": 0.5414,
432
+ "num_input_tokens_seen": 3287616,
433
+ "step": 265
434
+ },
435
+ {
436
+ "epoch": 0.09629101283880172,
437
+ "grad_norm": 1.630057454109192,
438
+ "learning_rate": 4.9873009643887666e-05,
439
+ "loss": 0.5512,
440
+ "num_input_tokens_seen": 3346496,
441
+ "step": 270
442
+ },
443
+ {
444
+ "epoch": 0.09807417974322397,
445
+ "grad_norm": 2.1637048721313477,
446
+ "learning_rate": 4.986826692100236e-05,
447
+ "loss": 0.4881,
448
+ "num_input_tokens_seen": 3409312,
449
+ "step": 275
450
+ },
451
+ {
452
+ "epoch": 0.09985734664764621,
453
+ "grad_norm": 1.9481849670410156,
454
+ "learning_rate": 4.98634374846362e-05,
455
+ "loss": 0.4716,
456
+ "num_input_tokens_seen": 3472752,
457
+ "step": 280
458
+ },
459
+ {
460
+ "epoch": 0.10164051355206848,
461
+ "grad_norm": 1.3725030422210693,
462
+ "learning_rate": 4.9858521351629005e-05,
463
+ "loss": 0.5286,
464
+ "num_input_tokens_seen": 3534032,
465
+ "step": 285
466
+ },
467
+ {
468
+ "epoch": 0.10342368045649072,
469
+ "grad_norm": 1.5664440393447876,
470
+ "learning_rate": 4.985351853912292e-05,
471
+ "loss": 0.4985,
472
+ "num_input_tokens_seen": 3598336,
473
+ "step": 290
474
+ },
475
+ {
476
+ "epoch": 0.10520684736091299,
477
+ "grad_norm": 1.3553557395935059,
478
+ "learning_rate": 4.984842906456231e-05,
479
+ "loss": 0.5768,
480
+ "num_input_tokens_seen": 3662144,
481
+ "step": 295
482
+ },
483
+ {
484
+ "epoch": 0.10699001426533523,
485
+ "grad_norm": 1.2202125787734985,
486
+ "learning_rate": 4.984325294569372e-05,
487
+ "loss": 0.4933,
488
+ "num_input_tokens_seen": 3724048,
489
+ "step": 300
490
+ },
491
+ {
492
+ "epoch": 0.10877318116975748,
493
+ "grad_norm": 1.0455083847045898,
494
+ "learning_rate": 4.9837990200565834e-05,
495
+ "loss": 0.5675,
496
+ "num_input_tokens_seen": 3784320,
497
+ "step": 305
498
+ },
499
+ {
500
+ "epoch": 0.11055634807417974,
501
+ "grad_norm": 1.5656300783157349,
502
+ "learning_rate": 4.983264084752939e-05,
503
+ "loss": 0.5315,
504
+ "num_input_tokens_seen": 3849040,
505
+ "step": 310
506
+ },
507
+ {
508
+ "epoch": 0.11233951497860199,
509
+ "grad_norm": 1.4153857231140137,
510
+ "learning_rate": 4.98272049052371e-05,
511
+ "loss": 0.5444,
512
+ "num_input_tokens_seen": 3909552,
513
+ "step": 315
514
+ },
515
+ {
516
+ "epoch": 0.11412268188302425,
517
+ "grad_norm": 1.9048830270767212,
518
+ "learning_rate": 4.982168239264364e-05,
519
+ "loss": 0.4808,
520
+ "num_input_tokens_seen": 3969120,
521
+ "step": 320
522
+ },
523
+ {
524
+ "epoch": 0.1159058487874465,
525
+ "grad_norm": 1.0821411609649658,
526
+ "learning_rate": 4.981607332900552e-05,
527
+ "loss": 0.4829,
528
+ "num_input_tokens_seen": 4029360,
529
+ "step": 325
530
+ },
531
+ {
532
+ "epoch": 0.11768901569186876,
533
+ "grad_norm": 1.2863287925720215,
534
+ "learning_rate": 4.9810377733881065e-05,
535
+ "loss": 0.5273,
536
+ "num_input_tokens_seen": 4091296,
537
+ "step": 330
538
+ },
539
+ {
540
+ "epoch": 0.11947218259629101,
541
+ "grad_norm": 1.3957486152648926,
542
+ "learning_rate": 4.98045956271303e-05,
543
+ "loss": 0.5443,
544
+ "num_input_tokens_seen": 4154304,
545
+ "step": 335
546
+ },
547
+ {
548
+ "epoch": 0.12125534950071326,
549
+ "grad_norm": 1.1562933921813965,
550
+ "learning_rate": 4.979872702891495e-05,
551
+ "loss": 0.5046,
552
+ "num_input_tokens_seen": 4220400,
553
+ "step": 340
554
+ },
555
+ {
556
+ "epoch": 0.12303851640513552,
557
+ "grad_norm": 1.1498775482177734,
558
+ "learning_rate": 4.979277195969829e-05,
559
+ "loss": 0.5393,
560
+ "num_input_tokens_seen": 4279408,
561
+ "step": 345
562
+ },
563
+ {
564
+ "epoch": 0.12482168330955777,
565
+ "grad_norm": 1.2570199966430664,
566
+ "learning_rate": 4.978673044024514e-05,
567
+ "loss": 0.451,
568
+ "num_input_tokens_seen": 4339392,
569
+ "step": 350
570
+ },
571
+ {
572
+ "epoch": 0.12660485021398002,
573
+ "grad_norm": 1.3947458267211914,
574
+ "learning_rate": 4.978060249162175e-05,
575
+ "loss": 0.5715,
576
+ "num_input_tokens_seen": 4399424,
577
+ "step": 355
578
+ },
579
+ {
580
+ "epoch": 0.12838801711840228,
581
+ "grad_norm": 1.1799883842468262,
582
+ "learning_rate": 4.977438813519574e-05,
583
+ "loss": 0.5409,
584
+ "num_input_tokens_seen": 4460992,
585
+ "step": 360
586
+ },
587
+ {
588
+ "epoch": 0.13017118402282454,
589
+ "grad_norm": 0.9736462831497192,
590
+ "learning_rate": 4.976808739263602e-05,
591
+ "loss": 0.5298,
592
+ "num_input_tokens_seen": 4525664,
593
+ "step": 365
594
+ },
595
+ {
596
+ "epoch": 0.13195435092724678,
597
+ "grad_norm": 1.1682716608047485,
598
+ "learning_rate": 4.976170028591274e-05,
599
+ "loss": 0.481,
600
+ "num_input_tokens_seen": 4582160,
601
+ "step": 370
602
+ },
603
+ {
604
+ "epoch": 0.13373751783166904,
605
+ "grad_norm": 1.3871419429779053,
606
+ "learning_rate": 4.975522683729719e-05,
607
+ "loss": 0.5021,
608
+ "num_input_tokens_seen": 4649328,
609
+ "step": 375
610
+ },
611
+ {
612
+ "epoch": 0.1355206847360913,
613
+ "grad_norm": 1.1554944515228271,
614
+ "learning_rate": 4.9748667069361715e-05,
615
+ "loss": 0.5064,
616
+ "num_input_tokens_seen": 4711088,
617
+ "step": 380
618
+ },
619
+ {
620
+ "epoch": 0.13730385164051356,
621
+ "grad_norm": 1.3844372034072876,
622
+ "learning_rate": 4.9742021004979656e-05,
623
+ "loss": 0.5516,
624
+ "num_input_tokens_seen": 4774864,
625
+ "step": 385
626
+ },
627
+ {
628
+ "epoch": 0.1390870185449358,
629
+ "grad_norm": 1.4874283075332642,
630
+ "learning_rate": 4.9735288667325257e-05,
631
+ "loss": 0.4712,
632
+ "num_input_tokens_seen": 4834944,
633
+ "step": 390
634
+ },
635
+ {
636
+ "epoch": 0.14087018544935806,
637
+ "grad_norm": 1.195500373840332,
638
+ "learning_rate": 4.97284700798736e-05,
639
+ "loss": 0.5326,
640
+ "num_input_tokens_seen": 4897264,
641
+ "step": 395
642
+ },
643
+ {
644
+ "epoch": 0.14265335235378032,
645
+ "grad_norm": 1.1240135431289673,
646
+ "learning_rate": 4.97215652664005e-05,
647
+ "loss": 0.5958,
648
+ "num_input_tokens_seen": 4962208,
649
+ "step": 400
650
+ },
651
+ {
652
+ "epoch": 0.14443651925820256,
653
+ "grad_norm": 0.8974002599716187,
654
+ "learning_rate": 4.971457425098244e-05,
655
+ "loss": 0.5536,
656
+ "num_input_tokens_seen": 5027264,
657
+ "step": 405
658
+ },
659
+ {
660
+ "epoch": 0.14621968616262482,
661
+ "grad_norm": 1.0974167585372925,
662
+ "learning_rate": 4.970749705799649e-05,
663
+ "loss": 0.4721,
664
+ "num_input_tokens_seen": 5093216,
665
+ "step": 410
666
+ },
667
+ {
668
+ "epoch": 0.14800285306704708,
669
+ "grad_norm": 1.3087302446365356,
670
+ "learning_rate": 4.9700333712120195e-05,
671
+ "loss": 0.4383,
672
+ "num_input_tokens_seen": 5155296,
673
+ "step": 415
674
+ },
675
+ {
676
+ "epoch": 0.14978601997146934,
677
+ "grad_norm": 5.880493640899658,
678
+ "learning_rate": 4.969308423833152e-05,
679
+ "loss": 0.5098,
680
+ "num_input_tokens_seen": 5216416,
681
+ "step": 420
682
+ },
683
+ {
684
+ "epoch": 0.15156918687589158,
685
+ "grad_norm": 1.2446019649505615,
686
+ "learning_rate": 4.9685748661908756e-05,
687
+ "loss": 0.494,
688
+ "num_input_tokens_seen": 5278816,
689
+ "step": 425
690
+ },
691
+ {
692
+ "epoch": 0.15335235378031384,
693
+ "grad_norm": 1.1921520233154297,
694
+ "learning_rate": 4.967832700843041e-05,
695
+ "loss": 0.5728,
696
+ "num_input_tokens_seen": 5344896,
697
+ "step": 430
698
+ },
699
+ {
700
+ "epoch": 0.1551355206847361,
701
+ "grad_norm": 1.161622166633606,
702
+ "learning_rate": 4.967081930377515e-05,
703
+ "loss": 0.5036,
704
+ "num_input_tokens_seen": 5400960,
705
+ "step": 435
706
+ },
707
+ {
708
+ "epoch": 0.15691868758915833,
709
+ "grad_norm": 1.0513135194778442,
710
+ "learning_rate": 4.966322557412168e-05,
711
+ "loss": 0.4347,
712
+ "num_input_tokens_seen": 5462928,
713
+ "step": 440
714
+ },
715
+ {
716
+ "epoch": 0.1587018544935806,
717
+ "grad_norm": 1.2251578569412231,
718
+ "learning_rate": 4.965554584594868e-05,
719
+ "loss": 0.4997,
720
+ "num_input_tokens_seen": 5525296,
721
+ "step": 445
722
+ },
723
+ {
724
+ "epoch": 0.16048502139800286,
725
+ "grad_norm": 1.2554380893707275,
726
+ "learning_rate": 4.9647780146034695e-05,
727
+ "loss": 0.511,
728
+ "num_input_tokens_seen": 5590640,
729
+ "step": 450
730
+ },
731
+ {
732
+ "epoch": 0.16226818830242512,
733
+ "grad_norm": 2.3998403549194336,
734
+ "learning_rate": 4.9639928501458035e-05,
735
+ "loss": 0.5376,
736
+ "num_input_tokens_seen": 5652912,
737
+ "step": 455
738
+ },
739
+ {
740
+ "epoch": 0.16405135520684735,
741
+ "grad_norm": 1.3643852472305298,
742
+ "learning_rate": 4.963199093959671e-05,
743
+ "loss": 0.5668,
744
+ "num_input_tokens_seen": 5711952,
745
+ "step": 460
746
+ },
747
+ {
748
+ "epoch": 0.16583452211126962,
749
+ "grad_norm": 1.4717122316360474,
750
+ "learning_rate": 4.96239674881283e-05,
751
+ "loss": 0.4877,
752
+ "num_input_tokens_seen": 5773968,
753
+ "step": 465
754
+ },
755
+ {
756
+ "epoch": 0.16761768901569188,
757
+ "grad_norm": 1.8179185390472412,
758
+ "learning_rate": 4.9615858175029884e-05,
759
+ "loss": 0.4669,
760
+ "num_input_tokens_seen": 5836064,
761
+ "step": 470
762
+ },
763
+ {
764
+ "epoch": 0.1694008559201141,
765
+ "grad_norm": 2.963438034057617,
766
+ "learning_rate": 4.960766302857793e-05,
767
+ "loss": 0.4766,
768
+ "num_input_tokens_seen": 5897600,
769
+ "step": 475
770
+ },
771
+ {
772
+ "epoch": 0.17118402282453637,
773
+ "grad_norm": 2.9000422954559326,
774
+ "learning_rate": 4.9599382077348205e-05,
775
+ "loss": 0.542,
776
+ "num_input_tokens_seen": 5959856,
777
+ "step": 480
778
+ },
779
+ {
780
+ "epoch": 0.17296718972895864,
781
+ "grad_norm": 1.1453759670257568,
782
+ "learning_rate": 4.959101535021566e-05,
783
+ "loss": 0.5482,
784
+ "num_input_tokens_seen": 6016128,
785
+ "step": 485
786
+ },
787
+ {
788
+ "epoch": 0.17475035663338087,
789
+ "grad_norm": 1.1614904403686523,
790
+ "learning_rate": 4.9582562876354346e-05,
791
+ "loss": 0.5361,
792
+ "num_input_tokens_seen": 6079664,
793
+ "step": 490
794
+ },
795
+ {
796
+ "epoch": 0.17653352353780313,
797
+ "grad_norm": 1.3136591911315918,
798
+ "learning_rate": 4.95740246852373e-05,
799
+ "loss": 0.5131,
800
+ "num_input_tokens_seen": 6137568,
801
+ "step": 495
802
+ },
803
+ {
804
+ "epoch": 0.1783166904422254,
805
+ "grad_norm": 1.0961729288101196,
806
+ "learning_rate": 4.9565400806636447e-05,
807
+ "loss": 0.431,
808
+ "num_input_tokens_seen": 6199280,
809
+ "step": 500
810
+ },
811
+ {
812
+ "epoch": 0.18009985734664766,
813
+ "grad_norm": 1.3530110120773315,
814
+ "learning_rate": 4.9556691270622515e-05,
815
+ "loss": 0.526,
816
+ "num_input_tokens_seen": 6262272,
817
+ "step": 505
818
+ },
819
+ {
820
+ "epoch": 0.1818830242510699,
821
+ "grad_norm": 1.2133769989013672,
822
+ "learning_rate": 4.9547896107564886e-05,
823
+ "loss": 0.5082,
824
+ "num_input_tokens_seen": 6324144,
825
+ "step": 510
826
+ },
827
+ {
828
+ "epoch": 0.18366619115549215,
829
+ "grad_norm": 1.2528913021087646,
830
+ "learning_rate": 4.9539015348131526e-05,
831
+ "loss": 0.5343,
832
+ "num_input_tokens_seen": 6386096,
833
+ "step": 515
834
+ },
835
+ {
836
+ "epoch": 0.18544935805991442,
837
+ "grad_norm": 1.4908058643341064,
838
+ "learning_rate": 4.953004902328887e-05,
839
+ "loss": 0.5408,
840
+ "num_input_tokens_seen": 6450704,
841
+ "step": 520
842
+ },
843
+ {
844
+ "epoch": 0.18723252496433665,
845
+ "grad_norm": 1.0931016206741333,
846
+ "learning_rate": 4.9520997164301726e-05,
847
+ "loss": 0.53,
848
+ "num_input_tokens_seen": 6512512,
849
+ "step": 525
850
+ },
851
+ {
852
+ "epoch": 0.1890156918687589,
853
+ "grad_norm": 1.317772626876831,
854
+ "learning_rate": 4.951185980273312e-05,
855
+ "loss": 0.4741,
856
+ "num_input_tokens_seen": 6572848,
857
+ "step": 530
858
+ },
859
+ {
860
+ "epoch": 0.19079885877318117,
861
+ "grad_norm": 1.114240288734436,
862
+ "learning_rate": 4.9502636970444246e-05,
863
+ "loss": 0.5021,
864
+ "num_input_tokens_seen": 6634064,
865
+ "step": 535
866
+ },
867
+ {
868
+ "epoch": 0.19258202567760344,
869
+ "grad_norm": 1.1686744689941406,
870
+ "learning_rate": 4.949332869959432e-05,
871
+ "loss": 0.5557,
872
+ "num_input_tokens_seen": 6698560,
873
+ "step": 540
874
+ },
875
+ {
876
+ "epoch": 0.19436519258202567,
877
+ "grad_norm": 1.2107973098754883,
878
+ "learning_rate": 4.948393502264046e-05,
879
+ "loss": 0.5101,
880
+ "num_input_tokens_seen": 6758000,
881
+ "step": 545
882
+ },
883
+ {
884
+ "epoch": 0.19614835948644793,
885
+ "grad_norm": 1.067867398262024,
886
+ "learning_rate": 4.9474455972337607e-05,
887
+ "loss": 0.4712,
888
+ "num_input_tokens_seen": 6823616,
889
+ "step": 550
890
+ },
891
+ {
892
+ "epoch": 0.1979315263908702,
893
+ "grad_norm": 1.0068106651306152,
894
+ "learning_rate": 4.946489158173838e-05,
895
+ "loss": 0.4854,
896
+ "num_input_tokens_seen": 6883376,
897
+ "step": 555
898
+ },
899
+ {
900
+ "epoch": 0.19971469329529243,
901
+ "grad_norm": 1.490473747253418,
902
+ "learning_rate": 4.945524188419298e-05,
903
+ "loss": 0.5664,
904
+ "num_input_tokens_seen": 6943808,
905
+ "step": 560
906
+ },
907
+ {
908
+ "epoch": 0.2014978601997147,
909
+ "grad_norm": 1.0813665390014648,
910
+ "learning_rate": 4.9445506913349063e-05,
911
+ "loss": 0.6241,
912
+ "num_input_tokens_seen": 7005728,
913
+ "step": 565
914
+ },
915
+ {
916
+ "epoch": 0.20328102710413695,
917
+ "grad_norm": 1.3641761541366577,
918
+ "learning_rate": 4.943568670315162e-05,
919
+ "loss": 0.4916,
920
+ "num_input_tokens_seen": 7068608,
921
+ "step": 570
922
+ },
923
+ {
924
+ "epoch": 0.20506419400855921,
925
+ "grad_norm": 1.0902137756347656,
926
+ "learning_rate": 4.942578128784287e-05,
927
+ "loss": 0.4833,
928
+ "num_input_tokens_seen": 7127008,
929
+ "step": 575
930
+ },
931
+ {
932
+ "epoch": 0.20684736091298145,
933
+ "grad_norm": 1.430445909500122,
934
+ "learning_rate": 4.941579070196214e-05,
935
+ "loss": 0.422,
936
+ "num_input_tokens_seen": 7191776,
937
+ "step": 580
938
+ },
939
+ {
940
+ "epoch": 0.2086305278174037,
941
+ "grad_norm": 1.6088680028915405,
942
+ "learning_rate": 4.940571498034572e-05,
943
+ "loss": 0.4913,
944
+ "num_input_tokens_seen": 7251536,
945
+ "step": 585
946
+ },
947
+ {
948
+ "epoch": 0.21041369472182597,
949
+ "grad_norm": 1.3081697225570679,
950
+ "learning_rate": 4.939555415812678e-05,
951
+ "loss": 0.451,
952
+ "num_input_tokens_seen": 7315696,
953
+ "step": 590
954
+ },
955
+ {
956
+ "epoch": 0.2121968616262482,
957
+ "grad_norm": 1.3625929355621338,
958
+ "learning_rate": 4.938530827073522e-05,
959
+ "loss": 0.5694,
960
+ "num_input_tokens_seen": 7373792,
961
+ "step": 595
962
+ },
963
+ {
964
+ "epoch": 0.21398002853067047,
965
+ "grad_norm": 1.1833407878875732,
966
+ "learning_rate": 4.9374977353897566e-05,
967
+ "loss": 0.5647,
968
+ "num_input_tokens_seen": 7434464,
969
+ "step": 600
970
+ },
971
+ {
972
+ "epoch": 0.21576319543509273,
973
+ "grad_norm": 1.3193016052246094,
974
+ "learning_rate": 4.936456144363681e-05,
975
+ "loss": 0.5739,
976
+ "num_input_tokens_seen": 7497328,
977
+ "step": 605
978
+ },
979
+ {
980
+ "epoch": 0.21754636233951496,
981
+ "grad_norm": 1.4671732187271118,
982
+ "learning_rate": 4.935406057627234e-05,
983
+ "loss": 0.5399,
984
+ "num_input_tokens_seen": 7560816,
985
+ "step": 610
986
+ },
987
+ {
988
+ "epoch": 0.21932952924393723,
989
+ "grad_norm": 1.0455771684646606,
990
+ "learning_rate": 4.9343474788419767e-05,
991
+ "loss": 0.4423,
992
+ "num_input_tokens_seen": 7623280,
993
+ "step": 615
994
+ },
995
+ {
996
+ "epoch": 0.2211126961483595,
997
+ "grad_norm": 1.2360905408859253,
998
+ "learning_rate": 4.9332804116990795e-05,
999
+ "loss": 0.4595,
1000
+ "num_input_tokens_seen": 7685264,
1001
+ "step": 620
1002
+ },
1003
+ {
1004
+ "epoch": 0.22289586305278175,
1005
+ "grad_norm": 1.3082692623138428,
1006
+ "learning_rate": 4.9322048599193124e-05,
1007
+ "loss": 0.5022,
1008
+ "num_input_tokens_seen": 7748000,
1009
+ "step": 625
1010
+ },
1011
+ {
1012
+ "epoch": 0.22467902995720399,
1013
+ "grad_norm": 1.306279182434082,
1014
+ "learning_rate": 4.931120827253033e-05,
1015
+ "loss": 0.4287,
1016
+ "num_input_tokens_seen": 7812992,
1017
+ "step": 630
1018
+ },
1019
+ {
1020
+ "epoch": 0.22646219686162625,
1021
+ "grad_norm": 1.3158313035964966,
1022
+ "learning_rate": 4.930028317480167e-05,
1023
+ "loss": 0.4895,
1024
+ "num_input_tokens_seen": 7876416,
1025
+ "step": 635
1026
+ },
1027
+ {
1028
+ "epoch": 0.2282453637660485,
1029
+ "grad_norm": 1.1636604070663452,
1030
+ "learning_rate": 4.9289273344102014e-05,
1031
+ "loss": 0.4975,
1032
+ "num_input_tokens_seen": 7940544,
1033
+ "step": 640
1034
+ },
1035
+ {
1036
+ "epoch": 0.23002853067047074,
1037
+ "grad_norm": 1.23000168800354,
1038
+ "learning_rate": 4.927817881882169e-05,
1039
+ "loss": 0.4295,
1040
+ "num_input_tokens_seen": 7999472,
1041
+ "step": 645
1042
+ },
1043
+ {
1044
+ "epoch": 0.231811697574893,
1045
+ "grad_norm": 1.54082453250885,
1046
+ "learning_rate": 4.9266999637646326e-05,
1047
+ "loss": 0.5753,
1048
+ "num_input_tokens_seen": 8061168,
1049
+ "step": 650
1050
+ },
1051
+ {
1052
+ "epoch": 0.23359486447931527,
1053
+ "grad_norm": 2.485759973526001,
1054
+ "learning_rate": 4.925573583955676e-05,
1055
+ "loss": 0.443,
1056
+ "num_input_tokens_seen": 8118944,
1057
+ "step": 655
1058
+ },
1059
+ {
1060
+ "epoch": 0.23537803138373753,
1061
+ "grad_norm": 1.284912347793579,
1062
+ "learning_rate": 4.9244387463828876e-05,
1063
+ "loss": 0.5421,
1064
+ "num_input_tokens_seen": 8185072,
1065
+ "step": 660
1066
+ },
1067
+ {
1068
+ "epoch": 0.23716119828815976,
1069
+ "grad_norm": 3.4624996185302734,
1070
+ "learning_rate": 4.9232954550033484e-05,
1071
+ "loss": 0.4099,
1072
+ "num_input_tokens_seen": 8247616,
1073
+ "step": 665
1074
+ },
1075
+ {
1076
+ "epoch": 0.23894436519258203,
1077
+ "grad_norm": 1.1022762060165405,
1078
+ "learning_rate": 4.922143713803613e-05,
1079
+ "loss": 0.4784,
1080
+ "num_input_tokens_seen": 8312240,
1081
+ "step": 670
1082
+ },
1083
+ {
1084
+ "epoch": 0.2407275320970043,
1085
+ "grad_norm": 1.1634345054626465,
1086
+ "learning_rate": 4.920983526799705e-05,
1087
+ "loss": 0.3882,
1088
+ "num_input_tokens_seen": 8371088,
1089
+ "step": 675
1090
+ },
1091
+ {
1092
+ "epoch": 0.24251069900142652,
1093
+ "grad_norm": 1.4921728372573853,
1094
+ "learning_rate": 4.919814898037095e-05,
1095
+ "loss": 0.5662,
1096
+ "num_input_tokens_seen": 8435264,
1097
+ "step": 680
1098
+ },
1099
+ {
1100
+ "epoch": 0.24429386590584878,
1101
+ "grad_norm": 1.2474942207336426,
1102
+ "learning_rate": 4.918637831590689e-05,
1103
+ "loss": 0.4169,
1104
+ "num_input_tokens_seen": 8498960,
1105
+ "step": 685
1106
+ },
1107
+ {
1108
+ "epoch": 0.24607703281027105,
1109
+ "grad_norm": 0.9692139625549316,
1110
+ "learning_rate": 4.917452331564816e-05,
1111
+ "loss": 0.4681,
1112
+ "num_input_tokens_seen": 8561168,
1113
+ "step": 690
1114
+ },
1115
+ {
1116
+ "epoch": 0.2478601997146933,
1117
+ "grad_norm": 1.57968008518219,
1118
+ "learning_rate": 4.9162584020932114e-05,
1119
+ "loss": 0.4668,
1120
+ "num_input_tokens_seen": 8624528,
1121
+ "step": 695
1122
+ },
1123
+ {
1124
+ "epoch": 0.24964336661911554,
1125
+ "grad_norm": 1.7983195781707764,
1126
+ "learning_rate": 4.915056047339002e-05,
1127
+ "loss": 0.5366,
1128
+ "num_input_tokens_seen": 8684608,
1129
+ "step": 700
1130
+ },
1131
+ {
1132
+ "epoch": 0.2514265335235378,
1133
+ "grad_norm": 1.3157538175582886,
1134
+ "learning_rate": 4.913845271494695e-05,
1135
+ "loss": 0.4451,
1136
+ "num_input_tokens_seen": 8747216,
1137
+ "step": 705
1138
+ },
1139
+ {
1140
+ "epoch": 0.25320970042796004,
1141
+ "grad_norm": 1.193604588508606,
1142
+ "learning_rate": 4.91262607878216e-05,
1143
+ "loss": 0.5626,
1144
+ "num_input_tokens_seen": 8807392,
1145
+ "step": 710
1146
+ },
1147
+ {
1148
+ "epoch": 0.2549928673323823,
1149
+ "grad_norm": 1.0445785522460938,
1150
+ "learning_rate": 4.911398473452616e-05,
1151
+ "loss": 0.4848,
1152
+ "num_input_tokens_seen": 8868496,
1153
+ "step": 715
1154
+ },
1155
+ {
1156
+ "epoch": 0.25677603423680456,
1157
+ "grad_norm": 1.8069995641708374,
1158
+ "learning_rate": 4.910162459786617e-05,
1159
+ "loss": 0.4672,
1160
+ "num_input_tokens_seen": 8929056,
1161
+ "step": 720
1162
+ },
1163
+ {
1164
+ "epoch": 0.2585592011412268,
1165
+ "grad_norm": 1.1339744329452515,
1166
+ "learning_rate": 4.908918042094033e-05,
1167
+ "loss": 0.399,
1168
+ "num_input_tokens_seen": 8991968,
1169
+ "step": 725
1170
+ },
1171
+ {
1172
+ "epoch": 0.2603423680456491,
1173
+ "grad_norm": 1.2230961322784424,
1174
+ "learning_rate": 4.907665224714042e-05,
1175
+ "loss": 0.5477,
1176
+ "num_input_tokens_seen": 9053408,
1177
+ "step": 730
1178
+ },
1179
+ {
1180
+ "epoch": 0.26212553495007135,
1181
+ "grad_norm": 1.2331055402755737,
1182
+ "learning_rate": 4.906404012015108e-05,
1183
+ "loss": 0.4485,
1184
+ "num_input_tokens_seen": 9115920,
1185
+ "step": 735
1186
+ },
1187
+ {
1188
+ "epoch": 0.26390870185449355,
1189
+ "grad_norm": 1.8696657419204712,
1190
+ "learning_rate": 4.905134408394969e-05,
1191
+ "loss": 0.4714,
1192
+ "num_input_tokens_seen": 9184576,
1193
+ "step": 740
1194
+ },
1195
+ {
1196
+ "epoch": 0.2656918687589158,
1197
+ "grad_norm": 1.9693909883499146,
1198
+ "learning_rate": 4.9038564182806234e-05,
1199
+ "loss": 0.516,
1200
+ "num_input_tokens_seen": 9247872,
1201
+ "step": 745
1202
+ },
1203
+ {
1204
+ "epoch": 0.2674750356633381,
1205
+ "grad_norm": 1.0184056758880615,
1206
+ "learning_rate": 4.902570046128312e-05,
1207
+ "loss": 0.4914,
1208
+ "num_input_tokens_seen": 9310976,
1209
+ "step": 750
1210
+ },
1211
+ {
1212
+ "epoch": 0.26925820256776034,
1213
+ "grad_norm": 1.165300726890564,
1214
+ "learning_rate": 4.9012752964235014e-05,
1215
+ "loss": 0.4695,
1216
+ "num_input_tokens_seen": 9372016,
1217
+ "step": 755
1218
+ },
1219
+ {
1220
+ "epoch": 0.2710413694721826,
1221
+ "grad_norm": 1.0303696393966675,
1222
+ "learning_rate": 4.8999721736808714e-05,
1223
+ "loss": 0.4741,
1224
+ "num_input_tokens_seen": 9432624,
1225
+ "step": 760
1226
+ },
1227
+ {
1228
+ "epoch": 0.27282453637660486,
1229
+ "grad_norm": 1.2935962677001953,
1230
+ "learning_rate": 4.898660682444297e-05,
1231
+ "loss": 0.5044,
1232
+ "num_input_tokens_seen": 9493360,
1233
+ "step": 765
1234
+ },
1235
+ {
1236
+ "epoch": 0.2746077032810271,
1237
+ "grad_norm": 1.3259665966033936,
1238
+ "learning_rate": 4.8973408272868347e-05,
1239
+ "loss": 0.4618,
1240
+ "num_input_tokens_seen": 9555136,
1241
+ "step": 770
1242
+ },
1243
+ {
1244
+ "epoch": 0.27639087018544933,
1245
+ "grad_norm": 4.303719520568848,
1246
+ "learning_rate": 4.896012612810704e-05,
1247
+ "loss": 0.3954,
1248
+ "num_input_tokens_seen": 9616896,
1249
+ "step": 775
1250
+ },
1251
+ {
1252
+ "epoch": 0.2781740370898716,
1253
+ "grad_norm": 1.2892228364944458,
1254
+ "learning_rate": 4.894676043647274e-05,
1255
+ "loss": 0.3872,
1256
+ "num_input_tokens_seen": 9674752,
1257
+ "step": 780
1258
+ },
1259
+ {
1260
+ "epoch": 0.27995720399429386,
1261
+ "grad_norm": 1.360479474067688,
1262
+ "learning_rate": 4.8933311244570434e-05,
1263
+ "loss": 0.4713,
1264
+ "num_input_tokens_seen": 9736976,
1265
+ "step": 785
1266
+ },
1267
+ {
1268
+ "epoch": 0.2817403708987161,
1269
+ "grad_norm": 1.3140631914138794,
1270
+ "learning_rate": 4.8919778599296293e-05,
1271
+ "loss": 0.3917,
1272
+ "num_input_tokens_seen": 9797136,
1273
+ "step": 790
1274
+ },
1275
+ {
1276
+ "epoch": 0.2835235378031384,
1277
+ "grad_norm": 1.0723479986190796,
1278
+ "learning_rate": 4.890616254783748e-05,
1279
+ "loss": 0.4911,
1280
+ "num_input_tokens_seen": 9858928,
1281
+ "step": 795
1282
+ },
1283
+ {
1284
+ "epoch": 0.28530670470756064,
1285
+ "grad_norm": 1.4321138858795166,
1286
+ "learning_rate": 4.8892463137671963e-05,
1287
+ "loss": 0.4682,
1288
+ "num_input_tokens_seen": 9917776,
1289
+ "step": 800
1290
+ },
1291
+ {
1292
+ "epoch": 0.2870898716119829,
1293
+ "grad_norm": 1.2900583744049072,
1294
+ "learning_rate": 4.887868041656839e-05,
1295
+ "loss": 0.4978,
1296
+ "num_input_tokens_seen": 9982464,
1297
+ "step": 805
1298
+ },
1299
+ {
1300
+ "epoch": 0.2888730385164051,
1301
+ "grad_norm": 1.1396691799163818,
1302
+ "learning_rate": 4.886481443258594e-05,
1303
+ "loss": 0.4178,
1304
+ "num_input_tokens_seen": 10044208,
1305
+ "step": 810
1306
+ },
1307
+ {
1308
+ "epoch": 0.2906562054208274,
1309
+ "grad_norm": 1.40047025680542,
1310
+ "learning_rate": 4.885086523407405e-05,
1311
+ "loss": 0.455,
1312
+ "num_input_tokens_seen": 10105968,
1313
+ "step": 815
1314
+ },
1315
+ {
1316
+ "epoch": 0.29243937232524964,
1317
+ "grad_norm": 1.263271689414978,
1318
+ "learning_rate": 4.88368328696724e-05,
1319
+ "loss": 0.4933,
1320
+ "num_input_tokens_seen": 10166992,
1321
+ "step": 820
1322
+ },
1323
+ {
1324
+ "epoch": 0.2942225392296719,
1325
+ "grad_norm": 1.3891979455947876,
1326
+ "learning_rate": 4.882271738831059e-05,
1327
+ "loss": 0.5043,
1328
+ "num_input_tokens_seen": 10232144,
1329
+ "step": 825
1330
+ },
1331
+ {
1332
+ "epoch": 0.29600570613409416,
1333
+ "grad_norm": 1.4529082775115967,
1334
+ "learning_rate": 4.880851883920809e-05,
1335
+ "loss": 0.5188,
1336
+ "num_input_tokens_seen": 10292944,
1337
+ "step": 830
1338
+ },
1339
+ {
1340
+ "epoch": 0.2977888730385164,
1341
+ "grad_norm": 1.1358407735824585,
1342
+ "learning_rate": 4.879423727187401e-05,
1343
+ "loss": 0.5159,
1344
+ "num_input_tokens_seen": 10354256,
1345
+ "step": 835
1346
+ },
1347
+ {
1348
+ "epoch": 0.2995720399429387,
1349
+ "grad_norm": 1.7224394083023071,
1350
+ "learning_rate": 4.8779872736106916e-05,
1351
+ "loss": 0.5063,
1352
+ "num_input_tokens_seen": 10416688,
1353
+ "step": 840
1354
+ },
1355
+ {
1356
+ "epoch": 0.3013552068473609,
1357
+ "grad_norm": 1.557830810546875,
1358
+ "learning_rate": 4.8765425281994704e-05,
1359
+ "loss": 0.44,
1360
+ "num_input_tokens_seen": 10477712,
1361
+ "step": 845
1362
+ },
1363
+ {
1364
+ "epoch": 0.30313837375178315,
1365
+ "grad_norm": 1.207537293434143,
1366
+ "learning_rate": 4.8750894959914377e-05,
1367
+ "loss": 0.457,
1368
+ "num_input_tokens_seen": 10539120,
1369
+ "step": 850
1370
+ },
1371
+ {
1372
+ "epoch": 0.3049215406562054,
1373
+ "grad_norm": 1.2876931428909302,
1374
+ "learning_rate": 4.873628182053191e-05,
1375
+ "loss": 0.4583,
1376
+ "num_input_tokens_seen": 10602400,
1377
+ "step": 855
1378
+ },
1379
+ {
1380
+ "epoch": 0.3067047075606277,
1381
+ "grad_norm": 1.5556424856185913,
1382
+ "learning_rate": 4.872158591480206e-05,
1383
+ "loss": 0.4462,
1384
+ "num_input_tokens_seen": 10665920,
1385
+ "step": 860
1386
+ },
1387
+ {
1388
+ "epoch": 0.30848787446504994,
1389
+ "grad_norm": 1.2405084371566772,
1390
+ "learning_rate": 4.870680729396815e-05,
1391
+ "loss": 0.4229,
1392
+ "num_input_tokens_seen": 10732768,
+ "step": 865
+ },
+ {
+ "epoch": 0.3102710413694722,
+ "grad_norm": 1.3671534061431885,
+ "learning_rate": 4.869194600956195e-05,
+ "loss": 0.5017,
+ "num_input_tokens_seen": 10794368,
+ "step": 870
+ },
+ {
+ "epoch": 0.31205420827389446,
+ "grad_norm": 1.0638670921325684,
+ "learning_rate": 4.867700211340347e-05,
+ "loss": 0.4751,
+ "num_input_tokens_seen": 10853408,
+ "step": 875
+ },
+ {
+ "epoch": 0.31383737517831667,
+ "grad_norm": 1.2563133239746094,
+ "learning_rate": 4.8661975657600765e-05,
+ "loss": 0.4873,
+ "num_input_tokens_seen": 10918576,
+ "step": 880
+ },
+ {
+ "epoch": 0.31562054208273893,
+ "grad_norm": 1.0638364553451538,
+ "learning_rate": 4.8646866694549795e-05,
+ "loss": 0.4572,
+ "num_input_tokens_seen": 10980976,
+ "step": 885
+ },
+ {
+ "epoch": 0.3174037089871612,
+ "grad_norm": 1.3460172414779663,
+ "learning_rate": 4.863167527693417e-05,
+ "loss": 0.4758,
+ "num_input_tokens_seen": 11040448,
+ "step": 890
+ },
+ {
+ "epoch": 0.31918687589158345,
+ "grad_norm": 1.210242509841919,
+ "learning_rate": 4.861640145772507e-05,
+ "loss": 0.5092,
+ "num_input_tokens_seen": 11104160,
+ "step": 895
+ },
+ {
+ "epoch": 0.3209700427960057,
+ "grad_norm": 1.0002316236495972,
+ "learning_rate": 4.8601045290180946e-05,
+ "loss": 0.4447,
+ "num_input_tokens_seen": 11164224,
+ "step": 900
+ },
+ {
+ "epoch": 0.322753209700428,
+ "grad_norm": 1.332479476928711,
+ "learning_rate": 4.858560682784744e-05,
+ "loss": 0.4335,
+ "num_input_tokens_seen": 11227376,
+ "step": 905
+ },
+ {
+ "epoch": 0.32453637660485024,
+ "grad_norm": 1.223310112953186,
+ "learning_rate": 4.8570086124557116e-05,
+ "loss": 0.4156,
+ "num_input_tokens_seen": 11284704,
+ "step": 910
+ },
+ {
+ "epoch": 0.32631954350927245,
+ "grad_norm": 1.439526915550232,
+ "learning_rate": 4.85544832344293e-05,
+ "loss": 0.431,
+ "num_input_tokens_seen": 11348448,
+ "step": 915
+ },
+ {
+ "epoch": 0.3281027104136947,
+ "grad_norm": 1.27132248878479,
+ "learning_rate": 4.853879821186993e-05,
+ "loss": 0.4941,
+ "num_input_tokens_seen": 11406160,
+ "step": 920
+ },
+ {
+ "epoch": 0.32988587731811697,
+ "grad_norm": 1.6706770658493042,
+ "learning_rate": 4.8523031111571316e-05,
+ "loss": 0.4718,
+ "num_input_tokens_seen": 11467088,
+ "step": 925
+ },
+ {
+ "epoch": 0.33166904422253923,
+ "grad_norm": 1.131922960281372,
+ "learning_rate": 4.850718198851195e-05,
+ "loss": 0.4172,
+ "num_input_tokens_seen": 11532768,
+ "step": 930
+ },
+ {
+ "epoch": 0.3334522111269615,
+ "grad_norm": 1.1946320533752441,
+ "learning_rate": 4.849125089795634e-05,
+ "loss": 0.3736,
+ "num_input_tokens_seen": 11591984,
+ "step": 935
+ },
+ {
+ "epoch": 0.33523537803138376,
+ "grad_norm": 1.2938627004623413,
+ "learning_rate": 4.8475237895454833e-05,
+ "loss": 0.462,
+ "num_input_tokens_seen": 11656624,
+ "step": 940
+ },
+ {
+ "epoch": 0.33701854493580596,
+ "grad_norm": 1.3345882892608643,
+ "learning_rate": 4.845914303684336e-05,
+ "loss": 0.4584,
+ "num_input_tokens_seen": 11718256,
+ "step": 945
+ },
+ {
+ "epoch": 0.3388017118402282,
+ "grad_norm": 0.8923389315605164,
+ "learning_rate": 4.844296637824329e-05,
+ "loss": 0.5339,
+ "num_input_tokens_seen": 11776080,
+ "step": 950
+ },
+ {
+ "epoch": 0.3405848787446505,
+ "grad_norm": 1.2611216306686401,
+ "learning_rate": 4.8426707976061226e-05,
+ "loss": 0.5625,
+ "num_input_tokens_seen": 11840768,
+ "step": 955
+ },
+ {
+ "epoch": 0.34236804564907275,
+ "grad_norm": 1.1760461330413818,
+ "learning_rate": 4.84103678869888e-05,
+ "loss": 0.5043,
+ "num_input_tokens_seen": 11904208,
+ "step": 960
+ },
+ {
+ "epoch": 0.344151212553495,
+ "grad_norm": 1.19206964969635,
+ "learning_rate": 4.8393946168002477e-05,
+ "loss": 0.4183,
+ "num_input_tokens_seen": 11967952,
+ "step": 965
+ },
+ {
+ "epoch": 0.3459343794579173,
+ "grad_norm": 1.1648989915847778,
+ "learning_rate": 4.8377442876363364e-05,
+ "loss": 0.4095,
+ "num_input_tokens_seen": 12033136,
+ "step": 970
+ },
+ {
+ "epoch": 0.34771754636233954,
+ "grad_norm": 1.2076241970062256,
+ "learning_rate": 4.8360858069617006e-05,
+ "loss": 0.4537,
+ "num_input_tokens_seen": 12097584,
+ "step": 975
+ },
+ {
+ "epoch": 0.34950071326676174,
+ "grad_norm": 1.0648747682571411,
+ "learning_rate": 4.834419180559317e-05,
+ "loss": 0.3932,
+ "num_input_tokens_seen": 12156320,
+ "step": 980
+ },
+ {
+ "epoch": 0.351283880171184,
+ "grad_norm": 1.228440523147583,
+ "learning_rate": 4.832744414240567e-05,
+ "loss": 0.4313,
+ "num_input_tokens_seen": 12218384,
+ "step": 985
+ },
+ {
+ "epoch": 0.35306704707560627,
+ "grad_norm": 1.4440958499908447,
+ "learning_rate": 4.8310615138452156e-05,
+ "loss": 0.4685,
+ "num_input_tokens_seen": 12281856,
+ "step": 990
+ },
+ {
+ "epoch": 0.35485021398002853,
+ "grad_norm": 1.0754982233047485,
+ "learning_rate": 4.829370485241388e-05,
+ "loss": 0.4623,
+ "num_input_tokens_seen": 12343904,
+ "step": 995
+ },
+ {
+ "epoch": 0.3566333808844508,
+ "grad_norm": 1.3745994567871094,
+ "learning_rate": 4.827671334325556e-05,
+ "loss": 0.4334,
+ "num_input_tokens_seen": 12402256,
+ "step": 1000
+ },
+ {
+ "epoch": 0.35841654778887305,
+ "grad_norm": 1.0649508237838745,
+ "learning_rate": 4.82596406702251e-05,
+ "loss": 0.4728,
+ "num_input_tokens_seen": 12465536,
+ "step": 1005
+ },
+ {
+ "epoch": 0.3601997146932953,
+ "grad_norm": 1.061046838760376,
+ "learning_rate": 4.8242486892853424e-05,
+ "loss": 0.421,
+ "num_input_tokens_seen": 12530464,
+ "step": 1010
+ },
+ {
+ "epoch": 0.3619828815977175,
+ "grad_norm": 1.7068672180175781,
+ "learning_rate": 4.822525207095425e-05,
+ "loss": 0.4843,
+ "num_input_tokens_seen": 12593216,
+ "step": 1015
+ },
+ {
+ "epoch": 0.3637660485021398,
+ "grad_norm": 1.2018966674804688,
+ "learning_rate": 4.820793626462391e-05,
+ "loss": 0.4604,
+ "num_input_tokens_seen": 12655248,
+ "step": 1020
+ },
+ {
+ "epoch": 0.36554921540656204,
+ "grad_norm": 1.2888667583465576,
+ "learning_rate": 4.819053953424112e-05,
+ "loss": 0.427,
+ "num_input_tokens_seen": 12718048,
+ "step": 1025
+ },
+ {
+ "epoch": 0.3673323823109843,
+ "grad_norm": 1.3183050155639648,
+ "learning_rate": 4.817306194046675e-05,
+ "loss": 0.4415,
+ "num_input_tokens_seen": 12781536,
+ "step": 1030
+ },
+ {
+ "epoch": 0.36911554921540657,
+ "grad_norm": 1.7154110670089722,
+ "learning_rate": 4.815550354424365e-05,
+ "loss": 0.5193,
+ "num_input_tokens_seen": 12844336,
+ "step": 1035
+ },
+ {
+ "epoch": 0.37089871611982883,
+ "grad_norm": 1.3131228685379028,
+ "learning_rate": 4.813786440679642e-05,
+ "loss": 0.4078,
+ "num_input_tokens_seen": 12906288,
+ "step": 1040
+ },
+ {
+ "epoch": 0.3726818830242511,
+ "grad_norm": 1.1588881015777588,
+ "learning_rate": 4.81201445896312e-05,
+ "loss": 0.3672,
+ "num_input_tokens_seen": 12965200,
+ "step": 1045
+ },
+ {
+ "epoch": 0.3744650499286733,
+ "grad_norm": 1.5113669633865356,
+ "learning_rate": 4.810234415453545e-05,
+ "loss": 0.4896,
+ "num_input_tokens_seen": 13033248,
+ "step": 1050
+ },
+ {
+ "epoch": 0.37624821683309556,
+ "grad_norm": 1.4646553993225098,
+ "learning_rate": 4.808446316357773e-05,
+ "loss": 0.4772,
+ "num_input_tokens_seen": 13096752,
+ "step": 1055
+ },
+ {
+ "epoch": 0.3780313837375178,
+ "grad_norm": 1.9652795791625977,
+ "learning_rate": 4.80665016791075e-05,
+ "loss": 0.4468,
+ "num_input_tokens_seen": 13158992,
+ "step": 1060
+ },
+ {
+ "epoch": 0.3798145506419401,
+ "grad_norm": 3.033592700958252,
+ "learning_rate": 4.804845976375489e-05,
+ "loss": 0.3997,
+ "num_input_tokens_seen": 13222064,
+ "step": 1065
+ },
+ {
+ "epoch": 0.38159771754636235,
+ "grad_norm": 1.2086786031723022,
+ "learning_rate": 4.8030337480430496e-05,
+ "loss": 0.4966,
+ "num_input_tokens_seen": 13286112,
+ "step": 1070
+ },
+ {
+ "epoch": 0.3833808844507846,
+ "grad_norm": 1.7219619750976562,
+ "learning_rate": 4.801213489232514e-05,
+ "loss": 0.4918,
+ "num_input_tokens_seen": 13346832,
+ "step": 1075
+ },
+ {
+ "epoch": 0.38516405135520687,
+ "grad_norm": 1.256044864654541,
+ "learning_rate": 4.799385206290965e-05,
+ "loss": 0.4734,
+ "num_input_tokens_seen": 13408992,
+ "step": 1080
+ },
+ {
+ "epoch": 0.3869472182596291,
+ "grad_norm": 1.150932788848877,
+ "learning_rate": 4.7975489055934666e-05,
+ "loss": 0.3703,
+ "num_input_tokens_seen": 13469280,
+ "step": 1085
+ },
+ {
+ "epoch": 0.38873038516405134,
+ "grad_norm": 1.4256497621536255,
+ "learning_rate": 4.79570459354304e-05,
+ "loss": 0.5076,
+ "num_input_tokens_seen": 13533536,
+ "step": 1090
+ },
+ {
+ "epoch": 0.3905135520684736,
+ "grad_norm": 1.1593137979507446,
+ "learning_rate": 4.79385227657064e-05,
+ "loss": 0.4351,
+ "num_input_tokens_seen": 13594304,
+ "step": 1095
+ },
+ {
+ "epoch": 0.39229671897289586,
+ "grad_norm": 0.9239110350608826,
+ "learning_rate": 4.791991961135135e-05,
+ "loss": 0.4984,
+ "num_input_tokens_seen": 13657328,
+ "step": 1100
+ },
+ {
+ "epoch": 0.3940798858773181,
+ "grad_norm": 0.999727189540863,
+ "learning_rate": 4.790123653723282e-05,
+ "loss": 0.4598,
+ "num_input_tokens_seen": 13720224,
+ "step": 1105
+ },
+ {
+ "epoch": 0.3958630527817404,
+ "grad_norm": 1.0658410787582397,
+ "learning_rate": 4.788247360849708e-05,
+ "loss": 0.4409,
+ "num_input_tokens_seen": 13782656,
+ "step": 1110
+ },
+ {
+ "epoch": 0.39764621968616265,
+ "grad_norm": 1.2038542032241821,
+ "learning_rate": 4.786363089056881e-05,
+ "loss": 0.4719,
+ "num_input_tokens_seen": 13849120,
+ "step": 1115
+ },
+ {
+ "epoch": 0.39942938659058486,
+ "grad_norm": 1.1782008409500122,
+ "learning_rate": 4.784470844915093e-05,
+ "loss": 0.4147,
+ "num_input_tokens_seen": 13910944,
+ "step": 1120
+ },
+ {
+ "epoch": 0.4012125534950071,
+ "grad_norm": 0.9827120304107666,
+ "learning_rate": 4.782570635022436e-05,
+ "loss": 0.3883,
+ "num_input_tokens_seen": 13969248,
+ "step": 1125
+ },
+ {
+ "epoch": 0.4029957203994294,
+ "grad_norm": 1.0276070833206177,
+ "learning_rate": 4.7806624660047744e-05,
+ "loss": 0.4337,
+ "num_input_tokens_seen": 14028112,
+ "step": 1130
+ },
+ {
+ "epoch": 0.40477888730385164,
+ "grad_norm": 1.923315167427063,
+ "learning_rate": 4.7787463445157286e-05,
+ "loss": 0.5135,
+ "num_input_tokens_seen": 14090320,
+ "step": 1135
+ },
+ {
+ "epoch": 0.4065620542082739,
+ "grad_norm": 1.3430618047714233,
+ "learning_rate": 4.7768222772366466e-05,
+ "loss": 0.5111,
+ "num_input_tokens_seen": 14151840,
+ "step": 1140
+ },
+ {
+ "epoch": 0.40834522111269617,
+ "grad_norm": 1.5225883722305298,
+ "learning_rate": 4.774890270876584e-05,
+ "loss": 0.5005,
+ "num_input_tokens_seen": 14213824,
+ "step": 1145
+ },
+ {
+ "epoch": 0.41012838801711843,
+ "grad_norm": 1.0013866424560547,
+ "learning_rate": 4.772950332172279e-05,
+ "loss": 0.6018,
+ "num_input_tokens_seen": 14278736,
+ "step": 1150
+ },
+ {
+ "epoch": 0.41191155492154063,
+ "grad_norm": 1.0078413486480713,
+ "learning_rate": 4.771002467888128e-05,
+ "loss": 0.3879,
+ "num_input_tokens_seen": 14339408,
+ "step": 1155
+ },
+ {
+ "epoch": 0.4136947218259629,
+ "grad_norm": 1.1650017499923706,
+ "learning_rate": 4.769046684816165e-05,
+ "loss": 0.4924,
+ "num_input_tokens_seen": 14399008,
+ "step": 1160
+ },
+ {
+ "epoch": 0.41547788873038516,
+ "grad_norm": 1.351217269897461,
+ "learning_rate": 4.767082989776034e-05,
+ "loss": 0.4104,
+ "num_input_tokens_seen": 14462656,
+ "step": 1165
+ },
+ {
+ "epoch": 0.4172610556348074,
+ "grad_norm": 1.3392795324325562,
+ "learning_rate": 4.76511138961497e-05,
+ "loss": 0.4629,
+ "num_input_tokens_seen": 14527568,
+ "step": 1170
+ },
+ {
+ "epoch": 0.4190442225392297,
+ "grad_norm": 1.3544095754623413,
+ "learning_rate": 4.763131891207771e-05,
+ "loss": 0.486,
+ "num_input_tokens_seen": 14590944,
+ "step": 1175
+ },
+ {
+ "epoch": 0.42082738944365194,
+ "grad_norm": 1.1842771768569946,
+ "learning_rate": 4.761144501456773e-05,
+ "loss": 0.4529,
+ "num_input_tokens_seen": 14651104,
+ "step": 1180
+ },
+ {
+ "epoch": 0.4226105563480742,
+ "grad_norm": 0.9588406085968018,
+ "learning_rate": 4.7591492272918344e-05,
+ "loss": 0.3739,
+ "num_input_tokens_seen": 14711344,
+ "step": 1185
+ },
+ {
+ "epoch": 0.4243937232524964,
+ "grad_norm": 1.1637108325958252,
+ "learning_rate": 4.7571460756703e-05,
+ "loss": 0.4772,
+ "num_input_tokens_seen": 14772656,
+ "step": 1190
+ },
+ {
+ "epoch": 0.4261768901569187,
+ "grad_norm": 1.225539207458496,
+ "learning_rate": 4.755135053576987e-05,
+ "loss": 0.4606,
+ "num_input_tokens_seen": 14833840,
+ "step": 1195
+ },
+ {
+ "epoch": 0.42796005706134094,
+ "grad_norm": 1.4021525382995605,
+ "learning_rate": 4.753116168024153e-05,
+ "loss": 0.4168,
+ "num_input_tokens_seen": 14896544,
+ "step": 1200
+ },
+ {
+ "epoch": 0.4297432239657632,
+ "grad_norm": 1.615051507949829,
+ "learning_rate": 4.751089426051476e-05,
+ "loss": 0.4156,
+ "num_input_tokens_seen": 14956432,
+ "step": 1205
+ },
+ {
+ "epoch": 0.43152639087018546,
+ "grad_norm": 3.979645252227783,
+ "learning_rate": 4.749054834726029e-05,
+ "loss": 0.5188,
+ "num_input_tokens_seen": 15021296,
+ "step": 1210
+ },
+ {
+ "epoch": 0.4333095577746077,
+ "grad_norm": 1.3276335000991821,
+ "learning_rate": 4.7470124011422555e-05,
+ "loss": 0.4941,
+ "num_input_tokens_seen": 15080688,
+ "step": 1215
+ },
+ {
+ "epoch": 0.43509272467902993,
+ "grad_norm": 1.2513278722763062,
+ "learning_rate": 4.744962132421943e-05,
+ "loss": 0.4719,
+ "num_input_tokens_seen": 15141456,
+ "step": 1220
+ },
+ {
+ "epoch": 0.4368758915834522,
+ "grad_norm": 1.1449891328811646,
+ "learning_rate": 4.742904035714199e-05,
+ "loss": 0.4811,
+ "num_input_tokens_seen": 15202768,
+ "step": 1225
+ },
+ {
+ "epoch": 0.43865905848787445,
+ "grad_norm": 1.0668220520019531,
+ "learning_rate": 4.7408381181954284e-05,
+ "loss": 0.4801,
+ "num_input_tokens_seen": 15266416,
+ "step": 1230
+ },
+ {
+ "epoch": 0.4404422253922967,
+ "grad_norm": 1.576777458190918,
+ "learning_rate": 4.7387643870693055e-05,
+ "loss": 0.4551,
+ "num_input_tokens_seen": 15328416,
+ "step": 1235
+ },
+ {
+ "epoch": 0.442225392296719,
+ "grad_norm": 1.0677021741867065,
+ "learning_rate": 4.736682849566751e-05,
+ "loss": 0.3682,
+ "num_input_tokens_seen": 15387392,
+ "step": 1240
+ },
+ {
+ "epoch": 0.44400855920114124,
+ "grad_norm": 1.105083703994751,
+ "learning_rate": 4.734593512945904e-05,
+ "loss": 0.4721,
+ "num_input_tokens_seen": 15444928,
+ "step": 1245
+ },
+ {
+ "epoch": 0.4457917261055635,
+ "grad_norm": 1.1016100645065308,
+ "learning_rate": 4.7324963844920986e-05,
+ "loss": 0.4568,
+ "num_input_tokens_seen": 15505488,
+ "step": 1250
+ },
+ {
+ "epoch": 0.4475748930099857,
+ "grad_norm": 1.4010059833526611,
+ "learning_rate": 4.7303914715178396e-05,
+ "loss": 0.5337,
+ "num_input_tokens_seen": 15566336,
+ "step": 1255
+ },
+ {
+ "epoch": 0.44935805991440797,
+ "grad_norm": 1.149238109588623,
+ "learning_rate": 4.728278781362777e-05,
+ "loss": 0.3965,
+ "num_input_tokens_seen": 15632768,
+ "step": 1260
+ },
+ {
+ "epoch": 0.45114122681883023,
+ "grad_norm": 1.4296883344650269,
+ "learning_rate": 4.7261583213936746e-05,
+ "loss": 0.5366,
+ "num_input_tokens_seen": 15694944,
+ "step": 1265
+ },
+ {
+ "epoch": 0.4529243937232525,
+ "grad_norm": 1.2786849737167358,
+ "learning_rate": 4.7240300990043926e-05,
+ "loss": 0.4339,
+ "num_input_tokens_seen": 15756496,
+ "step": 1270
+ },
+ {
+ "epoch": 0.45470756062767476,
+ "grad_norm": 1.1299382448196411,
+ "learning_rate": 4.721894121615859e-05,
+ "loss": 0.4866,
+ "num_input_tokens_seen": 15821200,
+ "step": 1275
+ },
+ {
+ "epoch": 0.456490727532097,
+ "grad_norm": 1.1465532779693604,
+ "learning_rate": 4.7197503966760375e-05,
+ "loss": 0.4288,
+ "num_input_tokens_seen": 15882736,
+ "step": 1280
+ },
+ {
+ "epoch": 0.4582738944365193,
+ "grad_norm": 1.4677292108535767,
+ "learning_rate": 4.717598931659913e-05,
+ "loss": 0.443,
+ "num_input_tokens_seen": 15944560,
+ "step": 1285
+ },
+ {
+ "epoch": 0.4600570613409415,
+ "grad_norm": 1.8437912464141846,
+ "learning_rate": 4.7154397340694556e-05,
+ "loss": 0.4923,
+ "num_input_tokens_seen": 16006784,
+ "step": 1290
+ },
+ {
+ "epoch": 0.46184022824536375,
+ "grad_norm": 1.5408210754394531,
+ "learning_rate": 4.713272811433599e-05,
+ "loss": 0.4868,
+ "num_input_tokens_seen": 16068896,
+ "step": 1295
+ },
+ {
+ "epoch": 0.463623395149786,
+ "grad_norm": 1.1977325677871704,
+ "learning_rate": 4.711098171308214e-05,
+ "loss": 0.4781,
+ "num_input_tokens_seen": 16128640,
+ "step": 1300
+ },
+ {
+ "epoch": 0.4654065620542083,
+ "grad_norm": 1.470975399017334,
+ "learning_rate": 4.708915821276082e-05,
+ "loss": 0.4748,
+ "num_input_tokens_seen": 16192800,
+ "step": 1305
+ },
+ {
+ "epoch": 0.46718972895863053,
+ "grad_norm": 1.460138201713562,
+ "learning_rate": 4.706725768946866e-05,
+ "loss": 0.5107,
+ "num_input_tokens_seen": 16251248,
+ "step": 1310
+ },
+ {
+ "epoch": 0.4689728958630528,
+ "grad_norm": 1.2103915214538574,
+ "learning_rate": 4.7045280219570896e-05,
+ "loss": 0.4768,
+ "num_input_tokens_seen": 16314704,
+ "step": 1315
+ },
+ {
+ "epoch": 0.47075606276747506,
+ "grad_norm": 1.1669901609420776,
+ "learning_rate": 4.702322587970104e-05,
+ "loss": 0.4624,
+ "num_input_tokens_seen": 16375792,
+ "step": 1320
+ },
+ {
+ "epoch": 0.47253922967189727,
+ "grad_norm": 1.1790727376937866,
+ "learning_rate": 4.700109474676064e-05,
+ "loss": 0.4735,
+ "num_input_tokens_seen": 16438672,
+ "step": 1325
+ },
+ {
+ "epoch": 0.4743223965763195,
+ "grad_norm": 1.019875407218933,
+ "learning_rate": 4.697888689791906e-05,
+ "loss": 0.3809,
+ "num_input_tokens_seen": 16498896,
+ "step": 1330
+ },
+ {
+ "epoch": 0.4761055634807418,
+ "grad_norm": 1.2999383211135864,
+ "learning_rate": 4.6956602410613115e-05,
+ "loss": 0.4421,
+ "num_input_tokens_seen": 16566736,
+ "step": 1335
+ },
+ {
+ "epoch": 0.47788873038516405,
+ "grad_norm": 1.4289456605911255,
+ "learning_rate": 4.6934241362546874e-05,
+ "loss": 0.5083,
+ "num_input_tokens_seen": 16630480,
+ "step": 1340
+ },
+ {
+ "epoch": 0.4796718972895863,
+ "grad_norm": 1.2647002935409546,
+ "learning_rate": 4.691180383169137e-05,
+ "loss": 0.5118,
+ "num_input_tokens_seen": 16688832,
+ "step": 1345
+ },
+ {
+ "epoch": 0.4814550641940086,
+ "grad_norm": 1.1783503293991089,
+ "learning_rate": 4.688928989628431e-05,
+ "loss": 0.4128,
+ "num_input_tokens_seen": 16752432,
+ "step": 1350
+ },
+ {
+ "epoch": 0.48323823109843084,
+ "grad_norm": 1.2250187397003174,
+ "learning_rate": 4.686669963482983e-05,
+ "loss": 0.3974,
+ "num_input_tokens_seen": 16814912,
+ "step": 1355
+ },
+ {
+ "epoch": 0.48502139800285304,
+ "grad_norm": 1.5874429941177368,
+ "learning_rate": 4.6844033126098206e-05,
+ "loss": 0.5244,
+ "num_input_tokens_seen": 16875696,
+ "step": 1360
+ },
+ {
+ "epoch": 0.4868045649072753,
+ "grad_norm": 1.5424435138702393,
+ "learning_rate": 4.682129044912558e-05,
+ "loss": 0.3909,
+ "num_input_tokens_seen": 16934768,
+ "step": 1365
+ },
+ {
+ "epoch": 0.48858773181169757,
+ "grad_norm": 1.395538568496704,
+ "learning_rate": 4.679847168321368e-05,
+ "loss": 0.4208,
+ "num_input_tokens_seen": 16994192,
+ "step": 1370
+ },
+ {
+ "epoch": 0.49037089871611983,
+ "grad_norm": 1.3311400413513184,
+ "learning_rate": 4.677557690792956e-05,
+ "loss": 0.5148,
+ "num_input_tokens_seen": 17055952,
+ "step": 1375
+ },
+ {
+ "epoch": 0.4921540656205421,
+ "grad_norm": 1.0483784675598145,
+ "learning_rate": 4.6752606203105314e-05,
+ "loss": 0.4838,
+ "num_input_tokens_seen": 17118352,
+ "step": 1380
+ },
+ {
+ "epoch": 0.49393723252496435,
+ "grad_norm": 1.4240469932556152,
+ "learning_rate": 4.6729559648837777e-05,
+ "loss": 0.4676,
+ "num_input_tokens_seen": 17181856,
+ "step": 1385
+ },
+ {
+ "epoch": 0.4957203994293866,
+ "grad_norm": 1.1497527360916138,
+ "learning_rate": 4.6706437325488285e-05,
+ "loss": 0.4607,
+ "num_input_tokens_seen": 17239040,
+ "step": 1390
+ },
+ {
+ "epoch": 0.4975035663338088,
+ "grad_norm": 1.324589490890503,
+ "learning_rate": 4.6683239313682356e-05,
+ "loss": 0.3867,
+ "num_input_tokens_seen": 17300096,
+ "step": 1395
+ },
+ {
+ "epoch": 0.4992867332382311,
+ "grad_norm": 1.401481032371521,
+ "learning_rate": 4.6659965694309446e-05,
+ "loss": 0.477,
+ "num_input_tokens_seen": 17367088,
+ "step": 1400
+ },
+ {
+ "epoch": 0.5010699001426534,
+ "grad_norm": 1.0556763410568237,
+ "learning_rate": 4.6636616548522637e-05,
+ "loss": 0.4092,
+ "num_input_tokens_seen": 17427648,
+ "step": 1405
+ },
+ {
+ "epoch": 0.5028530670470756,
+ "grad_norm": 1.5187320709228516,
+ "learning_rate": 4.661319195773837e-05,
+ "loss": 0.4266,
+ "num_input_tokens_seen": 17491664,
+ "step": 1410
+ },
+ {
+ "epoch": 0.5046362339514978,
+ "grad_norm": 1.2626229524612427,
+ "learning_rate": 4.658969200363614e-05,
+ "loss": 0.5192,
+ "num_input_tokens_seen": 17553312,
+ "step": 1415
+ },
+ {
+ "epoch": 0.5064194008559201,
+ "grad_norm": 1.3596255779266357,
+ "learning_rate": 4.6566116768158254e-05,
+ "loss": 0.4983,
+ "num_input_tokens_seen": 17614656,
+ "step": 1420
+ },
+ {
+ "epoch": 0.5082025677603423,
+ "grad_norm": 1.131866455078125,
+ "learning_rate": 4.6542466333509496e-05,
+ "loss": 0.4593,
+ "num_input_tokens_seen": 17673104,
+ "step": 1425
+ },
+ {
+ "epoch": 0.5099857346647646,
+ "grad_norm": 1.1720597743988037,
+ "learning_rate": 4.651874078215688e-05,
+ "loss": 0.3885,
+ "num_input_tokens_seen": 17733920,
+ "step": 1430
+ },
+ {
+ "epoch": 0.5117689015691869,
+ "grad_norm": 1.1201550960540771,
+ "learning_rate": 4.6494940196829326e-05,
+ "loss": 0.4661,
+ "num_input_tokens_seen": 17795024,
+ "step": 1435
+ },
+ {
+ "epoch": 0.5135520684736091,
+ "grad_norm": 1.4359281063079834,
+ "learning_rate": 4.647106466051741e-05,
+ "loss": 0.4327,
+ "num_input_tokens_seen": 17856080,
+ "step": 1440
+ },
+ {
+ "epoch": 0.5153352353780314,
+ "grad_norm": 1.2126119136810303,
+ "learning_rate": 4.644711425647305e-05,
+ "loss": 0.4281,
+ "num_input_tokens_seen": 17918592,
+ "step": 1445
+ },
+ {
+ "epoch": 0.5171184022824536,
+ "grad_norm": 1.1998052597045898,
+ "learning_rate": 4.642308906820921e-05,
+ "loss": 0.4234,
+ "num_input_tokens_seen": 17985056,
+ "step": 1450
+ },
+ {
+ "epoch": 0.5189015691868759,
+ "grad_norm": 1.2513782978057861,
+ "learning_rate": 4.6398989179499635e-05,
+ "loss": 0.4952,
+ "num_input_tokens_seen": 18047856,
+ "step": 1455
+ },
+ {
+ "epoch": 0.5206847360912982,
+ "grad_norm": 1.5451606512069702,
+ "learning_rate": 4.637481467437854e-05,
+ "loss": 0.4061,
+ "num_input_tokens_seen": 18110608,
+ "step": 1460
+ },
+ {
+ "epoch": 0.5224679029957204,
+ "grad_norm": 1.280383586883545,
+ "learning_rate": 4.635056563714031e-05,
+ "loss": 0.4709,
+ "num_input_tokens_seen": 18170192,
+ "step": 1465
+ },
+ {
+ "epoch": 0.5242510699001427,
+ "grad_norm": 1.536872386932373,
+ "learning_rate": 4.632624215233924e-05,
+ "loss": 0.5166,
+ "num_input_tokens_seen": 18234512,
+ "step": 1470
+ },
+ {
+ "epoch": 0.526034236804565,
+ "grad_norm": 1.1344192028045654,
+ "learning_rate": 4.6301844304789185e-05,
+ "loss": 0.4313,
+ "num_input_tokens_seen": 18297872,
+ "step": 1475
+ },
+ {
+ "epoch": 0.5278174037089871,
+ "grad_norm": 1.2558397054672241,
+ "learning_rate": 4.6277372179563336e-05,
+ "loss": 0.4426,
+ "num_input_tokens_seen": 18360688,
+ "step": 1480
+ },
+ {
+ "epoch": 0.5296005706134094,
+ "grad_norm": 1.3379613161087036,
+ "learning_rate": 4.625282586199384e-05,
+ "loss": 0.4684,
+ "num_input_tokens_seen": 18421600,
+ "step": 1485
+ },
+ {
+ "epoch": 0.5313837375178316,
+ "grad_norm": 1.471182942390442,
+ "learning_rate": 4.622820543767159e-05,
+ "loss": 0.3746,
+ "num_input_tokens_seen": 18482608,
+ "step": 1490
+ },
+ {
+ "epoch": 0.5331669044222539,
+ "grad_norm": 1.147135853767395,
+ "learning_rate": 4.6203510992445844e-05,
+ "loss": 0.3896,
+ "num_input_tokens_seen": 18542720,
+ "step": 1495
+ },
+ {
+ "epoch": 0.5349500713266762,
+ "grad_norm": 1.6015293598175049,
+ "learning_rate": 4.617874261242399e-05,
+ "loss": 0.4613,
+ "num_input_tokens_seen": 18604304,
+ "step": 1500
+ },
+ {
+ "epoch": 0.5367332382310984,
+ "grad_norm": 1.1261463165283203,
+ "learning_rate": 4.615390038397121e-05,
+ "loss": 0.4636,
+ "num_input_tokens_seen": 18666336,
+ "step": 1505
+ },
+ {
+ "epoch": 0.5385164051355207,
+ "grad_norm": 1.1836202144622803,
+ "learning_rate": 4.612898439371019e-05,
+ "loss": 0.4072,
+ "num_input_tokens_seen": 18724912,
+ "step": 1510
+ },
+ {
+ "epoch": 0.5402995720399429,
+ "grad_norm": 1.108585238456726,
+ "learning_rate": 4.6103994728520815e-05,
+ "loss": 0.3483,
+ "num_input_tokens_seen": 18786352,
+ "step": 1515
+ },
+ {
+ "epoch": 0.5420827389443652,
+ "grad_norm": 1.3794957399368286,
+ "learning_rate": 4.607893147553989e-05,
+ "loss": 0.4259,
+ "num_input_tokens_seen": 18851488,
+ "step": 1520
+ },
+ {
+ "epoch": 0.5438659058487875,
+ "grad_norm": 1.4083433151245117,
+ "learning_rate": 4.605379472216076e-05,
+ "loss": 0.4364,
+ "num_input_tokens_seen": 18915008,
+ "step": 1525
+ },
+ {
+ "epoch": 0.5456490727532097,
+ "grad_norm": 1.3088963031768799,
+ "learning_rate": 4.602858455603313e-05,
+ "loss": 0.4098,
+ "num_input_tokens_seen": 18976256,
+ "step": 1530
+ },
+ {
+ "epoch": 0.547432239657632,
+ "grad_norm": 1.3022725582122803,
+ "learning_rate": 4.600330106506263e-05,
+ "loss": 0.4449,
+ "num_input_tokens_seen": 19036560,
+ "step": 1535
+ },
+ {
+ "epoch": 0.5492154065620543,
+ "grad_norm": 1.7286397218704224,
+ "learning_rate": 4.597794433741061e-05,
+ "loss": 0.5088,
+ "num_input_tokens_seen": 19097568,
+ "step": 1540
+ },
+ {
+ "epoch": 0.5509985734664765,
+ "grad_norm": 1.4286762475967407,
+ "learning_rate": 4.5952514461493754e-05,
+ "loss": 0.445,
+ "num_input_tokens_seen": 19158592,
+ "step": 1545
+ },
+ {
+ "epoch": 0.5527817403708987,
+ "grad_norm": 1.2713367938995361,
+ "learning_rate": 4.5927011525983824e-05,
+ "loss": 0.3791,
+ "num_input_tokens_seen": 19215600,
+ "step": 1550
+ },
+ {
+ "epoch": 0.5545649072753209,
+ "grad_norm": 1.3422623872756958,
+ "learning_rate": 4.590143561980736e-05,
+ "loss": 0.4897,
+ "num_input_tokens_seen": 19277184,
+ "step": 1555
+ },
+ {
+ "epoch": 0.5563480741797432,
+ "grad_norm": 1.278333306312561,
+ "learning_rate": 4.5875786832145287e-05,
+ "loss": 0.4426,
+ "num_input_tokens_seen": 19338032,
+ "step": 1560
+ },
+ {
+ "epoch": 0.5581312410841655,
+ "grad_norm": 1.4938713312149048,
+ "learning_rate": 4.5850065252432706e-05,
+ "loss": 0.4246,
+ "num_input_tokens_seen": 19397040,
+ "step": 1565
+ },
+ {
+ "epoch": 0.5599144079885877,
+ "grad_norm": 2.4364399909973145,
+ "learning_rate": 4.582427097035854e-05,
+ "loss": 0.4777,
+ "num_input_tokens_seen": 19456144,
+ "step": 1570
+ },
+ {
+ "epoch": 0.56169757489301,
+ "grad_norm": 3.5539422035217285,
+ "learning_rate": 4.579840407586517e-05,
+ "loss": 0.4894,
+ "num_input_tokens_seen": 19518176,
+ "step": 1575
+ },
+ {
+ "epoch": 0.5634807417974322,
+ "grad_norm": 1.4036399126052856,
+ "learning_rate": 4.577246465914825e-05,
+ "loss": 0.4704,
+ "num_input_tokens_seen": 19581024,
+ "step": 1580
+ },
+ {
+ "epoch": 0.5652639087018545,
+ "grad_norm": 0.9552262425422668,
+ "learning_rate": 4.5746452810656225e-05,
+ "loss": 0.4527,
+ "num_input_tokens_seen": 19643104,
+ "step": 1585
+ },
+ {
+ "epoch": 0.5670470756062768,
+ "grad_norm": 1.2145711183547974,
+ "learning_rate": 4.572036862109017e-05,
+ "loss": 0.4612,
+ "num_input_tokens_seen": 19702528,
+ "step": 1590
+ },
+ {
+ "epoch": 0.568830242510699,
+ "grad_norm": 1.0046789646148682,
+ "learning_rate": 4.5694212181403374e-05,
+ "loss": 0.4235,
+ "num_input_tokens_seen": 19763424,
+ "step": 1595
+ },
+ {
+ "epoch": 0.5706134094151213,
+ "grad_norm": 1.3540983200073242,
+ "learning_rate": 4.5667983582801064e-05,
+ "loss": 0.3833,
+ "num_input_tokens_seen": 19823200,
+ "step": 1600
+ },
+ {
+ "epoch": 0.5723965763195435,
+ "grad_norm": 1.2544758319854736,
+ "learning_rate": 4.5641682916740084e-05,
+ "loss": 0.4586,
+ "num_input_tokens_seen": 19883888,
+ "step": 1605
+ },
+ {
+ "epoch": 0.5741797432239658,
+ "grad_norm": 1.1667801141738892,
+ "learning_rate": 4.5615310274928556e-05,
+ "loss": 0.5969,
+ "num_input_tokens_seen": 19949840,
+ "step": 1610
+ },
+ {
+ "epoch": 0.5759629101283881,
+ "grad_norm": 0.9844037294387817,
+ "learning_rate": 4.5588865749325594e-05,
+ "loss": 0.3798,
+ "num_input_tokens_seen": 20014640,
+ "step": 1615
+ },
+ {
+ "epoch": 0.5777460770328102,
+ "grad_norm": 1.3161027431488037,
+ "learning_rate": 4.556234943214095e-05,
+ "loss": 0.4234,
+ "num_input_tokens_seen": 20077008,
+ "step": 1620
+ },
+ {
+ "epoch": 0.5795292439372325,
+ "grad_norm": 1.1113629341125488,
+ "learning_rate": 4.5535761415834724e-05,
+ "loss": 0.4714,
+ "num_input_tokens_seen": 20141488,
+ "step": 1625
+ },
+ {
+ "epoch": 0.5813124108416547,
+ "grad_norm": 1.3117053508758545,
+ "learning_rate": 4.550910179311699e-05,
+ "loss": 0.5514,
+ "num_input_tokens_seen": 20206016,
+ "step": 1630
+ },
+ {
+ "epoch": 0.583095577746077,
+ "grad_norm": 1.151132345199585,
+ "learning_rate": 4.5482370656947554e-05,
+ "loss": 0.4626,
+ "num_input_tokens_seen": 20270880,
+ "step": 1635
+ },
+ {
+ "epoch": 0.5848787446504993,
+ "grad_norm": 2.0122318267822266,
+ "learning_rate": 4.5455568100535545e-05,
+ "loss": 0.4758,
+ "num_input_tokens_seen": 20334448,
+ "step": 1640
+ },
+ {
+ "epoch": 0.5866619115549215,
+ "grad_norm": 1.6800963878631592,
+ "learning_rate": 4.542869421733915e-05,
+ "loss": 0.4178,
+ "num_input_tokens_seen": 20398480,
+ "step": 1645
+ },
+ {
+ "epoch": 0.5884450784593438,
+ "grad_norm": 1.4573643207550049,
+ "learning_rate": 4.540174910106526e-05,
+ "loss": 0.4314,
+ "num_input_tokens_seen": 20458128,
+ "step": 1650
+ },
+ {
+ "epoch": 0.5902282453637661,
+ "grad_norm": 1.1499691009521484,
+ "learning_rate": 4.537473284566914e-05,
+ "loss": 0.4182,
+ "num_input_tokens_seen": 20521840,
+ "step": 1655
+ },
+ {
+ "epoch": 0.5920114122681883,
+ "grad_norm": 1.1684014797210693,
+ "learning_rate": 4.5347645545354136e-05,
2663
+ "loss": 0.3945,
2664
+ "num_input_tokens_seen": 20582304,
2665
+ "step": 1660
2666
+ },
2667
+ {
2668
+ "epoch": 0.5937945791726106,
2669
+ "grad_norm": 1.358035683631897,
2670
+ "learning_rate": 4.532048729457128e-05,
2671
+ "loss": 0.4674,
2672
+ "num_input_tokens_seen": 20642656,
2673
+ "step": 1665
2674
+ },
2675
+ {
2676
+ "epoch": 0.5955777460770328,
2677
+ "grad_norm": 1.285057783126831,
2678
+ "learning_rate": 4.5293258188019055e-05,
2679
+ "loss": 0.4027,
2680
+ "num_input_tokens_seen": 20709664,
2681
+ "step": 1670
2682
+ },
2683
+ {
2684
+ "epoch": 0.5973609129814551,
2685
+ "grad_norm": 1.0107051134109497,
2686
+ "learning_rate": 4.526595832064296e-05,
2687
+ "loss": 0.4402,
2688
+ "num_input_tokens_seen": 20769888,
2689
+ "step": 1675
2690
+ },
2691
+ {
2692
+ "epoch": 0.5991440798858774,
2693
+ "grad_norm": 1.144665241241455,
2694
+ "learning_rate": 4.523858778763528e-05,
2695
+ "loss": 0.4725,
2696
+ "num_input_tokens_seen": 20834912,
2697
+ "step": 1680
2698
+ },
2699
+ {
2700
+ "epoch": 0.6009272467902995,
2701
+ "grad_norm": 1.5452603101730347,
2702
+ "learning_rate": 4.521114668443464e-05,
2703
+ "loss": 0.4413,
2704
+ "num_input_tokens_seen": 20896784,
2705
+ "step": 1685
2706
+ },
2707
+ {
2708
+ "epoch": 0.6027104136947218,
2709
+ "grad_norm": 1.0601692199707031,
2710
+ "learning_rate": 4.518363510672583e-05,
2711
+ "loss": 0.4758,
2712
+ "num_input_tokens_seen": 20954224,
2713
+ "step": 1690
2714
+ },
2715
+ {
2716
+ "epoch": 0.604493580599144,
2717
+ "grad_norm": 1.6150104999542236,
2718
+ "learning_rate": 4.515605315043928e-05,
2719
+ "loss": 0.4027,
2720
+ "num_input_tokens_seen": 21019760,
2721
+ "step": 1695
2722
+ },
2723
+ {
2724
+ "epoch": 0.6062767475035663,
2725
+ "grad_norm": 1.3952018022537231,
2726
+ "learning_rate": 4.512840091175089e-05,
2727
+ "loss": 0.4497,
2728
+ "num_input_tokens_seen": 21081952,
2729
+ "step": 1700
2730
+ },
2731
+ {
2732
+ "epoch": 0.6080599144079886,
2733
+ "grad_norm": 1.6579699516296387,
2734
+ "learning_rate": 4.5100678487081614e-05,
2735
+ "loss": 0.4343,
2736
+ "num_input_tokens_seen": 21145680,
2737
+ "step": 1705
2738
+ },
2739
+ {
2740
+ "epoch": 0.6098430813124108,
2741
+ "grad_norm": 1.5067193508148193,
2742
+ "learning_rate": 4.507288597309711e-05,
2743
+ "loss": 0.4142,
2744
+ "num_input_tokens_seen": 21206048,
2745
+ "step": 1710
2746
+ },
2747
+ {
2748
+ "epoch": 0.6116262482168331,
2749
+ "grad_norm": 1.2458901405334473,
2750
+ "learning_rate": 4.504502346670748e-05,
2751
+ "loss": 0.5092,
2752
+ "num_input_tokens_seen": 21269520,
2753
+ "step": 1715
2754
+ },
2755
+ {
2756
+ "epoch": 0.6134094151212554,
2757
+ "grad_norm": 1.33489990234375,
2758
+ "learning_rate": 4.5017091065066837e-05,
2759
+ "loss": 0.4563,
2760
+ "num_input_tokens_seen": 21331136,
2761
+ "step": 1720
2762
+ },
2763
+ {
2764
+ "epoch": 0.6151925820256776,
2765
+ "grad_norm": 1.4016698598861694,
2766
+ "learning_rate": 4.4989088865573035e-05,
2767
+ "loss": 0.3743,
2768
+ "num_input_tokens_seen": 21392496,
2769
+ "step": 1725
2770
+ },
2771
+ {
2772
+ "epoch": 0.6169757489300999,
2773
+ "grad_norm": 1.5638152360916138,
2774
+ "learning_rate": 4.496101696586732e-05,
2775
+ "loss": 0.4823,
2776
+ "num_input_tokens_seen": 21455504,
2777
+ "step": 1730
2778
+ },
2779
+ {
2780
+ "epoch": 0.6187589158345221,
2781
+ "grad_norm": 1.2184085845947266,
2782
+ "learning_rate": 4.4932875463833944e-05,
2783
+ "loss": 0.4219,
2784
+ "num_input_tokens_seen": 21518800,
2785
+ "step": 1735
2786
+ },
2787
+ {
2788
+ "epoch": 0.6205420827389444,
2789
+ "grad_norm": 1.5745280981063843,
2790
+ "learning_rate": 4.490466445759988e-05,
2791
+ "loss": 0.506,
2792
+ "num_input_tokens_seen": 21579120,
2793
+ "step": 1740
2794
+ },
2795
+ {
2796
+ "epoch": 0.6223252496433667,
2797
+ "grad_norm": 1.4783879518508911,
2798
+ "learning_rate": 4.487638404553445e-05,
2799
+ "loss": 0.4638,
2800
+ "num_input_tokens_seen": 21638528,
2801
+ "step": 1745
2802
+ },
2803
+ {
2804
+ "epoch": 0.6241084165477889,
2805
+ "grad_norm": 1.4319891929626465,
2806
+ "learning_rate": 4.484803432624899e-05,
2807
+ "loss": 0.434,
2808
+ "num_input_tokens_seen": 21703664,
2809
+ "step": 1750
2810
+ },
2811
+ {
2812
+ "epoch": 0.6258915834522111,
2813
+ "grad_norm": 1.3542821407318115,
2814
+ "learning_rate": 4.48196153985965e-05,
2815
+ "loss": 0.4472,
2816
+ "num_input_tokens_seen": 21764336,
2817
+ "step": 1755
2818
+ },
2819
+ {
2820
+ "epoch": 0.6276747503566333,
2821
+ "grad_norm": 1.1602082252502441,
2822
+ "learning_rate": 4.4791127361671304e-05,
2823
+ "loss": 0.3541,
2824
+ "num_input_tokens_seen": 21825392,
2825
+ "step": 1760
2826
+ },
2827
+ {
2828
+ "epoch": 0.6294579172610556,
2829
+ "grad_norm": 1.6145776510238647,
2830
+ "learning_rate": 4.476257031480871e-05,
2831
+ "loss": 0.4401,
2832
+ "num_input_tokens_seen": 21886848,
2833
+ "step": 1765
2834
+ },
2835
+ {
2836
+ "epoch": 0.6312410841654779,
2837
+ "grad_norm": 1.1257821321487427,
2838
+ "learning_rate": 4.4733944357584644e-05,
2839
+ "loss": 0.5242,
2840
+ "num_input_tokens_seen": 21951680,
2841
+ "step": 1770
2842
+ },
2843
+ {
2844
+ "epoch": 0.6330242510699001,
2845
+ "grad_norm": 1.4322980642318726,
2846
+ "learning_rate": 4.470524958981534e-05,
2847
+ "loss": 0.4926,
2848
+ "num_input_tokens_seen": 22016624,
2849
+ "step": 1775
2850
+ },
2851
+ {
2852
+ "epoch": 0.6348074179743224,
2853
+ "grad_norm": 1.255799651145935,
2854
+ "learning_rate": 4.4676486111556936e-05,
2855
+ "loss": 0.4128,
2856
+ "num_input_tokens_seen": 22079040,
2857
+ "step": 1780
2858
+ },
2859
+ {
2860
+ "epoch": 0.6365905848787446,
2861
+ "grad_norm": 1.157120943069458,
2862
+ "learning_rate": 4.46476540231052e-05,
2863
+ "loss": 0.3521,
2864
+ "num_input_tokens_seen": 22142400,
2865
+ "step": 1785
2866
+ },
2867
+ {
2868
+ "epoch": 0.6383737517831669,
2869
+ "grad_norm": 1.5262624025344849,
2870
+ "learning_rate": 4.461875342499509e-05,
2871
+ "loss": 0.4028,
2872
+ "num_input_tokens_seen": 22199136,
2873
+ "step": 1790
2874
+ },
2875
+ {
2876
+ "epoch": 0.6401569186875892,
2877
+ "grad_norm": 1.7937567234039307,
2878
+ "learning_rate": 4.458978441800048e-05,
2879
+ "loss": 0.4126,
2880
+ "num_input_tokens_seen": 22260608,
2881
+ "step": 1795
2882
+ },
2883
+ {
2884
+ "epoch": 0.6419400855920114,
2885
+ "grad_norm": 1.3475735187530518,
2886
+ "learning_rate": 4.456074710313378e-05,
2887
+ "loss": 0.4692,
2888
+ "num_input_tokens_seen": 22322272,
2889
+ "step": 1800
2890
+ },
2891
+ {
2892
+ "epoch": 0.6437232524964337,
2893
+ "grad_norm": 1.2804908752441406,
2894
+ "learning_rate": 4.4531641581645576e-05,
2895
+ "loss": 0.4931,
2896
+ "num_input_tokens_seen": 22384368,
2897
+ "step": 1805
2898
+ },
2899
+ {
2900
+ "epoch": 0.645506419400856,
2901
+ "grad_norm": 1.2529658079147339,
2902
+ "learning_rate": 4.4502467955024294e-05,
2903
+ "loss": 0.386,
2904
+ "num_input_tokens_seen": 22447888,
2905
+ "step": 1810
2906
+ },
2907
+ {
2908
+ "epoch": 0.6472895863052782,
2909
+ "grad_norm": 1.3398923873901367,
2910
+ "learning_rate": 4.447322632499581e-05,
2911
+ "loss": 0.4522,
2912
+ "num_input_tokens_seen": 22514704,
2913
+ "step": 1815
2914
+ },
2915
+ {
2916
+ "epoch": 0.6490727532097005,
2917
+ "grad_norm": 1.320273518562317,
2918
+ "learning_rate": 4.444391679352315e-05,
2919
+ "loss": 0.4082,
2920
+ "num_input_tokens_seen": 22573024,
2921
+ "step": 1820
2922
+ },
2923
+ {
2924
+ "epoch": 0.6508559201141226,
2925
+ "grad_norm": 1.2203108072280884,
2926
+ "learning_rate": 4.441453946280612e-05,
2927
+ "loss": 0.4551,
2928
+ "num_input_tokens_seen": 22632080,
2929
+ "step": 1825
2930
+ },
2931
+ {
2932
+ "epoch": 0.6526390870185449,
2933
+ "grad_norm": 1.1191906929016113,
2934
+ "learning_rate": 4.4385094435280873e-05,
2935
+ "loss": 0.3873,
2936
+ "num_input_tokens_seen": 22692192,
2937
+ "step": 1830
2938
+ },
2939
+ {
2940
+ "epoch": 0.6544222539229672,
2941
+ "grad_norm": 1.249611496925354,
2942
+ "learning_rate": 4.435558181361969e-05,
2943
+ "loss": 0.398,
2944
+ "num_input_tokens_seen": 22754624,
2945
+ "step": 1835
2946
+ },
2947
+ {
2948
+ "epoch": 0.6562054208273894,
2949
+ "grad_norm": 1.4326295852661133,
2950
+ "learning_rate": 4.432600170073048e-05,
2951
+ "loss": 0.4159,
2952
+ "num_input_tokens_seen": 22819616,
2953
+ "step": 1840
2954
+ },
2955
+ {
2956
+ "epoch": 0.6579885877318117,
2957
+ "grad_norm": 1.2453666925430298,
2958
+ "learning_rate": 4.429635419975655e-05,
2959
+ "loss": 0.4343,
2960
+ "num_input_tokens_seen": 22879136,
2961
+ "step": 1845
2962
+ },
2963
+ {
2964
+ "epoch": 0.6597717546362339,
2965
+ "grad_norm": 1.1724647283554077,
2966
+ "learning_rate": 4.426663941407614e-05,
2967
+ "loss": 0.4287,
2968
+ "num_input_tokens_seen": 22940528,
2969
+ "step": 1850
2970
+ },
2971
+ {
2972
+ "epoch": 0.6615549215406562,
2973
+ "grad_norm": 1.185964822769165,
2974
+ "learning_rate": 4.423685744730213e-05,
2975
+ "loss": 0.3901,
2976
+ "num_input_tokens_seen": 23004128,
2977
+ "step": 1855
2978
+ },
2979
+ {
2980
+ "epoch": 0.6633380884450785,
2981
+ "grad_norm": 1.167861819267273,
2982
+ "learning_rate": 4.420700840328162e-05,
2983
+ "loss": 0.512,
2984
+ "num_input_tokens_seen": 23066240,
2985
+ "step": 1860
2986
+ },
2987
+ {
2988
+ "epoch": 0.6651212553495007,
2989
+ "grad_norm": 1.6327167749404907,
2990
+ "learning_rate": 4.417709238609566e-05,
2991
+ "loss": 0.4102,
2992
+ "num_input_tokens_seen": 23126128,
2993
+ "step": 1865
2994
+ },
2995
+ {
2996
+ "epoch": 0.666904422253923,
2997
+ "grad_norm": 1.0951687097549438,
2998
+ "learning_rate": 4.4147109500058776e-05,
2999
+ "loss": 0.4767,
3000
+ "num_input_tokens_seen": 23182704,
3001
+ "step": 1870
3002
+ },
3003
+ {
3004
+ "epoch": 0.6686875891583453,
3005
+ "grad_norm": 1.1051822900772095,
3006
+ "learning_rate": 4.411705984971868e-05,
3007
+ "loss": 0.4009,
3008
+ "num_input_tokens_seen": 23244816,
3009
+ "step": 1875
3010
+ },
3011
+ {
3012
+ "epoch": 0.6704707560627675,
3013
+ "grad_norm": 1.4562581777572632,
3014
+ "learning_rate": 4.408694353985589e-05,
3015
+ "loss": 0.5083,
3016
+ "num_input_tokens_seen": 23307776,
3017
+ "step": 1880
3018
+ },
3019
+ {
3020
+ "epoch": 0.6722539229671898,
3021
+ "grad_norm": 1.4651310443878174,
3022
+ "learning_rate": 4.4056760675483356e-05,
3023
+ "loss": 0.5302,
3024
+ "num_input_tokens_seen": 23370368,
3025
+ "step": 1885
3026
+ },
3027
+ {
3028
+ "epoch": 0.6740370898716119,
3029
+ "grad_norm": 1.1008446216583252,
3030
+ "learning_rate": 4.402651136184609e-05,
3031
+ "loss": 0.3035,
3032
+ "num_input_tokens_seen": 23436192,
3033
+ "step": 1890
3034
+ },
3035
+ {
3036
+ "epoch": 0.6758202567760342,
3037
+ "grad_norm": 1.7820332050323486,
3038
+ "learning_rate": 4.3996195704420826e-05,
3039
+ "loss": 0.3972,
3040
+ "num_input_tokens_seen": 23501408,
3041
+ "step": 1895
3042
+ },
3043
+ {
3044
+ "epoch": 0.6776034236804565,
3045
+ "grad_norm": 1.2907474040985107,
3046
+ "learning_rate": 4.396581380891562e-05,
3047
+ "loss": 0.4644,
3048
+ "num_input_tokens_seen": 23561072,
3049
+ "step": 1900
3050
+ }
3051
+ ],
3052
+ "logging_steps": 5,
3053
+ "max_steps": 8412,
3054
+ "num_input_tokens_seen": 23561072,
3055
+ "num_train_epochs": 3,
3056
+ "save_steps": 100,
3057
+ "stateful_callbacks": {
3058
+ "TrainerControl": {
3059
+ "args": {
3060
+ "should_epoch_stop": false,
3061
+ "should_evaluate": false,
3062
+ "should_log": false,
3063
+ "should_save": true,
3064
+ "should_training_stop": false
3065
+ },
3066
+ "attributes": {}
3067
+ }
3068
+ },
3069
+ "total_flos": 1.8654332564260454e+17,
3070
+ "train_batch_size": 2,
3071
+ "trial_name": null,
3072
+ "trial_params": null
3073
+ }
training_args.bin ADDED
Binary file (5.43 kB).
vocab.json ADDED
The diff for this file is too large to render. See raw diff