ucmp137538 committed
Commit fe128c1 · verified · 1 Parent(s): 439986c

Model save

Files changed (5):
  1. README.md +58 -0
  2. all_results.json +8 -0
  3. generation_config.json +11 -0
  4. train_results.json +8 -0
  5. trainer_state.json +1723 -0
README.md ADDED
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: PreThink_MemAgent
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for PreThink_MemAgent

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ucmp137538/PreThink_MemAgent", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mingzeli/PreThink_MemAgent/runs/umrk1n8k)

This model was trained with SFT.

### Framework versions

- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 4.3.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
all_results.json ADDED
{
  "total_flos": 2.7972111648056934e+17,
  "train_loss": 1.0216455256655104,
  "train_runtime": 2780.2712,
  "train_samples": 5375,
  "train_samples_per_second": 9.666,
  "train_steps_per_second": 0.076
}
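As a sanity check, the throughput figures above fit together: `train_samples_per_second × train_runtime` recovers the 5 epochs over 5,375 samples, and `train_steps_per_second × train_runtime` lands near the 210 optimizer steps recorded in trainer_state.json. A minimal sketch (values copied from the JSON; the effective batch size is our inference, not stated anywhere in this commit):

```python
# Cross-check the figures reported in all_results.json.
train_runtime = 2780.2712        # seconds
train_samples = 5375             # samples per epoch
samples_per_second = 9.666
steps_per_second = 0.076
epochs = 5                       # trainer_state.json reports "epoch": 5.0

# Total samples seen ~ epochs * dataset size.
total_samples = samples_per_second * train_runtime
assert abs(total_samples - epochs * train_samples) < 0.01 * epochs * train_samples

# Total optimizer steps ~ global_step = 210.
total_steps = steps_per_second * train_runtime
assert abs(total_steps - 210) < 5

# Implied effective batch size (samples seen per optimizer step).
print(round(epochs * train_samples / 210))  # → 128
```

The implied effective batch size of ~128 would be some combination of per-device batch size, gradient accumulation, and data parallelism; the training arguments themselves are not part of this commit.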
generation_config.json ADDED
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": 151645,
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "4.52.3"
}
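With these defaults, `generate()` samples after temperature-scaling the logits, keeping only the 20 most likely tokens (top_k) and, within those, the smallest set covering 80% of the probability mass (top_p). A self-contained sketch of that filtering on a toy distribution — the helper below is illustrative, not a transformers API:

```python
import math

def filter_logits(logits, temperature=0.7, top_k=20, top_p=0.8):
    """Apply the decoding defaults above (hypothetical helper): temperature
    scaling, then top-k, then nucleus (top-p) truncation. Returns the
    renormalized distribution over surviving token indices."""
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-k: keep the k most likely tokens.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    # Top-p: smallest prefix of those whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

dist = filter_logits([2.0, 1.0, 0.5, 0.1, -1.0], top_k=3)
print(sorted(dist))  # → [0, 1]
```

Lowering the temperature below 1 sharpens the distribution, so with top_p=0.8 only the two most likely toy tokens survive; `repetition_penalty` (applied upstream of this filtering in transformers) is omitted from the sketch.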
train_results.json ADDED
{
  "total_flos": 2.7972111648056934e+17,
  "train_loss": 1.0216455256655104,
  "train_runtime": 2780.2712,
  "train_samples": 5375,
  "train_samples_per_second": 9.666,
  "train_steps_per_second": 0.076
}
trainer_state.json ADDED
{
  "best_global_step": null,
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 5.0,
  "eval_steps": 500,
  "global_step": 210,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.023809523809523808,
      "grad_norm": 6.538216213930027,
      "learning_rate": 0.0,
      "loss": 1.9338,
      "num_tokens": 301499.0,
      "step": 1
    },
    {
      "epoch": 0.047619047619047616,
      "grad_norm": 6.773412169837184,
      "learning_rate": 4.2857142857142855e-06,
      "loss": 1.9236,
      "num_tokens": 603383.0,
      "step": 2
    },
    {
      "epoch": 0.07142857142857142,
      "grad_norm": 4.749957858339624,
      "learning_rate": 8.571428571428571e-06,
      "loss": 1.8766,
      "num_tokens": 863622.0,
      "step": 3
    },
    {
      "epoch": 0.09523809523809523,
      "grad_norm": 2.2238600419377277,
      "learning_rate": 1.2857142857142857e-05,
      "loss": 1.7506,
      "num_tokens": 1179346.0,
      "step": 4
    },
    {
      "epoch": 0.11904761904761904,
      "grad_norm": 3.7552483574413404,
      "learning_rate": 1.7142857142857142e-05,
      "loss": 1.7277,
      "num_tokens": 1490946.0,
      "step": 5
    },
    {
      "epoch": 0.14285714285714285,
      "grad_norm": 4.15755928790991,
      "learning_rate": 2.1428571428571428e-05,
      "loss": 1.658,
      "num_tokens": 1719799.0,
      "step": 6
    },
    {
      "epoch": 0.16666666666666666,
      "grad_norm": 2.245417974821929,
      "learning_rate": 2.5714285714285714e-05,
      "loss": 1.6608,
      "num_tokens": 2009958.0,
      "step": 7
    },
    {
      "epoch": 0.19047619047619047,
      "grad_norm": 3.1872785546255233,
      "learning_rate": 3e-05,
      "loss": 1.6032,
      "num_tokens": 2334131.0,
      "step": 8
    },
    {
      "epoch": 0.21428571428571427,
      "grad_norm": 2.062217429493514,
      "learning_rate": 2.999838339925525e-05,
      "loss": 1.6093,
      "num_tokens": 2659726.0,
      "step": 9
    },
    {
      "epoch": 0.23809523809523808,
      "grad_norm": 30.952811910237916,
      "learning_rate": 2.9993533984191048e-05,
      "loss": 1.6153,
      "num_tokens": 2997515.0,
      "step": 10
    },
    {
      "epoch": 0.2619047619047619,
      "grad_norm": 5.437630168475735,
      "learning_rate": 2.9985452916224895e-05,
      "loss": 1.6272,
      "num_tokens": 3298571.0,
      "step": 11
    },
    {
      "epoch": 0.2857142857142857,
      "grad_norm": 9.067784859415095,
      "learning_rate": 2.9974142130743523e-05,
      "loss": 1.6427,
      "num_tokens": 3621523.0,
      "step": 12
    },
    {
      "epoch": 0.30952380952380953,
      "grad_norm": 8.297445832624902,
      "learning_rate": 2.9959604336639406e-05,
      "loss": 1.6557,
      "num_tokens": 3918652.0,
      "step": 13
    },
    {
      "epoch": 0.3333333333333333,
      "grad_norm": 18.611016423707234,
      "learning_rate": 2.9941843015662005e-05,
      "loss": 1.6543,
      "num_tokens": 4189760.0,
      "step": 14
    },
    {
      "epoch": 0.35714285714285715,
      "grad_norm": 37.588129897999515,
      "learning_rate": 2.9920862421583856e-05,
      "loss": 1.6303,
      "num_tokens": 4543297.0,
      "step": 15
    },
    {
      "epoch": 0.38095238095238093,
      "grad_norm": 4.123366988000153,
      "learning_rate": 2.9896667579181855e-05,
      "loss": 1.6285,
      "num_tokens": 4829061.0,
      "step": 16
    },
    {
      "epoch": 0.40476190476190477,
      "grad_norm": 11.611181067813334,
      "learning_rate": 2.9869264283033827e-05,
      "loss": 1.5812,
      "num_tokens": 5089606.0,
      "step": 17
    },
    {
      "epoch": 0.42857142857142855,
      "grad_norm": 2.2261338342365398,
      "learning_rate": 2.9838659096130715e-05,
      "loss": 1.5597,
      "num_tokens": 5390312.0,
      "step": 18
    },
    {
      "epoch": 0.4523809523809524,
      "grad_norm": 2.1851134690595377,
      "learning_rate": 2.9804859348304814e-05,
      "loss": 1.5541,
      "num_tokens": 5637391.0,
      "step": 19
    },
    {
      "epoch": 0.47619047619047616,
      "grad_norm": 19.42274392919098,
      "learning_rate": 2.976787313447427e-05,
      "loss": 1.5582,
      "num_tokens": 5900436.0,
      "step": 20
    },
    {
      "epoch": 0.5,
      "grad_norm": 2.743718856278709,
      "learning_rate": 2.9727709312704392e-05,
      "loss": 1.5783,
      "num_tokens": 6208557.0,
      "step": 21
    },
    {
      "epoch": 0.5238095238095238,
      "grad_norm": 1.270567239695732,
      "learning_rate": 2.968437750208617e-05,
      "loss": 1.571,
      "num_tokens": 6508211.0,
      "step": 22
    },
    {
      "epoch": 0.5476190476190477,
      "grad_norm": 1.3295343234029675,
      "learning_rate": 2.963788808043254e-05,
      "loss": 1.5558,
      "num_tokens": 6815212.0,
      "step": 23
    },
    {
      "epoch": 0.5714285714285714,
      "grad_norm": 1.0645487974082066,
      "learning_rate": 2.9588252181792933e-05,
      "loss": 1.5579,
      "num_tokens": 7165341.0,
      "step": 24
    },
    {
      "epoch": 0.5952380952380952,
      "grad_norm": 0.8618665557232967,
      "learning_rate": 2.953548169378672e-05,
      "loss": 1.5311,
      "num_tokens": 7446406.0,
      "step": 25
    },
    {
      "epoch": 0.6190476190476191,
      "grad_norm": 0.8833172294647831,
      "learning_rate": 2.9479589254756145e-05,
      "loss": 1.5395,
      "num_tokens": 7716401.0,
      "step": 26
    },
    {
      "epoch": 0.6428571428571429,
      "grad_norm": 0.8730876262810777,
      "learning_rate": 2.9420588250739514e-05,
      "loss": 1.5083,
      "num_tokens": 7991093.0,
      "step": 27
    },
    {
      "epoch": 0.6666666666666666,
      "grad_norm": 9.830926234059927,
      "learning_rate": 2.9358492812265286e-05,
      "loss": 1.562,
      "num_tokens": 8326807.0,
      "step": 28
    },
    {
      "epoch": 0.6904761904761905,
      "grad_norm": 3.439648574877625,
      "learning_rate": 2.9293317810967836e-05,
      "loss": 1.5434,
      "num_tokens": 8608746.0,
      "step": 29
    },
    {
      "epoch": 0.7142857142857143,
      "grad_norm": 1.4403398373980578,
      "learning_rate": 2.9225078856025776e-05,
      "loss": 1.55,
      "num_tokens": 8887444.0,
      "step": 30
    },
    {
      "epoch": 0.7380952380952381,
      "grad_norm": 0.9256924750760002,
      "learning_rate": 2.915379229042361e-05,
      "loss": 1.5387,
      "num_tokens": 9158119.0,
      "step": 31
    },
    {
      "epoch": 0.7619047619047619,
      "grad_norm": 0.9693333317012788,
      "learning_rate": 2.9079475187037645e-05,
      "loss": 1.5525,
      "num_tokens": 9455380.0,
      "step": 32
    },
    {
      "epoch": 0.7857142857142857,
      "grad_norm": 4.418422539715986,
      "learning_rate": 2.900214534454708e-05,
      "loss": 1.562,
      "num_tokens": 9819963.0,
      "step": 33
    },
    {
      "epoch": 0.8095238095238095,
      "grad_norm": 1.4976653598911862,
      "learning_rate": 2.8921821283171333e-05,
      "loss": 1.5134,
      "num_tokens": 10120795.0,
      "step": 34
    },
    {
      "epoch": 0.8333333333333334,
      "grad_norm": 0.9155113033240181,
      "learning_rate": 2.883852224023446e-05,
      "loss": 1.5485,
      "num_tokens": 10448963.0,
      "step": 35
    },
    {
      "epoch": 0.8571428571428571,
      "grad_norm": 1.078789553362349,
      "learning_rate": 2.875226816555792e-05,
      "loss": 1.5239,
      "num_tokens": 10726463.0,
      "step": 36
    },
    {
      "epoch": 0.8809523809523809,
      "grad_norm": 0.8912072449466584,
      "learning_rate": 2.8663079716682657e-05,
      "loss": 1.5328,
      "num_tokens": 11034561.0,
      "step": 37
    },
    {
      "epoch": 0.9047619047619048,
      "grad_norm": 0.8137040759182133,
      "learning_rate": 2.8570978253921695e-05,
      "loss": 1.5173,
      "num_tokens": 11357514.0,
      "step": 38
    },
    {
      "epoch": 0.9285714285714286,
      "grad_norm": 0.9066560545161888,
      "learning_rate": 2.8475985835244395e-05,
      "loss": 1.5032,
      "num_tokens": 11618431.0,
      "step": 39
    },
    {
      "epoch": 0.9523809523809523,
      "grad_norm": 0.789402718151679,
      "learning_rate": 2.8378125210993705e-05,
      "loss": 1.5219,
      "num_tokens": 11938221.0,
      "step": 40
    },
    {
      "epoch": 0.9761904761904762,
      "grad_norm": 0.984207528765468,
      "learning_rate": 2.8277419818437477e-05,
      "loss": 1.536,
      "num_tokens": 12260374.0,
      "step": 41
    },
    {
      "epoch": 1.0,
      "grad_norm": 0.7141475421535004,
      "learning_rate": 2.817389377615535e-05,
      "loss": 1.5138,
      "num_tokens": 12557020.0,
      "step": 42
    },
    {
      "epoch": 1.0238095238095237,
      "grad_norm": 0.9363198060842514,
      "learning_rate": 2.8067571878262455e-05,
      "loss": 1.3474,
      "num_tokens": 12816238.0,
      "step": 43
    },
    {
      "epoch": 1.0476190476190477,
      "grad_norm": 0.6926582015960785,
      "learning_rate": 2.7958479588471274e-05,
      "loss": 1.3267,
      "num_tokens": 13122015.0,
      "step": 44
    },
    {
      "epoch": 1.0714285714285714,
      "grad_norm": 0.9454215939995514,
      "learning_rate": 2.784664303399321e-05,
      "loss": 1.339,
      "num_tokens": 13397622.0,
      "step": 45
    },
    {
      "epoch": 1.0952380952380953,
      "grad_norm": 0.7721777219877292,
      "learning_rate": 2.7732088999281182e-05,
      "loss": 1.3203,
      "num_tokens": 13699593.0,
      "step": 46
    },
    {
      "epoch": 1.119047619047619,
      "grad_norm": 0.8317013732600135,
      "learning_rate": 2.7614844919614878e-05,
      "loss": 1.3393,
      "num_tokens": 14012389.0,
      "step": 47
    },
    {
      "epoch": 1.1428571428571428,
      "grad_norm": 0.8420079333202011,
      "learning_rate": 2.749493887453007e-05,
      "loss": 1.321,
      "num_tokens": 14310791.0,
      "step": 48
    },
    {
      "epoch": 1.1666666666666667,
      "grad_norm": 0.808825236941867,
      "learning_rate": 2.7372399581093692e-05,
      "loss": 1.3444,
      "num_tokens": 14658237.0,
      "step": 49
    },
    {
      "epoch": 1.1904761904761905,
      "grad_norm": 0.9423947902880564,
      "learning_rate": 2.724725638702619e-05,
      "loss": 1.3208,
      "num_tokens": 14932524.0,
      "step": 50
    },
    {
      "epoch": 1.2142857142857142,
      "grad_norm": 0.8920368672000416,
      "learning_rate": 2.7119539263672863e-05,
      "loss": 1.3663,
      "num_tokens": 15221313.0,
      "step": 51
    },
    {
      "epoch": 1.2380952380952381,
      "grad_norm": 1.037715092791166,
      "learning_rate": 2.698927879882581e-05,
      "loss": 1.3133,
      "num_tokens": 15503438.0,
      "step": 52
    },
    {
      "epoch": 1.2619047619047619,
      "grad_norm": 0.9566394653382752,
      "learning_rate": 2.6856506189398304e-05,
      "loss": 1.3418,
      "num_tokens": 15812209.0,
      "step": 53
    },
    {
      "epoch": 1.2857142857142856,
      "grad_norm": 0.7773982206501678,
      "learning_rate": 2.672125323395319e-05,
      "loss": 1.3214,
      "num_tokens": 16133281.0,
      "step": 54
    },
    {
      "epoch": 1.3095238095238095,
      "grad_norm": 0.7309052587570527,
      "learning_rate": 2.6583552325087277e-05,
      "loss": 1.3106,
      "num_tokens": 16469546.0,
      "step": 55
    },
    {
      "epoch": 1.3333333333333333,
      "grad_norm": 0.731975052614095,
      "learning_rate": 2.644343644167344e-05,
      "loss": 1.315,
      "num_tokens": 16740412.0,
      "step": 56
    },
    {
      "epoch": 1.3571428571428572,
      "grad_norm": 0.888533248470318,
      "learning_rate": 2.6300939140962265e-05,
      "loss": 1.3347,
      "num_tokens": 17069209.0,
      "step": 57
    },
    {
      "epoch": 1.380952380952381,
      "grad_norm": 0.8174152731954014,
      "learning_rate": 2.615609455054523e-05,
      "loss": 1.3252,
      "num_tokens": 17390230.0,
      "step": 58
    },
    {
      "epoch": 1.4047619047619047,
      "grad_norm": 0.7703407162710431,
      "learning_rate": 2.6008937360181247e-05,
      "loss": 1.333,
      "num_tokens": 17751056.0,
      "step": 59
    },
    {
      "epoch": 1.4285714285714286,
      "grad_norm": 0.8093422656169592,
      "learning_rate": 2.5859502813488634e-05,
      "loss": 1.2688,
      "num_tokens": 18029513.0,
      "step": 60
    },
    {
      "epoch": 1.4523809523809523,
      "grad_norm": 0.7448128748019907,
      "learning_rate": 2.570782669950435e-05,
      "loss": 1.2866,
      "num_tokens": 18342199.0,
      "step": 61
    },
    {
      "epoch": 1.4761904761904763,
      "grad_norm": 0.797247555706446,
      "learning_rate": 2.5553945344112646e-05,
      "loss": 1.3474,
      "num_tokens": 18654129.0,
      "step": 62
    },
    {
      "epoch": 1.5,
      "grad_norm": 0.8270748581084111,
      "learning_rate": 2.539789560134521e-05,
      "loss": 1.3099,
      "num_tokens": 18959464.0,
      "step": 63
    },
    {
      "epoch": 1.5238095238095237,
      "grad_norm": 0.813301315394994,
      "learning_rate": 2.5239714844554674e-05,
      "loss": 1.3041,
      "num_tokens": 19214541.0,
      "step": 64
    },
    {
      "epoch": 1.5476190476190477,
      "grad_norm": 0.951995926855497,
      "learning_rate": 2.5079440957463894e-05,
      "loss": 1.2476,
      "num_tokens": 19491950.0,
      "step": 65
    },
    {
      "epoch": 1.5714285714285714,
      "grad_norm": 0.7172356466200617,
      "learning_rate": 2.4917112325092904e-05,
      "loss": 1.2979,
      "num_tokens": 19789279.0,
      "step": 66
    },
    {
      "epoch": 1.5952380952380953,
      "grad_norm": 0.842364452878783,
      "learning_rate": 2.475276782456585e-05,
      "loss": 1.2964,
      "num_tokens": 20050385.0,
      "step": 67
    },
    {
      "epoch": 1.619047619047619,
      "grad_norm": 0.8775202665218085,
      "learning_rate": 2.4586446815800056e-05,
      "loss": 1.2306,
      "num_tokens": 20293468.0,
      "step": 68
    },
    {
      "epoch": 1.6428571428571428,
      "grad_norm": 0.8077565121484018,
      "learning_rate": 2.441818913207947e-05,
      "loss": 1.2124,
      "num_tokens": 20548563.0,
      "step": 69
    },
    {
      "epoch": 1.6666666666666665,
      "grad_norm": 0.7891520078797498,
      "learning_rate": 2.4248035070514732e-05,
      "loss": 1.31,
      "num_tokens": 20899914.0,
      "step": 70
    },
    {
      "epoch": 1.6904761904761905,
      "grad_norm": 0.8935642914804406,
      "learning_rate": 2.4076025382392166e-05,
      "loss": 1.2904,
      "num_tokens": 21207831.0,
      "step": 71
    },
    {
      "epoch": 1.7142857142857144,
      "grad_norm": 0.7791700089762551,
      "learning_rate": 2.3902201263413968e-05,
      "loss": 1.3009,
      "num_tokens": 21535523.0,
      "step": 72
    },
    {
      "epoch": 1.7380952380952381,
      "grad_norm": 0.6803499303828192,
      "learning_rate": 2.372660434383203e-05,
      "loss": 1.3084,
      "num_tokens": 21866131.0,
      "step": 73
    },
    {
      "epoch": 1.7619047619047619,
      "grad_norm": 0.8712043390870515,
      "learning_rate": 2.3549276678477605e-05,
      "loss": 1.2974,
      "num_tokens": 22201523.0,
      "step": 74
    },
    {
      "epoch": 1.7857142857142856,
      "grad_norm": 0.8755800517621928,
      "learning_rate": 2.337026073668934e-05,
      "loss": 1.2914,
      "num_tokens": 22488112.0,
      "step": 75
    },
    {
      "epoch": 1.8095238095238095,
      "grad_norm": 0.6269216466100511,
      "learning_rate": 2.3189599392142027e-05,
      "loss": 1.3065,
      "num_tokens": 22813071.0,
      "step": 76
    },
    {
      "epoch": 1.8333333333333335,
      "grad_norm": 0.8732441594396502,
      "learning_rate": 2.3007335912578496e-05,
      "loss": 1.2387,
      "num_tokens": 23107173.0,
      "step": 77
    },
    {
      "epoch": 1.8571428571428572,
      "grad_norm": 0.8321205173092046,
      "learning_rate": 2.282351394944717e-05,
      "loss": 1.3084,
      "num_tokens": 23422105.0,
      "step": 78
    },
    {
      "epoch": 1.880952380952381,
      "grad_norm": 0.6201826285098905,
      "learning_rate": 2.263817752744767e-05,
      "loss": 1.2937,
      "num_tokens": 23788463.0,
      "step": 79
    },
    {
      "epoch": 1.9047619047619047,
      "grad_norm": 0.864663931903862,
      "learning_rate": 2.2451371033987086e-05,
      "loss": 1.2315,
      "num_tokens": 24016057.0,
      "step": 80
    },
    {
      "epoch": 1.9285714285714286,
      "grad_norm": 0.8151031466504542,
      "learning_rate": 2.226313920854934e-05,
      "loss": 1.2065,
      "num_tokens": 24286443.0,
      "step": 81
    },
    {
      "epoch": 1.9523809523809523,
      "grad_norm": 0.69377209773933,
      "learning_rate": 2.207352713198024e-05,
      "loss": 1.2522,
      "num_tokens": 24564564.0,
      "step": 82
    },
    {
      "epoch": 1.9761904761904763,
      "grad_norm": 0.7749074972232748,
      "learning_rate": 2.1882580215690765e-05,
      "loss": 1.3075,
      "num_tokens": 24856761.0,
      "step": 83
    },
    {
      "epoch": 2.0,
      "grad_norm": 0.836947163785287,
      "learning_rate": 2.169034419078124e-05,
      "loss": 1.2472,
      "num_tokens": 25119155.0,
      "step": 84
    },
    {
      "epoch": 2.0238095238095237,
      "grad_norm": 1.302298749181937,
      "learning_rate": 2.1496865097088846e-05,
      "loss": 1.0524,
      "num_tokens": 25461378.0,
      "step": 85
    },
    {
      "epoch": 2.0476190476190474,
      "grad_norm": 0.9938997123428022,
      "learning_rate": 2.130218927216128e-05,
      "loss": 0.9867,
      "num_tokens": 25757738.0,
      "step": 86
    },
    {
      "epoch": 2.0714285714285716,
      "grad_norm": 2.239246853283492,
      "learning_rate": 2.110636334015907e-05,
      "loss": 0.9849,
      "num_tokens": 26050090.0,
      "step": 87
    },
    {
      "epoch": 2.0952380952380953,
      "grad_norm": 2.255240802193634,
      "learning_rate": 2.0909434200689264e-05,
      "loss": 1.0166,
      "num_tokens": 26323746.0,
      "step": 88
    },
    {
      "epoch": 2.119047619047619,
      "grad_norm": 1.0279577082424196,
      "learning_rate": 2.0711449017573122e-05,
      "loss": 1.016,
      "num_tokens": 26607004.0,
      "step": 89
    },
    {
      "epoch": 2.142857142857143,
      "grad_norm": 0.8902435973371925,
      "learning_rate": 2.0512455207550557e-05,
      "loss": 0.9812,
      "num_tokens": 26898427.0,
      "step": 90
    },
    {
      "epoch": 2.1666666666666665,
      "grad_norm": 0.9183593054714583,
      "learning_rate": 2.0312500428924023e-05,
      "loss": 1.0121,
      "num_tokens": 27205357.0,
      "step": 91
    },
    {
      "epoch": 2.1904761904761907,
      "grad_norm": 0.9376600266290521,
      "learning_rate": 2.0111632570144487e-05,
      "loss": 0.988,
      "num_tokens": 27496694.0,
      "step": 92
    },
    {
      "epoch": 2.2142857142857144,
      "grad_norm": 0.7220165463532561,
      "learning_rate": 1.9909899738342324e-05,
      "loss": 1.0119,
      "num_tokens": 27794697.0,
      "step": 93
    },
    {
      "epoch": 2.238095238095238,
      "grad_norm": 0.9053661036106623,
      "learning_rate": 1.9707350247805862e-05,
      "loss": 1.0194,
      "num_tokens": 28115378.0,
      "step": 94
    },
    {
      "epoch": 2.261904761904762,
      "grad_norm": 0.8729413169116081,
      "learning_rate": 1.9504032608410243e-05,
      "loss": 0.9866,
      "num_tokens": 28404844.0,
      "step": 95
    },
    {
      "epoch": 2.2857142857142856,
      "grad_norm": 0.7627072905875467,
      "learning_rate": 1.9299995513999512e-05,
      "loss": 1.0105,
      "num_tokens": 28717491.0,
      "step": 96
    },
    {
      "epoch": 2.3095238095238093,
      "grad_norm": 0.7347382833732581,
      "learning_rate": 1.90952878307246e-05,
      "loss": 0.9667,
      "num_tokens": 28969175.0,
      "step": 97
    },
    {
      "epoch": 2.3333333333333335,
      "grad_norm": 0.8054230888254635,
      "learning_rate": 1.888995858534005e-05,
      "loss": 1.0089,
      "num_tokens": 29278525.0,
      "step": 98
    },
    {
      "epoch": 2.357142857142857,
      "grad_norm": 0.7107149711009891,
      "learning_rate": 1.8684056953462328e-05,
      "loss": 0.9881,
      "num_tokens": 29589990.0,
      "step": 99
    },
    {
      "epoch": 2.380952380952381,
      "grad_norm": 0.709536600984241,
      "learning_rate": 1.8477632247792365e-05,
      "loss": 0.9676,
      "num_tokens": 29851176.0,
      "step": 100
    },
    {
      "epoch": 2.4047619047619047,
      "grad_norm": 0.7059346761233823,
      "learning_rate": 1.827073390630542e-05,
      "loss": 0.9866,
      "num_tokens": 30150417.0,
      "step": 101
    },
    {
      "epoch": 2.4285714285714284,
      "grad_norm": 0.6475677168167608,
      "learning_rate": 1.806341148041082e-05,
      "loss": 0.9679,
      "num_tokens": 30476868.0,
      "step": 102
    },
    {
      "epoch": 2.4523809523809526,
      "grad_norm": 0.6574153556079124,
      "learning_rate": 1.785571462308458e-05,
      "loss": 0.961,
      "num_tokens": 30720675.0,
      "step": 103
    },
    {
      "epoch": 2.4761904761904763,
      "grad_norm": 0.782684876535666,
      "learning_rate": 1.764769307697769e-05,
      "loss": 0.9449,
      "num_tokens": 30943687.0,
      "step": 104
    },
    {
      "epoch": 2.5,
      "grad_norm": 0.6959657002281912,
      "learning_rate": 1.7439396662502947e-05,
      "loss": 0.9588,
      "num_tokens": 31226585.0,
      "step": 105
    },
    {
      "epoch": 2.5238095238095237,
      "grad_norm": 0.6638836519525753,
      "learning_rate": 1.723087526590314e-05,
      "loss": 0.9657,
      "num_tokens": 31536165.0,
      "step": 106
    },
    {
      "epoch": 2.5476190476190474,
      "grad_norm": 0.5543755171176452,
      "learning_rate": 1.702217882730345e-05,
      "loss": 0.9605,
      "num_tokens": 31859352.0,
      "step": 107
    },
    {
      "epoch": 2.571428571428571,
      "grad_norm": 0.7519329941062585,
      "learning_rate": 1.6813357328751003e-05,
      "loss": 0.951,
      "num_tokens": 32141589.0,
      "step": 108
    },
    {
      "epoch": 2.5952380952380953,
      "grad_norm": 0.6384580732849452,
      "learning_rate": 1.660446078224433e-05,
      "loss": 0.9641,
      "num_tokens": 32415823.0,
      "step": 109
    },
    {
      "epoch": 2.619047619047619,
      "grad_norm": 0.6375498198212793,
      "learning_rate": 1.6395539217755673e-05,
      "loss": 0.9767,
      "num_tokens": 32738959.0,
      "step": 110
    },
    {
      "epoch": 2.642857142857143,
      "grad_norm": 0.5909380929927559,
      "learning_rate": 1.6186642671249e-05,
      "loss": 0.9553,
      "num_tokens": 33046758.0,
      "step": 111
    },
    {
      "epoch": 2.6666666666666665,
      "grad_norm": 0.6627084816829975,
      "learning_rate": 1.597782117269655e-05,
      "loss": 0.9918,
      "num_tokens": 33356268.0,
      "step": 112
    },
    {
      "epoch": 2.6904761904761907,
      "grad_norm": 0.6209028590481098,
      "learning_rate": 1.5769124734096862e-05,
      "loss": 0.9946,
      "num_tokens": 33669521.0,
      "step": 113
    },
    {
      "epoch": 2.7142857142857144,
      "grad_norm": 0.5682420553998002,
      "learning_rate": 1.5560603337497055e-05,
      "loss": 0.9809,
      "num_tokens": 34006046.0,
      "step": 114
    },
    {
      "epoch": 2.738095238095238,
      "grad_norm": 0.6102593452284443,
      "learning_rate": 1.5352306923022314e-05,
      "loss": 0.9949,
      "num_tokens": 34329314.0,
      "step": 115
    },
    {
      "epoch": 2.761904761904762,
      "grad_norm": 0.5963721089952566,
      "learning_rate": 1.5144285376915424e-05,
      "loss": 0.9496,
      "num_tokens": 34649593.0,
      "step": 116
    },
    {
      "epoch": 2.7857142857142856,
      "grad_norm": 0.6334479812477452,
      "learning_rate": 1.4936588519589182e-05,
      "loss": 0.9865,
      "num_tokens": 34968486.0,
      "step": 117
    },
    {
      "epoch": 2.8095238095238093,
      "grad_norm": 0.579179313504508,
      "learning_rate": 1.4729266093694578e-05,
      "loss": 1.0112,
      "num_tokens": 35335674.0,
      "step": 118
    },
    {
      "epoch": 2.8333333333333335,
      "grad_norm": 0.6547466538591331,
      "learning_rate": 1.4522367752207636e-05,
      "loss": 0.9476,
      "num_tokens": 35620162.0,
      "step": 119
    },
    {
      "epoch": 2.857142857142857,
      "grad_norm": 0.6519356419157716,
      "learning_rate": 1.4315943046537676e-05,
      "loss": 0.9574,
      "num_tokens": 35888853.0,
      "step": 120
    },
    {
      "epoch": 2.880952380952381,
      "grad_norm": 0.6056113333547287,
      "learning_rate": 1.411004141465995e-05,
      "loss": 0.9453,
      "num_tokens": 36190378.0,
      "step": 121
    },
    {
      "epoch": 2.9047619047619047,
      "grad_norm": 0.5999985950614662,
      "learning_rate": 1.3904712169275403e-05,
      "loss": 0.9878,
      "num_tokens": 36499565.0,
      "step": 122
    },
    {
      "epoch": 2.928571428571429,
      "grad_norm": 0.6170461063881166,
      "learning_rate": 1.3700004486000488e-05,
      "loss": 1.003,
      "num_tokens": 36819385.0,
      "step": 123
    },
    {
      "epoch": 2.9523809523809526,
      "grad_norm": 0.5609993983165859,
      "learning_rate": 1.3495967391589758e-05,
      "loss": 0.9709,
      "num_tokens": 37126003.0,
      "step": 124
    },
    {
      "epoch": 2.9761904761904763,
      "grad_norm": 0.686886397394039,
      "learning_rate": 1.3292649752194144e-05,
      "loss": 0.9514,
      "num_tokens": 37385037.0,
      "step": 125
    },
    {
      "epoch": 3.0,
      "grad_norm": 0.6489847174148443,
      "learning_rate": 1.309010026165768e-05,
      "loss": 0.9919,
      "num_tokens": 37681802.0,
      "step": 126
    },
    {
      "epoch": 3.0238095238095237,
      "grad_norm": 1.3407025275092483,
      "learning_rate": 1.288836742985552e-05,
      "loss": 0.7691,
      "num_tokens": 37984864.0,
      "step": 127
    },
    {
      "epoch": 3.0476190476190474,
      "grad_norm": 1.1240882365427978,
      "learning_rate": 1.2687499571075978e-05,
      "loss": 0.7481,
      "num_tokens": 38255245.0,
      "step": 128
    },
    {
      "epoch": 3.0714285714285716,
      "grad_norm": 0.7591608642049189,
      "learning_rate": 1.2487544792449443e-05,
      "loss": 0.7418,
      "num_tokens": 38525517.0,
      "step": 129
    },
    {
      "epoch": 3.0952380952380953,
      "grad_norm": 1.4687837390641727,
      "learning_rate": 1.2288550982426879e-05,
      "loss": 0.7639,
      "num_tokens": 38849309.0,
      "step": 130
    },
    {
      "epoch": 3.119047619047619,
      "grad_norm": 1.6813633030275168,
      "learning_rate": 1.2090565799310738e-05,
1056
+ "loss": 0.7419,
1057
+ "num_tokens": 39220621.0,
1058
+ "step": 131
1059
+ },
1060
+ {
1061
+ "epoch": 3.142857142857143,
1062
+ "grad_norm": 1.5003572460299635,
1063
+ "learning_rate": 1.1893636659840927e-05,
1064
+ "loss": 0.7561,
1065
+ "num_tokens": 39564336.0,
1066
+ "step": 132
1067
+ },
1068
+ {
1069
+ "epoch": 3.1666666666666665,
1070
+ "grad_norm": 1.0050005240430444,
1071
+ "learning_rate": 1.169781072783872e-05,
1072
+ "loss": 0.7044,
1073
+ "num_tokens": 39847434.0,
1074
+ "step": 133
1075
+ },
1076
+ {
1077
+ "epoch": 3.1904761904761907,
1078
+ "grad_norm": 0.8969397973329072,
1079
+ "learning_rate": 1.1503134902911155e-05,
1080
+ "loss": 0.722,
1081
+ "num_tokens": 40157495.0,
1082
+ "step": 134
1083
+ },
1084
+ {
1085
+ "epoch": 3.2142857142857144,
1086
+ "grad_norm": 0.8424389883668345,
1087
+ "learning_rate": 1.1309655809218764e-05,
1088
+ "loss": 0.7033,
1089
+ "num_tokens": 40469966.0,
1090
+ "step": 135
1091
+ },
1092
+ {
1093
+ "epoch": 3.238095238095238,
1094
+ "grad_norm": 0.8450270760954434,
1095
+ "learning_rate": 1.1117419784309236e-05,
1096
+ "loss": 0.7047,
1097
+ "num_tokens": 40771634.0,
1098
+ "step": 136
1099
+ },
1100
+ {
1101
+ "epoch": 3.261904761904762,
1102
+ "grad_norm": 0.7538567142823028,
1103
+ "learning_rate": 1.0926472868019764e-05,
1104
+ "loss": 0.715,
1105
+ "num_tokens": 41087014.0,
1106
+ "step": 137
1107
+ },
1108
+ {
1109
+ "epoch": 3.2857142857142856,
1110
+ "grad_norm": 0.799941449070327,
1111
+ "learning_rate": 1.0736860791450658e-05,
1112
+ "loss": 0.7477,
1113
+ "num_tokens": 41435947.0,
1114
+ "step": 138
1115
+ },
1116
+ {
1117
+ "epoch": 3.3095238095238093,
1118
+ "grad_norm": 0.821321998423862,
1119
+ "learning_rate": 1.0548628966012917e-05,
1120
+ "loss": 0.7057,
1121
+ "num_tokens": 41733553.0,
1122
+ "step": 139
1123
+ },
1124
+ {
1125
+ "epoch": 3.3333333333333335,
1126
+ "grad_norm": 0.8039593601908853,
1127
+ "learning_rate": 1.036182247255233e-05,
1128
+ "loss": 0.6771,
1129
+ "num_tokens": 41998508.0,
1130
+ "step": 140
1131
+ },
1132
+ {
1133
+ "epoch": 3.357142857142857,
1134
+ "grad_norm": 0.7486362398650765,
1135
+ "learning_rate": 1.0176486050552834e-05,
1136
+ "loss": 0.7025,
1137
+ "num_tokens": 42328033.0,
1138
+ "step": 141
1139
+ },
1140
+ {
1141
+ "epoch": 3.380952380952381,
1142
+ "grad_norm": 0.7435262398434506,
1143
+ "learning_rate": 9.99266408742151e-06,
1144
+ "loss": 0.6721,
1145
+ "num_tokens": 42576679.0,
1146
+ "step": 142
1147
+ },
1148
+ {
1149
+ "epoch": 3.4047619047619047,
1150
+ "grad_norm": 0.640772240723941,
1151
+ "learning_rate": 9.810400607857975e-06,
1152
+ "loss": 0.7008,
1153
+ "num_tokens": 42881848.0,
1154
+ "step": 143
1155
+ },
1156
+ {
1157
+ "epoch": 3.4285714285714284,
1158
+ "grad_norm": 0.6750307967979159,
1159
+ "learning_rate": 9.629739263310663e-06,
1160
+ "loss": 0.6576,
1161
+ "num_tokens": 43180370.0,
1162
+ "step": 144
1163
+ },
1164
+ {
1165
+ "epoch": 3.4523809523809526,
1166
+ "grad_norm": 0.6349593868773542,
1167
+ "learning_rate": 9.4507233215224e-06,
1168
+ "loss": 0.6895,
1169
+ "num_tokens": 43485194.0,
1170
+ "step": 145
1171
+ },
1172
+ {
1173
+ "epoch": 3.4761904761904763,
1174
+ "grad_norm": 0.7007765296778146,
1175
+ "learning_rate": 9.273395656167974e-06,
1176
+ "loss": 0.6616,
1177
+ "num_tokens": 43735211.0,
1178
+ "step": 146
1179
+ },
1180
+ {
1181
+ "epoch": 3.5,
1182
+ "grad_norm": 0.6838183617031265,
1183
+ "learning_rate": 9.097798736586033e-06,
1184
+ "loss": 0.6899,
1185
+ "num_tokens": 44018672.0,
1186
+ "step": 147
1187
+ },
1188
+ {
1189
+ "epoch": 3.5238095238095237,
1190
+ "grad_norm": 0.6632040105335824,
1191
+ "learning_rate": 8.92397461760784e-06,
1192
+ "loss": 0.6846,
1193
+ "num_tokens": 44328901.0,
1194
+ "step": 148
1195
+ },
1196
+ {
1197
+ "epoch": 3.5476190476190474,
1198
+ "grad_norm": 0.6283627701359418,
1199
+ "learning_rate": 8.751964929485264e-06,
1200
+ "loss": 0.7087,
1201
+ "num_tokens": 44639895.0,
1202
+ "step": 149
1203
+ },
1204
+ {
1205
+ "epoch": 3.571428571428571,
1206
+ "grad_norm": 0.6949482066022805,
1207
+ "learning_rate": 8.58181086792053e-06,
1208
+ "loss": 0.7229,
1209
+ "num_tokens": 44912674.0,
1210
+ "step": 150
1211
+ },
1212
+ {
1213
+ "epoch": 3.5952380952380953,
1214
+ "grad_norm": 0.6161930464924805,
1215
+ "learning_rate": 8.413553184199947e-06,
1216
+ "loss": 0.7318,
1217
+ "num_tokens": 45239369.0,
1218
+ "step": 151
1219
+ },
1220
+ {
1221
+ "epoch": 3.619047619047619,
1222
+ "grad_norm": 0.6775157046640012,
1223
+ "learning_rate": 8.247232175434151e-06,
1224
+ "loss": 0.69,
1225
+ "num_tokens": 45487078.0,
1226
+ "step": 152
1227
+ },
1228
+ {
1229
+ "epoch": 3.642857142857143,
1230
+ "grad_norm": 0.6598743917563118,
1231
+ "learning_rate": 8.082887674907099e-06,
1232
+ "loss": 0.6823,
1233
+ "num_tokens": 45764060.0,
1234
+ "step": 153
1235
+ },
1236
+ {
1237
+ "epoch": 3.6666666666666665,
1238
+ "grad_norm": 0.6389058360603095,
1239
+ "learning_rate": 7.92055904253611e-06,
1240
+ "loss": 0.7304,
1241
+ "num_tokens": 46055608.0,
1242
+ "step": 154
1243
+ },
1244
+ {
1245
+ "epoch": 3.6904761904761907,
1246
+ "grad_norm": 0.6077489003813343,
1247
+ "learning_rate": 7.760285155445328e-06,
1248
+ "loss": 0.7093,
1249
+ "num_tokens": 46359953.0,
1250
+ "step": 155
1251
+ },
1252
+ {
1253
+ "epoch": 3.7142857142857144,
1254
+ "grad_norm": 0.5967397524054626,
1255
+ "learning_rate": 7.602104398654793e-06,
1256
+ "loss": 0.7066,
1257
+ "num_tokens": 46668754.0,
1258
+ "step": 156
1259
+ },
1260
+ {
1261
+ "epoch": 3.738095238095238,
1262
+ "grad_norm": 0.6350745338711014,
1263
+ "learning_rate": 7.446054655887351e-06,
1264
+ "loss": 0.7024,
1265
+ "num_tokens": 46984092.0,
1266
+ "step": 157
1267
+ },
1268
+ {
1269
+ "epoch": 3.761904761904762,
1270
+ "grad_norm": 0.654479549343766,
1271
+ "learning_rate": 7.292173300495655e-06,
1272
+ "loss": 0.7024,
1273
+ "num_tokens": 47279370.0,
1274
+ "step": 158
1275
+ },
1276
+ {
1277
+ "epoch": 3.7857142857142856,
1278
+ "grad_norm": 0.6241512933913473,
1279
+ "learning_rate": 7.140497186511365e-06,
1280
+ "loss": 0.6782,
1281
+ "num_tokens": 47607126.0,
1282
+ "step": 159
1283
+ },
1284
+ {
1285
+ "epoch": 3.8095238095238093,
1286
+ "grad_norm": 0.6892381674511956,
1287
+ "learning_rate": 6.99106263981875e-06,
1288
+ "loss": 0.6708,
1289
+ "num_tokens": 47854285.0,
1290
+ "step": 160
1291
+ },
1292
+ {
1293
+ "epoch": 3.8333333333333335,
1294
+ "grad_norm": 0.6054186909041117,
1295
+ "learning_rate": 6.843905449454775e-06,
1296
+ "loss": 0.7098,
1297
+ "num_tokens": 48172146.0,
1298
+ "step": 161
1299
+ },
1300
+ {
1301
+ "epoch": 3.857142857142857,
1302
+ "grad_norm": 0.5956432849446107,
1303
+ "learning_rate": 6.699060859037737e-06,
1304
+ "loss": 0.7313,
1305
+ "num_tokens": 48512170.0,
1306
+ "step": 162
1307
+ },
1308
+ {
1309
+ "epoch": 3.880952380952381,
1310
+ "grad_norm": 0.6733062415864223,
1311
+ "learning_rate": 6.556563558326561e-06,
1312
+ "loss": 0.7001,
1313
+ "num_tokens": 48798674.0,
1314
+ "step": 163
1315
+ },
1316
+ {
1317
+ "epoch": 3.9047619047619047,
1318
+ "grad_norm": 0.6484125556278886,
1319
+ "learning_rate": 6.416447674912726e-06,
1320
+ "loss": 0.7064,
1321
+ "num_tokens": 49078950.0,
1322
+ "step": 164
1323
+ },
1324
+ {
1325
+ "epoch": 3.928571428571429,
1326
+ "grad_norm": 0.6651238762004111,
1327
+ "learning_rate": 6.278746766046815e-06,
1328
+ "loss": 0.6613,
1329
+ "num_tokens": 49321513.0,
1330
+ "step": 165
1331
+ },
1332
+ {
1333
+ "epoch": 3.9523809523809526,
1334
+ "grad_norm": 0.6123780278745324,
1335
+ "learning_rate": 6.143493810601696e-06,
1336
+ "loss": 0.6737,
1337
+ "num_tokens": 49622007.0,
1338
+ "step": 166
1339
+ },
1340
+ {
1341
+ "epoch": 3.9761904761904763,
1342
+ "grad_norm": 0.6147575021892541,
1343
+ "learning_rate": 6.010721201174187e-06,
1344
+ "loss": 0.7213,
1345
+ "num_tokens": 49949396.0,
1346
+ "step": 167
1347
+ },
1348
+ {
1349
+ "epoch": 4.0,
1350
+ "grad_norm": 0.6525996500721447,
1351
+ "learning_rate": 5.880460736327139e-06,
1352
+ "loss": 0.6751,
1353
+ "num_tokens": 50240879.0,
1354
+ "step": 168
1355
+ },
1356
+ {
1357
+ "epoch": 4.023809523809524,
1358
+ "grad_norm": 1.1945930661064657,
1359
+ "learning_rate": 5.7527436129738094e-06,
1360
+ "loss": 0.5781,
1361
+ "num_tokens": 50522550.0,
1362
+ "step": 169
1363
+ },
1364
+ {
1365
+ "epoch": 4.0476190476190474,
1366
+ "grad_norm": 1.0060804692820782,
1367
+ "learning_rate": 5.62760041890631e-06,
1368
+ "loss": 0.5498,
1369
+ "num_tokens": 50813206.0,
1370
+ "step": 170
1371
+ },
1372
+ {
1373
+ "epoch": 4.071428571428571,
1374
+ "grad_norm": 0.751291910016424,
1375
+ "learning_rate": 5.505061125469931e-06,
1376
+ "loss": 0.5308,
1377
+ "num_tokens": 51062794.0,
1378
+ "step": 171
1379
+ },
1380
+ {
1381
+ "epoch": 4.095238095238095,
1382
+ "grad_norm": 0.7527801055673746,
1383
+ "learning_rate": 5.385155080385125e-06,
1384
+ "loss": 0.5223,
1385
+ "num_tokens": 51335547.0,
1386
+ "step": 172
1387
+ },
1388
+ {
1389
+ "epoch": 4.119047619047619,
1390
+ "grad_norm": 0.9363987669688705,
1391
+ "learning_rate": 5.267911000718819e-06,
1392
+ "loss": 0.5258,
1393
+ "num_tokens": 51637656.0,
1394
+ "step": 173
1395
+ },
1396
+ {
1397
+ "epoch": 4.142857142857143,
1398
+ "grad_norm": 1.1015381943904987,
1399
+ "learning_rate": 5.153356966006791e-06,
1400
+ "loss": 0.5196,
1401
+ "num_tokens": 51953140.0,
1402
+ "step": 174
1403
+ },
1404
+ {
1405
+ "epoch": 4.166666666666667,
1406
+ "grad_norm": 1.0393498678201005,
1407
+ "learning_rate": 5.041520411528727e-06,
1408
+ "loss": 0.55,
1409
+ "num_tokens": 52262499.0,
1410
+ "step": 175
1411
+ },
1412
+ {
1413
+ "epoch": 4.190476190476191,
1414
+ "grad_norm": 0.8250332666446762,
1415
+ "learning_rate": 4.932428121737547e-06,
1416
+ "loss": 0.5226,
1417
+ "num_tokens": 52568631.0,
1418
+ "step": 176
1419
+ },
1420
+ {
1421
+ "epoch": 4.214285714285714,
1422
+ "grad_norm": 0.7655426183040523,
1423
+ "learning_rate": 4.826106223844648e-06,
1424
+ "loss": 0.5127,
1425
+ "num_tokens": 52834140.0,
1426
+ "step": 177
1427
+ },
1428
+ {
1429
+ "epoch": 4.238095238095238,
1430
+ "grad_norm": 0.6995684804290214,
1431
+ "learning_rate": 4.722580181562527e-06,
1432
+ "loss": 0.494,
1433
+ "num_tokens": 53156720.0,
1434
+ "step": 178
1435
+ },
1436
+ {
1437
+ "epoch": 4.261904761904762,
1438
+ "grad_norm": 0.7748083731595724,
1439
+ "learning_rate": 4.621874789006295e-06,
1440
+ "loss": 0.5381,
1441
+ "num_tokens": 53491051.0,
1442
+ "step": 179
1443
+ },
1444
+ {
1445
+ "epoch": 4.285714285714286,
1446
+ "grad_norm": 0.8131952113654631,
1447
+ "learning_rate": 4.524014164755603e-06,
1448
+ "loss": 0.4968,
1449
+ "num_tokens": 53787155.0,
1450
+ "step": 180
1451
+ },
1452
+ {
1453
+ "epoch": 4.309523809523809,
1454
+ "grad_norm": 0.7367042370774411,
1455
+ "learning_rate": 4.429021746078311e-06,
1456
+ "loss": 0.4861,
1457
+ "num_tokens": 54049978.0,
1458
+ "step": 181
1459
+ },
1460
+ {
1461
+ "epoch": 4.333333333333333,
1462
+ "grad_norm": 0.7194704571250619,
1463
+ "learning_rate": 4.336920283317344e-06,
1464
+ "loss": 0.5212,
1465
+ "num_tokens": 54305304.0,
1466
+ "step": 182
1467
+ },
1468
+ {
1469
+ "epoch": 4.357142857142857,
1470
+ "grad_norm": 0.6527660461115374,
1471
+ "learning_rate": 4.247731834442083e-06,
1472
+ "loss": 0.5263,
1473
+ "num_tokens": 54632438.0,
1474
+ "step": 183
1475
+ },
1476
+ {
1477
+ "epoch": 4.380952380952381,
1478
+ "grad_norm": 0.7266514800309486,
1479
+ "learning_rate": 4.161477759765542e-06,
1480
+ "loss": 0.5202,
1481
+ "num_tokens": 54954887.0,
1482
+ "step": 184
1483
+ },
1484
+ {
1485
+ "epoch": 4.404761904761905,
1486
+ "grad_norm": 0.761296256236184,
1487
+ "learning_rate": 4.078178716828666e-06,
1488
+ "loss": 0.5379,
1489
+ "num_tokens": 55245795.0,
1490
+ "step": 185
1491
+ },
1492
+ {
1493
+ "epoch": 4.428571428571429,
1494
+ "grad_norm": 0.7945856951237223,
1495
+ "learning_rate": 3.9978546554529185e-06,
1496
+ "loss": 0.5176,
1497
+ "num_tokens": 55546573.0,
1498
+ "step": 186
1499
+ },
1500
+ {
1501
+ "epoch": 4.4523809523809526,
1502
+ "grad_norm": 0.7468660256983087,
1503
+ "learning_rate": 3.920524812962359e-06,
1504
+ "loss": 0.5096,
1505
+ "num_tokens": 55843166.0,
1506
+ "step": 187
1507
+ },
1508
+ {
1509
+ "epoch": 4.476190476190476,
1510
+ "grad_norm": 0.7193745095120948,
1511
+ "learning_rate": 3.846207709576389e-06,
1512
+ "loss": 0.5205,
1513
+ "num_tokens": 56138218.0,
1514
+ "step": 188
1515
+ },
1516
+ {
1517
+ "epoch": 4.5,
1518
+ "grad_norm": 0.6655820277052442,
1519
+ "learning_rate": 3.7749211439742257e-06,
1520
+ "loss": 0.5041,
1521
+ "num_tokens": 56417291.0,
1522
+ "step": 189
1523
+ },
1524
+ {
1525
+ "epoch": 4.523809523809524,
1526
+ "grad_norm": 0.7239518179183401,
1527
+ "learning_rate": 3.7066821890321687e-06,
1528
+ "loss": 0.4846,
1529
+ "num_tokens": 56692919.0,
1530
+ "step": 190
1531
+ },
1532
+ {
1533
+ "epoch": 4.5476190476190474,
1534
+ "grad_norm": 0.6477616515263318,
1535
+ "learning_rate": 3.6415071877347165e-06,
1536
+ "loss": 0.539,
1537
+ "num_tokens": 57031682.0,
1538
+ "step": 191
1539
+ },
1540
+ {
1541
+ "epoch": 4.571428571428571,
1542
+ "grad_norm": 0.6640615854897481,
1543
+ "learning_rate": 3.579411749260486e-06,
1544
+ "loss": 0.4944,
1545
+ "num_tokens": 57344445.0,
1546
+ "step": 192
1547
+ },
1548
+ {
1549
+ "epoch": 4.595238095238095,
1550
+ "grad_norm": 0.6848846128939778,
1551
+ "learning_rate": 3.520410745243858e-06,
1552
+ "loss": 0.5121,
1553
+ "num_tokens": 57613210.0,
1554
+ "step": 193
1555
+ },
1556
+ {
1557
+ "epoch": 4.619047619047619,
1558
+ "grad_norm": 0.6422019750084208,
1559
+ "learning_rate": 3.4645183062132826e-06,
1560
+ "loss": 0.5193,
1561
+ "num_tokens": 57942367.0,
1562
+ "step": 194
1563
+ },
1564
+ {
1565
+ "epoch": 4.642857142857143,
1566
+ "grad_norm": 0.7657789249156284,
1567
+ "learning_rate": 3.411747818207066e-06,
1568
+ "loss": 0.4979,
1569
+ "num_tokens": 58201641.0,
1570
+ "step": 195
1571
+ },
1572
+ {
1573
+ "epoch": 4.666666666666667,
1574
+ "grad_norm": 0.6739307543671282,
1575
+ "learning_rate": 3.3621119195674597e-06,
1576
+ "loss": 0.4888,
1577
+ "num_tokens": 58517521.0,
1578
+ "step": 196
1579
+ },
1580
+ {
1581
+ "epoch": 4.690476190476191,
1582
+ "grad_norm": 0.6865319310088059,
1583
+ "learning_rate": 3.3156224979138306e-06,
1584
+ "loss": 0.494,
1585
+ "num_tokens": 58816395.0,
1586
+ "step": 197
1587
+ },
1588
+ {
1589
+ "epoch": 4.714285714285714,
1590
+ "grad_norm": 0.6584126661804732,
1591
+ "learning_rate": 3.2722906872956088e-06,
1592
+ "loss": 0.5181,
1593
+ "num_tokens": 59120541.0,
1594
+ "step": 198
1595
+ },
1596
+ {
1597
+ "epoch": 4.738095238095238,
1598
+ "grad_norm": 0.6099916205741043,
1599
+ "learning_rate": 3.232126865525731e-06,
1600
+ "loss": 0.5293,
1601
+ "num_tokens": 59461204.0,
1602
+ "step": 199
1603
+ },
1604
+ {
1605
+ "epoch": 4.761904761904762,
1606
+ "grad_norm": 0.667910578763116,
1607
+ "learning_rate": 3.1951406516951874e-06,
1608
+ "loss": 0.5116,
1609
+ "num_tokens": 59751334.0,
1610
+ "step": 200
1611
+ },
1612
+ {
1613
+ "epoch": 4.785714285714286,
1614
+ "grad_norm": 0.6141509992681458,
1615
+ "learning_rate": 3.1613409038692853e-06,
1616
+ "loss": 0.5124,
1617
+ "num_tokens": 60077027.0,
1618
+ "step": 201
1619
+ },
1620
+ {
1621
+ "epoch": 4.809523809523809,
1622
+ "grad_norm": 0.6302007739092258,
1623
+ "learning_rate": 3.130735716966174e-06,
1624
+ "loss": 0.5405,
1625
+ "num_tokens": 60408543.0,
1626
+ "step": 202
1627
+ },
1628
+ {
1629
+ "epoch": 4.833333333333333,
1630
+ "grad_norm": 0.6664208365053094,
1631
+ "learning_rate": 3.1033324208181422e-06,
1632
+ "loss": 0.5142,
1633
+ "num_tokens": 60702718.0,
1634
+ "step": 203
1635
+ },
1636
+ {
1637
+ "epoch": 4.857142857142857,
1638
+ "grad_norm": 0.6536331401194854,
1639
+ "learning_rate": 3.0791375784161456e-06,
1640
+ "loss": 0.5193,
1641
+ "num_tokens": 61018938.0,
1642
+ "step": 204
1643
+ },
1644
+ {
1645
+ "epoch": 4.880952380952381,
1646
+ "grad_norm": 0.6969216972121804,
1647
+ "learning_rate": 3.058156984337999e-06,
1648
+ "loss": 0.5061,
1649
+ "num_tokens": 61279305.0,
1650
+ "step": 205
1651
+ },
1652
+ {
1653
+ "epoch": 4.904761904761905,
1654
+ "grad_norm": 0.6535453587519624,
1655
+ "learning_rate": 3.0403956633605944e-06,
1656
+ "loss": 0.4729,
1657
+ "num_tokens": 61565633.0,
1658
+ "step": 206
1659
+ },
1660
+ {
1661
+ "epoch": 4.928571428571429,
1662
+ "grad_norm": 0.6753649990944919,
1663
+ "learning_rate": 3.0258578692564804e-06,
1664
+ "loss": 0.4772,
1665
+ "num_tokens": 61841335.0,
1666
+ "step": 207
1667
+ },
1668
+ {
1669
+ "epoch": 4.9523809523809526,
1670
+ "grad_norm": 0.6358980505171887,
1671
+ "learning_rate": 3.014547083775106e-06,
1672
+ "loss": 0.5398,
1673
+ "num_tokens": 62169402.0,
1674
+ "step": 208
1675
+ },
1676
+ {
1677
+ "epoch": 4.976190476190476,
1678
+ "grad_norm": 0.5805504932681311,
1679
+ "learning_rate": 3.0064660158089516e-06,
1680
+ "loss": 0.524,
1681
+ "num_tokens": 62538856.0,
1682
+ "step": 209
1683
+ },
1684
+ {
1685
+ "epoch": 5.0,
1686
+ "grad_norm": 0.6817532772499482,
1687
+ "learning_rate": 3.001616600744755e-06,
1688
+ "loss": 0.5121,
1689
+ "num_tokens": 62797894.0,
1690
+ "step": 210
1691
+ },
1692
+ {
1693
+ "epoch": 5.0,
1694
+ "step": 210,
1695
+ "total_flos": 2.7972111648056934e+17,
1696
+ "train_loss": 1.0216455256655104,
1697
+ "train_runtime": 2780.2712,
1698
+ "train_samples_per_second": 9.666,
1699
+ "train_steps_per_second": 0.076
1700
+ }
1701
+ ],
1702
+ "logging_steps": 1,
1703
+ "max_steps": 210,
1704
+ "num_input_tokens_seen": 0,
1705
+ "num_train_epochs": 5,
1706
+ "save_steps": 500,
1707
+ "stateful_callbacks": {
1708
+ "TrainerControl": {
1709
+ "args": {
1710
+ "should_epoch_stop": false,
1711
+ "should_evaluate": false,
1712
+ "should_log": false,
1713
+ "should_save": true,
1714
+ "should_training_stop": true
1715
+ },
1716
+ "attributes": {}
1717
+ }
1718
+ },
1719
+ "total_flos": 2.7972111648056934e+17,
1720
+ "train_batch_size": 2,
1721
+ "trial_name": null,
1722
+ "trial_params": null
1723
+ }