crossroderick committed on
Commit
70fdfe0
·
1 Parent(s): d17d151

Model training update with 13 epochs
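The "13 epochs" in the commit message lines up with the checkpoint numbering: the previous run's trainer_state.json records max_steps 37660 over 10 epochs (3,766 optimizer steps per epoch), and the new final checkpoint, checkpoint-48958, sits exactly at 13 × 3,766 steps. A quick sanity check (the step figures come from this commit; the arithmetic is purely illustrative):

```python
# Step figures taken from the trainer_state.json diffs in this commit.
old_max_steps, old_epochs = 37660, 10
steps_per_epoch = old_max_steps // old_epochs  # 3766

# The new final checkpoint lands exactly on the 13-epoch boundary.
assert 48958 == 13 * steps_per_epoch

# checkpoint-48000's recorded epoch (~12.7456) matches step / steps_per_epoch.
epoch_at_48000 = 48000 / steps_per_epoch
assert abs(epoch_at_48000 - 12.745618693574084) < 1e-4
print(f"{epoch_at_48000:.4f} epochs at step 48000")
```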

Files changed (43)
  1. README.md +4 -0
  2. checkpoints/{checkpoint-37000 → checkpoint-48000}/config.json +0 -0
  3. checkpoints/{checkpoint-37000 → checkpoint-48000}/generation_config.json +0 -0
  4. checkpoints/{checkpoint-37000 → checkpoint-48000}/model.safetensors +1 -1
  5. checkpoints/{checkpoint-37500 → checkpoint-48000}/optimizer.pt +1 -1
  6. checkpoints/{checkpoint-37500 → checkpoint-48000}/rng_state.pth +1 -1
  7. checkpoints/{checkpoint-37660 → checkpoint-48000}/scaler.pt +1 -1
  8. checkpoints/{checkpoint-37000 → checkpoint-48000}/scheduler.pt +1 -1
  9. checkpoints/{checkpoint-37000 → checkpoint-48000}/special_tokens_map.json +0 -0
  10. checkpoints/checkpoint-48000/spiece.model +3 -0
  11. checkpoints/{checkpoint-37000 → checkpoint-48000}/tokenizer.json +0 -0
  12. checkpoints/{checkpoint-37000 → checkpoint-48000}/tokenizer_config.json +0 -0
  13. checkpoints/{checkpoint-37500 → checkpoint-48000}/trainer_state.json +377 -230
  14. checkpoints/{checkpoint-37500 → checkpoint-48000}/training_args.bin +1 -1
  15. checkpoints/{checkpoint-37500 → checkpoint-48500}/config.json +0 -0
  16. checkpoints/{checkpoint-37500 → checkpoint-48500}/generation_config.json +0 -0
  17. checkpoints/{checkpoint-37500 → checkpoint-48500}/model.safetensors +1 -1
  18. checkpoints/{checkpoint-37000 → checkpoint-48500}/optimizer.pt +1 -1
  19. checkpoints/{checkpoint-37660 → checkpoint-48500}/rng_state.pth +1 -1
  20. checkpoints/{checkpoint-37500 → checkpoint-48500}/scaler.pt +1 -1
  21. checkpoints/{checkpoint-37500 → checkpoint-48500}/scheduler.pt +1 -1
  22. checkpoints/{checkpoint-37500 → checkpoint-48500}/special_tokens_map.json +0 -0
  23. checkpoints/checkpoint-48500/spiece.model +3 -0
  24. checkpoints/{checkpoint-37500 → checkpoint-48500}/tokenizer.json +0 -0
  25. checkpoints/{checkpoint-37500 → checkpoint-48500}/tokenizer_config.json +0 -0
  26. checkpoints/{checkpoint-37000 → checkpoint-48500}/trainer_state.json +388 -227
  27. checkpoints/{checkpoint-37000 → checkpoint-48500}/training_args.bin +1 -1
  28. checkpoints/{checkpoint-37660 → checkpoint-48958}/config.json +0 -0
  29. checkpoints/{checkpoint-37660 → checkpoint-48958}/generation_config.json +0 -0
  30. checkpoints/{checkpoint-37660 → checkpoint-48958}/model.safetensors +1 -1
  31. checkpoints/{checkpoint-37660 → checkpoint-48958}/optimizer.pt +1 -1
  32. checkpoints/{checkpoint-37000 → checkpoint-48958}/rng_state.pth +1 -1
  33. checkpoints/{checkpoint-37000 → checkpoint-48958}/scaler.pt +1 -1
  34. checkpoints/{checkpoint-37660 → checkpoint-48958}/scheduler.pt +1 -1
  35. checkpoints/{checkpoint-37660 → checkpoint-48958}/special_tokens_map.json +0 -0
  36. checkpoints/checkpoint-48958/spiece.model +3 -0
  37. checkpoints/{checkpoint-37660 → checkpoint-48958}/tokenizer.json +0 -0
  38. checkpoints/{checkpoint-37660 → checkpoint-48958}/tokenizer_config.json +0 -0
  39. checkpoints/{checkpoint-37660 → checkpoint-48958}/trainer_state.json +384 -230
  40. checkpoints/{checkpoint-37660 → checkpoint-48958}/training_args.bin +1 -1
  41. model.safetensors +1 -1
  42. spiece.model +3 -0
  43. src/train_t5.py +2 -2
README.md CHANGED
@@ -15,6 +15,10 @@ pipeline_tag: text2text-generation
 
 Unlike language models that *generate* creatively, DalaT5 is trained as a **faithful transliterator** - preserving content while transforming form.
 
+⚠️ Limitations
+- May produce unexpected outputs for very short inputs or mixed-script text
+- Still under refinement - accuracy may vary across dialects or uncommon characters
+
 ---
 
 ## 🧠 Purpose
checkpoints/{checkpoint-37000 → checkpoint-48000}/config.json RENAMED
File without changes
checkpoints/{checkpoint-37000 → checkpoint-48000}/generation_config.json RENAMED
File without changes
checkpoints/{checkpoint-37000 → checkpoint-48000}/model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:29b1522edd113a0e6957aaaefb789a41231c072754a6740ee6dabc5183e2ab7c
+oid sha256:448d7df6b5c8a8d5c909e3c4d89c4aa963ded7b56216c411ac30831d871a0c0f
 size 242041896
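Every binary file in this commit is stored as a git-lfs pointer like the one above: a tiny text file of "key value" lines recording the spec version, the content hash, and the byte size. A minimal parser for that format (an illustrative sketch, not the official git-lfs implementation):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a git-lfs pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the model.safetensors diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:448d7df6b5c8a8d5c909e3c4d89c4aa963ded7b56216c411ac30831d871a0c0f\n"
    "size 242041896\n"
)
info = parse_lfs_pointer(pointer)
assert info["size"] == "242041896"
assert info["oid"].startswith("sha256:")
```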
checkpoints/{checkpoint-37500 → checkpoint-48000}/optimizer.pt RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bb190324515d3eb9a2541f5086e592766f75af11d74431aad585b10d13f894d8
+oid sha256:e9fe7c27b785e4ff27d48401dd7f31aae5c64bc7ca605f94862d0563134ecebf
 size 484163514
checkpoints/{checkpoint-37500 → checkpoint-48000}/rng_state.pth RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:36c312354a009a35011043e8b37fe022597fef3b47dab58890e9f101000c7480
+oid sha256:00ce3eb813e88edd424cdc90e44c1583d8456c098e486a0ebabbe770aaf7ec12
 size 14244
checkpoints/{checkpoint-37660 → checkpoint-48000}/scaler.pt RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a1b02513987752f55c2a09cb46fb561fd6490aaf9c1a9fb121a08671c8653dcd
+oid sha256:ca97e2ef7b4a9b2ec6c7f4a1a43c701938b26da462b3f6f5c3deffe10916cd2d
 size 988
checkpoints/{checkpoint-37000 → checkpoint-48000}/scheduler.pt RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:692049a676411f6e31c0de3385e53b736cdc1c7f432b04e599e3367d6d86ea3b
+oid sha256:f0517e4908a64405450c76a0e1a824f08d0bbd55697d60f15520762091350a0a
 size 1064
checkpoints/{checkpoint-37000 → checkpoint-48000}/special_tokens_map.json RENAMED
File without changes
checkpoints/checkpoint-48000/spiece.model ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
+size 791656
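The pointer's oid is the SHA-256 of the actual file contents, so a downloaded checkpoint can be verified against the pointer. A sketch with a stand-in payload (real checkpoint files are far larger, and the helper name here is ours, not a git-lfs API):

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    """Return the 'sha256:<hex>' oid that git-lfs records for a blob."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Stand-in payload: illustrates the check implied by a pointer's oid/size
# pair without downloading the real 791656-byte spiece.model.
payload = b"stand-in bytes for spiece.model"
oid = lfs_oid(payload)
assert oid.startswith("sha256:")
assert len(oid) == len("sha256:") + 64  # 64 hex digits
assert len(payload) == 31  # the 'size' field records this byte length
```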
checkpoints/{checkpoint-37000 → checkpoint-48000}/tokenizer.json RENAMED
File without changes
checkpoints/{checkpoint-37000 → checkpoint-48000}/tokenizer_config.json RENAMED
File without changes
checkpoints/{checkpoint-37500 → checkpoint-48000}/trainer_state.json RENAMED
@@ -2,543 +2,690 @@
2
  "best_global_step": null,
3
  "best_metric": null,
4
  "best_model_checkpoint": null,
5
- "epoch": 9.957514604354753,
6
  "eval_steps": 500,
7
- "global_step": 37500,
8
  "is_hyper_param_search": false,
9
  "is_local_process_zero": true,
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
  "epoch": 0.1327668613913967,
14
- "grad_norm": 0.7285390496253967,
15
- "learning_rate": 4.934014869888476e-05,
16
- "loss": 2.9202,
17
  "step": 500
18
  },
19
  {
20
  "epoch": 0.2655337227827934,
21
- "grad_norm": 0.6645896434783936,
22
- "learning_rate": 4.867631439192778e-05,
23
- "loss": 1.7398,
24
  "step": 1000
25
  },
26
  {
27
  "epoch": 0.3983005841741901,
28
- "grad_norm": 1.236431360244751,
29
- "learning_rate": 4.8012480084970795e-05,
30
- "loss": 1.4128,
31
  "step": 1500
32
  },
33
  {
34
  "epoch": 0.5310674455655868,
35
- "grad_norm": 0.4946064054965973,
36
- "learning_rate": 4.734864577801381e-05,
37
- "loss": 1.2492,
38
  "step": 2000
39
  },
40
  {
41
  "epoch": 0.6638343069569835,
42
- "grad_norm": 0.4114533066749573,
43
- "learning_rate": 4.668481147105683e-05,
44
- "loss": 1.1439,
45
  "step": 2500
46
  },
47
  {
48
  "epoch": 0.7966011683483802,
49
- "grad_norm": 0.4554953873157501,
50
- "learning_rate": 4.602097716409984e-05,
51
- "loss": 1.0689,
52
  "step": 3000
53
  },
54
  {
55
  "epoch": 0.929368029739777,
56
- "grad_norm": 0.41119593381881714,
57
- "learning_rate": 4.5357142857142856e-05,
58
- "loss": 1.0057,
59
  "step": 3500
60
  },
61
  {
62
  "epoch": 1.0621348911311737,
63
- "grad_norm": 0.40032336115837097,
64
- "learning_rate": 4.4693308550185877e-05,
65
- "loss": 0.9672,
66
  "step": 4000
67
  },
68
  {
69
  "epoch": 1.1949017525225702,
70
- "grad_norm": 0.3746163845062256,
71
- "learning_rate": 4.402947424322889e-05,
72
- "loss": 0.9234,
73
  "step": 4500
74
  },
75
  {
76
  "epoch": 1.327668613913967,
77
- "grad_norm": 0.4201946556568146,
78
- "learning_rate": 4.3365639936271904e-05,
79
- "loss": 0.8978,
80
  "step": 5000
81
  },
82
  {
83
  "epoch": 1.4604354753053639,
84
- "grad_norm": 0.48306697607040405,
85
- "learning_rate": 4.2701805629314924e-05,
86
- "loss": 0.8672,
87
  "step": 5500
88
  },
89
  {
90
  "epoch": 1.5932023366967605,
91
- "grad_norm": 0.348651647567749,
92
- "learning_rate": 4.2037971322357945e-05,
93
- "loss": 0.8373,
94
  "step": 6000
95
  },
96
  {
97
  "epoch": 1.725969198088157,
98
- "grad_norm": 0.3810581862926483,
99
- "learning_rate": 4.137413701540096e-05,
100
- "loss": 0.8161,
101
  "step": 6500
102
  },
103
  {
104
  "epoch": 1.858736059479554,
105
- "grad_norm": 0.41756191849708557,
106
- "learning_rate": 4.071030270844398e-05,
107
- "loss": 0.8011,
108
  "step": 7000
109
  },
110
  {
111
  "epoch": 1.9915029208709507,
112
- "grad_norm": 0.3746052384376526,
113
- "learning_rate": 4.004646840148699e-05,
114
- "loss": 0.7917,
115
  "step": 7500
116
  },
117
  {
118
  "epoch": 2.1242697822623473,
119
- "grad_norm": 0.3314072787761688,
120
- "learning_rate": 3.9382634094530006e-05,
121
- "loss": 0.772,
122
  "step": 8000
123
  },
124
  {
125
  "epoch": 2.257036643653744,
126
- "grad_norm": 0.36195287108421326,
127
- "learning_rate": 3.871879978757303e-05,
128
- "loss": 0.7519,
129
  "step": 8500
130
  },
131
  {
132
  "epoch": 2.3898035050451405,
133
- "grad_norm": 0.36162152886390686,
134
- "learning_rate": 3.805496548061604e-05,
135
- "loss": 0.744,
136
  "step": 9000
137
  },
138
  {
139
  "epoch": 2.5225703664365375,
140
- "grad_norm": 0.3393162786960602,
141
- "learning_rate": 3.739245884227297e-05,
142
- "loss": 0.7317,
143
  "step": 9500
144
  },
145
  {
146
  "epoch": 2.655337227827934,
147
- "grad_norm": 0.42738890647888184,
148
- "learning_rate": 3.67299522039299e-05,
149
- "loss": 0.7227,
150
  "step": 10000
151
  },
152
  {
153
  "epoch": 2.7881040892193307,
154
- "grad_norm": 0.34136396646499634,
155
- "learning_rate": 3.6066117896972915e-05,
156
- "loss": 0.7084,
157
  "step": 10500
158
  },
159
  {
160
  "epoch": 2.9208709506107278,
161
- "grad_norm": 0.33245041966438293,
162
- "learning_rate": 3.5402283590015936e-05,
163
- "loss": 0.6993,
164
  "step": 11000
165
  },
166
  {
167
  "epoch": 3.0536378120021244,
168
- "grad_norm": 0.46563392877578735,
169
- "learning_rate": 3.473844928305895e-05,
170
- "loss": 0.6865,
171
  "step": 11500
172
  },
173
  {
174
  "epoch": 3.186404673393521,
175
- "grad_norm": 0.3868368864059448,
176
- "learning_rate": 3.407461497610196e-05,
177
- "loss": 0.6822,
178
  "step": 12000
179
  },
180
  {
181
  "epoch": 3.3191715347849176,
182
- "grad_norm": 0.30418872833251953,
183
- "learning_rate": 3.3410780669144984e-05,
184
- "loss": 0.6775,
185
  "step": 12500
186
  },
187
  {
188
  "epoch": 3.451938396176314,
189
- "grad_norm": 0.34485992789268494,
190
- "learning_rate": 3.2746946362188e-05,
191
- "loss": 0.6713,
192
  "step": 13000
193
  },
194
  {
195
  "epoch": 3.584705257567711,
196
- "grad_norm": 0.33921709656715393,
197
- "learning_rate": 3.208311205523102e-05,
198
- "loss": 0.658,
199
  "step": 13500
200
  },
201
  {
202
  "epoch": 3.717472118959108,
203
- "grad_norm": 0.36646100878715515,
204
- "learning_rate": 3.141927774827403e-05,
205
- "loss": 0.6516,
206
  "step": 14000
207
  },
208
  {
209
  "epoch": 3.8502389803505044,
210
- "grad_norm": 0.32367828488349915,
211
- "learning_rate": 3.075544344131705e-05,
212
- "loss": 0.6479,
213
  "step": 14500
214
  },
215
  {
216
  "epoch": 3.9830058417419014,
217
- "grad_norm": 0.32735565304756165,
218
- "learning_rate": 3.0091609134360066e-05,
219
- "loss": 0.6423,
220
  "step": 15000
221
  },
222
  {
223
  "epoch": 4.115772703133298,
224
- "grad_norm": 0.43194663524627686,
225
- "learning_rate": 2.9427774827403083e-05,
226
- "loss": 0.6416,
227
  "step": 15500
228
  },
229
  {
230
  "epoch": 4.248539564524695,
231
- "grad_norm": 0.29106882214546204,
232
- "learning_rate": 2.8763940520446096e-05,
233
- "loss": 0.6318,
234
  "step": 16000
235
  },
236
  {
237
  "epoch": 4.381306425916091,
238
- "grad_norm": 0.2671768069267273,
239
- "learning_rate": 2.810143388210303e-05,
240
- "loss": 0.6311,
241
  "step": 16500
242
  },
243
  {
244
  "epoch": 4.514073287307488,
245
- "grad_norm": 0.311146080493927,
246
- "learning_rate": 2.7437599575146044e-05,
247
- "loss": 0.6243,
248
  "step": 17000
249
  },
250
  {
251
  "epoch": 4.646840148698884,
252
- "grad_norm": 0.3101503551006317,
253
- "learning_rate": 2.677376526818906e-05,
254
- "loss": 0.6157,
255
  "step": 17500
256
  },
257
  {
258
  "epoch": 4.779607010090281,
259
- "grad_norm": 0.3017677366733551,
260
- "learning_rate": 2.6109930961232075e-05,
261
- "loss": 0.6086,
262
  "step": 18000
263
  },
264
  {
265
  "epoch": 4.9123738714816785,
266
- "grad_norm": 0.31505176424980164,
267
- "learning_rate": 2.5446096654275092e-05,
268
- "loss": 0.6146,
269
  "step": 18500
270
  },
271
  {
272
  "epoch": 5.045140732873075,
273
- "grad_norm": 0.35715609788894653,
274
- "learning_rate": 2.4783590015932023e-05,
275
- "loss": 0.602,
276
  "step": 19000
277
  },
278
  {
279
  "epoch": 5.177907594264472,
280
- "grad_norm": 0.2930135726928711,
281
- "learning_rate": 2.4119755708975043e-05,
282
- "loss": 0.6024,
283
  "step": 19500
284
  },
285
  {
286
  "epoch": 5.310674455655868,
287
- "grad_norm": 0.3474890887737274,
288
- "learning_rate": 2.3455921402018057e-05,
289
- "loss": 0.6002,
290
  "step": 20000
291
  },
292
  {
293
  "epoch": 5.443441317047265,
294
- "grad_norm": 0.29057538509368896,
295
- "learning_rate": 2.2792087095061074e-05,
296
- "loss": 0.6006,
297
  "step": 20500
298
  },
299
  {
300
  "epoch": 5.5762081784386615,
301
- "grad_norm": 0.3273596167564392,
302
- "learning_rate": 2.212825278810409e-05,
303
- "loss": 0.5912,
304
  "step": 21000
305
  },
306
  {
307
  "epoch": 5.708975039830058,
308
- "grad_norm": 0.27121296525001526,
309
- "learning_rate": 2.146574614976102e-05,
310
- "loss": 0.593,
311
  "step": 21500
312
  },
313
  {
314
  "epoch": 5.8417419012214555,
315
- "grad_norm": 0.29718518257141113,
316
- "learning_rate": 2.0801911842804035e-05,
317
- "loss": 0.591,
318
  "step": 22000
319
  },
320
  {
321
  "epoch": 5.974508762612852,
322
- "grad_norm": 0.32410022616386414,
323
- "learning_rate": 2.0138077535847052e-05,
324
- "loss": 0.5857,
325
  "step": 22500
326
  },
327
  {
328
  "epoch": 6.107275624004249,
329
- "grad_norm": 0.2846163213253021,
330
- "learning_rate": 1.9475570897503983e-05,
331
- "loss": 0.5802,
332
  "step": 23000
333
  },
334
  {
335
  "epoch": 6.240042485395645,
336
- "grad_norm": 0.30459314584732056,
337
- "learning_rate": 1.8811736590547e-05,
338
- "loss": 0.5806,
339
  "step": 23500
340
  },
341
  {
342
  "epoch": 6.372809346787042,
343
- "grad_norm": 0.5301333069801331,
344
- "learning_rate": 1.8147902283590017e-05,
345
- "loss": 0.5789,
346
  "step": 24000
347
  },
348
  {
349
  "epoch": 6.5055762081784385,
350
- "grad_norm": 0.2727649509906769,
351
- "learning_rate": 1.7484067976633034e-05,
352
- "loss": 0.5783,
353
  "step": 24500
354
  },
355
  {
356
  "epoch": 6.638343069569835,
357
- "grad_norm": 0.3003462255001068,
358
- "learning_rate": 1.682023366967605e-05,
359
- "loss": 0.5765,
360
  "step": 25000
361
  },
362
  {
363
  "epoch": 6.771109930961232,
364
- "grad_norm": 0.35589084029197693,
365
- "learning_rate": 1.6157727031332982e-05,
366
- "loss": 0.5787,
367
  "step": 25500
368
  },
369
  {
370
  "epoch": 6.903876792352628,
371
- "grad_norm": 0.2738860845565796,
372
- "learning_rate": 1.5493892724375996e-05,
373
- "loss": 0.5734,
374
  "step": 26000
375
  },
376
  {
377
  "epoch": 7.036643653744026,
378
- "grad_norm": 0.42223164439201355,
379
- "learning_rate": 1.4830058417419013e-05,
380
- "loss": 0.5704,
381
  "step": 26500
382
  },
383
  {
384
  "epoch": 7.169410515135422,
385
- "grad_norm": 0.2938649654388428,
386
- "learning_rate": 1.4166224110462028e-05,
387
- "loss": 0.5686,
388
  "step": 27000
389
  },
390
  {
391
  "epoch": 7.302177376526819,
392
- "grad_norm": 0.275078147649765,
393
- "learning_rate": 1.3503717472118959e-05,
394
- "loss": 0.5666,
395
  "step": 27500
396
  },
397
  {
398
  "epoch": 7.434944237918216,
399
- "grad_norm": 0.35505712032318115,
400
- "learning_rate": 1.2839883165161976e-05,
401
- "loss": 0.5631,
402
  "step": 28000
403
  },
404
  {
405
  "epoch": 7.567711099309612,
406
- "grad_norm": 0.2507877051830292,
407
- "learning_rate": 1.2176048858204993e-05,
408
- "loss": 0.5688,
409
  "step": 28500
410
  },
411
  {
412
  "epoch": 7.700477960701009,
413
- "grad_norm": 0.2846459746360779,
414
- "learning_rate": 1.1512214551248008e-05,
415
- "loss": 0.5594,
416
  "step": 29000
417
  },
418
  {
419
  "epoch": 7.833244822092405,
420
- "grad_norm": 0.31158626079559326,
421
- "learning_rate": 1.0848380244291025e-05,
422
- "loss": 0.5653,
423
  "step": 29500
424
  },
425
  {
426
  "epoch": 7.966011683483803,
427
- "grad_norm": 0.2899467647075653,
428
- "learning_rate": 1.0184545937334042e-05,
429
- "loss": 0.562,
430
  "step": 30000
431
  },
432
  {
433
  "epoch": 8.098778544875199,
434
- "grad_norm": 0.27900761365890503,
435
- "learning_rate": 9.520711630377058e-06,
436
- "loss": 0.559,
437
  "step": 30500
438
  },
439
  {
440
  "epoch": 8.231545406266596,
441
- "grad_norm": 0.29301363229751587,
442
- "learning_rate": 8.856877323420075e-06,
443
- "loss": 0.5604,
444
  "step": 31000
445
  },
446
  {
447
  "epoch": 8.364312267657992,
448
- "grad_norm": 0.2812318801879883,
449
- "learning_rate": 8.19304301646309e-06,
450
- "loss": 0.559,
451
  "step": 31500
452
  },
453
  {
454
  "epoch": 8.49707912904939,
455
- "grad_norm": 0.27755317091941833,
456
- "learning_rate": 7.530536378120022e-06,
457
- "loss": 0.5527,
458
  "step": 32000
459
  },
460
  {
461
  "epoch": 8.629845990440787,
462
- "grad_norm": 0.3323802053928375,
463
- "learning_rate": 6.866702071163038e-06,
464
- "loss": 0.5576,
465
  "step": 32500
466
  },
467
  {
468
  "epoch": 8.762612851832182,
469
- "grad_norm": 0.2453739196062088,
470
- "learning_rate": 6.202867764206054e-06,
471
- "loss": 0.557,
472
  "step": 33000
473
  },
474
  {
475
  "epoch": 8.89537971322358,
476
- "grad_norm": 0.28488314151763916,
477
- "learning_rate": 5.539033457249071e-06,
478
- "loss": 0.5586,
479
  "step": 33500
480
  },
481
  {
482
  "epoch": 9.028146574614976,
483
- "grad_norm": 0.2731677293777466,
484
- "learning_rate": 4.876526818906001e-06,
485
- "loss": 0.5529,
486
  "step": 34000
487
  },
488
  {
489
  "epoch": 9.160913436006373,
490
- "grad_norm": 0.34274822473526,
491
- "learning_rate": 4.214020180562932e-06,
492
- "loss": 0.5562,
493
  "step": 34500
494
  },
495
  {
496
  "epoch": 9.293680297397769,
497
- "grad_norm": 0.2875533103942871,
498
- "learning_rate": 3.550185873605948e-06,
499
- "loss": 0.5528,
500
  "step": 35000
501
  },
502
  {
503
  "epoch": 9.426447158789166,
504
- "grad_norm": 0.2516155242919922,
505
- "learning_rate": 2.8863515666489647e-06,
506
- "loss": 0.5538,
507
  "step": 35500
508
  },
509
  {
510
  "epoch": 9.559214020180562,
511
- "grad_norm": 0.2524682283401489,
512
- "learning_rate": 2.222517259691981e-06,
513
- "loss": 0.5546,
514
  "step": 36000
515
  },
516
  {
517
  "epoch": 9.69198088157196,
518
- "grad_norm": 0.25429150462150574,
519
- "learning_rate": 1.5586829527349974e-06,
520
- "loss": 0.5537,
521
  "step": 36500
522
  },
523
  {
524
  "epoch": 9.824747742963357,
525
- "grad_norm": 0.2699441611766815,
526
- "learning_rate": 8.948486457780139e-07,
527
- "loss": 0.5522,
528
  "step": 37000
529
  },
530
  {
531
  "epoch": 9.957514604354753,
532
- "grad_norm": 0.35291793942451477,
533
- "learning_rate": 2.3101433882103027e-07,
534
- "loss": 0.5493,
535
  "step": 37500
536
  }
537
  ],
538
  "logging_steps": 500,
539
- "max_steps": 37660,
540
  "num_input_tokens_seen": 0,
541
- "num_train_epochs": 10,
542
  "save_steps": 500,
543
  "stateful_callbacks": {
544
  "TrainerControl": {
@@ -552,7 +699,7 @@
552
  "attributes": {}
553
  }
554
  },
555
- "total_flos": 8.119594531160064e+16,
556
  "train_batch_size": 32,
557
  "trial_name": null,
558
  "trial_params": null
 
2
  "best_global_step": null,
3
  "best_metric": null,
4
  "best_model_checkpoint": null,
5
+ "epoch": 12.745618693574084,
6
  "eval_steps": 500,
7
+ "global_step": 48000,
8
  "is_hyper_param_search": false,
9
  "is_local_process_zero": true,
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
  "epoch": 0.1327668613913967,
14
+ "grad_norm": 0.9114758372306824,
15
+ "learning_rate": 4.9492422076065206e-05,
16
+ "loss": 2.9213,
17
  "step": 500
18
  },
19
  {
20
  "epoch": 0.2655337227827934,
21
+ "grad_norm": 0.8418619632720947,
22
+ "learning_rate": 4.89817803014829e-05,
23
+ "loss": 1.7378,
24
  "step": 1000
25
  },
26
  {
27
  "epoch": 0.3983005841741901,
28
+ "grad_norm": 0.709621012210846,
29
+ "learning_rate": 4.8471138526900614e-05,
30
+ "loss": 1.4102,
31
  "step": 1500
32
  },
33
  {
34
  "epoch": 0.5310674455655868,
35
+ "grad_norm": 0.4253033995628357,
36
+ "learning_rate": 4.796049675231832e-05,
37
+ "loss": 1.247,
38
  "step": 2000
39
  },
40
  {
41
  "epoch": 0.6638343069569835,
42
+ "grad_norm": 0.4772707521915436,
43
+ "learning_rate": 4.744985497773602e-05,
44
+ "loss": 1.1408,
45
  "step": 2500
46
  },
47
  {
48
  "epoch": 0.7966011683483802,
49
+ "grad_norm": 0.43140164017677307,
50
+ "learning_rate": 4.6939213203153725e-05,
51
+ "loss": 1.0649,
52
  "step": 3000
53
  },
54
  {
55
  "epoch": 0.929368029739777,
56
+ "grad_norm": 0.39506375789642334,
57
+ "learning_rate": 4.642857142857143e-05,
58
+ "loss": 1.0004,
59
  "step": 3500
60
  },
61
  {
62
  "epoch": 1.0621348911311737,
63
+ "grad_norm": 0.3946131765842438,
64
+ "learning_rate": 4.591792965398913e-05,
65
+ "loss": 0.9611,
66
  "step": 4000
67
  },
68
  {
69
  "epoch": 1.1949017525225702,
70
+ "grad_norm": 0.3581591546535492,
71
+ "learning_rate": 4.540728787940684e-05,
72
+ "loss": 0.917,
73
  "step": 4500
74
  },
75
  {
76
  "epoch": 1.327668613913967,
77
+ "grad_norm": 0.42795246839523315,
78
+ "learning_rate": 4.489664610482455e-05,
79
+ "loss": 0.8907,
80
  "step": 5000
81
  },
82
  {
83
  "epoch": 1.4604354753053639,
84
+ "grad_norm": 0.4635187089443207,
85
+ "learning_rate": 4.4386004330242245e-05,
86
+ "loss": 0.8598,
87
  "step": 5500
88
  },
89
  {
90
  "epoch": 1.5932023366967605,
91
+ "grad_norm": 0.3534747064113617,
92
+ "learning_rate": 4.3875362555659955e-05,
93
+ "loss": 0.8295,
94
  "step": 6000
95
  },
96
  {
97
  "epoch": 1.725969198088157,
98
+ "grad_norm": 0.39130711555480957,
99
+ "learning_rate": 4.336472078107766e-05,
100
+ "loss": 0.808,
101
  "step": 6500
102
  },
103
  {
104
  "epoch": 1.858736059479554,
105
+ "grad_norm": 0.440703809261322,
106
+ "learning_rate": 4.285407900649537e-05,
107
+ "loss": 0.7927,
108
  "step": 7000
109
  },
110
  {
111
  "epoch": 1.9915029208709507,
112
+ "grad_norm": 0.3961372375488281,
113
+ "learning_rate": 4.234343723191307e-05,
114
+ "loss": 0.7828,
115
  "step": 7500
116
  },
117
  {
118
  "epoch": 2.1242697822623473,
119
+ "grad_norm": 0.3364185690879822,
120
+ "learning_rate": 4.1833816740879935e-05,
121
+ "loss": 0.7628,
122
  "step": 8000
123
  },
124
  {
125
  "epoch": 2.257036643653744,
126
+ "grad_norm": 0.34151241183280945,
127
+ "learning_rate": 4.1323174966297646e-05,
128
+ "loss": 0.7424,
129
  "step": 8500
130
  },
131
  {
132
  "epoch": 2.3898035050451405,
133
+ "grad_norm": 0.3292118310928345,
134
+ "learning_rate": 4.081253319171535e-05,
135
+ "loss": 0.7341,
136
  "step": 9000
137
  },
138
  {
139
  "epoch": 2.5225703664365375,
140
+ "grad_norm": 0.33259105682373047,
141
+ "learning_rate": 4.0301891417133054e-05,
142
+ "loss": 0.7212,
143
  "step": 9500
144
  },
145
  {
146
  "epoch": 2.655337227827934,
147
+ "grad_norm": 0.35891205072402954,
148
+ "learning_rate": 3.979124964255076e-05,
149
+ "loss": 0.712,
150
  "step": 10000
151
  },
152
  {
153
  "epoch": 2.7881040892193307,
154
+ "grad_norm": 0.3436354398727417,
155
+ "learning_rate": 3.928060786796847e-05,
156
+ "loss": 0.6975,
157
  "step": 10500
158
  },
159
  {
160
  "epoch": 2.9208709506107278,
161
+ "grad_norm": 0.3285069465637207,
162
+ "learning_rate": 3.8769966093386165e-05,
163
+ "loss": 0.6881,
164
  "step": 11000
165
  },
166
  {
167
  "epoch": 3.0536378120021244,
168
+ "grad_norm": 0.44189152121543884,
169
+ "learning_rate": 3.826034560235304e-05,
170
+ "loss": 0.6749,
171
  "step": 11500
172
  },
173
  {
174
  "epoch": 3.186404673393521,
175
+ "grad_norm": 0.34346967935562134,
176
+ "learning_rate": 3.7749703827770744e-05,
177
+ "loss": 0.6704,
178
  "step": 12000
179
  },
180
  {
181
  "epoch": 3.3191715347849176,
182
+ "grad_norm": 0.29128846526145935,
183
+ "learning_rate": 3.723906205318845e-05,
184
+ "loss": 0.6652,
185
  "step": 12500
186
  },
187
  {
188
  "epoch": 3.451938396176314,
189
+ "grad_norm": 0.3013540208339691,
190
+ "learning_rate": 3.672842027860615e-05,
191
+ "loss": 0.6588,
192
  "step": 13000
193
  },
194
  {
195
  "epoch": 3.584705257567711,
196
+ "grad_norm": 0.32138076424598694,
197
+ "learning_rate": 3.6217778504023856e-05,
198
+ "loss": 0.6453,
199
  "step": 13500
200
  },
201
  {
202
  "epoch": 3.717472118959108,
203
+ "grad_norm": 0.3408374786376953,
204
+ "learning_rate": 3.5707136729441566e-05,
205
+ "loss": 0.6388,
206
  "step": 14000
207
  },
208
  {
209
  "epoch": 3.8502389803505044,
210
+ "grad_norm": 0.9397606253623962,
211
+ "learning_rate": 3.519649495485927e-05,
212
+ "loss": 0.6349,
213
  "step": 14500
214
  },
215
  {
216
  "epoch": 3.9830058417419014,
217
+ "grad_norm": 0.3192440867424011,
218
+ "learning_rate": 3.4685853180276974e-05,
219
+ "loss": 0.6291,
220
  "step": 15000
221
  },
222
  {
223
  "epoch": 4.115772703133298,
224
+ "grad_norm": 0.3549179136753082,
225
+ "learning_rate": 3.417521140569468e-05,
226
+ "loss": 0.6278,
227
  "step": 15500
228
  },
229
  {
230
  "epoch": 4.248539564524695,
231
+ "grad_norm": 0.3110153079032898,
232
+ "learning_rate": 3.366456963111239e-05,
233
+ "loss": 0.618,
234
  "step": 16000
235
  },
236
  {
237
  "epoch": 4.381306425916091,
238
+ "grad_norm": 0.2719564735889435,
239
+ "learning_rate": 3.3153927856530086e-05,
240
+ "loss": 0.6169,
241
  "step": 16500
242
  },
243
  {
244
  "epoch": 4.514073287307488,
245
+ "grad_norm": 0.2858710289001465,
246
+ "learning_rate": 3.2643286081947796e-05,
247
+ "loss": 0.61,
248
  "step": 17000
249
  },
250
  {
251
  "epoch": 4.646840148698884,
252
+ "grad_norm": 0.31373563408851624,
253
+ "learning_rate": 3.21326443073655e-05,
254
+ "loss": 0.6011,
255
  "step": 17500
256
  },
257
  {
258
  "epoch": 4.779607010090281,
259
+ "grad_norm": 0.29438045620918274,
260
+ "learning_rate": 3.1622002532783204e-05,
261
+ "loss": 0.5938,
262
  "step": 18000
263
  },
264
  {
265
  "epoch": 4.9123738714816785,
266
+ "grad_norm": 0.3415851593017578,
267
+ "learning_rate": 3.111238204175007e-05,
268
+ "loss": 0.5992,
269
  "step": 18500
270
  },
271
  {
272
  "epoch": 5.045140732873075,
273
+ "grad_norm": 0.35383546352386475,
274
+ "learning_rate": 3.060276155071694e-05,
275
+ "loss": 0.5871,
276
  "step": 19000
277
  },
278
  {
279
  "epoch": 5.177907594264472,
280
+ "grad_norm": 0.3242381811141968,
281
+ "learning_rate": 3.009314105968381e-05,
282
+ "loss": 0.5867,
283
  "step": 19500
284
  },
285
  {
286
  "epoch": 5.310674455655868,
287
+ "grad_norm": 0.28274649381637573,
288
+ "learning_rate": 2.9582499285101516e-05,
289
+ "loss": 0.584,
290
  "step": 20000
291
  },
292
  {
293
  "epoch": 5.443441317047265,
294
+ "grad_norm": 0.3075231611728668,
295
+ "learning_rate": 2.9071857510519223e-05,
296
+ "loss": 0.584,
297
  "step": 20500
298
  },
299
  {
300
  "epoch": 5.5762081784386615,
301
+ "grad_norm": 0.29568806290626526,
302
+ "learning_rate": 2.8561215735936924e-05,
303
+ "loss": 0.5743,
304
  "step": 21000
305
  },
306
  {
307
  "epoch": 5.708975039830058,
308
+ "grad_norm": 0.32808518409729004,
309
+ "learning_rate": 2.805057396135463e-05,
310
+ "loss": 0.5757,
311
  "step": 21500
312
  },
313
  {
314
  "epoch": 5.8417419012214555,
315
+ "grad_norm": 0.256596177816391,
316
+ "learning_rate": 2.7539932186772338e-05,
317
+ "loss": 0.5735,
318
  "step": 22000
319
  },
320
  {
321
  "epoch": 5.974508762612852,
322
+ "grad_norm": 0.313557505607605,
323
+ "learning_rate": 2.702929041219004e-05,
324
+ "loss": 0.5679,
325
  "step": 22500
326
  },
327
  {
328
  "epoch": 6.107275624004249,
329
+ "grad_norm": 0.274058997631073,
330
+ "learning_rate": 2.6518648637607746e-05,
331
+ "loss": 0.562,
332
  "step": 23000
333
  },
334
  {
335
  "epoch": 6.240042485395645,
336
+ "grad_norm": 0.2777511477470398,
337
+ "learning_rate": 2.6008006863025453e-05,
338
+ "loss": 0.5619,
339
  "step": 23500
340
  },
341
  {
342
  "epoch": 6.372809346787042,
343
+ "grad_norm": 0.3301125466823578,
344
+ "learning_rate": 2.549736508844316e-05,
345
+ "loss": 0.5598,
346
  "step": 24000
347
  },
348
  {
349
  "epoch": 6.5055762081784385,
350
+ "grad_norm": 0.2844313383102417,
351
+ "learning_rate": 2.498672331386086e-05,
352
+ "loss": 0.5589,
353
  "step": 24500
354
  },
355
  {
356
  "epoch": 6.638343069569835,
357
+ "grad_norm": 0.268718421459198,
358
+ "learning_rate": 2.4476081539278568e-05,
359
+ "loss": 0.5566,
360
  "step": 25000
361
  },
362
  {
363
  "epoch": 6.771109930961232,
364
+ "grad_norm": 0.3230023980140686,
365
+ "learning_rate": 2.3965439764696272e-05,
366
+ "loss": 0.5582,
367
  "step": 25500
368
  },
369
  {
370
  "epoch": 6.903876792352628,
371
+ "grad_norm": 0.27747681736946106,
372
+ "learning_rate": 2.3454797990113976e-05,
373
+ "loss": 0.5527,
374
  "step": 26000
375
  },
376
  {
377
  "epoch": 7.036643653744026,
378
+ "grad_norm": 0.29863470792770386,
379
+ "learning_rate": 2.2945177499080848e-05,
380
+ "loss": 0.5491,
381
  "step": 26500
382
  },
383
  {
384
  "epoch": 7.169410515135422,
385
+ "grad_norm": 0.30289873480796814,
386
+ "learning_rate": 2.243453572449855e-05,
387
+ "loss": 0.5468,
388
  "step": 27000
389
  },
390
  {
391
  "epoch": 7.302177376526819,
392
+ "grad_norm": 0.2766277492046356,
393
+ "learning_rate": 2.192491523346542e-05,
394
+ "loss": 0.5444,
395
  "step": 27500
396
  },
397
  {
398
  "epoch": 7.434944237918216,
399
+ "grad_norm": 0.3069545030593872,
400
+ "learning_rate": 2.1414273458883124e-05,
401
+ "loss": 0.5403,
402
  "step": 28000
403
  },
404
  {
405
  "epoch": 7.567711099309612,
406
+ "grad_norm": 0.258329302072525,
407
+ "learning_rate": 2.090363168430083e-05,
408
+ "loss": 0.5453,
409
  "step": 28500
410
  },
411
  {
412
  "epoch": 7.700477960701009,
413
+ "grad_norm": 0.2901703119277954,
414
+ "learning_rate": 2.0392989909718535e-05,
415
+ "loss": 0.5357,
416
  "step": 29000
417
  },
418
  {
419
  "epoch": 7.833244822092405,
420
+ "grad_norm": 0.35300034284591675,
421
+ "learning_rate": 1.988234813513624e-05,
422
+ "loss": 0.541,
423
  "step": 29500
424
  },
425
  {
426
  "epoch": 7.966011683483803,
427
+ "grad_norm": 0.2620261311531067,
428
+ "learning_rate": 1.9371706360553946e-05,
429
+ "loss": 0.5371,
430
  "step": 30000
431
  },
432
  {
433
  "epoch": 8.098778544875199,
434
+ "grad_norm": 0.3098488450050354,
435
+ "learning_rate": 1.886106458597165e-05,
436
+ "loss": 0.5337,
437
  "step": 30500
438
  },
439
  {
440
  "epoch": 8.231545406266596,
441
+ "grad_norm": 0.2904013991355896,
442
+ "learning_rate": 1.8350422811389357e-05,
443
+ "loss": 0.5342,
444
  "step": 31000
445
  },
446
  {
447
  "epoch": 8.364312267657992,
448
+ "grad_norm": 0.29218047857284546,
449
+ "learning_rate": 1.783978103680706e-05,
450
+ "loss": 0.5323,
451
  "step": 31500
452
  },
453
  {
454
  "epoch": 8.49707912904939,
455
+ "grad_norm": 0.3310258090496063,
456
+ "learning_rate": 1.7329139262224765e-05,
457
+ "loss": 0.5258,
458
  "step": 32000
459
  },
460
  {
461
  "epoch": 8.629845990440787,
462
+ "grad_norm": 0.3069627583026886,
463
+ "learning_rate": 1.6818497487642472e-05,
464
+ "loss": 0.5299,
465
  "step": 32500
466
  },
467
  {
468
  "epoch": 8.762612851832182,
469
+ "grad_norm": 0.24625258147716522,
470
+ "learning_rate": 1.630887699660934e-05,
471
+ "loss": 0.5285,
472
  "step": 33000
473
  },
474
  {
475
  "epoch": 8.89537971322358,
476
+ "grad_norm": 0.26636838912963867,
477
+ "learning_rate": 1.5798235222027044e-05,
478
+ "loss": 0.5294,
479
  "step": 33500
480
  },
481
  {
482
  "epoch": 9.028146574614976,
483
+ "grad_norm": 0.2842467725276947,
484
+ "learning_rate": 1.5287593447444748e-05,
485
+ "loss": 0.5235,
486
  "step": 34000
487
  },
488
  {
489
  "epoch": 9.160913436006373,
490
+ "grad_norm": 0.3261110782623291,
491
+ "learning_rate": 1.4776951672862455e-05,
492
+ "loss": 0.5256,
493
  "step": 34500
494
  },
495
  {
496
  "epoch": 9.293680297397769,
497
+ "grad_norm": 0.2750456929206848,
498
+ "learning_rate": 1.4266309898280159e-05,
499
+ "loss": 0.5218,
500
  "step": 35000
501
  },
502
  {
503
  "epoch": 9.426447158789166,
504
+ "grad_norm": 0.26470229029655457,
505
+ "learning_rate": 1.3755668123697864e-05,
506
+ "loss": 0.522,
507
  "step": 35500
508
  },
509
  {
510
  "epoch": 9.559214020180562,
511
+ "grad_norm": 0.24200379848480225,
512
+ "learning_rate": 1.3245026349115568e-05,
513
+ "loss": 0.5222,
514
  "step": 36000
515
  },
516
  {
517
  "epoch": 9.69198088157196,
518
+ "grad_norm": 0.30407610535621643,
519
+ "learning_rate": 1.2734384574533272e-05,
520
+ "loss": 0.5208,
521
  "step": 36500
522
  },
523
  {
524
  "epoch": 9.824747742963357,
525
+ "grad_norm": 0.26741334795951843,
526
+ "learning_rate": 1.2224764083500144e-05,
527
+ "loss": 0.5185,
528
  "step": 37000
529
  },
530
  {
531
  "epoch": 9.957514604354753,
532
+ "grad_norm": 0.2811224162578583,
533
+ "learning_rate": 1.1714122308917848e-05,
534
+ "loss": 0.515,
535
  "step": 37500
536
+ },
537
+ {
538
+ "epoch": 10.09028146574615,
539
+ "grad_norm": 0.2725277543067932,
540
+ "learning_rate": 1.1204501817884718e-05,
541
+ "loss": 0.517,
542
+ "step": 38000
543
+ },
544
+ {
545
+ "epoch": 10.223048327137546,
546
+ "grad_norm": 0.31137147545814514,
547
+ "learning_rate": 1.0693860043302423e-05,
548
+ "loss": 0.5155,
549
+ "step": 38500
550
+ },
551
+ {
552
+ "epoch": 10.355815188528943,
553
+ "grad_norm": 0.26093247532844543,
554
+ "learning_rate": 1.0183218268720129e-05,
555
+ "loss": 0.5148,
556
+ "step": 39000
557
+ },
558
+ {
559
+ "epoch": 10.488582049920339,
560
+ "grad_norm": 0.2848931550979614,
561
+ "learning_rate": 9.672576494137833e-06,
562
+ "loss": 0.5134,
563
+ "step": 39500
564
+ },
565
+ {
566
+ "epoch": 10.621348911311737,
567
+ "grad_norm": 0.24945715069770813,
568
+ "learning_rate": 9.161934719555536e-06,
569
+ "loss": 0.5136,
570
+ "step": 40000
571
+ },
572
+ {
573
+ "epoch": 10.754115772703134,
574
+ "grad_norm": 0.28524720668792725,
575
+ "learning_rate": 8.651292944973242e-06,
576
+ "loss": 0.5167,
577
+ "step": 40500
578
+ },
579
+ {
580
+ "epoch": 10.88688263409453,
581
+ "grad_norm": 0.29454296827316284,
582
+ "learning_rate": 8.140651170390948e-06,
583
+ "loss": 0.5151,
584
+ "step": 41000
585
+ },
586
+ {
587
+ "epoch": 11.019649495485927,
588
+ "grad_norm": 0.30919119715690613,
589
+ "learning_rate": 7.632051962906982e-06,
590
+ "loss": 0.5121,
591
+ "step": 41500
592
+ },
593
+ {
594
+ "epoch": 11.152416356877323,
595
+ "grad_norm": 0.36948204040527344,
596
+ "learning_rate": 7.121410188324687e-06,
597
+ "loss": 0.5146,
598
+ "step": 42000
599
+ },
600
+ {
601
+ "epoch": 11.28518321826872,
602
+ "grad_norm": 0.2883196771144867,
603
+ "learning_rate": 6.610768413742392e-06,
604
+ "loss": 0.5118,
605
+ "step": 42500
606
+ },
607
+ {
608
+ "epoch": 11.417950079660116,
609
+ "grad_norm": 0.2851753532886505,
610
+ "learning_rate": 6.100126639160097e-06,
611
+ "loss": 0.5092,
612
+ "step": 43000
613
+ },
614
+ {
615
+ "epoch": 11.550716941051514,
616
+ "grad_norm": 0.27395716309547424,
617
+ "learning_rate": 5.5894848645778016e-06,
618
+ "loss": 0.5044,
619
+ "step": 43500
620
+ },
621
+ {
622
+ "epoch": 11.683483802442911,
623
+ "grad_norm": 0.2726575434207916,
624
+ "learning_rate": 5.078843089995506e-06,
625
+ "loss": 0.5106,
626
+ "step": 44000
627
+ },
628
+ {
629
+ "epoch": 11.816250663834307,
630
+ "grad_norm": 0.29727038741111755,
631
+ "learning_rate": 4.568201315413211e-06,
632
+ "loss": 0.5095,
633
+ "step": 44500
634
+ },
635
+ {
636
+ "epoch": 11.949017525225704,
637
+ "grad_norm": 0.2694978713989258,
638
+ "learning_rate": 4.0575595408309166e-06,
639
+ "loss": 0.5118,
640
+ "step": 45000
641
+ },
642
+ {
643
+ "epoch": 12.0817843866171,
644
+ "grad_norm": 0.2318025678396225,
645
+ "learning_rate": 3.5469177662486213e-06,
646
+ "loss": 0.5126,
647
+ "step": 45500
648
+ },
649
+ {
650
+ "epoch": 12.214551248008497,
651
+ "grad_norm": 0.27759501338005066,
652
+ "learning_rate": 3.0362759916663264e-06,
653
+ "loss": 0.5081,
654
+ "step": 46000
655
+ },
656
+ {
657
+ "epoch": 12.347318109399893,
658
+ "grad_norm": 0.2869941294193268,
659
+ "learning_rate": 2.525634217084031e-06,
660
+ "loss": 0.5046,
661
+ "step": 46500
662
+ },
663
+ {
664
+ "epoch": 12.48008497079129,
665
+ "grad_norm": 0.32994431257247925,
666
+ "learning_rate": 2.0149924425017362e-06,
667
+ "loss": 0.5104,
668
+ "step": 47000
669
+ },
670
+ {
671
+ "epoch": 12.612851832182688,
672
+ "grad_norm": 0.28273916244506836,
673
+ "learning_rate": 1.5053719514686058e-06,
674
+ "loss": 0.5036,
675
+ "step": 47500
676
+ },
677
+ {
678
+ "epoch": 12.745618693574084,
679
+ "grad_norm": 0.2604888379573822,
680
+ "learning_rate": 9.947301768863107e-07,
681
+ "loss": 0.5086,
682
+ "step": 48000
683
  }
684
  ],
685
  "logging_steps": 500,
686
+ "max_steps": 48958,
687
  "num_input_tokens_seen": 0,
688
+ "num_train_epochs": 13,
689
  "save_steps": 500,
690
  "stateful_callbacks": {
691
  "TrainerControl": {
 
699
  "attributes": {}
700
  }
701
  },
702
+ "total_flos": 1.0393032276836352e+17,
703
  "train_batch_size": 32,
704
  "trial_name": null,
705
  "trial_params": null
checkpoints/{checkpoint-37500 → checkpoint-48000}/training_args.bin RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:344c4c96587b5961c7260f4c9743524e13a2580c248f0b960480131c9cd7dc77
3
  size 5240
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e94179d735e9ead00b90fad45af99e009779a12dba4e32a7dde92da29b59e62
3
  size 5240
checkpoints/{checkpoint-37500 → checkpoint-48500}/config.json RENAMED
File without changes
checkpoints/{checkpoint-37500 → checkpoint-48500}/generation_config.json RENAMED
File without changes
checkpoints/{checkpoint-37500 → checkpoint-48500}/model.safetensors RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a338402e3bf19f5af3748021e8a94171ceca1c35df5582f413ff92c5d7228a92
3
  size 242041896
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ad7908f1a4a883ec2a05b25bc3adea9d943ff8aca1b6ea1dbfbe7566f2567d29
3
  size 242041896
checkpoints/{checkpoint-37000 → checkpoint-48500}/optimizer.pt RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3c9e020b49ebb44ca57e84d793a9643e505fb8255d4d357edc119fd54a72f91b
3
  size 484163514
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5690c8f1770608957d58ee1669185db353a51865711ab0bf64130487c7819403
3
  size 484163514
checkpoints/{checkpoint-37660 → checkpoint-48500}/rng_state.pth RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:c93c42240bdb62b933fb7098be37ca6ca9d1c6a51fc610080600494b7e572605
3
  size 14244
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddfcaf6362c874707d13984943bc1adea41c31767683dbe5609d23dc5ebfbeca
3
  size 14244
checkpoints/{checkpoint-37500 → checkpoint-48500}/scaler.pt RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:d417ea58bb57bcc1fcfdc67de914d6ad404bafbe873af35078ff3406669f3f98
3
  size 988
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59e87ed4c6364d70f1e30a51af92bf3a98a2981d4fdf0ef1f2dd5fd5300af10e
3
  size 988
checkpoints/{checkpoint-37500 → checkpoint-48500}/scheduler.pt RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:eb8fd32604fbb9aabbcbeef28b92eac4f06ac2ae6413a4773092416494ae47d7
3
  size 1064
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4fdc49e90537dfdf6d385a7dd2e81cb630ecb40b0032ba11685e60e793c47af2
3
  size 1064
checkpoints/{checkpoint-37500 → checkpoint-48500}/special_tokens_map.json RENAMED
File without changes
checkpoints/checkpoint-48500/spiece.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
3
+ size 791656
checkpoints/{checkpoint-37500 → checkpoint-48500}/tokenizer.json RENAMED
File without changes
checkpoints/{checkpoint-37500 → checkpoint-48500}/tokenizer_config.json RENAMED
File without changes
checkpoints/{checkpoint-37000 → checkpoint-48500}/trainer_state.json RENAMED
@@ -2,536 +2,697 @@
2
  "best_global_step": null,
3
  "best_metric": null,
4
  "best_model_checkpoint": null,
5
- "epoch": 9.824747742963357,
6
  "eval_steps": 500,
7
- "global_step": 37000,
8
  "is_hyper_param_search": false,
9
  "is_local_process_zero": true,
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
  "epoch": 0.1327668613913967,
14
- "grad_norm": 0.7285390496253967,
15
- "learning_rate": 4.934014869888476e-05,
16
- "loss": 2.9202,
17
  "step": 500
18
  },
19
  {
20
  "epoch": 0.2655337227827934,
21
- "grad_norm": 0.6645896434783936,
22
- "learning_rate": 4.867631439192778e-05,
23
- "loss": 1.7398,
24
  "step": 1000
25
  },
26
  {
27
  "epoch": 0.3983005841741901,
28
- "grad_norm": 1.236431360244751,
29
- "learning_rate": 4.8012480084970795e-05,
30
- "loss": 1.4128,
31
  "step": 1500
32
  },
33
  {
34
  "epoch": 0.5310674455655868,
35
- "grad_norm": 0.4946064054965973,
36
- "learning_rate": 4.734864577801381e-05,
37
- "loss": 1.2492,
38
  "step": 2000
39
  },
40
  {
41
  "epoch": 0.6638343069569835,
42
- "grad_norm": 0.4114533066749573,
43
- "learning_rate": 4.668481147105683e-05,
44
- "loss": 1.1439,
45
  "step": 2500
46
  },
47
  {
48
  "epoch": 0.7966011683483802,
49
- "grad_norm": 0.4554953873157501,
50
- "learning_rate": 4.602097716409984e-05,
51
- "loss": 1.0689,
52
  "step": 3000
53
  },
54
  {
55
  "epoch": 0.929368029739777,
56
- "grad_norm": 0.41119593381881714,
57
- "learning_rate": 4.5357142857142856e-05,
58
- "loss": 1.0057,
59
  "step": 3500
60
  },
61
  {
62
  "epoch": 1.0621348911311737,
63
- "grad_norm": 0.40032336115837097,
64
- "learning_rate": 4.4693308550185877e-05,
65
- "loss": 0.9672,
66
  "step": 4000
67
  },
68
  {
69
  "epoch": 1.1949017525225702,
70
- "grad_norm": 0.3746163845062256,
71
- "learning_rate": 4.402947424322889e-05,
72
- "loss": 0.9234,
73
  "step": 4500
74
  },
75
  {
76
  "epoch": 1.327668613913967,
77
- "grad_norm": 0.4201946556568146,
78
- "learning_rate": 4.3365639936271904e-05,
79
- "loss": 0.8978,
80
  "step": 5000
81
  },
82
  {
83
  "epoch": 1.4604354753053639,
84
- "grad_norm": 0.48306697607040405,
85
- "learning_rate": 4.2701805629314924e-05,
86
- "loss": 0.8672,
87
  "step": 5500
88
  },
89
  {
90
  "epoch": 1.5932023366967605,
91
- "grad_norm": 0.348651647567749,
92
- "learning_rate": 4.2037971322357945e-05,
93
- "loss": 0.8373,
94
  "step": 6000
95
  },
96
  {
97
  "epoch": 1.725969198088157,
98
- "grad_norm": 0.3810581862926483,
99
- "learning_rate": 4.137413701540096e-05,
100
- "loss": 0.8161,
101
  "step": 6500
102
  },
103
  {
104
  "epoch": 1.858736059479554,
105
- "grad_norm": 0.41756191849708557,
106
- "learning_rate": 4.071030270844398e-05,
107
- "loss": 0.8011,
108
  "step": 7000
109
  },
110
  {
111
  "epoch": 1.9915029208709507,
112
- "grad_norm": 0.3746052384376526,
113
- "learning_rate": 4.004646840148699e-05,
114
- "loss": 0.7917,
115
  "step": 7500
116
  },
117
  {
118
  "epoch": 2.1242697822623473,
119
- "grad_norm": 0.3314072787761688,
120
- "learning_rate": 3.9382634094530006e-05,
121
- "loss": 0.772,
122
  "step": 8000
123
  },
124
  {
125
  "epoch": 2.257036643653744,
126
- "grad_norm": 0.36195287108421326,
127
- "learning_rate": 3.871879978757303e-05,
128
- "loss": 0.7519,
129
  "step": 8500
130
  },
131
  {
132
  "epoch": 2.3898035050451405,
133
- "grad_norm": 0.36162152886390686,
134
- "learning_rate": 3.805496548061604e-05,
135
- "loss": 0.744,
136
  "step": 9000
137
  },
138
  {
139
  "epoch": 2.5225703664365375,
140
- "grad_norm": 0.3393162786960602,
141
- "learning_rate": 3.739245884227297e-05,
142
- "loss": 0.7317,
143
  "step": 9500
144
  },
145
  {
146
  "epoch": 2.655337227827934,
147
- "grad_norm": 0.42738890647888184,
148
- "learning_rate": 3.67299522039299e-05,
149
- "loss": 0.7227,
150
  "step": 10000
151
  },
152
  {
153
  "epoch": 2.7881040892193307,
154
- "grad_norm": 0.34136396646499634,
155
- "learning_rate": 3.6066117896972915e-05,
156
- "loss": 0.7084,
157
  "step": 10500
158
  },
159
  {
160
  "epoch": 2.9208709506107278,
161
- "grad_norm": 0.33245041966438293,
162
- "learning_rate": 3.5402283590015936e-05,
163
- "loss": 0.6993,
164
  "step": 11000
165
  },
166
  {
167
  "epoch": 3.0536378120021244,
168
- "grad_norm": 0.46563392877578735,
169
- "learning_rate": 3.473844928305895e-05,
170
- "loss": 0.6865,
171
  "step": 11500
172
  },
173
  {
174
  "epoch": 3.186404673393521,
175
- "grad_norm": 0.3868368864059448,
176
- "learning_rate": 3.407461497610196e-05,
177
- "loss": 0.6822,
178
  "step": 12000
179
  },
180
  {
181
  "epoch": 3.3191715347849176,
182
- "grad_norm": 0.30418872833251953,
183
- "learning_rate": 3.3410780669144984e-05,
184
- "loss": 0.6775,
185
  "step": 12500
186
  },
187
  {
188
  "epoch": 3.451938396176314,
189
- "grad_norm": 0.34485992789268494,
190
- "learning_rate": 3.2746946362188e-05,
191
- "loss": 0.6713,
192
  "step": 13000
193
  },
194
  {
195
  "epoch": 3.584705257567711,
196
- "grad_norm": 0.33921709656715393,
197
- "learning_rate": 3.208311205523102e-05,
198
- "loss": 0.658,
199
  "step": 13500
200
  },
201
  {
202
  "epoch": 3.717472118959108,
203
- "grad_norm": 0.36646100878715515,
204
- "learning_rate": 3.141927774827403e-05,
205
- "loss": 0.6516,
206
  "step": 14000
207
  },
208
  {
209
  "epoch": 3.8502389803505044,
210
- "grad_norm": 0.32367828488349915,
211
- "learning_rate": 3.075544344131705e-05,
212
- "loss": 0.6479,
213
  "step": 14500
214
  },
215
  {
216
  "epoch": 3.9830058417419014,
217
- "grad_norm": 0.32735565304756165,
218
- "learning_rate": 3.0091609134360066e-05,
219
- "loss": 0.6423,
220
  "step": 15000
221
  },
222
  {
223
  "epoch": 4.115772703133298,
224
- "grad_norm": 0.43194663524627686,
225
- "learning_rate": 2.9427774827403083e-05,
226
- "loss": 0.6416,
227
  "step": 15500
228
  },
229
  {
230
  "epoch": 4.248539564524695,
231
- "grad_norm": 0.29106882214546204,
232
- "learning_rate": 2.8763940520446096e-05,
233
- "loss": 0.6318,
234
  "step": 16000
235
  },
236
  {
237
  "epoch": 4.381306425916091,
238
- "grad_norm": 0.2671768069267273,
239
- "learning_rate": 2.810143388210303e-05,
240
- "loss": 0.6311,
241
  "step": 16500
242
  },
243
  {
244
  "epoch": 4.514073287307488,
245
- "grad_norm": 0.311146080493927,
246
- "learning_rate": 2.7437599575146044e-05,
247
- "loss": 0.6243,
248
  "step": 17000
249
  },
250
  {
251
  "epoch": 4.646840148698884,
252
- "grad_norm": 0.3101503551006317,
253
- "learning_rate": 2.677376526818906e-05,
254
- "loss": 0.6157,
255
  "step": 17500
256
  },
257
  {
258
  "epoch": 4.779607010090281,
259
- "grad_norm": 0.3017677366733551,
260
- "learning_rate": 2.6109930961232075e-05,
261
- "loss": 0.6086,
262
  "step": 18000
263
  },
264
  {
265
  "epoch": 4.9123738714816785,
266
- "grad_norm": 0.31505176424980164,
267
- "learning_rate": 2.5446096654275092e-05,
268
- "loss": 0.6146,
269
  "step": 18500
270
  },
271
  {
272
  "epoch": 5.045140732873075,
273
- "grad_norm": 0.35715609788894653,
274
- "learning_rate": 2.4783590015932023e-05,
275
- "loss": 0.602,
276
  "step": 19000
277
  },
278
  {
279
  "epoch": 5.177907594264472,
280
- "grad_norm": 0.2930135726928711,
281
- "learning_rate": 2.4119755708975043e-05,
282
- "loss": 0.6024,
283
  "step": 19500
284
  },
285
  {
286
  "epoch": 5.310674455655868,
287
- "grad_norm": 0.3474890887737274,
288
- "learning_rate": 2.3455921402018057e-05,
289
- "loss": 0.6002,
290
  "step": 20000
291
  },
292
  {
293
  "epoch": 5.443441317047265,
294
- "grad_norm": 0.29057538509368896,
295
- "learning_rate": 2.2792087095061074e-05,
296
- "loss": 0.6006,
297
  "step": 20500
298
  },
299
  {
300
  "epoch": 5.5762081784386615,
301
- "grad_norm": 0.3273596167564392,
302
- "learning_rate": 2.212825278810409e-05,
303
- "loss": 0.5912,
304
  "step": 21000
305
  },
306
  {
307
  "epoch": 5.708975039830058,
308
- "grad_norm": 0.27121296525001526,
309
- "learning_rate": 2.146574614976102e-05,
310
- "loss": 0.593,
311
  "step": 21500
312
  },
313
  {
314
  "epoch": 5.8417419012214555,
315
- "grad_norm": 0.29718518257141113,
316
- "learning_rate": 2.0801911842804035e-05,
317
- "loss": 0.591,
318
  "step": 22000
319
  },
320
  {
321
  "epoch": 5.974508762612852,
322
- "grad_norm": 0.32410022616386414,
323
- "learning_rate": 2.0138077535847052e-05,
324
- "loss": 0.5857,
325
  "step": 22500
326
  },
327
  {
328
  "epoch": 6.107275624004249,
329
- "grad_norm": 0.2846163213253021,
330
- "learning_rate": 1.9475570897503983e-05,
331
- "loss": 0.5802,
332
  "step": 23000
333
  },
334
  {
335
  "epoch": 6.240042485395645,
336
- "grad_norm": 0.30459314584732056,
337
- "learning_rate": 1.8811736590547e-05,
338
- "loss": 0.5806,
339
  "step": 23500
340
  },
341
  {
342
  "epoch": 6.372809346787042,
343
- "grad_norm": 0.5301333069801331,
344
- "learning_rate": 1.8147902283590017e-05,
345
- "loss": 0.5789,
346
  "step": 24000
347
  },
348
  {
349
  "epoch": 6.5055762081784385,
350
- "grad_norm": 0.2727649509906769,
351
- "learning_rate": 1.7484067976633034e-05,
352
- "loss": 0.5783,
353
  "step": 24500
354
  },
355
  {
356
  "epoch": 6.638343069569835,
357
- "grad_norm": 0.3003462255001068,
358
- "learning_rate": 1.682023366967605e-05,
359
- "loss": 0.5765,
360
  "step": 25000
361
  },
362
  {
363
  "epoch": 6.771109930961232,
364
- "grad_norm": 0.35589084029197693,
365
- "learning_rate": 1.6157727031332982e-05,
366
- "loss": 0.5787,
367
  "step": 25500
368
  },
369
  {
370
  "epoch": 6.903876792352628,
371
- "grad_norm": 0.2738860845565796,
372
- "learning_rate": 1.5493892724375996e-05,
373
- "loss": 0.5734,
374
  "step": 26000
375
  },
376
  {
377
  "epoch": 7.036643653744026,
378
- "grad_norm": 0.42223164439201355,
379
- "learning_rate": 1.4830058417419013e-05,
380
- "loss": 0.5704,
381
  "step": 26500
382
  },
383
  {
384
  "epoch": 7.169410515135422,
385
- "grad_norm": 0.2938649654388428,
386
- "learning_rate": 1.4166224110462028e-05,
387
- "loss": 0.5686,
388
  "step": 27000
389
  },
390
  {
391
  "epoch": 7.302177376526819,
392
- "grad_norm": 0.275078147649765,
393
- "learning_rate": 1.3503717472118959e-05,
394
- "loss": 0.5666,
395
  "step": 27500
396
  },
397
  {
398
  "epoch": 7.434944237918216,
399
- "grad_norm": 0.35505712032318115,
400
- "learning_rate": 1.2839883165161976e-05,
401
- "loss": 0.5631,
402
  "step": 28000
403
  },
404
  {
405
  "epoch": 7.567711099309612,
406
- "grad_norm": 0.2507877051830292,
407
- "learning_rate": 1.2176048858204993e-05,
408
- "loss": 0.5688,
409
  "step": 28500
410
  },
411
  {
412
  "epoch": 7.700477960701009,
413
- "grad_norm": 0.2846459746360779,
414
- "learning_rate": 1.1512214551248008e-05,
415
- "loss": 0.5594,
416
  "step": 29000
417
  },
418
  {
419
  "epoch": 7.833244822092405,
420
- "grad_norm": 0.31158626079559326,
421
- "learning_rate": 1.0848380244291025e-05,
422
- "loss": 0.5653,
423
  "step": 29500
424
  },
425
  {
426
  "epoch": 7.966011683483803,
427
- "grad_norm": 0.2899467647075653,
428
- "learning_rate": 1.0184545937334042e-05,
429
- "loss": 0.562,
430
  "step": 30000
431
  },
432
  {
433
  "epoch": 8.098778544875199,
434
- "grad_norm": 0.27900761365890503,
435
- "learning_rate": 9.520711630377058e-06,
436
- "loss": 0.559,
437
  "step": 30500
438
  },
439
  {
440
  "epoch": 8.231545406266596,
441
- "grad_norm": 0.29301363229751587,
442
- "learning_rate": 8.856877323420075e-06,
443
- "loss": 0.5604,
444
  "step": 31000
445
  },
446
  {
447
  "epoch": 8.364312267657992,
448
- "grad_norm": 0.2812318801879883,
449
- "learning_rate": 8.19304301646309e-06,
450
- "loss": 0.559,
451
  "step": 31500
452
  },
453
  {
454
  "epoch": 8.49707912904939,
455
- "grad_norm": 0.27755317091941833,
456
- "learning_rate": 7.530536378120022e-06,
457
- "loss": 0.5527,
458
  "step": 32000
459
  },
460
  {
461
  "epoch": 8.629845990440787,
462
- "grad_norm": 0.3323802053928375,
463
- "learning_rate": 6.866702071163038e-06,
464
- "loss": 0.5576,
465
  "step": 32500
466
  },
467
  {
468
  "epoch": 8.762612851832182,
469
- "grad_norm": 0.2453739196062088,
470
- "learning_rate": 6.202867764206054e-06,
471
- "loss": 0.557,
472
  "step": 33000
473
  },
474
  {
475
  "epoch": 8.89537971322358,
476
- "grad_norm": 0.28488314151763916,
477
- "learning_rate": 5.539033457249071e-06,
478
- "loss": 0.5586,
479
  "step": 33500
480
  },
481
  {
482
  "epoch": 9.028146574614976,
483
- "grad_norm": 0.2731677293777466,
484
- "learning_rate": 4.876526818906001e-06,
485
- "loss": 0.5529,
486
  "step": 34000
487
  },
488
  {
489
  "epoch": 9.160913436006373,
490
- "grad_norm": 0.34274822473526,
491
- "learning_rate": 4.214020180562932e-06,
492
- "loss": 0.5562,
493
  "step": 34500
494
  },
495
  {
496
  "epoch": 9.293680297397769,
497
- "grad_norm": 0.2875533103942871,
498
- "learning_rate": 3.550185873605948e-06,
499
- "loss": 0.5528,
500
  "step": 35000
501
  },
502
  {
503
  "epoch": 9.426447158789166,
504
- "grad_norm": 0.2516155242919922,
505
- "learning_rate": 2.8863515666489647e-06,
506
- "loss": 0.5538,
507
  "step": 35500
508
  },
509
  {
510
  "epoch": 9.559214020180562,
511
- "grad_norm": 0.2524682283401489,
512
- "learning_rate": 2.222517259691981e-06,
513
- "loss": 0.5546,
514
  "step": 36000
515
  },
516
  {
517
  "epoch": 9.69198088157196,
518
- "grad_norm": 0.25429150462150574,
519
- "learning_rate": 1.5586829527349974e-06,
520
- "loss": 0.5537,
521
  "step": 36500
522
  },
523
  {
524
  "epoch": 9.824747742963357,
525
- "grad_norm": 0.2699441611766815,
526
- "learning_rate": 8.948486457780139e-07,
527
- "loss": 0.5522,
528
  "step": 37000
529
  }
530
  ],
531
  "logging_steps": 500,
532
- "max_steps": 37660,
533
  "num_input_tokens_seen": 0,
534
- "num_train_epochs": 10,
535
  "save_steps": 500,
536
  "stateful_callbacks": {
537
  "TrainerControl": {
@@ -545,7 +706,7 @@
545
  "attributes": {}
546
  }
547
  },
548
- "total_flos": 8.011321089982464e+16,
549
  "train_batch_size": 32,
550
  "trial_name": null,
551
  "trial_params": null
 
2
  "best_global_step": null,
3
  "best_metric": null,
4
  "best_model_checkpoint": null,
5
+ "epoch": 12.878385554965481,
6
  "eval_steps": 500,
7
+ "global_step": 48500,
8
  "is_hyper_param_search": false,
9
  "is_local_process_zero": true,
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
  "epoch": 0.1327668613913967,
14
+ "grad_norm": 0.9114758372306824,
15
+ "learning_rate": 4.9492422076065206e-05,
16
+ "loss": 2.9213,
17
  "step": 500
18
  },
19
  {
20
  "epoch": 0.2655337227827934,
21
+ "grad_norm": 0.8418619632720947,
22
+ "learning_rate": 4.89817803014829e-05,
23
+ "loss": 1.7378,
24
  "step": 1000
25
  },
26
  {
27
  "epoch": 0.3983005841741901,
28
+ "grad_norm": 0.709621012210846,
29
+ "learning_rate": 4.8471138526900614e-05,
30
+ "loss": 1.4102,
31
  "step": 1500
32
  },
33
  {
34
  "epoch": 0.5310674455655868,
35
+ "grad_norm": 0.4253033995628357,
36
+ "learning_rate": 4.796049675231832e-05,
37
+ "loss": 1.247,
38
  "step": 2000
39
  },
40
  {
41
  "epoch": 0.6638343069569835,
42
+ "grad_norm": 0.4772707521915436,
43
+ "learning_rate": 4.744985497773602e-05,
44
+ "loss": 1.1408,
45
  "step": 2500
46
  },
47
  {
48
  "epoch": 0.7966011683483802,
49
+ "grad_norm": 0.43140164017677307,
50
+ "learning_rate": 4.6939213203153725e-05,
51
+ "loss": 1.0649,
52
  "step": 3000
53
  },
54
  {
55
  "epoch": 0.929368029739777,
56
+ "grad_norm": 0.39506375789642334,
57
+ "learning_rate": 4.642857142857143e-05,
58
+ "loss": 1.0004,
59
  "step": 3500
60
  },
61
  {
62
  "epoch": 1.0621348911311737,
63
+ "grad_norm": 0.3946131765842438,
64
+ "learning_rate": 4.591792965398913e-05,
65
+ "loss": 0.9611,
66
  "step": 4000
67
  },
68
  {
69
  "epoch": 1.1949017525225702,
70
+ "grad_norm": 0.3581591546535492,
71
+ "learning_rate": 4.540728787940684e-05,
72
+ "loss": 0.917,
73
  "step": 4500
74
  },
75
  {
76
  "epoch": 1.327668613913967,
77
+ "grad_norm": 0.42795246839523315,
78
+ "learning_rate": 4.489664610482455e-05,
79
+ "loss": 0.8907,
80
  "step": 5000
81
  },
82
  {
83
  "epoch": 1.4604354753053639,
84
+ "grad_norm": 0.4635187089443207,
85
+ "learning_rate": 4.4386004330242245e-05,
86
+ "loss": 0.8598,
87
  "step": 5500
88
  },
89
  {
90
  "epoch": 1.5932023366967605,
91
+ "grad_norm": 0.3534747064113617,
92
+ "learning_rate": 4.3875362555659955e-05,
93
+ "loss": 0.8295,
94
  "step": 6000
95
  },
96
  {
97
  "epoch": 1.725969198088157,
98
+ "grad_norm": 0.39130711555480957,
99
+ "learning_rate": 4.336472078107766e-05,
100
+ "loss": 0.808,
101
  "step": 6500
102
  },
103
  {
104
  "epoch": 1.858736059479554,
105
+ "grad_norm": 0.440703809261322,
106
+ "learning_rate": 4.285407900649537e-05,
107
+ "loss": 0.7927,
108
  "step": 7000
109
  },
110
  {
111
  "epoch": 1.9915029208709507,
112
+ "grad_norm": 0.3961372375488281,
113
+ "learning_rate": 4.234343723191307e-05,
114
+ "loss": 0.7828,
115
  "step": 7500
116
  },
117
  {
118
  "epoch": 2.1242697822623473,
119
+ "grad_norm": 0.3364185690879822,
120
+ "learning_rate": 4.1833816740879935e-05,
121
+ "loss": 0.7628,
122
  "step": 8000
123
  },
124
  {
125
  "epoch": 2.257036643653744,
126
+ "grad_norm": 0.34151241183280945,
127
+ "learning_rate": 4.1323174966297646e-05,
128
+ "loss": 0.7424,
129
  "step": 8500
130
  },
131
  {
132
  "epoch": 2.3898035050451405,
133
+ "grad_norm": 0.3292118310928345,
134
+ "learning_rate": 4.081253319171535e-05,
135
+ "loss": 0.7341,
136
  "step": 9000
137
  },
138
  {
139
  "epoch": 2.5225703664365375,
140
+ "grad_norm": 0.33259105682373047,
141
+ "learning_rate": 4.0301891417133054e-05,
142
+ "loss": 0.7212,
143
  "step": 9500
144
  },
145
  {
146
  "epoch": 2.655337227827934,
147
+ "grad_norm": 0.35891205072402954,
148
+ "learning_rate": 3.979124964255076e-05,
149
+ "loss": 0.712,
150
  "step": 10000
151
  },
152
  {
153
  "epoch": 2.7881040892193307,
154
+ "grad_norm": 0.3436354398727417,
155
+ "learning_rate": 3.928060786796847e-05,
156
+ "loss": 0.6975,
157
  "step": 10500
158
  },
159
  {
160
  "epoch": 2.9208709506107278,
161
+ "grad_norm": 0.3285069465637207,
162
+ "learning_rate": 3.8769966093386165e-05,
163
+ "loss": 0.6881,
164
  "step": 11000
165
  },
166
  {
167
  "epoch": 3.0536378120021244,
168
+ "grad_norm": 0.44189152121543884,
169
+ "learning_rate": 3.826034560235304e-05,
170
+ "loss": 0.6749,
171
  "step": 11500
172
  },
173
  {
174
  "epoch": 3.186404673393521,
175
+ "grad_norm": 0.34346967935562134,
176
+ "learning_rate": 3.7749703827770744e-05,
177
+ "loss": 0.6704,
178
  "step": 12000
179
  },
180
  {
181
  "epoch": 3.3191715347849176,
182
+ "grad_norm": 0.29128846526145935,
183
+ "learning_rate": 3.723906205318845e-05,
184
+ "loss": 0.6652,
185
  "step": 12500
186
  },
187
  {
188
  "epoch": 3.451938396176314,
189
+ "grad_norm": 0.3013540208339691,
190
+ "learning_rate": 3.672842027860615e-05,
191
+ "loss": 0.6588,
192
  "step": 13000
193
  },
194
  {
195
  "epoch": 3.584705257567711,
196
+ "grad_norm": 0.32138076424598694,
197
+ "learning_rate": 3.6217778504023856e-05,
198
+ "loss": 0.6453,
199
  "step": 13500
200
  },
201
  {
202
  "epoch": 3.717472118959108,
203
+ "grad_norm": 0.3408374786376953,
204
+ "learning_rate": 3.5707136729441566e-05,
205
+ "loss": 0.6388,
206
  "step": 14000
207
  },
208
  {
209
  "epoch": 3.8502389803505044,
210
+ "grad_norm": 0.9397606253623962,
211
+ "learning_rate": 3.519649495485927e-05,
212
+ "loss": 0.6349,
213
  "step": 14500
214
  },
215
  {
216
  "epoch": 3.9830058417419014,
217
+ "grad_norm": 0.3192440867424011,
218
+ "learning_rate": 3.4685853180276974e-05,
219
+ "loss": 0.6291,
220
  "step": 15000
221
  },
222
  {
223
  "epoch": 4.115772703133298,
224
+ "grad_norm": 0.3549179136753082,
225
+ "learning_rate": 3.417521140569468e-05,
226
+ "loss": 0.6278,
227
  "step": 15500
228
  },
229
  {
230
  "epoch": 4.248539564524695,
231
+ "grad_norm": 0.3110153079032898,
232
+ "learning_rate": 3.366456963111239e-05,
233
+ "loss": 0.618,
234
  "step": 16000
235
  },
236
  {
237
  "epoch": 4.381306425916091,
238
+ "grad_norm": 0.2719564735889435,
239
+ "learning_rate": 3.3153927856530086e-05,
240
+ "loss": 0.6169,
241
  "step": 16500
242
  },
243
  {
244
  "epoch": 4.514073287307488,
245
+ "grad_norm": 0.2858710289001465,
246
+ "learning_rate": 3.2643286081947796e-05,
247
+ "loss": 0.61,
248
  "step": 17000
249
  },
250
  {
251
  "epoch": 4.646840148698884,
252
+ "grad_norm": 0.31373563408851624,
253
+ "learning_rate": 3.21326443073655e-05,
254
+ "loss": 0.6011,
255
  "step": 17500
256
  },
257
  {
258
  "epoch": 4.779607010090281,
259
+ "grad_norm": 0.29438045620918274,
260
+ "learning_rate": 3.1622002532783204e-05,
261
+ "loss": 0.5938,
262
  "step": 18000
263
  },
264
  {
265
  "epoch": 4.9123738714816785,
266
+ "grad_norm": 0.3415851593017578,
267
+ "learning_rate": 3.111238204175007e-05,
268
+ "loss": 0.5992,
269
  "step": 18500
270
  },
271
  {
272
  "epoch": 5.045140732873075,
273
+ "grad_norm": 0.35383546352386475,
274
+ "learning_rate": 3.060276155071694e-05,
275
+ "loss": 0.5871,
276
  "step": 19000
277
  },
278
  {
279
  "epoch": 5.177907594264472,
280
+ "grad_norm": 0.3242381811141968,
281
+ "learning_rate": 3.009314105968381e-05,
282
+ "loss": 0.5867,
283
  "step": 19500
284
  },
285
  {
286
  "epoch": 5.310674455655868,
287
+ "grad_norm": 0.28274649381637573,
288
+ "learning_rate": 2.9582499285101516e-05,
289
+ "loss": 0.584,
290
  "step": 20000
291
  },
292
  {
293
  "epoch": 5.443441317047265,
294
+ "grad_norm": 0.3075231611728668,
295
+ "learning_rate": 2.9071857510519223e-05,
296
+ "loss": 0.584,
297
  "step": 20500
298
  },
299
  {
300
  "epoch": 5.5762081784386615,
301
+ "grad_norm": 0.29568806290626526,
302
+ "learning_rate": 2.8561215735936924e-05,
303
+ "loss": 0.5743,
304
  "step": 21000
305
  },
306
  {
307
  "epoch": 5.708975039830058,
308
+ "grad_norm": 0.32808518409729004,
309
+ "learning_rate": 2.805057396135463e-05,
310
+ "loss": 0.5757,
311
  "step": 21500
312
  },
313
  {
314
  "epoch": 5.8417419012214555,
315
+ "grad_norm": 0.256596177816391,
316
+ "learning_rate": 2.7539932186772338e-05,
317
+ "loss": 0.5735,
318
  "step": 22000
319
  },
320
  {
321
  "epoch": 5.974508762612852,
322
+ "grad_norm": 0.313557505607605,
323
+ "learning_rate": 2.702929041219004e-05,
324
+ "loss": 0.5679,
325
  "step": 22500
326
  },
327
  {
328
  "epoch": 6.107275624004249,
329
+ "grad_norm": 0.274058997631073,
330
+ "learning_rate": 2.6518648637607746e-05,
331
+ "loss": 0.562,
332
  "step": 23000
333
  },
334
  {
335
  "epoch": 6.240042485395645,
336
+ "grad_norm": 0.2777511477470398,
337
+ "learning_rate": 2.6008006863025453e-05,
338
+ "loss": 0.5619,
339
  "step": 23500
340
  },
341
  {
342
  "epoch": 6.372809346787042,
343
+ "grad_norm": 0.3301125466823578,
344
+ "learning_rate": 2.549736508844316e-05,
345
+ "loss": 0.5598,
346
  "step": 24000
347
  },
348
  {
349
  "epoch": 6.5055762081784385,
350
+ "grad_norm": 0.2844313383102417,
351
+ "learning_rate": 2.498672331386086e-05,
352
+ "loss": 0.5589,
353
  "step": 24500
354
  },
355
  {
356
  "epoch": 6.638343069569835,
357
+ "grad_norm": 0.268718421459198,
358
+ "learning_rate": 2.4476081539278568e-05,
359
+ "loss": 0.5566,
360
  "step": 25000
361
  },
362
  {
363
  "epoch": 6.771109930961232,
364
+ "grad_norm": 0.3230023980140686,
365
+ "learning_rate": 2.3965439764696272e-05,
366
+ "loss": 0.5582,
367
  "step": 25500
368
  },
369
  {
370
  "epoch": 6.903876792352628,
371
+ "grad_norm": 0.27747681736946106,
372
+ "learning_rate": 2.3454797990113976e-05,
373
+ "loss": 0.5527,
374
  "step": 26000
375
  },
376
  {
377
  "epoch": 7.036643653744026,
378
+ "grad_norm": 0.29863470792770386,
379
+ "learning_rate": 2.2945177499080848e-05,
380
+ "loss": 0.5491,
381
  "step": 26500
382
  },
383
  {
384
  "epoch": 7.169410515135422,
385
+ "grad_norm": 0.30289873480796814,
386
+ "learning_rate": 2.243453572449855e-05,
387
+ "loss": 0.5468,
388
  "step": 27000
389
  },
390
  {
391
  "epoch": 7.302177376526819,
392
+ "grad_norm": 0.2766277492046356,
393
+ "learning_rate": 2.192491523346542e-05,
394
+ "loss": 0.5444,
395
  "step": 27500
396
  },
397
  {
398
  "epoch": 7.434944237918216,
399
+ "grad_norm": 0.3069545030593872,
400
+ "learning_rate": 2.1414273458883124e-05,
401
+ "loss": 0.5403,
402
  "step": 28000
403
  },
404
  {
405
  "epoch": 7.567711099309612,
406
+ "grad_norm": 0.258329302072525,
407
+ "learning_rate": 2.090363168430083e-05,
408
+ "loss": 0.5453,
409
  "step": 28500
410
  },
411
  {
412
  "epoch": 7.700477960701009,
413
+ "grad_norm": 0.2901703119277954,
414
+ "learning_rate": 2.0392989909718535e-05,
415
+ "loss": 0.5357,
416
  "step": 29000
417
  },
418
  {
419
  "epoch": 7.833244822092405,
420
+ "grad_norm": 0.35300034284591675,
421
+ "learning_rate": 1.988234813513624e-05,
422
+ "loss": 0.541,
423
  "step": 29500
424
  },
425
  {
426
  "epoch": 7.966011683483803,
427
+ "grad_norm": 0.2620261311531067,
428
+ "learning_rate": 1.9371706360553946e-05,
429
+ "loss": 0.5371,
430
  "step": 30000
431
  },
432
  {
433
  "epoch": 8.098778544875199,
434
+ "grad_norm": 0.3098488450050354,
435
+ "learning_rate": 1.886106458597165e-05,
436
+ "loss": 0.5337,
437
  "step": 30500
438
  },
439
  {
440
  "epoch": 8.231545406266596,
441
+ "grad_norm": 0.2904013991355896,
442
+ "learning_rate": 1.8350422811389357e-05,
443
+ "loss": 0.5342,
444
  "step": 31000
445
  },
446
  {
447
  "epoch": 8.364312267657992,
448
+ "grad_norm": 0.29218047857284546,
449
+ "learning_rate": 1.783978103680706e-05,
450
+ "loss": 0.5323,
451
  "step": 31500
452
  },
453
  {
454
  "epoch": 8.49707912904939,
455
+ "grad_norm": 0.3310258090496063,
456
+ "learning_rate": 1.7329139262224765e-05,
457
+ "loss": 0.5258,
458
  "step": 32000
459
  },
460
  {
461
  "epoch": 8.629845990440787,
462
+ "grad_norm": 0.3069627583026886,
463
+ "learning_rate": 1.6818497487642472e-05,
464
+ "loss": 0.5299,
465
  "step": 32500
466
  },
467
  {
468
  "epoch": 8.762612851832182,
469
+ "grad_norm": 0.24625258147716522,
470
+ "learning_rate": 1.630887699660934e-05,
471
+ "loss": 0.5285,
472
  "step": 33000
473
  },
474
  {
475
  "epoch": 8.89537971322358,
476
+ "grad_norm": 0.26636838912963867,
477
+ "learning_rate": 1.5798235222027044e-05,
478
+ "loss": 0.5294,
479
  "step": 33500
480
  },
481
  {
482
  "epoch": 9.028146574614976,
483
+ "grad_norm": 0.2842467725276947,
484
+ "learning_rate": 1.5287593447444748e-05,
485
+ "loss": 0.5235,
486
  "step": 34000
487
  },
488
  {
489
  "epoch": 9.160913436006373,
490
+ "grad_norm": 0.3261110782623291,
491
+ "learning_rate": 1.4776951672862455e-05,
492
+ "loss": 0.5256,
493
  "step": 34500
494
  },
495
  {
496
  "epoch": 9.293680297397769,
497
+ "grad_norm": 0.2750456929206848,
498
+ "learning_rate": 1.4266309898280159e-05,
499
+ "loss": 0.5218,
500
  "step": 35000
501
  },
502
  {
503
  "epoch": 9.426447158789166,
504
+ "grad_norm": 0.26470229029655457,
505
+ "learning_rate": 1.3755668123697864e-05,
506
+ "loss": 0.522,
507
  "step": 35500
508
  },
509
  {
510
  "epoch": 9.559214020180562,
511
+ "grad_norm": 0.24200379848480225,
512
+ "learning_rate": 1.3245026349115568e-05,
513
+ "loss": 0.5222,
514
  "step": 36000
515
  },
516
  {
517
  "epoch": 9.69198088157196,
518
+ "grad_norm": 0.30407610535621643,
519
+ "learning_rate": 1.2734384574533272e-05,
520
+ "loss": 0.5208,
521
  "step": 36500
522
  },
523
  {
524
  "epoch": 9.824747742963357,
525
+ "grad_norm": 0.26741334795951843,
526
+ "learning_rate": 1.2224764083500144e-05,
527
+ "loss": 0.5185,
528
  "step": 37000
529
+ },
530
+ {
531
+ "epoch": 9.957514604354753,
532
+ "grad_norm": 0.2811224162578583,
533
+ "learning_rate": 1.1714122308917848e-05,
534
+ "loss": 0.515,
535
+ "step": 37500
536
+ },
537
+ {
538
+ "epoch": 10.09028146574615,
539
+ "grad_norm": 0.2725277543067932,
540
+ "learning_rate": 1.1204501817884718e-05,
541
+ "loss": 0.517,
542
+ "step": 38000
543
+ },
544
+ {
545
+ "epoch": 10.223048327137546,
546
+ "grad_norm": 0.31137147545814514,
547
+ "learning_rate": 1.0693860043302423e-05,
548
+ "loss": 0.5155,
549
+ "step": 38500
550
+ },
551
+ {
552
+ "epoch": 10.355815188528943,
553
+ "grad_norm": 0.26093247532844543,
554
+ "learning_rate": 1.0183218268720129e-05,
555
+ "loss": 0.5148,
556
+ "step": 39000
557
+ },
558
+ {
559
+ "epoch": 10.488582049920339,
560
+ "grad_norm": 0.2848931550979614,
561
+ "learning_rate": 9.672576494137833e-06,
562
+ "loss": 0.5134,
563
+ "step": 39500
564
+ },
565
+ {
566
+ "epoch": 10.621348911311737,
567
+ "grad_norm": 0.24945715069770813,
568
+ "learning_rate": 9.161934719555536e-06,
569
+ "loss": 0.5136,
570
+ "step": 40000
571
+ },
572
+ {
573
+ "epoch": 10.754115772703134,
574
+ "grad_norm": 0.28524720668792725,
575
+ "learning_rate": 8.651292944973242e-06,
576
+ "loss": 0.5167,
577
+ "step": 40500
578
+ },
579
+ {
580
+ "epoch": 10.88688263409453,
581
+ "grad_norm": 0.29454296827316284,
582
+ "learning_rate": 8.140651170390948e-06,
583
+ "loss": 0.5151,
584
+ "step": 41000
585
+ },
586
+ {
587
+ "epoch": 11.019649495485927,
588
+ "grad_norm": 0.30919119715690613,
589
+ "learning_rate": 7.632051962906982e-06,
590
+ "loss": 0.5121,
591
+ "step": 41500
592
+ },
593
+ {
594
+ "epoch": 11.152416356877323,
595
+ "grad_norm": 0.36948204040527344,
596
+ "learning_rate": 7.121410188324687e-06,
597
+ "loss": 0.5146,
598
+ "step": 42000
599
+ },
600
+ {
601
+ "epoch": 11.28518321826872,
602
+ "grad_norm": 0.2883196771144867,
603
+ "learning_rate": 6.610768413742392e-06,
604
+ "loss": 0.5118,
605
+ "step": 42500
606
+ },
607
+ {
608
+ "epoch": 11.417950079660116,
609
+ "grad_norm": 0.2851753532886505,
610
+ "learning_rate": 6.100126639160097e-06,
611
+ "loss": 0.5092,
612
+ "step": 43000
613
+ },
614
+ {
615
+ "epoch": 11.550716941051514,
616
+ "grad_norm": 0.27395716309547424,
617
+ "learning_rate": 5.5894848645778016e-06,
618
+ "loss": 0.5044,
619
+ "step": 43500
620
+ },
621
+ {
622
+ "epoch": 11.683483802442911,
623
+ "grad_norm": 0.2726575434207916,
624
+ "learning_rate": 5.078843089995506e-06,
625
+ "loss": 0.5106,
626
+ "step": 44000
627
+ },
628
+ {
629
+ "epoch": 11.816250663834307,
630
+ "grad_norm": 0.29727038741111755,
631
+ "learning_rate": 4.568201315413211e-06,
632
+ "loss": 0.5095,
633
+ "step": 44500
634
+ },
635
+ {
636
+ "epoch": 11.949017525225704,
637
+ "grad_norm": 0.2694978713989258,
638
+ "learning_rate": 4.0575595408309166e-06,
639
+ "loss": 0.5118,
640
+ "step": 45000
641
+ },
642
+ {
643
+ "epoch": 12.0817843866171,
644
+ "grad_norm": 0.2318025678396225,
645
+ "learning_rate": 3.5469177662486213e-06,
646
+ "loss": 0.5126,
647
+ "step": 45500
648
+ },
649
+ {
650
+ "epoch": 12.214551248008497,
651
+ "grad_norm": 0.27759501338005066,
652
+ "learning_rate": 3.0362759916663264e-06,
653
+ "loss": 0.5081,
654
+ "step": 46000
655
+ },
656
+ {
657
+ "epoch": 12.347318109399893,
658
+ "grad_norm": 0.2869941294193268,
659
+ "learning_rate": 2.525634217084031e-06,
660
+ "loss": 0.5046,
661
+ "step": 46500
662
+ },
663
+ {
664
+ "epoch": 12.48008497079129,
665
+ "grad_norm": 0.32994431257247925,
666
+ "learning_rate": 2.0149924425017362e-06,
667
+ "loss": 0.5104,
668
+ "step": 47000
669
+ },
670
+ {
671
+ "epoch": 12.612851832182688,
672
+ "grad_norm": 0.28273916244506836,
673
+ "learning_rate": 1.5053719514686058e-06,
674
+ "loss": 0.5036,
675
+ "step": 47500
676
+ },
677
+ {
678
+ "epoch": 12.745618693574084,
679
+ "grad_norm": 0.2604888379573822,
680
+ "learning_rate": 9.947301768863107e-07,
681
+ "loss": 0.5086,
682
+ "step": 48000
683
+ },
684
+ {
685
+ "epoch": 12.878385554965481,
686
+ "grad_norm": 0.287817120552063,
687
+ "learning_rate": 4.851096858531803e-07,
688
+ "loss": 0.5126,
689
+ "step": 48500
690
  }
691
  ],
692
  "logging_steps": 500,
693
+ "max_steps": 48958,
694
  "num_input_tokens_seen": 0,
695
+ "num_train_epochs": 13,
696
  "save_steps": 500,
697
  "stateful_callbacks": {
698
  "TrainerControl": {
 
706
  "attributes": {}
707
  }
708
  },
709
+ "total_flos": 1.0501305718013952e+17,
710
  "train_batch_size": 32,
711
  "trial_name": null,
712
  "trial_params": null
checkpoints/{checkpoint-37000 → checkpoint-48500}/training_args.bin RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:344c4c96587b5961c7260f4c9743524e13a2580c248f0b960480131c9cd7dc77
3
  size 5240
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e94179d735e9ead00b90fad45af99e009779a12dba4e32a7dde92da29b59e62
3
  size 5240
checkpoints/{checkpoint-37660 → checkpoint-48958}/config.json RENAMED
File without changes
checkpoints/{checkpoint-37660 → checkpoint-48958}/generation_config.json RENAMED
File without changes
checkpoints/{checkpoint-37660 → checkpoint-48958}/model.safetensors RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b78f8b4750e01e9e992daf6187c84c8ecfad35d6183f0336f1d4661a41534df9
3
  size 242041896
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8ea98de1cde992e950903fb96553ceb84e46b447461bc9f940922b80e9bc3c6
3
  size 242041896
checkpoints/{checkpoint-37660 → checkpoint-48958}/optimizer.pt RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:866ce67c1ca64817c5bedb2cd226dd6b42eb9c6b9a197984e5c72c49f3fad0b4
3
  size 484163514
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b7c5f5d93aa888eb04d22fec7da585a62df18817ce300bb426e1c619ffd5fbd
3
  size 484163514
checkpoints/{checkpoint-37000 → checkpoint-48958}/rng_state.pth RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:2b6e4d98cfbb205caa76399a96ea3fc5c3960f39509b1e3fe6146e9237d2e38e
3
  size 14244
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95c3338e3a3d60e44d86ce6eac15796914159dbfffe2407b4e60c3ab111b52e4
3
  size 14244
checkpoints/{checkpoint-37000 → checkpoint-48958}/scaler.pt RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:d64c876d78c9e6f6ab330d55f358e5c588f452bd34137dabaa6c33001baaf827
3
  size 988
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b7a5ccbbd174b024f0fc150e65ae009070562b75b60db0af53945cd22f7011f
3
  size 988
checkpoints/{checkpoint-37660 → checkpoint-48958}/scheduler.pt RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:137105948ee6cac22e2b0464aacbd6b0a4d90a8f9ac6be52aa48e82e061b5d09
3
  size 1064
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c8faa6a70188374ad73132930e26b5735fb0a78d3a8b7f1cb788c26dc910c2fd
3
  size 1064
checkpoints/{checkpoint-37660 → checkpoint-48958}/special_tokens_map.json RENAMED
File without changes
checkpoints/checkpoint-48958/spiece.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
3
+ size 791656
checkpoints/{checkpoint-37660 → checkpoint-48958}/tokenizer.json RENAMED
File without changes
checkpoints/{checkpoint-37660 → checkpoint-48958}/tokenizer_config.json RENAMED
File without changes
checkpoints/{checkpoint-37660 → checkpoint-48958}/trainer_state.json RENAMED
@@ -2,543 +2,697 @@
2
  "best_global_step": null,
3
  "best_metric": null,
4
  "best_model_checkpoint": null,
5
- "epoch": 10.0,
6
  "eval_steps": 500,
7
- "global_step": 37660,
8
  "is_hyper_param_search": false,
9
  "is_local_process_zero": true,
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
  "epoch": 0.1327668613913967,
14
- "grad_norm": 0.7285390496253967,
15
- "learning_rate": 4.934014869888476e-05,
16
- "loss": 2.9202,
17
  "step": 500
18
  },
19
  {
20
  "epoch": 0.2655337227827934,
21
- "grad_norm": 0.6645896434783936,
22
- "learning_rate": 4.867631439192778e-05,
23
- "loss": 1.7398,
24
  "step": 1000
25
  },
26
  {
27
  "epoch": 0.3983005841741901,
28
- "grad_norm": 1.236431360244751,
29
- "learning_rate": 4.8012480084970795e-05,
30
- "loss": 1.4128,
31
  "step": 1500
32
  },
33
  {
34
  "epoch": 0.5310674455655868,
35
- "grad_norm": 0.4946064054965973,
36
- "learning_rate": 4.734864577801381e-05,
37
- "loss": 1.2492,
38
  "step": 2000
39
  },
40
  {
41
  "epoch": 0.6638343069569835,
42
- "grad_norm": 0.4114533066749573,
43
- "learning_rate": 4.668481147105683e-05,
44
- "loss": 1.1439,
45
  "step": 2500
46
  },
47
  {
48
  "epoch": 0.7966011683483802,
49
- "grad_norm": 0.4554953873157501,
50
- "learning_rate": 4.602097716409984e-05,
51
- "loss": 1.0689,
52
  "step": 3000
53
  },
54
  {
55
  "epoch": 0.929368029739777,
56
- "grad_norm": 0.41119593381881714,
57
- "learning_rate": 4.5357142857142856e-05,
58
- "loss": 1.0057,
59
  "step": 3500
60
  },
61
  {
62
  "epoch": 1.0621348911311737,
63
- "grad_norm": 0.40032336115837097,
64
- "learning_rate": 4.4693308550185877e-05,
65
- "loss": 0.9672,
66
  "step": 4000
67
  },
68
  {
69
  "epoch": 1.1949017525225702,
70
- "grad_norm": 0.3746163845062256,
71
- "learning_rate": 4.402947424322889e-05,
72
- "loss": 0.9234,
73
  "step": 4500
74
  },
75
  {
76
  "epoch": 1.327668613913967,
77
- "grad_norm": 0.4201946556568146,
78
- "learning_rate": 4.3365639936271904e-05,
79
- "loss": 0.8978,
80
  "step": 5000
81
  },
82
  {
83
  "epoch": 1.4604354753053639,
84
- "grad_norm": 0.48306697607040405,
85
- "learning_rate": 4.2701805629314924e-05,
86
- "loss": 0.8672,
87
  "step": 5500
88
  },
89
  {
90
  "epoch": 1.5932023366967605,
91
- "grad_norm": 0.348651647567749,
92
- "learning_rate": 4.2037971322357945e-05,
93
- "loss": 0.8373,
94
  "step": 6000
95
  },
96
  {
97
  "epoch": 1.725969198088157,
98
- "grad_norm": 0.3810581862926483,
99
- "learning_rate": 4.137413701540096e-05,
100
- "loss": 0.8161,
101
  "step": 6500
102
  },
103
  {
104
  "epoch": 1.858736059479554,
105
- "grad_norm": 0.41756191849708557,
106
- "learning_rate": 4.071030270844398e-05,
107
- "loss": 0.8011,
108
  "step": 7000
109
  },
110
  {
111
  "epoch": 1.9915029208709507,
112
- "grad_norm": 0.3746052384376526,
113
- "learning_rate": 4.004646840148699e-05,
114
- "loss": 0.7917,
115
  "step": 7500
116
  },
117
  {
118
  "epoch": 2.1242697822623473,
119
- "grad_norm": 0.3314072787761688,
120
- "learning_rate": 3.9382634094530006e-05,
121
- "loss": 0.772,
122
  "step": 8000
123
  },
124
  {
125
  "epoch": 2.257036643653744,
126
- "grad_norm": 0.36195287108421326,
127
- "learning_rate": 3.871879978757303e-05,
128
- "loss": 0.7519,
129
  "step": 8500
130
  },
131
  {
132
  "epoch": 2.3898035050451405,
133
- "grad_norm": 0.36162152886390686,
134
- "learning_rate": 3.805496548061604e-05,
135
- "loss": 0.744,
136
  "step": 9000
137
  },
138
  {
139
  "epoch": 2.5225703664365375,
140
- "grad_norm": 0.3393162786960602,
141
- "learning_rate": 3.739245884227297e-05,
142
- "loss": 0.7317,
143
  "step": 9500
144
  },
145
  {
146
  "epoch": 2.655337227827934,
147
- "grad_norm": 0.42738890647888184,
148
- "learning_rate": 3.67299522039299e-05,
149
- "loss": 0.7227,
150
  "step": 10000
151
  },
152
  {
153
  "epoch": 2.7881040892193307,
154
- "grad_norm": 0.34136396646499634,
155
- "learning_rate": 3.6066117896972915e-05,
156
- "loss": 0.7084,
157
  "step": 10500
158
  },
159
  {
160
  "epoch": 2.9208709506107278,
161
- "grad_norm": 0.33245041966438293,
162
- "learning_rate": 3.5402283590015936e-05,
163
- "loss": 0.6993,
164
  "step": 11000
165
  },
166
  {
167
  "epoch": 3.0536378120021244,
168
- "grad_norm": 0.46563392877578735,
169
- "learning_rate": 3.473844928305895e-05,
170
- "loss": 0.6865,
171
  "step": 11500
172
  },
173
  {
174
  "epoch": 3.186404673393521,
175
- "grad_norm": 0.3868368864059448,
176
- "learning_rate": 3.407461497610196e-05,
177
- "loss": 0.6822,
178
  "step": 12000
179
  },
180
  {
181
  "epoch": 3.3191715347849176,
182
- "grad_norm": 0.30418872833251953,
183
- "learning_rate": 3.3410780669144984e-05,
184
- "loss": 0.6775,
185
  "step": 12500
186
  },
187
  {
188
  "epoch": 3.451938396176314,
189
- "grad_norm": 0.34485992789268494,
190
- "learning_rate": 3.2746946362188e-05,
191
- "loss": 0.6713,
192
  "step": 13000
193
  },
194
  {
195
  "epoch": 3.584705257567711,
196
- "grad_norm": 0.33921709656715393,
197
- "learning_rate": 3.208311205523102e-05,
198
- "loss": 0.658,
199
  "step": 13500
200
  },
201
  {
202
  "epoch": 3.717472118959108,
203
- "grad_norm": 0.36646100878715515,
204
- "learning_rate": 3.141927774827403e-05,
205
- "loss": 0.6516,
206
  "step": 14000
207
  },
208
  {
209
  "epoch": 3.8502389803505044,
210
- "grad_norm": 0.32367828488349915,
211
- "learning_rate": 3.075544344131705e-05,
212
- "loss": 0.6479,
213
  "step": 14500
214
  },
215
  {
216
  "epoch": 3.9830058417419014,
217
- "grad_norm": 0.32735565304756165,
218
- "learning_rate": 3.0091609134360066e-05,
219
- "loss": 0.6423,
220
  "step": 15000
221
  },
222
  {
223
  "epoch": 4.115772703133298,
224
- "grad_norm": 0.43194663524627686,
225
- "learning_rate": 2.9427774827403083e-05,
226
- "loss": 0.6416,
227
  "step": 15500
228
  },
229
  {
230
  "epoch": 4.248539564524695,
231
- "grad_norm": 0.29106882214546204,
232
- "learning_rate": 2.8763940520446096e-05,
233
- "loss": 0.6318,
234
  "step": 16000
235
  },
236
  {
237
  "epoch": 4.381306425916091,
238
- "grad_norm": 0.2671768069267273,
239
- "learning_rate": 2.810143388210303e-05,
240
- "loss": 0.6311,
241
  "step": 16500
242
  },
243
  {
244
  "epoch": 4.514073287307488,
245
- "grad_norm": 0.311146080493927,
246
- "learning_rate": 2.7437599575146044e-05,
247
- "loss": 0.6243,
248
  "step": 17000
249
  },
250
  {
251
  "epoch": 4.646840148698884,
252
- "grad_norm": 0.3101503551006317,
253
- "learning_rate": 2.677376526818906e-05,
254
- "loss": 0.6157,
255
  "step": 17500
256
  },
257
  {
258
  "epoch": 4.779607010090281,
259
- "grad_norm": 0.3017677366733551,
260
- "learning_rate": 2.6109930961232075e-05,
261
- "loss": 0.6086,
262
  "step": 18000
263
  },
264
  {
265
  "epoch": 4.9123738714816785,
266
- "grad_norm": 0.31505176424980164,
267
- "learning_rate": 2.5446096654275092e-05,
268
- "loss": 0.6146,
269
  "step": 18500
270
  },
271
  {
272
  "epoch": 5.045140732873075,
273
- "grad_norm": 0.35715609788894653,
274
- "learning_rate": 2.4783590015932023e-05,
275
- "loss": 0.602,
276
  "step": 19000
277
  },
278
  {
279
  "epoch": 5.177907594264472,
280
- "grad_norm": 0.2930135726928711,
281
- "learning_rate": 2.4119755708975043e-05,
282
- "loss": 0.6024,
283
  "step": 19500
284
  },
285
  {
286
  "epoch": 5.310674455655868,
287
- "grad_norm": 0.3474890887737274,
288
- "learning_rate": 2.3455921402018057e-05,
289
- "loss": 0.6002,
290
  "step": 20000
291
  },
292
  {
293
  "epoch": 5.443441317047265,
294
- "grad_norm": 0.29057538509368896,
295
- "learning_rate": 2.2792087095061074e-05,
296
- "loss": 0.6006,
297
  "step": 20500
298
  },
299
  {
300
  "epoch": 5.5762081784386615,
301
- "grad_norm": 0.3273596167564392,
302
- "learning_rate": 2.212825278810409e-05,
303
- "loss": 0.5912,
304
  "step": 21000
305
  },
306
  {
307
  "epoch": 5.708975039830058,
308
- "grad_norm": 0.27121296525001526,
309
- "learning_rate": 2.146574614976102e-05,
310
- "loss": 0.593,
311
  "step": 21500
312
  },
313
  {
314
  "epoch": 5.8417419012214555,
315
- "grad_norm": 0.29718518257141113,
316
- "learning_rate": 2.0801911842804035e-05,
317
- "loss": 0.591,
318
  "step": 22000
319
  },
320
  {
321
  "epoch": 5.974508762612852,
322
- "grad_norm": 0.32410022616386414,
323
- "learning_rate": 2.0138077535847052e-05,
324
- "loss": 0.5857,
325
  "step": 22500
326
  },
327
  {
328
  "epoch": 6.107275624004249,
329
- "grad_norm": 0.2846163213253021,
330
- "learning_rate": 1.9475570897503983e-05,
331
- "loss": 0.5802,
332
  "step": 23000
333
  },
334
  {
335
  "epoch": 6.240042485395645,
336
- "grad_norm": 0.30459314584732056,
337
- "learning_rate": 1.8811736590547e-05,
338
- "loss": 0.5806,
339
  "step": 23500
340
  },
341
  {
342
  "epoch": 6.372809346787042,
343
- "grad_norm": 0.5301333069801331,
344
- "learning_rate": 1.8147902283590017e-05,
345
- "loss": 0.5789,
346
  "step": 24000
347
  },
348
  {
349
  "epoch": 6.5055762081784385,
350
- "grad_norm": 0.2727649509906769,
351
- "learning_rate": 1.7484067976633034e-05,
352
- "loss": 0.5783,
353
  "step": 24500
354
  },
355
  {
356
  "epoch": 6.638343069569835,
357
- "grad_norm": 0.3003462255001068,
358
- "learning_rate": 1.682023366967605e-05,
359
- "loss": 0.5765,
360
  "step": 25000
361
  },
362
  {
363
  "epoch": 6.771109930961232,
364
- "grad_norm": 0.35589084029197693,
365
- "learning_rate": 1.6157727031332982e-05,
366
- "loss": 0.5787,
367
  "step": 25500
368
  },
369
  {
370
  "epoch": 6.903876792352628,
371
- "grad_norm": 0.2738860845565796,
372
- "learning_rate": 1.5493892724375996e-05,
373
- "loss": 0.5734,
374
  "step": 26000
375
  },
376
  {
377
  "epoch": 7.036643653744026,
378
- "grad_norm": 0.42223164439201355,
379
- "learning_rate": 1.4830058417419013e-05,
380
- "loss": 0.5704,
381
  "step": 26500
382
  },
383
  {
384
  "epoch": 7.169410515135422,
385
- "grad_norm": 0.2938649654388428,
386
- "learning_rate": 1.4166224110462028e-05,
387
- "loss": 0.5686,
388
  "step": 27000
389
  },
390
  {
391
  "epoch": 7.302177376526819,
392
- "grad_norm": 0.275078147649765,
393
- "learning_rate": 1.3503717472118959e-05,
394
- "loss": 0.5666,
395
  "step": 27500
396
  },
397
  {
398
  "epoch": 7.434944237918216,
399
- "grad_norm": 0.35505712032318115,
400
- "learning_rate": 1.2839883165161976e-05,
401
- "loss": 0.5631,
402
  "step": 28000
403
  },
404
  {
405
  "epoch": 7.567711099309612,
406
- "grad_norm": 0.2507877051830292,
407
- "learning_rate": 1.2176048858204993e-05,
408
- "loss": 0.5688,
409
  "step": 28500
410
  },
411
  {
412
  "epoch": 7.700477960701009,
413
- "grad_norm": 0.2846459746360779,
414
- "learning_rate": 1.1512214551248008e-05,
415
- "loss": 0.5594,
416
  "step": 29000
417
  },
418
  {
419
  "epoch": 7.833244822092405,
420
- "grad_norm": 0.31158626079559326,
421
- "learning_rate": 1.0848380244291025e-05,
422
- "loss": 0.5653,
423
  "step": 29500
424
  },
425
  {
426
  "epoch": 7.966011683483803,
427
- "grad_norm": 0.2899467647075653,
428
- "learning_rate": 1.0184545937334042e-05,
429
- "loss": 0.562,
430
  "step": 30000
431
  },
432
  {
433
  "epoch": 8.098778544875199,
434
- "grad_norm": 0.27900761365890503,
435
- "learning_rate": 9.520711630377058e-06,
436
- "loss": 0.559,
437
  "step": 30500
438
  },
439
  {
440
  "epoch": 8.231545406266596,
441
- "grad_norm": 0.29301363229751587,
442
- "learning_rate": 8.856877323420075e-06,
443
- "loss": 0.5604,
444
  "step": 31000
445
  },
446
  {
447
  "epoch": 8.364312267657992,
448
- "grad_norm": 0.2812318801879883,
449
- "learning_rate": 8.19304301646309e-06,
450
- "loss": 0.559,
451
  "step": 31500
452
  },
453
  {
454
  "epoch": 8.49707912904939,
455
- "grad_norm": 0.27755317091941833,
456
- "learning_rate": 7.530536378120022e-06,
457
- "loss": 0.5527,
458
  "step": 32000
459
  },
460
  {
461
  "epoch": 8.629845990440787,
462
- "grad_norm": 0.3323802053928375,
463
- "learning_rate": 6.866702071163038e-06,
464
- "loss": 0.5576,
465
  "step": 32500
466
  },
467
  {
468
  "epoch": 8.762612851832182,
469
- "grad_norm": 0.2453739196062088,
470
- "learning_rate": 6.202867764206054e-06,
471
- "loss": 0.557,
472
  "step": 33000
473
  },
474
  {
475
  "epoch": 8.89537971322358,
476
- "grad_norm": 0.28488314151763916,
477
- "learning_rate": 5.539033457249071e-06,
478
- "loss": 0.5586,
479
  "step": 33500
480
  },
481
  {
482
  "epoch": 9.028146574614976,
483
- "grad_norm": 0.2731677293777466,
484
- "learning_rate": 4.876526818906001e-06,
485
- "loss": 0.5529,
486
  "step": 34000
487
  },
488
  {
489
  "epoch": 9.160913436006373,
490
- "grad_norm": 0.34274822473526,
491
- "learning_rate": 4.214020180562932e-06,
492
- "loss": 0.5562,
493
  "step": 34500
494
  },
495
  {
496
  "epoch": 9.293680297397769,
497
- "grad_norm": 0.2875533103942871,
498
- "learning_rate": 3.550185873605948e-06,
499
- "loss": 0.5528,
500
  "step": 35000
501
  },
502
  {
503
  "epoch": 9.426447158789166,
504
- "grad_norm": 0.2516155242919922,
505
- "learning_rate": 2.8863515666489647e-06,
506
- "loss": 0.5538,
507
  "step": 35500
508
  },
509
  {
510
  "epoch": 9.559214020180562,
511
- "grad_norm": 0.2524682283401489,
512
- "learning_rate": 2.222517259691981e-06,
513
- "loss": 0.5546,
514
  "step": 36000
515
  },
516
  {
517
  "epoch": 9.69198088157196,
518
- "grad_norm": 0.25429150462150574,
519
- "learning_rate": 1.5586829527349974e-06,
520
- "loss": 0.5537,
521
  "step": 36500
522
  },
523
  {
524
  "epoch": 9.824747742963357,
525
- "grad_norm": 0.2699441611766815,
526
- "learning_rate": 8.948486457780139e-07,
527
- "loss": 0.5522,
528
  "step": 37000
529
  },
530
  {
531
  "epoch": 9.957514604354753,
532
- "grad_norm": 0.35291793942451477,
533
- "learning_rate": 2.3101433882103027e-07,
534
- "loss": 0.5493,
535
  "step": 37500
536
  }
537
  ],
538
  "logging_steps": 500,
539
- "max_steps": 37660,
540
  "num_input_tokens_seen": 0,
541
- "num_train_epochs": 10,
542
  "save_steps": 500,
543
  "stateful_callbacks": {
544
  "TrainerControl": {
@@ -552,7 +706,7 @@
552
  "attributes": {}
553
  }
554
  },
555
- "total_flos": 8.154140525985792e+16,
556
  "train_batch_size": 32,
557
  "trial_name": null,
558
  "trial_params": null
 
2
  "best_global_step": null,
3
  "best_metric": null,
4
  "best_model_checkpoint": null,
5
+ "epoch": 13.0,
6
  "eval_steps": 500,
7
+ "global_step": 48958,
8
  "is_hyper_param_search": false,
9
  "is_local_process_zero": true,
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
  "epoch": 0.1327668613913967,
14
+ "grad_norm": 0.9114758372306824,
15
+ "learning_rate": 4.9492422076065206e-05,
16
+ "loss": 2.9213,
17
  "step": 500
18
  },
19
  {
20
  "epoch": 0.2655337227827934,
21
+ "grad_norm": 0.8418619632720947,
22
+ "learning_rate": 4.89817803014829e-05,
23
+ "loss": 1.7378,
24
  "step": 1000
25
  },
26
  {
27
  "epoch": 0.3983005841741901,
28
+ "grad_norm": 0.709621012210846,
29
+ "learning_rate": 4.8471138526900614e-05,
30
+ "loss": 1.4102,
31
  "step": 1500
32
  },
33
  {
34
  "epoch": 0.5310674455655868,
35
+ "grad_norm": 0.4253033995628357,
36
+ "learning_rate": 4.796049675231832e-05,
37
+ "loss": 1.247,
38
  "step": 2000
39
  },
40
  {
41
  "epoch": 0.6638343069569835,
42
+ "grad_norm": 0.4772707521915436,
43
+ "learning_rate": 4.744985497773602e-05,
44
+ "loss": 1.1408,
45
  "step": 2500
46
  },
47
  {
48
  "epoch": 0.7966011683483802,
49
+ "grad_norm": 0.43140164017677307,
50
+ "learning_rate": 4.6939213203153725e-05,
51
+ "loss": 1.0649,
52
  "step": 3000
53
  },
54
  {
55
  "epoch": 0.929368029739777,
56
+ "grad_norm": 0.39506375789642334,
57
+ "learning_rate": 4.642857142857143e-05,
58
+ "loss": 1.0004,
59
  "step": 3500
60
  },
61
  {
62
  "epoch": 1.0621348911311737,
63
+ "grad_norm": 0.3946131765842438,
64
+ "learning_rate": 4.591792965398913e-05,
65
+ "loss": 0.9611,
66
  "step": 4000
67
  },
68
  {
69
  "epoch": 1.1949017525225702,
70
+ "grad_norm": 0.3581591546535492,
71
+ "learning_rate": 4.540728787940684e-05,
72
+ "loss": 0.917,
73
  "step": 4500
74
  },
75
  {
76
  "epoch": 1.327668613913967,
77
+ "grad_norm": 0.42795246839523315,
78
+ "learning_rate": 4.489664610482455e-05,
79
+ "loss": 0.8907,
80
  "step": 5000
81
  },
82
  {
83
  "epoch": 1.4604354753053639,
84
+ "grad_norm": 0.4635187089443207,
85
+ "learning_rate": 4.4386004330242245e-05,
86
+ "loss": 0.8598,
87
  "step": 5500
88
  },
89
  {
90
  "epoch": 1.5932023366967605,
91
+ "grad_norm": 0.3534747064113617,
92
+ "learning_rate": 4.3875362555659955e-05,
93
+ "loss": 0.8295,
94
  "step": 6000
95
  },
96
  {
97
  "epoch": 1.725969198088157,
98
+ "grad_norm": 0.39130711555480957,
99
+ "learning_rate": 4.336472078107766e-05,
100
+ "loss": 0.808,
101
  "step": 6500
102
  },
103
  {
104
  "epoch": 1.858736059479554,
105
+ "grad_norm": 0.440703809261322,
106
+ "learning_rate": 4.285407900649537e-05,
107
+ "loss": 0.7927,
108
  "step": 7000
109
  },
110
  {
111
  "epoch": 1.9915029208709507,
112
+ "grad_norm": 0.3961372375488281,
113
+ "learning_rate": 4.234343723191307e-05,
114
+ "loss": 0.7828,
115
  "step": 7500
116
  },
117
  {
118
  "epoch": 2.1242697822623473,
+ "grad_norm": 0.3364185690879822,
+ "learning_rate": 4.1833816740879935e-05,
+ "loss": 0.7628,
  "step": 8000
  },
  {
  "epoch": 2.257036643653744,
+ "grad_norm": 0.34151241183280945,
+ "learning_rate": 4.1323174966297646e-05,
+ "loss": 0.7424,
  "step": 8500
  },
  {
  "epoch": 2.3898035050451405,
+ "grad_norm": 0.3292118310928345,
+ "learning_rate": 4.081253319171535e-05,
+ "loss": 0.7341,
  "step": 9000
  },
  {
  "epoch": 2.5225703664365375,
+ "grad_norm": 0.33259105682373047,
+ "learning_rate": 4.0301891417133054e-05,
+ "loss": 0.7212,
  "step": 9500
  },
  {
  "epoch": 2.655337227827934,
+ "grad_norm": 0.35891205072402954,
+ "learning_rate": 3.979124964255076e-05,
+ "loss": 0.712,
  "step": 10000
  },
  {
  "epoch": 2.7881040892193307,
+ "grad_norm": 0.3436354398727417,
+ "learning_rate": 3.928060786796847e-05,
+ "loss": 0.6975,
  "step": 10500
  },
  {
  "epoch": 2.9208709506107278,
+ "grad_norm": 0.3285069465637207,
+ "learning_rate": 3.8769966093386165e-05,
+ "loss": 0.6881,
  "step": 11000
  },
  {
  "epoch": 3.0536378120021244,
+ "grad_norm": 0.44189152121543884,
+ "learning_rate": 3.826034560235304e-05,
+ "loss": 0.6749,
  "step": 11500
  },
  {
  "epoch": 3.186404673393521,
+ "grad_norm": 0.34346967935562134,
+ "learning_rate": 3.7749703827770744e-05,
+ "loss": 0.6704,
  "step": 12000
  },
  {
  "epoch": 3.3191715347849176,
+ "grad_norm": 0.29128846526145935,
+ "learning_rate": 3.723906205318845e-05,
+ "loss": 0.6652,
  "step": 12500
  },
  {
  "epoch": 3.451938396176314,
+ "grad_norm": 0.3013540208339691,
+ "learning_rate": 3.672842027860615e-05,
+ "loss": 0.6588,
  "step": 13000
  },
  {
  "epoch": 3.584705257567711,
+ "grad_norm": 0.32138076424598694,
+ "learning_rate": 3.6217778504023856e-05,
+ "loss": 0.6453,
  "step": 13500
  },
  {
  "epoch": 3.717472118959108,
+ "grad_norm": 0.3408374786376953,
+ "learning_rate": 3.5707136729441566e-05,
+ "loss": 0.6388,
  "step": 14000
  },
  {
  "epoch": 3.8502389803505044,
+ "grad_norm": 0.9397606253623962,
+ "learning_rate": 3.519649495485927e-05,
+ "loss": 0.6349,
  "step": 14500
  },
  {
  "epoch": 3.9830058417419014,
+ "grad_norm": 0.3192440867424011,
+ "learning_rate": 3.4685853180276974e-05,
+ "loss": 0.6291,
  "step": 15000
  },
  {
  "epoch": 4.115772703133298,
+ "grad_norm": 0.3549179136753082,
+ "learning_rate": 3.417521140569468e-05,
+ "loss": 0.6278,
  "step": 15500
  },
  {
  "epoch": 4.248539564524695,
+ "grad_norm": 0.3110153079032898,
+ "learning_rate": 3.366456963111239e-05,
+ "loss": 0.618,
  "step": 16000
  },
  {
  "epoch": 4.381306425916091,
+ "grad_norm": 0.2719564735889435,
+ "learning_rate": 3.3153927856530086e-05,
+ "loss": 0.6169,
  "step": 16500
  },
  {
  "epoch": 4.514073287307488,
+ "grad_norm": 0.2858710289001465,
+ "learning_rate": 3.2643286081947796e-05,
+ "loss": 0.61,
  "step": 17000
  },
  {
  "epoch": 4.646840148698884,
+ "grad_norm": 0.31373563408851624,
+ "learning_rate": 3.21326443073655e-05,
+ "loss": 0.6011,
  "step": 17500
  },
  {
  "epoch": 4.779607010090281,
+ "grad_norm": 0.29438045620918274,
+ "learning_rate": 3.1622002532783204e-05,
+ "loss": 0.5938,
  "step": 18000
  },
  {
  "epoch": 4.9123738714816785,
+ "grad_norm": 0.3415851593017578,
+ "learning_rate": 3.111238204175007e-05,
+ "loss": 0.5992,
  "step": 18500
  },
  {
  "epoch": 5.045140732873075,
+ "grad_norm": 0.35383546352386475,
+ "learning_rate": 3.060276155071694e-05,
+ "loss": 0.5871,
  "step": 19000
  },
  {
  "epoch": 5.177907594264472,
+ "grad_norm": 0.3242381811141968,
+ "learning_rate": 3.009314105968381e-05,
+ "loss": 0.5867,
  "step": 19500
  },
  {
  "epoch": 5.310674455655868,
+ "grad_norm": 0.28274649381637573,
+ "learning_rate": 2.9582499285101516e-05,
+ "loss": 0.584,
  "step": 20000
  },
  {
  "epoch": 5.443441317047265,
+ "grad_norm": 0.3075231611728668,
+ "learning_rate": 2.9071857510519223e-05,
+ "loss": 0.584,
  "step": 20500
  },
  {
  "epoch": 5.5762081784386615,
+ "grad_norm": 0.29568806290626526,
+ "learning_rate": 2.8561215735936924e-05,
+ "loss": 0.5743,
  "step": 21000
  },
  {
  "epoch": 5.708975039830058,
+ "grad_norm": 0.32808518409729004,
+ "learning_rate": 2.805057396135463e-05,
+ "loss": 0.5757,
  "step": 21500
  },
  {
  "epoch": 5.8417419012214555,
+ "grad_norm": 0.256596177816391,
+ "learning_rate": 2.7539932186772338e-05,
+ "loss": 0.5735,
  "step": 22000
  },
  {
  "epoch": 5.974508762612852,
+ "grad_norm": 0.313557505607605,
+ "learning_rate": 2.702929041219004e-05,
+ "loss": 0.5679,
  "step": 22500
  },
  {
  "epoch": 6.107275624004249,
+ "grad_norm": 0.274058997631073,
+ "learning_rate": 2.6518648637607746e-05,
+ "loss": 0.562,
  "step": 23000
  },
  {
  "epoch": 6.240042485395645,
+ "grad_norm": 0.2777511477470398,
+ "learning_rate": 2.6008006863025453e-05,
+ "loss": 0.5619,
  "step": 23500
  },
  {
  "epoch": 6.372809346787042,
+ "grad_norm": 0.3301125466823578,
+ "learning_rate": 2.549736508844316e-05,
+ "loss": 0.5598,
  "step": 24000
  },
  {
  "epoch": 6.5055762081784385,
+ "grad_norm": 0.2844313383102417,
+ "learning_rate": 2.498672331386086e-05,
+ "loss": 0.5589,
  "step": 24500
  },
  {
  "epoch": 6.638343069569835,
+ "grad_norm": 0.268718421459198,
+ "learning_rate": 2.4476081539278568e-05,
+ "loss": 0.5566,
  "step": 25000
  },
  {
  "epoch": 6.771109930961232,
+ "grad_norm": 0.3230023980140686,
+ "learning_rate": 2.3965439764696272e-05,
+ "loss": 0.5582,
  "step": 25500
  },
  {
  "epoch": 6.903876792352628,
+ "grad_norm": 0.27747681736946106,
+ "learning_rate": 2.3454797990113976e-05,
+ "loss": 0.5527,
  "step": 26000
  },
  {
  "epoch": 7.036643653744026,
+ "grad_norm": 0.29863470792770386,
+ "learning_rate": 2.2945177499080848e-05,
+ "loss": 0.5491,
  "step": 26500
  },
  {
  "epoch": 7.169410515135422,
+ "grad_norm": 0.30289873480796814,
+ "learning_rate": 2.243453572449855e-05,
+ "loss": 0.5468,
  "step": 27000
  },
  {
  "epoch": 7.302177376526819,
+ "grad_norm": 0.2766277492046356,
+ "learning_rate": 2.192491523346542e-05,
+ "loss": 0.5444,
  "step": 27500
  },
  {
  "epoch": 7.434944237918216,
+ "grad_norm": 0.3069545030593872,
+ "learning_rate": 2.1414273458883124e-05,
+ "loss": 0.5403,
  "step": 28000
  },
  {
  "epoch": 7.567711099309612,
+ "grad_norm": 0.258329302072525,
+ "learning_rate": 2.090363168430083e-05,
+ "loss": 0.5453,
  "step": 28500
  },
  {
  "epoch": 7.700477960701009,
+ "grad_norm": 0.2901703119277954,
+ "learning_rate": 2.0392989909718535e-05,
+ "loss": 0.5357,
  "step": 29000
  },
  {
  "epoch": 7.833244822092405,
+ "grad_norm": 0.35300034284591675,
+ "learning_rate": 1.988234813513624e-05,
+ "loss": 0.541,
  "step": 29500
  },
  {
  "epoch": 7.966011683483803,
+ "grad_norm": 0.2620261311531067,
+ "learning_rate": 1.9371706360553946e-05,
+ "loss": 0.5371,
  "step": 30000
  },
  {
  "epoch": 8.098778544875199,
+ "grad_norm": 0.3098488450050354,
+ "learning_rate": 1.886106458597165e-05,
+ "loss": 0.5337,
  "step": 30500
  },
  {
  "epoch": 8.231545406266596,
+ "grad_norm": 0.2904013991355896,
+ "learning_rate": 1.8350422811389357e-05,
+ "loss": 0.5342,
  "step": 31000
  },
  {
  "epoch": 8.364312267657992,
+ "grad_norm": 0.29218047857284546,
+ "learning_rate": 1.783978103680706e-05,
+ "loss": 0.5323,
  "step": 31500
  },
  {
  "epoch": 8.49707912904939,
+ "grad_norm": 0.3310258090496063,
+ "learning_rate": 1.7329139262224765e-05,
+ "loss": 0.5258,
  "step": 32000
  },
  {
  "epoch": 8.629845990440787,
+ "grad_norm": 0.3069627583026886,
+ "learning_rate": 1.6818497487642472e-05,
+ "loss": 0.5299,
  "step": 32500
  },
  {
  "epoch": 8.762612851832182,
+ "grad_norm": 0.24625258147716522,
+ "learning_rate": 1.630887699660934e-05,
+ "loss": 0.5285,
  "step": 33000
  },
  {
  "epoch": 8.89537971322358,
+ "grad_norm": 0.26636838912963867,
+ "learning_rate": 1.5798235222027044e-05,
+ "loss": 0.5294,
  "step": 33500
  },
  {
  "epoch": 9.028146574614976,
+ "grad_norm": 0.2842467725276947,
+ "learning_rate": 1.5287593447444748e-05,
+ "loss": 0.5235,
  "step": 34000
  },
  {
  "epoch": 9.160913436006373,
+ "grad_norm": 0.3261110782623291,
+ "learning_rate": 1.4776951672862455e-05,
+ "loss": 0.5256,
  "step": 34500
  },
  {
  "epoch": 9.293680297397769,
+ "grad_norm": 0.2750456929206848,
+ "learning_rate": 1.4266309898280159e-05,
+ "loss": 0.5218,
  "step": 35000
  },
  {
  "epoch": 9.426447158789166,
+ "grad_norm": 0.26470229029655457,
+ "learning_rate": 1.3755668123697864e-05,
+ "loss": 0.522,
  "step": 35500
  },
  {
  "epoch": 9.559214020180562,
+ "grad_norm": 0.24200379848480225,
+ "learning_rate": 1.3245026349115568e-05,
+ "loss": 0.5222,
  "step": 36000
  },
  {
  "epoch": 9.69198088157196,
+ "grad_norm": 0.30407610535621643,
+ "learning_rate": 1.2734384574533272e-05,
+ "loss": 0.5208,
  "step": 36500
  },
  {
  "epoch": 9.824747742963357,
+ "grad_norm": 0.26741334795951843,
+ "learning_rate": 1.2224764083500144e-05,
+ "loss": 0.5185,
  "step": 37000
  },
  {
  "epoch": 9.957514604354753,
+ "grad_norm": 0.2811224162578583,
+ "learning_rate": 1.1714122308917848e-05,
+ "loss": 0.515,
  "step": 37500
+ },
+ {
+ "epoch": 10.09028146574615,
+ "grad_norm": 0.2725277543067932,
+ "learning_rate": 1.1204501817884718e-05,
+ "loss": 0.517,
+ "step": 38000
+ },
+ {
+ "epoch": 10.223048327137546,
+ "grad_norm": 0.31137147545814514,
+ "learning_rate": 1.0693860043302423e-05,
+ "loss": 0.5155,
+ "step": 38500
+ },
+ {
+ "epoch": 10.355815188528943,
+ "grad_norm": 0.26093247532844543,
+ "learning_rate": 1.0183218268720129e-05,
+ "loss": 0.5148,
+ "step": 39000
+ },
+ {
+ "epoch": 10.488582049920339,
+ "grad_norm": 0.2848931550979614,
+ "learning_rate": 9.672576494137833e-06,
+ "loss": 0.5134,
+ "step": 39500
+ },
+ {
+ "epoch": 10.621348911311737,
+ "grad_norm": 0.24945715069770813,
+ "learning_rate": 9.161934719555536e-06,
+ "loss": 0.5136,
+ "step": 40000
+ },
+ {
+ "epoch": 10.754115772703134,
+ "grad_norm": 0.28524720668792725,
+ "learning_rate": 8.651292944973242e-06,
+ "loss": 0.5167,
+ "step": 40500
+ },
+ {
+ "epoch": 10.88688263409453,
+ "grad_norm": 0.29454296827316284,
+ "learning_rate": 8.140651170390948e-06,
+ "loss": 0.5151,
+ "step": 41000
+ },
+ {
+ "epoch": 11.019649495485927,
+ "grad_norm": 0.30919119715690613,
+ "learning_rate": 7.632051962906982e-06,
+ "loss": 0.5121,
+ "step": 41500
+ },
+ {
+ "epoch": 11.152416356877323,
+ "grad_norm": 0.36948204040527344,
+ "learning_rate": 7.121410188324687e-06,
+ "loss": 0.5146,
+ "step": 42000
+ },
+ {
+ "epoch": 11.28518321826872,
+ "grad_norm": 0.2883196771144867,
+ "learning_rate": 6.610768413742392e-06,
+ "loss": 0.5118,
+ "step": 42500
+ },
+ {
+ "epoch": 11.417950079660116,
+ "grad_norm": 0.2851753532886505,
+ "learning_rate": 6.100126639160097e-06,
+ "loss": 0.5092,
+ "step": 43000
+ },
+ {
+ "epoch": 11.550716941051514,
+ "grad_norm": 0.27395716309547424,
+ "learning_rate": 5.5894848645778016e-06,
+ "loss": 0.5044,
+ "step": 43500
+ },
+ {
+ "epoch": 11.683483802442911,
+ "grad_norm": 0.2726575434207916,
+ "learning_rate": 5.078843089995506e-06,
+ "loss": 0.5106,
+ "step": 44000
+ },
+ {
+ "epoch": 11.816250663834307,
+ "grad_norm": 0.29727038741111755,
+ "learning_rate": 4.568201315413211e-06,
+ "loss": 0.5095,
+ "step": 44500
+ },
+ {
+ "epoch": 11.949017525225704,
+ "grad_norm": 0.2694978713989258,
+ "learning_rate": 4.0575595408309166e-06,
+ "loss": 0.5118,
+ "step": 45000
+ },
+ {
+ "epoch": 12.0817843866171,
+ "grad_norm": 0.2318025678396225,
+ "learning_rate": 3.5469177662486213e-06,
+ "loss": 0.5126,
+ "step": 45500
+ },
+ {
+ "epoch": 12.214551248008497,
+ "grad_norm": 0.27759501338005066,
+ "learning_rate": 3.0362759916663264e-06,
+ "loss": 0.5081,
+ "step": 46000
+ },
+ {
+ "epoch": 12.347318109399893,
+ "grad_norm": 0.2869941294193268,
+ "learning_rate": 2.525634217084031e-06,
+ "loss": 0.5046,
+ "step": 46500
+ },
+ {
+ "epoch": 12.48008497079129,
+ "grad_norm": 0.32994431257247925,
+ "learning_rate": 2.0149924425017362e-06,
+ "loss": 0.5104,
+ "step": 47000
+ },
+ {
+ "epoch": 12.612851832182688,
+ "grad_norm": 0.28273916244506836,
+ "learning_rate": 1.5053719514686058e-06,
+ "loss": 0.5036,
+ "step": 47500
+ },
+ {
+ "epoch": 12.745618693574084,
+ "grad_norm": 0.2604888379573822,
+ "learning_rate": 9.947301768863107e-07,
+ "loss": 0.5086,
+ "step": 48000
+ },
+ {
+ "epoch": 12.878385554965481,
+ "grad_norm": 0.287817120552063,
+ "learning_rate": 4.851096858531803e-07,
+ "loss": 0.5126,
+ "step": 48500
  }
  ],
  "logging_steps": 500,
+ "max_steps": 48958,
  "num_input_tokens_seen": 0,
+ "num_train_epochs": 13,
  "save_steps": 500,
  "stateful_callbacks": {
  "TrainerControl": {

  "attributes": {}
  }
  },
+ "total_flos": 1.060038268378153e+17,
  "train_batch_size": 32,
  "trial_name": null,
  "trial_params": null
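As a sanity check on the logged schedule above, the `learning_rate` values are consistent with the Trainer's default linear decay to zero over `max_steps`. This is a minimal sketch assuming a base learning rate of 5e-5 and no warmup; neither is shown in this diff, so both are assumptions.

```python
# Sketch: check logged learning rates against linear decay to zero.
# Assumed (not in this diff): base LR of 5e-5, no warmup steps.
MAX_STEPS = 48958
BASE_LR = 5e-5

def linear_lr(step: int, base_lr: float = BASE_LR, max_steps: int = MAX_STEPS) -> float:
    """Learning rate after `step` optimizer steps under linear decay."""
    return base_lr * max(0.0, 1.0 - step / max_steps)

# (step, logged learning_rate) pairs taken from trainer_state.json above.
logged = {
    8000: 4.1833816740879935e-05,
    24000: 2.549736508844316e-05,
    48000: 9.947301768863107e-07,
}

for step, lr in logged.items():
    print(f"step {step}: logged {lr:.4e}, predicted {linear_lr(step):.4e}")
```

The predictions agree with the log to within a fraction of a percent; the small residual near the end is unsurprising given that this run resumed from an earlier 10-epoch schedule.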
checkpoints/{checkpoint-37660 → checkpoint-48958}/training_args.bin RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:344c4c96587b5961c7260f4c9743524e13a2580c248f0b960480131c9cd7dc77
+ oid sha256:8e94179d735e9ead00b90fad45af99e009779a12dba4e32a7dde92da29b59e62
  size 5240
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b78f8b4750e01e9e992daf6187c84c8ecfad35d6183f0336f1d4661a41534df9
+ oid sha256:d8ea98de1cde992e950903fb96553ceb84e46b447461bc9f940922b80e9bc3c6
  size 242041896
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
+ size 791656
src/train_t5.py CHANGED
@@ -49,7 +49,7 @@ data_collator = DataCollatorForSeq2Seq(tokenizer = tokeniser, model = model)
  training_args = TrainingArguments(
      output_dir = output_dir,
      overwrite_output_dir = True,
-     num_train_epochs = 10,
+     num_train_epochs = 13,
      per_device_train_batch_size = 32,
      gradient_accumulation_steps = 2,
      save_strategy = "steps",
@@ -73,4 +73,4 @@ trainer.train()
  model.save_pretrained(output_dir)
  tokeniser.save_pretrained(output_dir)

- print("DalaT5 training complete.")
+ print("DalaT5 training complete.")
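The epoch change ties out against the checkpoint numbers: with this config, 13 epochs over 48958 optimizer steps implies exactly 3766 steps per epoch. A minimal arithmetic sketch, assuming single-device training (the device count is not shown in this diff), back-solves the approximate dataset size:

```python
# Sketch: derive steps/epoch and implied dataset size from this commit's
# config. Assumes one training device (an assumption; not shown in the diff).
per_device_batch = 32   # per_device_train_batch_size in train_t5.py
grad_accum = 2          # gradient_accumulation_steps in train_t5.py
num_epochs = 13         # num_train_epochs after this change
max_steps = 48958       # max_steps from trainer_state.json

effective_batch = per_device_batch * grad_accum   # sequences per optimizer step
steps_per_epoch = max_steps // num_epochs         # 48958 / 13 divides evenly
approx_examples = steps_per_epoch * effective_batch

print(f"{steps_per_epoch} steps/epoch, ~{approx_examples} training examples")
```

This also matches the logged epoch fractions above (e.g. step 8000 / 3766 ≈ 2.1243).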