zhangfz committed
Commit c73b63e · 1 parent: 0a5076b
Files changed (35)
  1. logs_qkvo_pure/adam_lr_search/avg_loss_log_vs_steps.png +3 -0
  2. logs_qkvo_pure/adam_lr_search/avg_loss_vs_steps.png +3 -0
  3. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0001_seed_42.log +0 -0
  4. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0001_seed_43.log +0 -0
  5. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0002_seed_42.log +0 -0
  6. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0002_seed_43.log +0 -0
  7. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0005_seed_42.log +0 -0
  8. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0005_seed_43.log +0 -0
  9. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.001_seed_42.log +0 -0
  10. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.002_seed_42.log +0 -0
  11. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.005_seed_42.log +0 -0
  12. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.005_seed_43.log +0 -0
  13. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.01_seed_42.log +0 -0
  14. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.01_seed_43.log +0 -0
  15. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.02_seed_42.log +0 -0
  16. logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.02_seed_43.log +0 -0
  17. logs_qkvo_pure/mode_adam_adam_lr_0.001_seed_42.log +0 -0
  18. logs_qkvo_pure/mode_adam_adam_lr_0.002_seed_42.log +0 -0
  19. logs_qkvo_pure/mode_adam_adam_lr_0.005_seed_42.log +0 -0
  20. logs_qkvo_pure/muon_lr_search/avg_loss_log_vs_steps.png +3 -0
  21. logs_qkvo_pure/muon_lr_search/avg_loss_vs_steps.png +3 -0
  22. logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.0005_seed_42.log +708 -0
  23. logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.001_seed_42.log +0 -0
  24. logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.002_seed_42.log +0 -0
  25. logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.005_seed_42.log +0 -0
  26. logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.01_seed_42.log +0 -0
  27. logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.02_seed_42.log +0 -0
  28. logs_qkvo_pure/muon_lr_search_new/avg_loss_log_vs_steps.png +3 -0
  29. logs_qkvo_pure/muon_lr_search_new/avg_loss_vs_steps.png +3 -0
  30. logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.0005_seed_42.log +0 -0
  31. logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.001_seed_42.log +0 -0
  32. logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.002_seed_42.log +0 -0
  33. logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.005_seed_42.log +0 -0
  34. logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.01_seed_42.log +0 -0
  35. logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.02_seed_42.log +2373 -0
logs_qkvo_pure/adam_lr_search/avg_loss_log_vs_steps.png ADDED

Git LFS Details

  • SHA256: 958aefa84dab93a0b38680efc0de2b3977a3c82232e4553406736869f50e3b53
  • Pointer size: 131 Bytes
  • Size of remote file: 111 kB
logs_qkvo_pure/adam_lr_search/avg_loss_vs_steps.png ADDED

Git LFS Details

  • SHA256: 51584562ddbab4eeb47159fe3ee8a73bfa478837a02528aaa52ad0018ac22e9a
  • Pointer size: 131 Bytes
  • Size of remote file: 106 kB
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0001_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0001_seed_43.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0002_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0002_seed_43.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0005_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.0005_seed_43.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.001_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.002_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.005_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.005_seed_43.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.01_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.01_seed_43.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.02_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/adam_lr_search/mode_adam_adam_lr_0.02_seed_43.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/mode_adam_adam_lr_0.001_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/mode_adam_adam_lr_0.002_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/mode_adam_adam_lr_0.005_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search/avg_loss_log_vs_steps.png ADDED

Git LFS Details

  • SHA256: 58bdbad890a4862de433f4acd94cbf7c971f9ddae4cfee432b154ea0132cbaa9
  • Pointer size: 131 Bytes
  • Size of remote file: 108 kB
logs_qkvo_pure/muon_lr_search/avg_loss_vs_steps.png ADDED

Git LFS Details

  • SHA256: 26ea62ed0145df0059f76abd658fb63e0fd0bc0df34d93211af96d15839f1995
  • Pointer size: 131 Bytes
  • Size of remote file: 103 kB
logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.0005_seed_42.log ADDED
@@ -0,0 +1,708 @@
1
+ step:0 validation loss:11.018949
2
+ step:0 train loss:11.015424
3
+ step:1 train loss:11.020226
4
+ step:2 train loss:11.010934
5
+ step:3 train loss:11.004910
6
+ step:4 train loss:10.995027
7
+ step:5 train loss:10.983676
8
+ step:6 train loss:10.966785
9
+ step:7 train loss:10.951075
10
+ step:8 train loss:10.930555
11
+ step:9 train loss:10.909342
12
+ step:10 train loss:10.881525
13
+ step:11 train loss:10.860868
14
+ step:12 train loss:10.825106
15
+ step:13 train loss:10.794963
16
+ step:14 train loss:10.756892
17
+ step:15 train loss:10.723540
18
+ step:16 train loss:10.684589
19
+ step:17 train loss:10.645847
20
+ step:18 train loss:10.600722
21
+ step:19 train loss:10.553440
22
+ step:20 train loss:10.501952
23
+ step:21 train loss:10.457741
24
+ step:22 train loss:10.394848
25
+ step:23 train loss:10.350867
26
+ step:24 train loss:10.280701
27
+ step:25 train loss:10.234509
28
+ step:26 train loss:10.165276
29
+ step:27 train loss:10.098972
30
+ step:28 train loss:10.046017
31
+ step:29 train loss:9.981583
32
+ step:30 train loss:9.910318
33
+ step:31 train loss:9.830559
34
+ step:32 train loss:9.757634
35
+ step:33 train loss:9.686543
36
+ step:34 train loss:9.624190
37
+ step:35 train loss:9.534348
38
+ step:36 train loss:9.458261
39
+ step:37 train loss:9.367187
40
+ step:38 train loss:9.302447
41
+ step:39 train loss:9.206266
42
+ step:40 train loss:9.131829
43
+ step:41 train loss:9.039160
44
+ step:42 train loss:8.981525
45
+ step:43 train loss:8.865849
46
+ step:44 train loss:8.802135
47
+ step:45 train loss:8.709974
48
+ step:46 train loss:8.649731
49
+ step:47 train loss:8.561390
50
+ step:48 train loss:8.481024
51
+ step:49 train loss:8.391659
52
+ step:50 train loss:8.305132
53
+ step:51 train loss:8.221050
54
+ step:52 train loss:8.185582
55
+ step:53 train loss:8.105663
56
+ step:54 train loss:8.051863
57
+ step:55 train loss:7.957572
58
+ step:56 train loss:7.902859
59
+ step:57 train loss:7.874125
60
+ step:58 train loss:7.783765
61
+ step:59 train loss:7.759455
62
+ step:60 train loss:7.721583
63
+ step:61 train loss:7.685786
64
+ step:62 train loss:7.657363
65
+ step:63 train loss:7.689013
66
+ step:64 train loss:7.598711
67
+ step:65 train loss:7.607277
68
+ step:66 train loss:7.624282
69
+ step:67 train loss:7.625703
70
+ step:68 train loss:7.613712
71
+ step:69 train loss:7.600225
72
+ step:70 train loss:7.590165
73
+ step:71 train loss:7.564875
74
+ step:72 train loss:7.594479
75
+ step:73 train loss:7.562237
76
+ step:74 train loss:7.588921
77
+ step:75 train loss:7.537924
78
+ step:76 train loss:7.613895
79
+ step:77 train loss:7.543286
80
+ step:78 train loss:7.379977
81
+ step:79 train loss:7.484556
82
+ step:80 train loss:7.466608
83
+ step:81 train loss:7.517383
84
+ step:82 train loss:7.491320
85
+ step:83 train loss:7.456035
86
+ step:84 train loss:7.421113
87
+ step:85 train loss:7.392701
88
+ step:86 train loss:7.375344
89
+ step:87 train loss:7.341681
90
+ step:88 train loss:7.342148
91
+ step:89 train loss:7.299115
92
+ step:90 train loss:7.324209
93
+ step:91 train loss:7.316027
94
+ step:92 train loss:7.304208
95
+ step:93 train loss:7.238531
96
+ step:94 train loss:7.206048
97
+ step:95 train loss:7.142555
98
+ step:96 train loss:7.217937
99
+ step:97 train loss:7.148865
100
+ step:98 train loss:7.135708
101
+ step:99 train loss:7.083437
102
+ step:100 train loss:7.132834
103
+ step:101 train loss:7.001304
104
+ step:102 train loss:6.993102
105
+ step:103 train loss:6.967243
106
+ step:104 train loss:6.993345
107
+ step:105 train loss:7.031693
108
+ step:106 train loss:6.963967
109
+ step:107 train loss:6.910224
110
+ step:108 train loss:6.915035
111
+ step:109 train loss:6.945443
112
+ step:110 train loss:6.850996
113
+ step:111 train loss:6.856386
114
+ step:112 train loss:6.833060
115
+ step:113 train loss:6.782409
116
+ step:114 train loss:6.833003
117
+ step:115 train loss:6.776278
118
+ step:116 train loss:6.741609
119
+ step:117 train loss:6.672617
120
+ step:118 train loss:6.726232
121
+ step:119 train loss:6.663210
122
+ step:120 train loss:6.671381
123
+ step:121 train loss:6.581036
124
+ step:122 train loss:6.671529
125
+ step:123 train loss:6.583089
126
+ step:124 train loss:6.565653
127
+ step:125 train loss:6.533828
128
+ step:126 train loss:6.628871
129
+ step:127 train loss:6.533030
130
+ step:128 train loss:6.575610
131
+ step:129 train loss:6.548450
132
+ step:130 train loss:6.575078
133
+ step:131 train loss:6.513773
134
+ step:132 train loss:6.431153
135
+ step:133 train loss:6.492492
136
+ step:134 train loss:6.463880
137
+ step:135 train loss:6.371145
138
+ step:136 train loss:6.410591
139
+ step:137 train loss:6.407073
140
+ step:138 train loss:6.343710
141
+ step:139 train loss:6.419179
142
+ step:140 train loss:6.328504
143
+ step:141 train loss:6.426642
144
+ step:142 train loss:6.371757
145
+ step:143 train loss:6.378293
146
+ step:144 train loss:6.349747
147
+ step:145 train loss:6.282640
148
+ step:146 train loss:6.293992
149
+ step:147 train loss:6.345430
150
+ step:148 train loss:6.353338
151
+ step:149 train loss:6.303926
152
+ step:150 train loss:6.302433
153
+ step:151 train loss:6.212167
154
+ step:152 train loss:6.255791
155
+ step:153 train loss:6.232176
156
+ step:154 train loss:6.314730
157
+ step:155 train loss:6.285755
158
+ step:156 train loss:6.317771
159
+ step:157 train loss:6.215672
160
+ step:158 train loss:6.200233
161
+ step:159 train loss:6.232330
162
+ step:160 train loss:6.215403
163
+ step:161 train loss:6.209176
164
+ step:162 train loss:6.178005
165
+ step:163 train loss:6.194960
166
+ step:164 train loss:6.194765
167
+ step:165 train loss:6.214000
168
+ step:166 train loss:6.159634
169
+ step:167 train loss:6.163892
170
+ step:168 train loss:6.136242
171
+ step:169 train loss:6.088985
172
+ step:170 train loss:6.059000
173
+ step:171 train loss:6.173427
174
+ step:172 train loss:6.100713
175
+ step:173 train loss:6.151384
176
+ step:174 train loss:6.150017
177
+ step:175 train loss:6.116456
178
+ step:176 train loss:6.075884
179
+ step:177 train loss:6.113425
180
+ step:178 train loss:6.121142
181
+ step:179 train loss:6.077321
182
+ step:180 train loss:6.059571
183
+ step:181 train loss:6.096775
184
+ step:182 train loss:6.030660
185
+ step:183 train loss:6.114416
186
+ step:184 train loss:6.081723
187
+ step:185 train loss:6.014050
188
+ step:186 train loss:6.151138
189
+ step:187 train loss:6.083933
190
+ step:188 train loss:5.923741
191
+ step:189 train loss:6.064409
192
+ step:190 train loss:6.047544
193
+ step:191 train loss:6.020611
194
+ step:192 train loss:5.892870
195
+ step:193 train loss:6.085207
196
+ step:194 train loss:6.053609
197
+ step:195 train loss:6.033236
198
+ step:196 train loss:6.055465
199
+ step:197 train loss:6.013649
200
+ step:198 train loss:6.016072
201
+ step:199 train loss:6.045198
202
+ step:200 train loss:6.079931
203
+ step:201 train loss:6.015906
204
+ step:202 train loss:6.050323
205
+ step:203 train loss:5.989286
206
+ step:204 train loss:5.994330
207
+ step:205 train loss:5.914724
208
+ step:206 train loss:6.008636
209
+ step:207 train loss:5.965271
210
+ step:208 train loss:5.958157
211
+ step:209 train loss:5.943935
212
+ step:210 train loss:5.932757
213
+ step:211 train loss:5.979126
214
+ step:212 train loss:5.957274
215
+ step:213 train loss:5.940977
216
+ step:214 train loss:5.975878
217
+ step:215 train loss:5.958874
218
+ step:216 train loss:5.921475
219
+ step:217 train loss:5.931257
220
+ step:218 train loss:5.894432
221
+ step:219 train loss:5.879438
222
+ step:220 train loss:5.918484
223
+ step:221 train loss:5.874071
224
+ step:222 train loss:5.957614
225
+ step:223 train loss:5.922194
226
+ step:224 train loss:5.911490
227
+ step:225 train loss:5.887461
228
+ step:226 train loss:5.861408
229
+ step:227 train loss:5.919147
230
+ step:228 train loss:5.916242
231
+ step:229 train loss:5.963889
232
+ step:230 train loss:5.839691
233
+ step:231 train loss:5.880941
234
+ step:232 train loss:5.913732
235
+ step:233 train loss:5.815922
236
+ step:234 train loss:5.881902
237
+ step:235 train loss:5.926662
238
+ step:236 train loss:5.892923
239
+ step:237 train loss:5.926398
240
+ step:238 train loss:5.896809
241
+ step:239 train loss:5.812529
242
+ step:240 train loss:5.929081
243
+ step:241 train loss:5.949173
244
+ step:242 train loss:5.919721
245
+ step:243 train loss:5.849203
246
+ step:244 train loss:5.853334
247
+ step:245 train loss:5.828572
248
+ step:246 train loss:5.803466
249
+ step:247 train loss:5.872789
250
+ step:248 train loss:5.799171
251
+ step:249 train loss:5.846122
252
+ step:250 validation loss:5.837239
253
+ step:250 train loss:5.816980
254
+ step:251 train loss:5.867065
255
+ step:252 train loss:5.802760
256
+ step:253 train loss:5.809064
257
+ step:254 train loss:5.793290
258
+ step:255 train loss:5.837733
259
+ step:256 train loss:5.812046
260
+ step:257 train loss:5.885970
261
+ step:258 train loss:5.763566
262
+ step:259 train loss:5.792757
263
+ step:260 train loss:5.785553
264
+ step:261 train loss:5.788903
265
+ step:262 train loss:5.795781
266
+ step:263 train loss:5.845345
267
+ step:264 train loss:5.759592
268
+ step:265 train loss:5.797788
269
+ step:266 train loss:5.744943
270
+ step:267 train loss:5.810608
271
+ step:268 train loss:5.760835
272
+ step:269 train loss:5.788585
273
+ step:270 train loss:5.814303
274
+ step:271 train loss:5.759589
275
+ step:272 train loss:5.797612
276
+ step:273 train loss:5.807883
277
+ step:274 train loss:5.699788
278
+ step:275 train loss:5.808730
279
+ step:276 train loss:5.759241
280
+ step:277 train loss:5.721187
281
+ step:278 train loss:5.750453
282
+ step:279 train loss:5.729042
283
+ step:280 train loss:5.756226
284
+ step:281 train loss:5.825380
285
+ step:282 train loss:5.731263
286
+ step:283 train loss:5.722671
287
+ step:284 train loss:5.741635
288
+ step:285 train loss:5.757961
289
+ step:286 train loss:5.728635
290
+ step:287 train loss:5.741354
291
+ step:288 train loss:5.703504
292
+ step:289 train loss:5.752779
293
+ step:290 train loss:5.761066
294
+ step:291 train loss:5.719710
295
+ step:292 train loss:5.757103
296
+ step:293 train loss:5.743849
297
+ step:294 train loss:5.707259
298
+ step:295 train loss:5.792729
299
+ step:296 train loss:5.743566
300
+ step:297 train loss:5.782072
301
+ step:298 train loss:5.710087
302
+ step:299 train loss:5.742099
303
+ step:300 train loss:5.678102
304
+ step:301 train loss:5.720137
305
+ step:302 train loss:5.717441
306
+ step:303 train loss:5.693343
307
+ step:304 train loss:5.749202
308
+ step:305 train loss:5.675310
309
+ step:306 train loss:5.693254
310
+ step:307 train loss:5.717690
311
+ step:308 train loss:5.653879
312
+ step:309 train loss:5.751453
313
+ step:310 train loss:5.731911
314
+ step:311 train loss:5.682853
315
+ step:312 train loss:5.714482
316
+ step:313 train loss:5.705677
317
+ step:314 train loss:5.719196
318
+ step:315 train loss:5.675369
319
+ step:316 train loss:5.646206
320
+ step:317 train loss:5.636640
321
+ step:318 train loss:5.653316
322
+ step:319 train loss:5.697946
323
+ step:320 train loss:5.660789
324
+ step:321 train loss:5.691439
325
+ step:322 train loss:5.686815
326
+ step:323 train loss:5.737013
327
+ step:324 train loss:5.705341
328
+ step:325 train loss:5.698605
329
+ step:326 train loss:5.708385
330
+ step:327 train loss:5.711277
331
+ step:328 train loss:5.697366
332
+ step:329 train loss:5.665713
333
+ step:330 train loss:5.641728
334
+ step:331 train loss:5.659482
335
+ step:332 train loss:5.616224
336
+ step:333 train loss:5.608384
337
+ step:334 train loss:5.681748
338
+ step:335 train loss:5.715389
339
+ step:336 train loss:5.886877
340
+ step:337 train loss:5.711463
341
+ step:338 train loss:5.629460
342
+ step:339 train loss:5.635762
343
+ step:340 train loss:5.628103
344
+ step:341 train loss:5.603494
345
+ step:342 train loss:5.683806
346
+ step:343 train loss:5.627341
347
+ step:344 train loss:5.670202
348
+ step:345 train loss:5.585654
349
+ step:346 train loss:5.626231
350
+ step:347 train loss:5.590869
351
+ step:348 train loss:5.580972
352
+ step:349 train loss:5.514933
353
+ step:350 train loss:5.586253
354
+ step:351 train loss:5.645633
355
+ step:352 train loss:5.556901
356
+ step:353 train loss:5.636076
357
+ step:354 train loss:5.603020
358
+ step:355 train loss:5.613499
359
+ step:356 train loss:5.600191
360
+ step:357 train loss:5.665856
361
+ step:358 train loss:5.681579
362
+ step:359 train loss:5.516611
363
+ step:360 train loss:5.693751
364
+ step:361 train loss:5.635183
365
+ step:362 train loss:5.577714
366
+ step:363 train loss:5.634393
367
+ step:364 train loss:5.628038
368
+ step:365 train loss:5.658887
369
+ step:366 train loss:5.602283
370
+ step:367 train loss:5.626603
371
+ step:368 train loss:5.613094
372
+ step:369 train loss:5.553933
373
+ step:370 train loss:5.643659
374
+ step:371 train loss:5.569902
375
+ step:372 train loss:5.635789
376
+ step:373 train loss:5.576738
377
+ step:374 train loss:5.570609
378
+ step:375 train loss:5.630656
379
+ step:376 train loss:5.578053
380
+ step:377 train loss:5.498881
381
+ step:378 train loss:5.599305
382
+ step:379 train loss:5.615202
383
+ step:380 train loss:5.559390
384
+ step:381 train loss:5.627434
385
+ step:382 train loss:5.599933
386
+ step:383 train loss:5.574039
387
+ step:384 train loss:5.547537
388
+ step:385 train loss:5.573546
389
+ step:386 train loss:5.589078
390
+ step:387 train loss:5.617164
391
+ step:388 train loss:5.590526
392
+ step:389 train loss:5.520203
393
+ step:390 train loss:5.590436
394
+ step:391 train loss:5.523112
395
+ step:392 train loss:5.593059
396
+ step:393 train loss:5.609905
397
+ step:394 train loss:5.566988
398
+ step:395 train loss:5.553980
399
+ step:396 train loss:5.457115
400
+ step:397 train loss:5.629135
401
+ step:398 train loss:5.544865
402
+ step:399 train loss:5.528760
403
+ step:400 train loss:5.588879
404
+ step:401 train loss:5.521435
405
+ step:402 train loss:5.565764
406
+ step:403 train loss:5.565872
407
+ step:404 train loss:5.526936
408
+ step:405 train loss:5.555054
409
+ step:406 train loss:5.563742
410
+ step:407 train loss:5.623383
411
+ step:408 train loss:5.579584
412
+ step:409 train loss:5.521736
413
+ step:410 train loss:5.529564
414
+ step:411 train loss:5.540267
415
+ step:412 train loss:5.595044
416
+ step:413 train loss:5.504812
417
+ step:414 train loss:5.569297
418
+ step:415 train loss:5.593074
419
+ step:416 train loss:5.525351
420
+ step:417 train loss:5.522644
421
+ step:418 train loss:5.518416
422
+ step:419 train loss:5.532266
423
+ step:420 train loss:5.519451
424
+ step:421 train loss:5.511771
425
+ step:422 train loss:5.497843
426
+ step:423 train loss:5.512751
427
+ step:424 train loss:5.516061
428
+ step:425 train loss:5.553890
429
+ step:426 train loss:5.492079
430
+ step:427 train loss:5.500880
431
+ step:428 train loss:5.491987
432
+ step:429 train loss:5.490248
433
+ step:430 train loss:5.480282
434
+ step:431 train loss:5.574747
435
+ step:432 train loss:5.501834
436
+ step:433 train loss:5.563447
437
+ step:434 train loss:5.524998
438
+ step:435 train loss:5.542096
439
+ step:436 train loss:5.559086
440
+ step:437 train loss:5.535322
441
+ step:438 train loss:5.519480
442
+ step:439 train loss:5.499341
443
+ step:440 train loss:5.490997
444
+ step:441 train loss:5.491372
445
+ step:442 train loss:5.494779
446
+ step:443 train loss:5.497370
447
+ step:444 train loss:5.549359
448
+ step:445 train loss:5.501597
449
+ step:446 train loss:5.514030
450
+ step:447 train loss:5.513056
451
+ step:448 train loss:5.528311
452
+ step:449 train loss:5.500819
453
+ step:450 train loss:5.486154
454
+ step:451 train loss:5.572198
455
+ step:452 train loss:5.482662
456
+ step:453 train loss:5.525918
457
+ step:454 train loss:5.441131
458
+ step:455 train loss:5.509805
459
+ step:456 train loss:5.498071
460
+ step:457 train loss:5.483315
461
+ step:458 train loss:5.497639
462
+ step:459 train loss:5.479138
463
+ step:460 train loss:5.540781
464
+ step:461 train loss:5.506932
465
+ step:462 train loss:5.406518
466
+ step:463 train loss:5.530899
467
+ step:464 train loss:5.517510
468
+ step:465 train loss:5.480126
469
+ step:466 train loss:5.505144
470
+ step:467 train loss:5.499542
471
+ step:468 train loss:5.508452
472
+ step:469 train loss:5.480200
473
+ step:470 train loss:5.431437
474
+ step:471 train loss:5.580779
475
+ step:472 train loss:5.481690
476
+ step:473 train loss:5.493110
477
+ step:474 train loss:5.503496
478
+ step:475 train loss:5.514795
479
+ step:476 train loss:5.455892
480
+ step:477 train loss:5.467666
481
+ step:478 train loss:5.472945
482
+ step:479 train loss:5.428407
483
+ step:480 train loss:5.519136
484
+ step:481 train loss:5.456609
485
+ step:482 train loss:5.399099
486
+ step:483 train loss:5.494047
487
+ step:484 train loss:5.468803
488
+ step:485 train loss:5.434709
489
+ step:486 train loss:5.497624
490
+ step:487 train loss:5.452084
491
+ step:488 train loss:5.450278
492
+ step:489 train loss:5.472017
493
+ step:490 train loss:5.437662
494
+ step:491 train loss:5.435221
495
+ step:492 train loss:5.510285
496
+ step:493 train loss:5.423973
497
+ step:494 train loss:5.430881
498
+ step:495 train loss:5.490462
499
+ step:496 train loss:5.434524
500
+ step:497 train loss:5.476788
501
+ step:498 train loss:5.470979
502
+ step:499 train loss:5.493303
503
+ step:500 validation loss:5.454007
504
+ step:500 train loss:5.451817
505
+ step:501 train loss:5.508993
506
+ step:502 train loss:5.399317
507
+ step:503 train loss:5.412842
508
+ step:504 train loss:5.471434
509
+ step:505 train loss:5.461817
510
+ step:506 train loss:5.398189
511
+ step:507 train loss:5.513207
512
+ step:508 train loss:5.433660
513
+ step:509 train loss:5.454690
514
+ step:510 train loss:5.432570
515
+ step:511 train loss:5.375609
516
+ step:512 train loss:5.444704
517
+ step:513 train loss:5.452021
518
+ step:514 train loss:5.471115
519
+ step:515 train loss:5.446244
520
+ step:516 train loss:5.471374
521
+ step:517 train loss:5.474245
522
+ step:518 train loss:5.426744
523
+ step:519 train loss:5.496078
524
+ step:520 train loss:5.427716
525
+ step:521 train loss:5.397823
526
+ step:522 train loss:5.475583
527
+ step:523 train loss:5.408001
528
+ step:524 train loss:5.452801
529
+ step:525 train loss:5.367191
530
+ step:526 train loss:5.456772
531
+ step:527 train loss:5.416982
532
+ step:528 train loss:5.443983
533
+ step:529 train loss:5.428410
534
+ step:530 train loss:5.424571
535
+ step:531 train loss:5.399049
536
+ step:532 train loss:5.414495
537
+ step:533 train loss:5.425094
538
+ step:534 train loss:5.445930
539
+ step:535 train loss:5.494602
540
+ step:536 train loss:5.402138
541
+ step:537 train loss:5.384509
542
+ step:538 train loss:5.368041
543
+ step:539 train loss:5.503932
544
+ step:540 train loss:5.357913
545
+ step:541 train loss:5.460105
546
+ step:542 train loss:5.453763
547
+ step:543 train loss:5.418608
548
+ step:544 train loss:5.448548
549
+ step:545 train loss:5.386162
550
+ step:546 train loss:5.369950
551
+ step:547 train loss:5.398388
552
+ step:548 train loss:5.406528
553
+ step:549 train loss:5.379954
554
+ step:550 train loss:5.410077
555
+ step:551 train loss:5.423714
556
+ step:552 train loss:5.506493
557
+ step:553 train loss:5.408998
558
+ step:554 train loss:5.415755
559
+ step:555 train loss:5.487288
560
+ step:556 train loss:5.394042
561
+ step:557 train loss:5.386750
562
+ step:558 train loss:5.401230
563
+ step:559 train loss:5.472310
564
+ step:560 train loss:5.396537
565
+ step:561 train loss:5.383673
566
+ step:562 train loss:5.387293
567
+ step:563 train loss:5.387549
568
+ step:564 train loss:5.437938
569
+ step:565 train loss:5.409759
570
+ step:566 train loss:5.422817
571
+ step:567 train loss:5.399188
572
+ step:568 train loss:5.429495
573
+ step:569 train loss:5.401677
574
+ step:570 train loss:5.384977
575
+ step:571 train loss:5.373534
576
+ step:572 train loss:5.380027
577
+ step:573 train loss:5.375136
578
+ step:574 train loss:5.431943
579
+ step:575 train loss:5.364378
580
+ step:576 train loss:5.378711
581
+ step:577 train loss:5.442268
582
+ step:578 train loss:5.399104
583
+ step:579 train loss:5.390522
584
+ step:580 train loss:5.407828
585
+ step:581 train loss:5.411538
586
+ step:582 train loss:5.410308
587
+ step:583 train loss:5.402235
588
+ step:584 train loss:5.373002
589
+ step:585 train loss:5.390867
590
+ step:586 train loss:5.398320
591
+ step:587 train loss:5.411067
592
+ step:588 train loss:5.373336
593
+ step:589 train loss:5.408735
594
+ step:590 train loss:5.451546
595
+ step:591 train loss:5.338789
596
+ step:592 train loss:5.334523
597
+ step:593 train loss:5.389324
598
+ step:594 train loss:5.337025
599
+ step:595 train loss:5.416079
600
+ step:596 train loss:5.396035
601
+ step:597 train loss:5.350927
602
+ step:598 train loss:5.395294
603
+ step:599 train loss:5.385292
604
+ step:600 train loss:5.368805
605
+ step:601 train loss:5.320899
606
+ step:602 train loss:5.354075
607
+ step:603 train loss:5.408556
608
+ step:604 train loss:5.403461
609
+ step:605 train loss:5.399994
610
+ step:606 train loss:5.304999
611
+ step:607 train loss:5.384509
612
+ step:608 train loss:5.335430
613
+ step:609 train loss:5.346176
614
+ step:610 train loss:5.349381
615
+ step:611 train loss:5.374700
616
+ step:612 train loss:5.327247
617
+ step:613 train loss:5.337881
618
+ step:614 train loss:5.386038
619
+ step:615 train loss:5.371188
620
+ step:616 train loss:5.348369
621
+ step:617 train loss:5.364955
622
+ step:618 train loss:5.332599
623
+ step:619 train loss:5.401806
624
+ step:620 train loss:5.339460
625
+ step:621 train loss:5.395061
626
+ step:622 train loss:5.349128
627
+ step:623 train loss:5.391778
628
+ step:624 train loss:5.365892
629
+ step:625 train loss:5.366135
630
+ step:626 train loss:5.391775
631
+ step:627 train loss:5.324592
632
+ step:628 train loss:5.342957
633
+ step:629 train loss:5.292043
634
+ step:630 train loss:5.341146
635
+ step:631 train loss:5.347159
636
+ step:632 train loss:5.333919
637
+ step:633 train loss:5.353924
638
+ step:634 train loss:5.363871
639
+ step:635 train loss:5.350885
640
+ step:636 train loss:5.310527
641
+ step:637 train loss:5.286098
642
+ step:638 train loss:5.311001
643
+ step:639 train loss:5.338397
644
+ step:640 train loss:5.358508
645
+ step:641 train loss:5.355056
646
+ step:642 train loss:5.352121
647
+ step:643 train loss:5.348762
648
+ step:644 train loss:5.330700
649
+ step:645 train loss:5.354775
650
+ step:646 train loss:5.333034
651
+ step:647 train loss:5.355504
652
+ step:648 train loss:5.422951
653
+ step:649 train loss:5.357683
654
+ step:650 train loss:5.341992
655
+ step:651 train loss:5.279242
656
+ step:652 train loss:5.295226
657
+ step:653 train loss:5.349594
658
+ step:654 train loss:5.264517
659
+ step:655 train loss:5.341141
660
+ step:656 train loss:5.357602
661
+ step:657 train loss:5.281108
662
+ step:658 train loss:5.353639
663
+ step:659 train loss:5.306508
664
+ step:660 train loss:5.371544
665
+ step:661 train loss:5.363912
666
+ step:662 train loss:5.332908
667
+ step:663 train loss:5.328643
668
+ step:664 train loss:5.279181
669
+ step:665 train loss:5.292587
670
+ step:666 train loss:5.296033
671
+ step:667 train loss:5.315818
672
+ step:668 train loss:5.320154
673
+ step:669 train loss:5.319018
674
+ step:670 train loss:5.289472
675
+ step:671 train loss:5.277840
676
+ step:672 train loss:5.365256
677
+ step:673 train loss:5.327517
678
+ step:674 train loss:5.312008
679
+ step:675 train loss:5.289848
680
+ step:676 train loss:5.308525
681
+ step:677 train loss:5.311978
682
+ step:678 train loss:5.257811
683
+ step:679 train loss:5.322995
684
+ step:680 train loss:5.304199
685
+ step:681 train loss:5.282611
686
+ step:682 train loss:5.321229
687
+ step:683 train loss:5.353194
688
+ step:684 train loss:5.264623
689
+ step:685 train loss:5.252784
690
+ step:686 train loss:5.380219
691
+ step:687 train loss:5.297470
692
+ step:688 train loss:5.233657
693
+ step:689 train loss:5.325809
694
+ step:690 train loss:5.268259
695
+ step:691 train loss:5.268190
696
+ step:692 train loss:5.331085
697
+ step:693 train loss:5.258225
698
+ step:694 train loss:5.249166
699
+ step:695 train loss:5.260811
700
+ step:696 train loss:5.283231
701
+ step:697 train loss:5.276121
702
+ step:698 train loss:5.268666
703
+ step:699 train loss:5.309930
704
+ step:700 train loss:5.288491
705
+ step:701 train loss:5.256346
706
+ step:702 train loss:5.307794
707
+ step:703 train loss:5.206234
708
+ step:704 train loss:5.251676
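Each of these logs follows the same two patterns: "step:N train loss:X" on every step and "step:N validation loss:X" at the evaluation interval. The avg_loss_vs_steps.png and avg_loss_log_vs_steps.png files in the same folders presumably summarize them; the plotting script is not part of this commit, so the following is only a minimal sketch of how such a summary could be produced (the output filename and the matplotlib styling are placeholders, not the actual script):

import glob
import re
import matplotlib.pyplot as plt

LOSS_RE = re.compile(r"step:(\d+) train loss:([\d.]+)")

def parse_log(path):
    # collect (step, train loss) pairs from lines like "step:12 train loss:10.825106"
    steps, losses = [], []
    with open(path) as f:
        for line in f:
            m = LOSS_RE.search(line)
            if m:
                steps.append(int(m.group(1)))
                losses.append(float(m.group(2)))
    return steps, losses

for path in sorted(glob.glob("logs_qkvo_pure/muon_lr_search/*.log")):
    steps, losses = parse_log(path)
    plt.plot(steps, losses, label=path.rsplit("/", 1)[-1], linewidth=0.8)
plt.xlabel("steps")
plt.ylabel("train loss")
plt.legend(fontsize=6)
plt.savefig("avg_loss_vs_steps_sketch.png", dpi=150)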
logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.001_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.002_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.005_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.01_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search/mode_muon_adam_lr_0.002_muon_lr_0.02_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search_new/avg_loss_log_vs_steps.png ADDED

Git LFS Details

  • SHA256: a3d3d1ab61bcf26fcd0b7894d1c465e9a71a2026048670521be5b58a136f379f
  • Pointer size: 131 Bytes
  • Size of remote file: 119 kB
logs_qkvo_pure/muon_lr_search_new/avg_loss_vs_steps.png ADDED

Git LFS Details

  • SHA256: 8c9b7d806a023e3bad563aea83abf9499f4eca0c6f4d873c6001181c0783567a
  • Pointer size: 131 Bytes
  • Size of remote file: 112 kB
logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.0005_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.001_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.002_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.005_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.01_seed_42.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_qkvo_pure/muon_lr_search_new/mode_muon_adam_lr_0.002_muon_lr_0.02_seed_42.log ADDED
@@ -0,0 +1,2373 @@
+ """
+ Reference code for GPT-2 training and inference.
+ Will save the model weights into files, to be read from C as initialization.
+
+ References:
+ 1) the official GPT-2 TensorFlow implementation released by OpenAI:
+ https://github.com/openai/gpt-2/blob/master/src/model.py
+ 2) huggingface/transformers PyTorch implementation:
+ https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py
+
+ Example launches to only benchmark the speed of bfloat16 compiled GPU training:
+ 1 GPU:
+ python train_gpt2.py --write_tensors=0 --num_iterations=50 --sequence_length=1024 --compile=1 --tensorcores=1 --dtype=bfloat16
+ you can also turn on flash-attention by appending --flash=1
+ 4 GPU:
+ torchrun --standalone --nproc_per_node=4 train_gpt2.py --write_tensors=0 --num_iterations=50 --sequence_length=1024 --compile=1 --tensorcores=1 --dtype=bfloat16
+ """
+ import sys
+ with open(sys.argv[0]) as f:
+     code = f.read() # read the code of this file ASAP, for logging
+
+ import os
+ import math
+ import glob
+ import struct
+ import inspect
+ from contextlib import nullcontext
+ from dataclasses import dataclass
+ import random
+
+ import numpy as np
+ import torch
+ from torch import Tensor
+ import torch.nn as nn
+ from torch.nn import functional as F
+ import torch._inductor.config as config
+ from torch.nn.parallel import DistributedDataParallel as DDP
+ from torch.distributed import init_process_group, destroy_process_group
+ from torch.distributed.optim import ZeroRedundancyOptimizer
+ import torch.distributed as dist
+
+ # Import Muon optimizer
+ import sys
+ sys.path.append("/home/aiops/zhangfz/MUON_theory_copy/MUON_theory/modded-nanogpt/optimizers")
+ from MUON_fix import Muon
+
+ # Import GPT model
+ sys.path.append("/home/aiops/zhangfz/MUON_theory_copy/MUON_theory/modded-nanogpt/models")
+ from nano_GPT_qkvo_pure import GPT, GPTConfig
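MUON_fix itself is not included in this commit, so only its import path is visible here. For orientation, the sketch below shows what a Muon-style update typically looks like (momentum on the gradient, followed by approximate orthogonalization of the 2-D update via a few Newton-Schulz iterations, matching the --muon_momentum and --muon_ns_steps flags defined further down). The coefficients and the interface are assumptions, not this repository's MUON_fix implementation:

import torch

@torch.no_grad()
def newton_schulz(G, steps=5, eps=1e-7):
    # approximate orthogonalization of a 2-D matrix; quintic iteration coefficients are an assumption
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)
    flip = X.size(0) > X.size(1)
    if flip:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if flip else X

@torch.no_grad()
def muon_style_update(p, buf, lr=0.02, momentum=0.95, ns_steps=5):
    # buf is a persistent momentum buffer shaped like p; p.grad is assumed to be 2-D
    buf.mul_(momentum).add_(p.grad)
    p.add_(newton_schulz(buf, steps=ns_steps), alpha=-lr)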
+
+
+ # -----------------------------------------------------------------------------
+ # Our own simple Distributed Data Loader
+
+ def _peek_data_shard(filename):
+     # only reads the header, returns header data
+     with open(filename, "rb") as f:
+         # first read the header, which is 256 int32 integers (4 bytes each)
+         header = np.frombuffer(f.read(256*4), dtype=np.int32)
+     if header[0] != 20240520:
+         print("ERROR: magic number mismatch in the data .bin file!")
+         print("---> HINT: Are you passing in a correct file with --input_bin?")
+         print("---> HINT: Dataset encoding changed recently, re-run data prepro or refer again to README")
+         print("---> HINT: For example re-run: `python dev/data/tinyshakespeare.py`, then re-try")
+         exit(1)
+     assert header[1] == 1, "unsupported version"
+     ntok = header[2] # number of tokens (claimed)
+     return ntok # for now just return the number of tokens
+
+ def _load_data_shard(filename):
+     with open(filename, "rb") as f:
+         # first read the header, which is 256 int32 integers (4 bytes each)
+         header = np.frombuffer(f.read(256*4), dtype=np.int32)
+         assert header[0] == 20240520, "magic number mismatch in the data .bin file"
+         assert header[1] == 1, "unsupported version"
+         ntok = header[2] # number of tokens (claimed)
+         # the rest of it are tokens, stored as uint16
+         tokens = np.frombuffer(f.read(), dtype=np.uint16)
+     assert len(tokens) == ntok, "number of tokens read does not match header?"
+     return tokens
+
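The two readers above pin down the shard layout: a 256-entry int32 header whose first three fields are the magic number 20240520, the version 1, and the token count, followed by the tokens themselves as uint16. A minimal writer that produces a file these readers accept (a sketch for generating small test shards, not part of this script):

import numpy as np

def write_data_shard(filename, tokens):
    # header: 256 int32s; [0] magic, [1] version, [2] token count, remainder left at zero
    header = np.zeros(256, dtype=np.int32)
    header[0] = 20240520
    header[1] = 1
    header[2] = len(tokens)
    with open(filename, "wb") as f:
        f.write(header.tobytes())
        f.write(np.asarray(tokens, dtype=np.uint16).tobytes())

# e.g. a tiny synthetic shard: write_data_shard("tiny.bin", np.arange(1000) % 50257)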
+ class DistributedDataLoader:
+     def __init__(self, filename_pattern, B, T, process_rank, num_processes):
+         self.process_rank = process_rank
+         self.num_processes = num_processes
+         self.B = B
+         self.T = T
+
+         # glob files that match the pattern
+         self.files = sorted(glob.glob(filename_pattern))
+         assert len(self.files) > 0, f"did not find any files that match the pattern {filename_pattern}"
+
+         # load and validate all data shards, count number of tokens in total
+         ntok_total = 0
+         for fname in self.files:
+             shard_ntok = _peek_data_shard(fname)
+             assert shard_ntok >= num_processes * B * T + 1
+             ntok_total += shard_ntok
+         self.ntok_total = ntok_total
+         print0(f"DataLoader: total number of tokens: {ntok_total:,} across {len(self.files)} files")
+
+         # kick things off
+         self.current_shard = None
+         self.reset()
+
+     def reset(self):
+         # we're being a bit clever here: if we already had shard 0 loaded,
+         # then don't do the work to reload it, just reset the pointer
+         if self.current_shard != 0:
+             self.current_shard = 0
+             self.tokens = _load_data_shard(self.files[self.current_shard])
+         self.current_position = self.process_rank * self.B * self.T
+
+     def advance(self): # advance to next data shard
+         self.current_shard = (self.current_shard + 1) % len(self.files)
+         self.current_position = self.process_rank * self.B * self.T
+         self.tokens = _load_data_shard(self.files[self.current_shard])
+
+     def next_batch(self):
+         B = self.B
+         T = self.T
+         buf = self.tokens[self.current_position : self.current_position+B*T+1]
+         buf = torch.tensor(buf.astype(np.int32), dtype=torch.long)
+         x = (buf[:-1]).view(B, T) # inputs
+         y = (buf[1:]).view(B, T) # targets
+         # advance the start pointer in current shard
+         self.current_position += B * T * self.num_processes
+         # if loading the next batch would be out of bounds advance the shard
+         if self.current_position + (B * T * self.num_processes + 1) > len(self.tokens):
+             self.advance()
+         return x, y
+
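To make the striding concrete with hypothetical numbers: with B=4, T=64 and num_processes=2, rank 0 starts reading at position 0 and rank 1 at 1*4*64 = 256; after every batch both ranks advance by B*T*num_processes = 512 tokens, so the two ranks walk disjoint, interleaved 256-token windows, and the loader moves to the next shard as soon as the following window would run past the end of the current one.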
+ # -----------------------------------------------------------------------------
+ # Python -> C bridge utilities for saving params/grads/activations to .bin files
+
+ def write_fp32(tensor, file):
+     t = tensor.detach().cpu().to(torch.float32)
+     b = t.numpy().tobytes()
+     file.write(b)
+
+ def write_bf16(tensor, file):
+     t = tensor.detach().cpu().to(torch.bfloat16)
+     # numpy doesn't have bf16 datatype so we have to trick it
+     t = t.view(torch.int16) # trick: reinterpret as int16
+     b = t.numpy().tobytes()
+     file.write(b)
+
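write_bf16 serializes bfloat16 data by reinterpreting it as int16, since numpy has no native bfloat16 dtype. Reading it back uses the same reinterpretation in reverse; the C side is the intended consumer, but a small Python round-trip check could look like this (the function name and usage are illustrative only):

import numpy as np
import torch

def read_bf16(file, numel):
    # read numel bfloat16 values written via the int16 reinterpret trick
    raw = np.frombuffer(file.read(numel * 2), dtype=np.int16).copy()
    return torch.from_numpy(raw).view(torch.bfloat16).to(torch.float32)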
+ def write_tensors(model_tensors, L, file, dtype):
+     # writes the GPT-2 model's weights to a binary file
+     assert dtype in {"float32", "bfloat16"}
+     write_fun = write_fp32 if dtype == "float32" else write_bf16
+     write_fun(model_tensors["transformer.wte.weight"], file) # (V, C)
+     write_fun(model_tensors["transformer.wpe.weight"], file) # (T, C)
+     for i in range(L): # (L, C)
+         write_fun(model_tensors[f"transformer.h.{i}.ln_1.weight"], file)
+     for i in range(L): # (L, C)
+         write_fun(model_tensors[f"transformer.h.{i}.ln_1.bias"], file)
+     for i in range(L): # (L, 3C, C)
+         write_fun(model_tensors[f"transformer.h.{i}.attn.c_attn.weight"], file)
+     for i in range(L): # (L, 3C)
+         write_fun(model_tensors[f"transformer.h.{i}.attn.c_attn.bias"], file)
+     for i in range(L): # (L, C, C)
+         write_fun(model_tensors[f"transformer.h.{i}.attn.c_proj.weight"], file)
+     for i in range(L): # (L, C)
+         write_fun(model_tensors[f"transformer.h.{i}.attn.c_proj.bias"], file)
+     for i in range(L): # (L, C)
+         write_fun(model_tensors[f"transformer.h.{i}.ln_2.weight"], file)
+     for i in range(L): # (L, C)
+         write_fun(model_tensors[f"transformer.h.{i}.ln_2.bias"], file)
+     for i in range(L): # (L, 4C, C)
+         write_fun(model_tensors[f"transformer.h.{i}.mlp.c_fc.weight"], file)
+     for i in range(L): # (L, 4C)
+         write_fun(model_tensors[f"transformer.h.{i}.mlp.c_fc.bias"], file)
+     for i in range(L): # (L, C, 4C)
+         write_fun(model_tensors[f"transformer.h.{i}.mlp.c_proj.weight"], file)
+     for i in range(L): # (L, C)
+         write_fun(model_tensors[f"transformer.h.{i}.mlp.c_proj.bias"], file)
+     write_fun(model_tensors["transformer.ln_f.weight"], file) # (C, )
+     write_fun(model_tensors["transformer.ln_f.bias"], file) # (C, )
+
+ @torch.no_grad()
+ def pad_vocab(tensor, multiple=128, value=0):
+     """
+     The dimension of the vocab size in GPT-2 is 50,257
+     which is unfortunately a very unfriendly number for a lot of
+     matrix operations on the GPU. So we pad it to the nearest
+     friendlier multiple, e.g. 50,304 if multiple=128 when we
+     export the weights into C land. This is a NOOP algorithmically
+     and is only done to make the tensor operations more efficient.
+     """
+     assert tensor.ndim == 2
+     V, C = tensor.shape
+     assert V == 50257, "just being defensive here"
+     # calculate padded vocab size by rounding up to nearest multiple
+     Vp = ((V + multiple - 1) // multiple) * multiple
+     # pad the tensor
+     pad_rows = Vp - V
+     padded = tensor if pad_rows == 0 else F.pad(tensor, (0, 0, 0, pad_rows), value=value)
+     assert padded.shape == (Vp, C)
+     return padded
+
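For the GPT-2 vocabulary the rounding in pad_vocab works out as follows (just the formula above evaluated once):

V, multiple = 50257, 128
Vp = ((V + multiple - 1) // multiple) * multiple  # 50304
print(f"{V} -> {Vp}, i.e. {Vp - V} zero rows appended")  # 50257 -> 50304, 47 rows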
+ def write_model(model, filename, dtype):
+     # everything we need to instantiate the model
+     # 1) header is: version int, GPTConfig ints, padding to 1024 bytes
+     assert dtype in {"float32", "bfloat16"} # float16 todo maybe later
+     version = {
+         "float32": 3, # 3: all tensors are fp32, padded vocab
+         "bfloat16": 5, # 5: all tensors are bf16, padded vocab
+     }[dtype]
+     header = torch.zeros(256, dtype=torch.int32)
+     header[0] = 20240326 # magic
+     header[1] = version # checkpoint version
+     header[2] = model.config.block_size
+     header[3] = model.config.vocab_size
+     header[4] = model.config.n_layer
+     header[5] = model.config.n_head
+     header[6] = model.config.n_embd
+     # 2) the parameters follow the header
+     params = {name: param.cpu() for name, param in model.named_parameters()}
+     # pad the vocab to a multiple of 128 here at export, for efficiency in C
+     wte = params["transformer.wte.weight"] # (V, C)
+     wte_padded = pad_vocab(wte) # (Vp, C)
+     params["transformer.wte.weight"] = wte_padded # (Vp, C)
+     print(f"padded vocab size from {wte.size(0)} to {wte_padded.size(0)}")
+     header[7] = wte_padded.size(0) # padded vocab size store in header
+     # now write to file
+     with open(filename, "wb") as file:
+         file.write(header.numpy().tobytes()) # header
+         write_tensors(params, model.config.n_layer, file, dtype) # params
+     print(f"wrote {filename}")
+
+ def write_state(model, x, y, logits, loss, filename):
+     # the state is used for debugging.
+     # it contains information about the input, logits, loss, and the parameter gradients
+     # this can be used for checking the computation correctness in C
+     header = torch.zeros(256, dtype=torch.int32)
+     header[0] = 20240327 # magic
+     header[1] = 2 # run state version = 2 (1 -> 2 for padded vocab changes)
+     header[2] = x.size(0) # batch size of the batch, B
+     header[3] = x.size(1) # temporal extent of the batch, T
+     grads = {name: param.grad.cpu() for name, param in model.named_parameters()}
+     # pad the vocab grads here as well, to mirror write_model
+     wte_grad = grads["transformer.wte.weight"] # (V, C)
+     wte_grad_padded = pad_vocab(wte_grad, value=0) # (Vp, C) # TODO later maybe pad with nan?
+     grads["transformer.wte.weight"] = wte_grad_padded # (Vp, C)
+     print(f"padded vocab size in reference grads from {wte_grad.size(0)} to {wte_grad_padded.size(0)}")
+     with open(filename, "wb") as file:
+         # header
+         file.write(header.numpy().tobytes())
+         # input x
+         file.write(x.cpu().numpy().astype("int32").tobytes()) # (B, T)
+         # targets y
+         file.write(y.cpu().numpy().astype("int32").tobytes()) # (B, T)
+         # logits (result of the model forward pass)
+         write_fp32(logits.cpu(), file)
+         # loss (single float, result of the cross entropy loss)
+         write_fp32(loss.cpu(), file)
+         # gradients
+         write_tensors(grads, model.config.n_layer, file, "float32")
+     print(f"wrote {filename}")
+
+ def write_tokenizer(enc, filename):
+     n = enc.max_token_value + 1
+     header = torch.zeros(256, dtype=torch.int32)
+     header[0] = 20240328 # magic
+     header[1] = 2 # tokenizer version = 2 (1 -> 2: includes EOT token)
+     header[2] = n # number of tokens
+     header[3] = enc.eot_token # EOT token
+     with open(filename, "wb") as file:
+         file.write(header.numpy().tobytes())
+         for i in range(n):
+             b = enc.decode_bytes([i])
+             length = len(b)
+             assert length < 256, f"Token length exceeds 255: {length}"
+             file.write(struct.pack("<B", length)) # Write the length as a 1-byte unsigned integer
+             file.write(b) # Write the actual bytes
+     print(f"wrote {filename}")
+
+ def set_seed(seed):
+     random.seed(seed)
+     np.random.seed(seed)
+     torch.manual_seed(seed)
+     if torch.cuda.is_available():
+         torch.cuda.manual_seed_all(seed)
+     print(f"PRINT: Set seed to {seed}", flush=True) # Print immediately for all ranks
+
+ # -----------------------------------------------------------------------------
+ # int main
+
+ def print0(*args, **kwargs):
+     # modified print that only prints from the master process
+     # if this is not a distributed run, it's just a print
+     if int(os.environ.get("RANK", 0)) == 0:
+         print(*args, **kwargs)
+
+ if __name__ == "__main__":
+     import time
+     import argparse
+     import tiktoken
+     print0(f"Running pytorch {torch.version.__version__}")
+
+     # default settings will overfit a tiny batch of data
+     # and save model weights and debug state to disk on the first iteration
+     parser = argparse.ArgumentParser()
+     # file system input / output
+     parser.add_argument("--input_bin", type=str, default="dev/data/tinyshakespeare/tiny_shakespeare_val.bin", help="input .bin to train on")
+     parser.add_argument("--input_val_bin", type=str, default="", help="input .bin to eval validation loss on")
+     parser.add_argument("--output_dir", type=str, default="", help="output directory to which to write logs and checkpoints")
+     parser.add_argument("--model", type=str, default="gpt2", help="gpt2|gpt2-medium|gpt2-large|gpt2-xl|d12|d24|d36|d48")
+     # token layout for each step of the optimization
+     parser.add_argument("--batch_size", type=int, default=4, help="batch size, in units of #batch dimensions")
+     parser.add_argument("--sequence_length", type=int, default=64, help="sequence length")
+     parser.add_argument("--total_batch_size", type=int, default=256, help="total desired batch size, in units of #tokens")
+     # workload (number of steps)
+     parser.add_argument("--num_iterations", type=int, default=10, help="number of iterations to run")
+     parser.add_argument("--inference_only", type=int, default=0, help="only run inference")
+     # optimization
+     parser.add_argument("--adam_lr", type=float, default=1e-4, help="learning rate for the Adam optimizer")
+     parser.add_argument("--warmup_iters", type=int, default=0, help="learning rate warmup iterations")
+     parser.add_argument("--lr_decay_frac", type=float, default=1.0, help="final learning rate as a fraction of the initial learning rate")
+     parser.add_argument("--weight_decay", type=float, default=0.0, help="weight decay")
+     parser.add_argument("--grad_clip", type=float, default=1.0, help="maximum gradient magnitude")
+     # evaluation
+     parser.add_argument("--val_loss_every", type=int, default=0, help="every how many steps to evaluate val loss?")
+     parser.add_argument("--val_max_steps", type=int, default=20, help="how many batches of val to average?")
+     parser.add_argument("--sample_every", type=int, default=0, help="how often to sample from the model?")
+     # debugging
+     parser.add_argument("--overfit_single_batch", type=int, default=1, help="overfit just one batch of data")
+     # numerics
+     parser.add_argument("--tensorcores", type=int, default=0, help="use tensorcores")
+     # memory management
+     parser.add_argument("--device", type=str, default="", help="by default we autodetect, or set it here")
+     parser.add_argument("--compile", type=int, default=0, help="torch.compile the model")
+     parser.add_argument("--flash", type=int, default=0, help="use flash attention")
+     parser.add_argument("--dtype", type=str, default="float32", help="float32|float16|bfloat16")
+     parser.add_argument("--zero_stage", type=int, default=0, help="zero redundancy optimizer stage (0/1/2/3)")
+     # Muon optimizer specific arguments
+     parser.add_argument("--optimizer", type=str, default="adam", help="optimizer to use: adam|muon")
+     parser.add_argument("--muon_lr", type=float, default=0.02, help="learning rate for Muon optimizer")
+     parser.add_argument("--muon_momentum", type=float, default=0.95, help="momentum for Muon optimizer")
+     parser.add_argument("--muon_weight_decay", type=float, default=0.00, help="weight decay for Muon optimizer")
+     parser.add_argument("--muon_ns_steps", type=int, default=5, help="number of Newton-Schulz steps for Muon")
+     parser.add_argument("--muon_nesterov", type=bool, default=False, help="use Nesterov momentum for Muon (0/1)")
+     # python -> C bridge
+     parser.add_argument("--write_tensors", type=int, default=1, help="write tensors to disk")
+     parser.add_argument("--seed", type=int, default=42, help="random seed")
+     args = parser.parse_args()
+
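The log filenames in this commit (for example mode_muon_adam_lr_0.002_muon_lr_0.02_seed_42.log) read like a sweep over --muon_lr at a fixed --adam_lr of 0.002. The exact commands are not recorded in this diff; a single run consistent with those names, using only flags defined above, might look like:

python train_gpt2.py --optimizer muon --adam_lr 0.002 --muon_lr 0.02 --muon_momentum 0.95 --seed 42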
349
+ # args error checking and convenience variables
350
+ B, T = args.batch_size, args.sequence_length
351
+ assert 1 <= T <= 1024
352
+ assert args.dtype in {"float32", "float16", "bfloat16"}
353
+ assert args.model in {"gpt2", "gpt2-medium", "gpt2-large", "gpt2-xl", "d12", "d24", "d36", "d48"}
354
+ assert args.optimizer in {"adam", "muon"}
355
+
356
+ set_seed(args.seed)
357
+
358
+ # set up DDP (distributed data parallel). torchrun sets this env variable
359
+ ddp = int(os.environ.get('RANK', -1)) != -1 # is this a ddp run?
360
+ if ddp:
361
+ # use of DDP atm demands CUDA, we set the device appropriately according to rank
362
+ assert torch.cuda.is_available(), "for now i think we need CUDA for DDP"
363
+ init_process_group(backend='nccl')
364
+ ddp_rank = int(os.environ['RANK'])
365
+ ddp_local_rank = int(os.environ['LOCAL_RANK'])
366
+ ddp_world_size = int(os.environ['WORLD_SIZE'])
367
+ device = f'cuda:{ddp_local_rank}'
368
+ torch.cuda.set_device(device)
369
+ master_process = ddp_rank == 0 # this process will do logging, checkpointing etc.
370
+ seed_offset = 0 # each process gets the exact same seed
371
+ zero_stage = args.zero_stage
372
+ else:
373
+ ddp_rank = 0
374
+ ddp_local_rank = 0
375
+ zero_stage = 0
376
+ ddp_world_size = 1
377
+ master_process = True
378
+ seed_offset = 0
379
+ # select the device
380
+ if args.device:
381
+ # provided explicitly by the user
382
+ device = args.device
383
+ else:
384
+ # attempt to autodetect the device
385
+ device = "cpu"
386
+ if torch.cuda.is_available():
387
+ device = "cuda"
388
+ elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
389
+ device = "mps"
390
+ print(f"using device: {device}")
391
+ device_type = 'cuda' if 'cuda' in device else 'cpu'
392
+
393
+ # calculate gradient accumulation from the desired total batch size and the current run configuration
394
+ tokens_per_fwdbwd = B * T * ddp_world_size
395
+ assert args.total_batch_size % tokens_per_fwdbwd == 0
396
+ grad_accum_steps = args.total_batch_size // tokens_per_fwdbwd
397
+ print0(f"total desired batch size: {args.total_batch_size}")
398
+ print0(f"=> calculated gradient accumulation steps: {grad_accum_steps}")
399
+
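With the defaults above (batch_size 4, sequence_length 64, a single process, total_batch_size 256 tokens), the arithmetic works out to a single micro-step per optimizer step. A minimal standalone sketch of the same calculation; the numbers are illustrative and the actual runs may override them on the command line:

    # gradient-accumulation arithmetic, using the script defaults as an example
    B, T, ddp_world_size, total_batch_size = 4, 64, 1, 256
    tokens_per_fwdbwd = B * T * ddp_world_size        # 256 tokens per forward/backward
    assert total_batch_size % tokens_per_fwdbwd == 0  # total batch must be a whole number of micro-batches
    grad_accum_steps = total_batch_size // tokens_per_fwdbwd
    print(grad_accum_steps)                           # 1 -> no accumulation needed at these settings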
400
+ # set up a context manager following the desired dtype and device
401
+ ptdtype = {'float32': torch.float32, 'bfloat16': torch.bfloat16, 'float16': torch.float16}[args.dtype]
402
+ ctx = torch.amp.autocast(device_type=device_type, dtype=ptdtype) if device_type == "cuda" else nullcontext()
403
+
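The autocast-or-nullcontext pattern above keeps the code path identical whether or not a GPU is present. A minimal self-contained sketch of the same idea; the tensors and shapes here are placeholders, not from the training script:

    import torch
    from contextlib import nullcontext

    device_type = 'cuda' if torch.cuda.is_available() else 'cpu'
    ptdtype = torch.bfloat16
    # autocast only on CUDA; on CPU the nullcontext makes the with-block a no-op
    ctx = torch.amp.autocast(device_type=device_type, dtype=ptdtype) if device_type == 'cuda' else nullcontext()
    x = torch.randn(8, 16, device=device_type)
    w = torch.randn(16, 16, device=device_type)
    with ctx:
        y = x @ w
    print(y.dtype)  # torch.bfloat16 under CUDA autocast, torch.float32 otherwise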
404
+ # rng / reproducibility
405
+ torch.manual_seed(42)
406
+ if torch.cuda.is_available():
407
+ torch.cuda.manual_seed(42)
408
+
409
+ # set the torch precision mode to use TensorFloat32 (TF32) for matmuls
410
+ # docs https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html
411
+ if args.tensorcores:
412
+ torch.set_float32_matmul_precision('high')
413
+
414
+ # turn on/off flash attention
415
+ assert args.flash in {0, 1}
416
+ FLASH = args.flash
417
+
418
+ # init (and write) the tokenizer
419
+ enc = tiktoken.get_encoding("gpt2")
420
+ if master_process and args.write_tensors: # tokenizer is technically not tensors but ok
421
+ write_tokenizer(enc, "gpt2_tokenizer.bin")
422
+
423
+ # init the model, either from scratch or from OpenAI pretrained checkpoint
424
+ if args.model[0] == "d":
425
+ # from scratch (random weights)
426
+ model_config = {
427
+ "d12": GPTConfig(block_size=1024, vocab_size=50257, n_layer=12, n_head=12, n_embd=768),
428
+ "d24": GPTConfig(block_size=1024, vocab_size=50257, n_layer=24, n_head=16, n_embd=1024),
429
+ "d36": GPTConfig(block_size=1024, vocab_size=50257, n_layer=36, n_head=20, n_embd=1280),
430
+ "d48": GPTConfig(block_size=1024, vocab_size=50257, n_layer=48, n_head=25, n_embd=1600),
431
+ }[args.model]
432
+ model = GPT(model_config)
433
+ else:
434
+ # load the GPT-2 model weights
435
+ model = GPT.from_pretrained(args.model)
436
+ model.train()
437
+ model.to(device)
438
+ if args.compile:
439
+ if hasattr(config, "coordinate_descent_tuning"):
440
+ config.coordinate_descent_tuning = True # suggested by @Chillee
441
+ print0("compiling the model...")
442
+ model = torch.compile(model)
443
+
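For orientation, the d12 config above is the GPT-2 small shape (12 layers, 12 heads, 768-dim embeddings, 50257-token vocabulary). A back-of-the-envelope parameter count, ignoring biases and layernorms and assuming the usual 4x MLP expansion, lands at roughly 124M; exact totals depend on the GPT implementation:

    # rough parameter count for the "d12" config
    V, L, E, T_ctx = 50257, 12, 768, 1024
    embed = V * E + T_ctx * E                 # token + position embeddings
    per_block = 4 * E * E + 2 * 4 * E * E     # attention (qkv + output proj) + 4x MLP (up + down)
    total = embed + L * per_block
    print(f"~{total / 1e6:.0f}M parameters")  # ~124M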
444
+ # -------------------------------------------------------------------------
445
+ # Our own version of a simple DistributedDataLoader
446
+
447
+ # load tokens
448
+ train_loader = DistributedDataLoader(args.input_bin, B, T, ddp_rank, ddp_world_size)
449
+ val_loader = None
450
+ if args.input_val_bin:
451
+ val_loader = DistributedDataLoader(args.input_val_bin, B, T, ddp_rank, ddp_world_size)
452
+
453
+ # -------------------------------------------------------------------------
454
+ # PyTorch -> C bridge: save some weights and state for C to load later as reference
455
+
456
+ # do one forward pass to generate ground truth for our C tests
457
+ if master_process and args.write_tensors and (not args.inference_only):
458
+ x, y = train_loader.next_batch()
459
+ x, y = x.to(device), y.to(device)
460
+ logits, loss = model(x, y)
461
+ loss.backward()
462
+ # save model params, in both float32 and bfloat16
463
+ model_to_size = {"gpt2": "124M", "gpt2-medium": "355M", "gpt2-large": "774M", "gpt2-xl": "1558M"}
464
+ model_to_size.update({f"d{d}": f"d{d}" for d in [12, 24, 36, 48]})
465
+ model_size_str = model_to_size[args.model] # e.g. "124M", or "d12"
466
+ write_model(model, f"gpt2_{model_size_str}.bin", dtype="float32")
467
+ write_model(model, f"gpt2_{model_size_str}_bf16.bin", dtype="bfloat16")
468
+ # save x, y, logits, loss, and parameter gradients, for debugging C
469
+ # always store these in fp32 to have an accurate reference (?)
470
+ write_state(model, x, y, logits, loss, f"gpt2_{model_size_str}_debug_state.bin")
471
+ # reset the train_loader for the optimization below
472
+ train_loader.reset()
473
+
474
+ # -------------------------------------------------------------------------
475
+ # main training loop
476
+
477
+ # here we wrap model into DDP container
478
+ if ddp:
479
+ model = DDP(model, device_ids=[ddp_local_rank])
480
+ raw_model = model.module if ddp else model # always contains the "raw" unwrapped model
481
+
482
+
483
+ def configure_adam(model, weight_decay, learning_rate, betas, device_type, zero_stage):
484
+ # start with all of the candidate parameters
485
+ param_dict = {pn: p for pn, p in model.named_parameters()}
486
+ # filter out those that do not require grad
487
+ param_dict = {pn: p for pn, p in param_dict.items() if p.requires_grad}
488
+ # create optim groups. Any parameter that is 2D will be weight decayed, otherwise not.
489
+ # i.e. all weight tensors in matmuls + embeddings decay, all biases and layernorms don't.
490
+ decay_params = [p for n, p in param_dict.items() if p.dim() >= 2]
491
+ nodecay_params = [p for n, p in param_dict.items() if p.dim() < 2]
492
+ optim_groups = [
493
+ {'params': decay_params, 'weight_decay': weight_decay},
494
+ {'params': nodecay_params, 'weight_decay': 0.0}
495
+ ]
496
+ num_decay_params = sum(p.numel() for p in decay_params)
497
+ num_nodecay_params = sum(p.numel() for p in nodecay_params)
498
+ print0(f"num decayed parameter tensors: {len(decay_params)}, with {num_decay_params:,} parameters")
499
+ print0(f"num non-decayed parameter tensors: {len(nodecay_params)}, with {num_nodecay_params:,} parameters")
500
+ # Create AdamW optimizer and use the fused version if it is available
501
+ fused_available = 'fused' in inspect.signature(torch.optim.AdamW).parameters
502
+ use_fused = fused_available and device_type == 'cuda'
503
+ print0(f"using fused AdamW: {use_fused}")
504
+ if zero_stage == 1:
505
+ print0("using ZeroRedundancyOptimizer")
506
+ optimizer = ZeroRedundancyOptimizer(**optim_groups[0], optimizer_class=torch.optim.AdamW,
507
+ lr=learning_rate, betas=betas, fused=use_fused)
508
+ optimizer.add_param_group(optim_groups[1])
509
+ else:
510
+ print0("using regular AdamW")
511
+ optimizer = torch.optim.AdamW(optim_groups, lr=learning_rate, betas=betas, fused=use_fused)
512
+ return [optimizer]
513
+
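configure_adam above splits parameters purely on dimensionality: tensors with 2 or more dimensions get weight decay, 1-D tensors (biases, layernorm scales) do not. A minimal sketch of the same split on a hypothetical toy module, not the GPT model:

    import torch
    import torch.nn as nn

    toy = nn.Sequential(nn.Linear(8, 8), nn.LayerNorm(8))  # placeholder module
    params = {n: p for n, p in toy.named_parameters() if p.requires_grad}
    decay_params = [p for p in params.values() if p.dim() >= 2]   # weight matrices
    nodecay_params = [p for p in params.values() if p.dim() < 2]  # biases, layernorm weights
    optim_groups = [
        {'params': decay_params, 'weight_decay': 0.1},
        {'params': nodecay_params, 'weight_decay': 0.0},
    ]
    opt = torch.optim.AdamW(optim_groups, lr=1e-4, betas=(0.9, 0.95))
    print(len(decay_params), len(nodecay_params))  # 1 matrix, 3 one-dimensional tensors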
514
+ def configure_muon(model, weight_decay, adam_lr, muon_lr, momentum, nesterov, ns_steps, device_type, zero_stage, ddp_rank, ddp_world_size):
515
+ # start with all of the candidate parameters
516
+ param_dict = {pn: p for pn, p in model.named_parameters()}
517
+ # filter out those that do not require grad
518
+ param_dict = {pn: p for pn, p in param_dict.items() if p.requires_grad}
519
+
520
+ # For Muon, we need to separate 2D parameters (which can be orthogonalized)
521
+ # from other parameters (which should use standard optimization)
522
+ muon_params = [] # 2D parameters for Muon
523
+ other_params = [] # other parameters for AdamW
524
+
525
+ muon_name = []
526
+ other_name = []
527
+ for n, p in param_dict.items():
528
+ if "wte.weight" in n :
529
+ other_params.append(p)
530
+ other_name.append(n)
531
+ continue
532
+
533
+ if p.dim() >= 2: # 2D parameters (weight matrices)
534
+ muon_params.append(p)
535
+ muon_name.append(n)
536
+ else: # 1D parameters (biases, layernorm scales, etc.)
537
+ other_params.append(p)
538
+ other_name.append(n)
539
+
540
+ # print("================================================\n")
541
+ # print(f"Muon parameters: {muon_name}\n")
542
+ # print(f"Other parameters: {other_name}\n")
543
+ # print("================================================\n")
544
+
545
+ print0(f"Muon parameters (2D): {len(muon_params)} tensors")
546
+ print0(f"Other parameters (non-2D): {len(other_params)} tensors")
547
+
548
+ # Create Muon optimizer for 2D parameters
549
+ muon_optimizer = None
550
+ if muon_params:
551
+ muon_optimizer = Muon(
552
+ params=muon_params,
553
+ lr=muon_lr,
554
+ weight_decay=weight_decay,
555
+ momentum=momentum,
556
+ nesterov=nesterov,
557
+ ns_steps=ns_steps,
558
+ rank=ddp_rank,
559
+ world_size=ddp_world_size
560
+ )
561
+
562
+ # Create AdamW optimizer for non-2D parameters
563
+ adam_optimizer = None
564
+ if other_params:
565
+ # create optim groups for AdamW
566
+ # decay_params = [p for p in other_params if p.dim() >= 2]
567
+ # nodecay_params = [p for p in other_params if p.dim() < 2]
568
+ optim_groups = [
569
+ {'params': other_params, 'weight_decay': weight_decay},
570
+ # {'params': nodecay_params, 'weight_decay': 0.0}
571
+ ]
572
+
573
+ # Create AdamW optimizer
574
+ fused_available = 'fused' in inspect.signature(torch.optim.AdamW).parameters
575
+ use_fused = fused_available and device_type == 'cuda'
576
+ print0(f"using fused AdamW for non-Muon params: {use_fused}")
577
+
578
+ if zero_stage == 1:
579
+ print0("using ZeroRedundancyOptimizer for non-Muon params")
580
+ adam_optimizer = ZeroRedundancyOptimizer(**optim_groups[0], optimizer_class=torch.optim.AdamW,
581
+ lr=adam_lr, betas=(0.9, 0.95), fused=use_fused)
582
+ # adam_optimizer.add_param_group(optim_groups[1])
583
+ else:
584
+ print0("using regular AdamW for non-Muon params")
585
+ adam_optimizer = torch.optim.AdamW(optim_groups, lr=adam_lr, betas=(0.9, 0.95), fused=use_fused)
586
+
587
+ return [muon_optimizer, adam_optimizer]
588
+
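The routing rule in configure_muon is: the token embedding (any parameter name containing "wte.weight") and all 1-D tensors go to AdamW, and every remaining matrix with 2 or more dimensions goes to Muon. A minimal sketch of that rule on a hypothetical mini-module standing in for the GPT model:

    import torch.nn as nn

    class Toy(nn.Module):  # stand-in module, just to exercise the routing rule
        def __init__(self):
            super().__init__()
            self.wte = nn.Embedding(100, 8)
            self.attn = nn.Linear(8, 8)
            self.ln = nn.LayerNorm(8)

    muon_names, adam_names = [], []
    for n, p in Toy().named_parameters():
        if "wte.weight" in n or p.dim() < 2:
            adam_names.append(n)   # embedding + 1-D params -> AdamW
        else:
            muon_names.append(n)   # remaining weight matrices -> Muon
    print("muon:", muon_names)     # ['attn.weight']
    print("adam:", adam_names)     # ['wte.weight', 'attn.bias', 'ln.weight', 'ln.bias']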
589
+ # init the optimizer
590
+ if args.optimizer == "adam":
591
+ optimizers = configure_adam(model=raw_model, weight_decay=args.weight_decay,
592
+ learning_rate=args.adam_lr, betas=(0.9, 0.95),
593
+ device_type=device, zero_stage=zero_stage)
594
+ elif args.optimizer == "muon":
595
+ optimizers = configure_muon(
596
+ model=raw_model,
597
+ weight_decay=args.muon_weight_decay,
598
+ muon_lr=args.muon_lr,
599
+ adam_lr=args.adam_lr,
600
+ momentum=args.muon_momentum,
601
+ nesterov=bool(args.muon_nesterov),
602
+ ns_steps=args.muon_ns_steps,
603
+ device_type=device,
604
+ zero_stage=zero_stage,
605
+ ddp_rank=ddp_rank,
606
+ ddp_world_size=ddp_world_size
607
+ )
608
+ # We'll use muon_optimizer and adam_optimizer separately
609
+
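The Muon class itself, constructed above with ns_steps Newton-Schulz iterations, is defined earlier in this file and is not reproduced in this excerpt. As a rough illustration of what such an orthogonalization step does, below is the classical cubic Newton-Schulz iteration that drives a matrix toward its orthogonal polar factor; actual Muon implementations typically use a tuned higher-order polynomial and run in low precision, so treat this purely as a sketch:

    import torch

    def ns_orthogonalize(G, steps=5):
        # normalize so all singular values are <= 1, then iterate X <- 0.5 * X (3I - X^T X);
        # each singular value is pushed toward 1, so X approaches U V^T from the SVD of G
        X = G / (G.norm() + 1e-7)
        I = torch.eye(G.shape[1], dtype=G.dtype, device=G.device)
        for _ in range(steps):
            X = 0.5 * X @ (3 * I - X.T @ X)
        return X

    G = torch.randn(16, 16)
    O = ns_orthogonalize(G, steps=20)
    print((O.T @ O - torch.eye(16)).abs().max())  # shrinks toward 0 as steps grows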
610
+ # learning rate decay scheduler (cosine with warmup)
611
+ def get_lr(it,base_lr):
612
+ # if args.optimizer == "adam":
613
+ # base_lr = args.adam_lr
614
+ # else: # muon
615
+ # base_lr = args.muon_lr
616
+ min_lr = base_lr * args.lr_decay_frac
617
+ # 1) linear warmup for warmup_iters steps
618
+ if it < args.warmup_iters:
619
+ return base_lr * (it+1) / args.warmup_iters
620
+ # 2) if it > lr_decay_iters, return min learning rate
621
+ if it > args.num_iterations:
622
+ return min_lr
623
+ # 3) in between, use cosine decay down to min learning rate
624
+ decay_ratio = (it - args.warmup_iters) / (args.num_iterations - args.warmup_iters)
625
+ assert 0 <= decay_ratio <= 1
626
+ coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio)) # coeff starts at 1 and goes to 0
627
+ return min_lr + coeff * (base_lr - min_lr)
628
+
629
+ def get_wsd_lr(it,base_lr):
630
+ min_lr = base_lr * args.lr_decay_frac
631
+ # 1) linear warmup for warmup_iters steps
632
+ if it < args.warmup_iters:
633
+ return base_lr * (it+1) / args.warmup_iters
634
+ else:
635
+ return base_lr
636
+
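Note that with the default lr_decay_frac of 1.0 the cosine schedule in get_lr is flat (min_lr equals base_lr), and the training loop below actually calls get_wsd_lr, which is simply warmup-then-constant. A minimal standalone check of both, assuming base_lr=0.002, warmup_iters=0, lr_decay_frac=1.0 (the script defaults) and a hypothetical 1000-iteration run:

    import math

    warmup_iters, num_iterations, lr_decay_frac = 0, 1000, 1.0  # illustrative values

    def get_lr(it, base_lr):
        min_lr = base_lr * lr_decay_frac
        if it < warmup_iters:
            return base_lr * (it + 1) / warmup_iters
        if it > num_iterations:
            return min_lr
        decay_ratio = (it - warmup_iters) / (num_iterations - warmup_iters)
        coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio))
        return min_lr + coeff * (base_lr - min_lr)

    def get_wsd_lr(it, base_lr):
        return base_lr * (it + 1) / warmup_iters if it < warmup_iters else base_lr

    for it in (0, 250, 500, 1000):
        print(it, get_lr(it, 0.002), get_wsd_lr(it, 0.002))
    # with lr_decay_frac=1.0 both stay constant at 0.002; set lr_decay_frac < 1.0 to see the cosine decay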
637
+ # create the logging directory if it does not exist
638
+ logfile = None
639
+ file_name = f"mode_{args.optimizer}_adam_lr_{args.adam_lr}_muon_lr_{args.muon_lr}_seed_{args.seed}.log"
640
+ if args.output_dir:
641
+ os.makedirs(args.output_dir, exist_ok=True)
642
+ logfile = os.path.join(args.output_dir, file_name)
643
+ # create the log file "main.log" inside it, and wipe it clean
644
+ with open(logfile, "w") as f:
645
+ pass
646
+ if master_process:
647
+ with open(logfile, "a") as f:
648
+ f.write(code)
649
+
650
+ if device == "cuda":
651
+ torch.cuda.reset_peak_memory_stats()
652
+ timings = []
653
+ norm = -1.0 # dummy value to print in inference-only mode
654
+ for step in range(args.num_iterations + 1):
655
+ t0 = time.time()
656
+ last_step = (step == args.num_iterations)
657
+
658
+ # once in a while evaluate the validation dataset
659
+ if (args.val_loss_every > 0 \
660
+ and (step % args.val_loss_every == 0 or last_step)) \
661
+ and (val_loader is not None):
662
+ model.eval()
663
+ val_loader.reset()
664
+ with torch.no_grad():
665
+ val_loss = 0.0
666
+ for _ in range(args.val_max_steps):
667
+ x, y = val_loader.next_batch()
668
+ x, y = x.to(device), y.to(device)
669
+ _, loss = model(x, y, return_logits=False)
670
+ val_loss += loss.item()
671
+ val_loss /= args.val_max_steps
672
+ # log to console and to file
673
+ print0(f"val loss {val_loss}")
674
+ if master_process and logfile is not None:
675
+ with open(logfile, "a") as f:
676
+ f.write("step:%d validation loss:%f\n" % (step, val_loss))
677
+
678
+ # once in a while perform model inference on the master process
679
+ if (args.sample_every > 0 \
680
+ and (step % args.sample_every == 0 or last_step)) \
681
+ and master_process:
682
+ model.eval()
683
+ # before we end, let's also do one round of inference
684
+ # we'll kick off the generation with "<|endoftext|>", which designates the start of a new sequence
685
+ start_ids = [enc.eot_token]
686
+ xg = (torch.tensor(start_ids, dtype=torch.long, device=device)[None, ...])
687
+ max_new_tokens = 32
688
+ temperature = 1.0
689
+ top_k = 40
690
+ yg = raw_model.generate(xg, max_new_tokens, temperature=temperature, top_k=top_k)
691
+ print0('---------------')
692
+ print0(enc.decode(yg[0].tolist()))
693
+ print0('---------------')
694
+
695
+ # bit confusing: we want to make sure to eval and sample on 0th iteration
696
+ # but also after the very last iteration. so we loop for step <= num_iterations
697
+ # instead of just < num_iterations (one extra due to <=), only to do
698
+ # the validation/sampling one last time, and then we break right here as we're done.
699
+ if last_step:
700
+ break
701
+
702
+ # --------------- TRAINING SECTION BEGIN -----------------
703
+ model.train()
704
+ # Zero gradients for the appropriate optimizer(s)
705
+
706
+ for optimizer in optimizers:
707
+ if isinstance(optimizer, ZeroRedundancyOptimizer) or isinstance(optimizer, torch.optim.AdamW):
708
+ optimizer.zero_grad(set_to_none=True)
709
+ elif isinstance(optimizer, Muon):
710
+ optimizer.zero_grad()
711
+ # if args.optimizer == "adam":
712
+ # optimizer.zero_grad(set_to_none=True)
713
+ # else: # muon
714
+ # if muon_optimizer is not None:
715
+ # muon_optimizer.zero_grad()
716
+ # if adam_optimizer is not None:
717
+ # adam_optimizer.zero_grad(set_to_none=True)
718
+ # if we are trying to overfit a single batch, we reset the loader here
719
+ if args.overfit_single_batch:
720
+ train_loader.reset()
721
+ # micro-batch loop where we do gradient accumulation to reach desired total batch size
722
+ lossf = 0.0 # for getting the mean loss (as simple float) over the accumulation steps
723
+ for micro_step in range(grad_accum_steps):
724
+ # fetch a batch
725
+ x, y = train_loader.next_batch()
726
+ x, y = x.to(device), y.to(device)
727
+ if ddp:
728
+ # we want only the last micro-step to sync grads in a DDP model
729
+ # the official way to do this is with model.no_sync(), but that is a
730
+ # context manager that bloats the code, so we just toggle this variable
731
+ model.require_backward_grad_sync = (micro_step == grad_accum_steps - 1)
732
+ # forward pass
733
+ with ctx:
734
+ _, loss = model(x, y, return_logits=False)
735
+ # we have to scale the loss to account for gradient accumulation,
736
+ # because the gradients just add on each successive backward().
737
+ # addition of gradients corresponds to a SUM in the objective, but
738
+ # instead of a SUM we want MEAN, so we scale the loss here
739
+ loss = loss / grad_accum_steps
740
+ lossf += loss.detach() # keep track of the mean loss
741
+ # backward pass
742
+ if not args.inference_only:
743
+ loss.backward()
744
+ if ddp:
745
+ dist.all_reduce(lossf, op=dist.ReduceOp.AVG)
746
+ lossf = lossf.item()
747
+ norm = torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
748
+ # determine and set the learning rate for this iteration
749
+
750
+
751
+ # Update learning rate and step the appropriate optimizer(s)
752
+ # if args.optimizer == "adam":
753
+ # adam_lr = get_wsd_lr(step,args.adam_lr)
754
+ # for param_group in optimizer.param_groups:
755
+ # param_group['lr'] = adam_lr
756
+ # optimizer.step()
757
+ # else: # muon
758
+ # if muon_optimizer is not None:
759
+ # muon_lr = get_wsd_lr(step,args.muon_lr)
760
+ # for param_group in muon_optimizer.param_groups:
761
+ # param_group['lr'] = muon_lr
762
+ # muon_optimizer.step()
763
+ # if adam_optimizer is not None:
764
+ # adam_lr = get_wsd_lr(step,args.adam_lr)
765
+ # for param_group in adam_optimizer.param_groups:
766
+ # param_group['lr'] = adam_lr
767
+ # adam_optimizer.step()
768
+ for optimizer in optimizers:
769
+ if isinstance(optimizer, ZeroRedundancyOptimizer) or isinstance(optimizer, torch.optim.AdamW):
770
+ adam_lr = get_wsd_lr(step,args.adam_lr)
771
+ for param_group in optimizer.param_groups:
772
+ param_group['lr'] = adam_lr
773
+ optimizer.step()
774
+ elif isinstance(optimizer, Muon):
775
+ muon_lr = get_wsd_lr(step,args.muon_lr)
776
+ for param_group in optimizer.param_groups:
777
+ param_group['lr'] = muon_lr
778
+ optimizer.step()
779
+ else:
780
+ raise ValueError(f"Unsupported optimizer: {type(optimizer)}")
781
+ # --------------- TRAINING SECTION END -------------------
782
+ # everything that follows now is just diagnostics, prints, logging, etc.
783
+
784
+ # wait on the CPU for all device work to end so we get accurate per-iteration timings below
785
+ if device == "mps":
786
+ torch.mps.synchronize()
787
+ elif device == "cuda":
788
+ torch.cuda.synchronize()
789
+ # time and print
790
+ t1 = time.time()
791
+ # the 0th iteration is often an outlier (much slower) => skip logging it
792
+ tokens_per_second = grad_accum_steps * ddp_world_size * B * T / (t1-t0)
793
+ print0(f"step {step+1:4d}/{args.num_iterations} | train loss {lossf:.6f} | norm {norm:.4f} | ({(t1-t0)*1000:.2f} ms | {tokens_per_second:.0f} tok/s)")
794
+ # log to logfile
795
+ if master_process and logfile is not None:
796
+ with open(logfile, "a") as f:
797
+ f.write("step:%d train loss:%f\n" % (step, lossf))
798
+
799
+ # keep track of smooth timings, last 20 iterations
800
+ if step > 0 and step > args.num_iterations - 20:
801
+ timings.append(t1-t0)
802
+
803
+ # print the average of the last 20 timings, to get something smooth-ish
804
+ timings = timings[-20:]
805
+ print0(f"final {len(timings)} iters avg: {np.mean(timings)*1000:.3f}ms")
806
+ print0(f"peak memory consumption: {torch.cuda.max_memory_allocated() // 1024 // 1024} MiB")
807
+
808
+ # -------------------------------------------------------------------------
809
+ # clean up nicely
810
+ if ddp:
811
+ destroy_process_group()
812
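Everything from here down is the per-step log written by the loop above ("step:N validation loss:F" and "step:N train loss:F" lines). A minimal sketch for reading such a file back into arrays, e.g. to plot a loss curve; the filename is a placeholder:

    import re
    import numpy as np

    steps, losses = [], []
    with open("train.log") as f:               # placeholder path for a log in this format
        for line in f:
            m = re.match(r"step:(\d+) train loss:([\d.]+)", line)
            if m:
                steps.append(int(m.group(1)))
                losses.append(float(m.group(2)))
    steps, losses = np.array(steps), np.array(losses)
    print(len(steps), losses[:5])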
+ step:0 validation loss:11.021495
813
+ step:0 train loss:11.015424
814
+ step:1 train loss:11.017307
815
+ step:2 train loss:11.002539
816
+ step:3 train loss:10.987658
817
+ step:4 train loss:10.966583
818
+ step:5 train loss:10.940186
819
+ step:6 train loss:10.906839
820
+ step:7 train loss:10.868830
821
+ step:8 train loss:10.830691
822
+ step:9 train loss:10.780419
823
+ step:10 train loss:10.728622
824
+ step:11 train loss:10.679697
825
+ step:12 train loss:10.610229
826
+ step:13 train loss:10.545015
827
+ step:14 train loss:10.478123
828
+ step:15 train loss:10.409031
829
+ step:16 train loss:10.336840
830
+ step:17 train loss:10.266420
831
+ step:18 train loss:10.193359
832
+ step:19 train loss:10.110159
833
+ step:20 train loss:10.029213
834
+ step:21 train loss:9.952513
835
+ step:22 train loss:9.840858
836
+ step:23 train loss:9.776922
837
+ step:24 train loss:9.665634
838
+ step:25 train loss:9.595795
839
+ step:26 train loss:9.498317
840
+ step:27 train loss:9.395107
841
+ step:28 train loss:9.328226
842
+ step:29 train loss:9.237043
843
+ step:30 train loss:9.148466
844
+ step:31 train loss:9.030872
845
+ step:32 train loss:8.935430
846
+ step:33 train loss:8.850748
847
+ step:34 train loss:8.789619
848
+ step:35 train loss:8.669458
849
+ step:36 train loss:8.587969
850
+ step:37 train loss:8.478680
851
+ step:38 train loss:8.428891
852
+ step:39 train loss:8.318810
853
+ step:40 train loss:8.245317
854
+ step:41 train loss:8.136267
855
+ step:42 train loss:8.097561
856
+ step:43 train loss:7.968538
857
+ step:44 train loss:7.898306
858
+ step:45 train loss:7.837488
859
+ step:46 train loss:7.776301
860
+ step:47 train loss:7.719494
861
+ step:48 train loss:7.618882
862
+ step:49 train loss:7.559425
863
+ step:50 train loss:7.459339
864
+ step:51 train loss:7.429264
865
+ step:52 train loss:7.396437
866
+ step:53 train loss:7.347573
867
+ step:54 train loss:7.306512
868
+ step:55 train loss:7.238846
869
+ step:56 train loss:7.180341
870
+ step:57 train loss:7.193172
871
+ step:58 train loss:7.104615
872
+ step:59 train loss:7.112510
873
+ step:60 train loss:7.082651
874
+ step:61 train loss:7.039591
875
+ step:62 train loss:7.012208
876
+ step:63 train loss:7.050370
877
+ step:64 train loss:6.934842
878
+ step:65 train loss:6.955455
879
+ step:66 train loss:6.945438
880
+ step:67 train loss:6.957443
881
+ step:68 train loss:6.899389
882
+ step:69 train loss:6.870329
883
+ step:70 train loss:6.830938
884
+ step:71 train loss:6.801010
885
+ step:72 train loss:6.820164
886
+ step:73 train loss:6.762726
887
+ step:74 train loss:6.779589
888
+ step:75 train loss:6.713727
889
+ step:76 train loss:6.798114
890
+ step:77 train loss:6.727730
891
+ step:78 train loss:6.478737
892
+ step:79 train loss:6.641900
893
+ step:80 train loss:6.612185
894
+ step:81 train loss:6.701096
895
+ step:82 train loss:6.647059
896
+ step:83 train loss:6.602648
897
+ step:84 train loss:6.559029
898
+ step:85 train loss:6.535501
899
+ step:86 train loss:6.526028
900
+ step:87 train loss:6.499008
901
+ step:88 train loss:6.496197
902
+ step:89 train loss:6.448274
903
+ step:90 train loss:6.492400
904
+ step:91 train loss:6.491443
905
+ step:92 train loss:6.499742
906
+ step:93 train loss:6.450020
907
+ step:94 train loss:6.406896
908
+ step:95 train loss:6.348999
909
+ step:96 train loss:6.452277
910
+ step:97 train loss:6.394535
911
+ step:98 train loss:6.378076
912
+ step:99 train loss:6.346870
913
+ step:100 train loss:6.357969
914
+ step:101 train loss:6.291591
915
+ step:102 train loss:6.310251
916
+ step:103 train loss:6.300294
917
+ step:104 train loss:6.319246
918
+ step:105 train loss:6.374966
919
+ step:106 train loss:6.326587
920
+ step:107 train loss:6.271721
921
+ step:108 train loss:6.299007
922
+ step:109 train loss:6.331908
923
+ step:110 train loss:6.257711
924
+ step:111 train loss:6.278391
925
+ step:112 train loss:6.272741
926
+ step:113 train loss:6.226389
927
+ step:114 train loss:6.275346
928
+ step:115 train loss:6.246730
929
+ step:116 train loss:6.220730
930
+ step:117 train loss:6.168935
931
+ step:118 train loss:6.216222
932
+ step:119 train loss:6.171053
933
+ step:120 train loss:6.192867
934
+ step:121 train loss:6.109637
935
+ step:122 train loss:6.204206
936
+ step:123 train loss:6.134442
937
+ step:124 train loss:6.120174
938
+ step:125 train loss:6.095740
939
+ step:126 train loss:6.198706
940
+ step:127 train loss:6.110419
941
+ step:128 train loss:6.164063
942
+ step:129 train loss:6.131337
943
+ step:130 train loss:6.151025
944
+ step:131 train loss:6.109653
945
+ step:132 train loss:6.055182
946
+ step:133 train loss:6.096249
947
+ step:134 train loss:6.085812
948
+ step:135 train loss:5.997619
949
+ step:136 train loss:6.041600
950
+ step:137 train loss:6.047670
951
+ step:138 train loss:5.993762
952
+ step:139 train loss:6.067573
953
+ step:140 train loss:5.986075
954
+ step:141 train loss:6.069380
955
+ step:142 train loss:6.028786
956
+ step:143 train loss:6.039732
957
+ step:144 train loss:6.015032
958
+ step:145 train loss:5.949231
959
+ step:146 train loss:5.965765
960
+ step:147 train loss:6.013845
961
+ step:148 train loss:6.027832
962
+ step:149 train loss:5.982743
963
+ step:150 train loss:5.980625
964
+ step:151 train loss:5.898334
965
+ step:152 train loss:5.937162
966
+ step:153 train loss:5.922023
967
+ step:154 train loss:5.985418
968
+ step:155 train loss:5.976247
969
+ step:156 train loss:5.991820
970
+ step:157 train loss:5.915868
971
+ step:158 train loss:5.897432
972
+ step:159 train loss:5.923434
973
+ step:160 train loss:5.915922
974
+ step:161 train loss:5.913770
975
+ step:162 train loss:5.874032
976
+ step:163 train loss:5.897184
977
+ step:164 train loss:5.886250
978
+ step:165 train loss:5.913967
979
+ step:166 train loss:5.863026
980
+ step:167 train loss:5.870093
981
+ step:168 train loss:5.851177
982
+ step:169 train loss:5.800628
983
+ step:170 train loss:5.771802
984
+ step:171 train loss:5.883171
985
+ step:172 train loss:5.820835
986
+ step:173 train loss:5.879887
987
+ step:174 train loss:5.872364
988
+ step:175 train loss:5.832941
989
+ step:176 train loss:5.794168
990
+ step:177 train loss:5.826818
991
+ step:178 train loss:5.835801
992
+ step:179 train loss:5.796124
993
+ step:180 train loss:5.763860
994
+ step:181 train loss:5.813160
995
+ step:182 train loss:5.748355
996
+ step:183 train loss:5.827563
997
+ step:184 train loss:5.789713
998
+ step:185 train loss:5.737475
999
+ step:186 train loss:5.855652
1000
+ step:187 train loss:5.796614
1001
+ step:188 train loss:5.654269
1002
+ step:189 train loss:5.776381
1003
+ step:190 train loss:5.769153
1004
+ step:191 train loss:5.715700
1005
+ step:192 train loss:5.652488
1006
+ step:193 train loss:5.785297
1007
+ step:194 train loss:5.791453
1008
+ step:195 train loss:5.777102
1009
+ step:196 train loss:5.768545
1010
+ step:197 train loss:5.741137
1011
+ step:198 train loss:5.708144
1012
+ step:199 train loss:5.768876
1013
+ step:200 train loss:5.842128
1014
+ step:201 train loss:5.739259
1015
+ step:202 train loss:5.757513
1016
+ step:203 train loss:5.715420
1017
+ step:204 train loss:5.728333
1018
+ step:205 train loss:5.624386
1019
+ step:206 train loss:5.704189
1020
+ step:207 train loss:5.722406
1021
+ step:208 train loss:5.652603
1022
+ step:209 train loss:5.644573
1023
+ step:210 train loss:5.654663
1024
+ step:211 train loss:5.692886
1025
+ step:212 train loss:5.680385
1026
+ step:213 train loss:5.665684
1027
+ step:214 train loss:5.680418
1028
+ step:215 train loss:5.669912
1029
+ step:216 train loss:5.632862
1030
+ step:217 train loss:5.669870
1031
+ step:218 train loss:5.620474
1032
+ step:219 train loss:5.596796
1033
+ step:220 train loss:5.622748
1034
+ step:221 train loss:5.589097
1035
+ step:222 train loss:5.628883
1036
+ step:223 train loss:5.647436
1037
+ step:224 train loss:5.648049
1038
+ step:225 train loss:5.576045
1039
+ step:226 train loss:5.574128
1040
+ step:227 train loss:5.632720
1041
+ step:228 train loss:5.597082
1042
+ step:229 train loss:5.663924
1043
+ step:230 train loss:5.533484
1044
+ step:231 train loss:5.617034
1045
+ step:232 train loss:5.585238
1046
+ step:233 train loss:5.544968
1047
+ step:234 train loss:5.581217
1048
+ step:235 train loss:5.609170
1049
+ step:236 train loss:5.565095
1050
+ step:237 train loss:5.648161
1051
+ step:238 train loss:5.600225
1052
+ step:239 train loss:5.527732
1053
+ step:240 train loss:5.600331
1054
+ step:241 train loss:5.643022
1055
+ step:242 train loss:5.602471
1056
+ step:243 train loss:5.529858
1057
+ step:244 train loss:5.546822
1058
+ step:245 train loss:5.518763
1059
+ step:246 train loss:5.504246
1060
+ step:247 train loss:5.522960
1061
+ step:248 train loss:5.482553
1062
+ step:249 train loss:5.532679
1063
+ step:250 validation loss:5.511507
1064
+ step:250 train loss:5.498803
1065
+ step:251 train loss:5.538063
1066
+ step:252 train loss:5.491353
1067
+ step:253 train loss:5.489568
1068
+ step:254 train loss:5.472230
1069
+ step:255 train loss:5.495923
1070
+ step:256 train loss:5.486245
1071
+ step:257 train loss:5.544394
1072
+ step:258 train loss:5.458185
1073
+ step:259 train loss:5.473286
1074
+ step:260 train loss:5.431251
1075
+ step:261 train loss:5.440634
1076
+ step:262 train loss:5.483522
1077
+ step:263 train loss:5.474167
1078
+ step:264 train loss:5.425406
1079
+ step:265 train loss:5.437705
1080
+ step:266 train loss:5.427199
1081
+ step:267 train loss:5.449633
1082
+ step:268 train loss:5.388786
1083
+ step:269 train loss:5.419320
1084
+ step:270 train loss:5.446882
1085
+ step:271 train loss:5.431798
1086
+ step:272 train loss:5.388842
1087
+ step:273 train loss:5.460462
1088
+ step:274 train loss:5.341482
1089
+ step:275 train loss:5.435415
1090
+ step:276 train loss:5.389358
1091
+ step:277 train loss:5.367981
1092
+ step:278 train loss:5.352919
1093
+ step:279 train loss:5.349316
1094
+ step:280 train loss:5.398752
1095
+ step:281 train loss:5.466868
1096
+ step:282 train loss:5.327730
1097
+ step:283 train loss:5.363546
1098
+ step:284 train loss:5.356159
1099
+ step:285 train loss:5.396748
1100
+ step:286 train loss:5.340771
1101
+ step:287 train loss:5.342405
1102
+ step:288 train loss:5.293667
1103
+ step:289 train loss:5.347423
1104
+ step:290 train loss:5.369082
1105
+ step:291 train loss:5.306618
1106
+ step:292 train loss:5.353625
1107
+ step:293 train loss:5.305127
1108
+ step:294 train loss:5.354642
1109
+ step:295 train loss:5.335859
1110
+ step:296 train loss:5.332960
1111
+ step:297 train loss:5.347226
1112
+ step:298 train loss:5.256810
1113
+ step:299 train loss:5.309451
1114
+ step:300 train loss:5.251300
1115
+ step:301 train loss:5.284420
1116
+ step:302 train loss:5.241238
1117
+ step:303 train loss:5.261605
1118
+ step:304 train loss:5.287779
1119
+ step:305 train loss:5.238199
1120
+ step:306 train loss:5.228479
1121
+ step:307 train loss:5.241763
1122
+ step:308 train loss:5.176595
1123
+ step:309 train loss:5.302693
1124
+ step:310 train loss:5.266294
1125
+ step:311 train loss:5.234930
1126
+ step:312 train loss:5.223832
1127
+ step:313 train loss:5.230992
1128
+ step:314 train loss:5.226810
1129
+ step:315 train loss:5.161324
1130
+ step:316 train loss:5.159932
1131
+ step:317 train loss:5.154632
1132
+ step:318 train loss:5.159330
1133
+ step:319 train loss:5.212348
1134
+ step:320 train loss:5.117433
1135
+ step:321 train loss:5.190118
1136
+ step:322 train loss:5.167397
1137
+ step:323 train loss:5.229135
1138
+ step:324 train loss:5.183520
1139
+ step:325 train loss:5.188560
1140
+ step:326 train loss:5.175596
1141
+ step:327 train loss:5.191351
1142
+ step:328 train loss:5.148170
1143
+ step:329 train loss:5.143776
1144
+ step:330 train loss:5.089268
1145
+ step:331 train loss:5.120739
1146
+ step:332 train loss:5.090946
1147
+ step:333 train loss:5.030110
1148
+ step:334 train loss:5.123396
1149
+ step:335 train loss:5.158096
1150
+ step:336 train loss:5.362187
1151
+ step:337 train loss:5.151428
1152
+ step:338 train loss:5.079495
1153
+ step:339 train loss:5.043005
1154
+ step:340 train loss:5.047062
1155
+ step:341 train loss:5.023942
1156
+ step:342 train loss:5.099456
1157
+ step:343 train loss:5.055502
1158
+ step:344 train loss:5.050872
1159
+ step:345 train loss:5.001206
1160
+ step:346 train loss:5.041559
1161
+ step:347 train loss:5.007703
1162
+ step:348 train loss:4.979145
1163
+ step:349 train loss:4.928859
1164
+ step:350 train loss:4.961483
1165
+ step:351 train loss:5.025234
1166
+ step:352 train loss:4.968834
1167
+ step:353 train loss:5.004018
1168
+ step:354 train loss:4.957245
1169
+ step:355 train loss:4.986541
1170
+ step:356 train loss:4.946774
1171
+ step:357 train loss:5.019356
1172
+ step:358 train loss:5.062447
1173
+ step:359 train loss:4.902654
1174
+ step:360 train loss:5.010986
1175
+ step:361 train loss:4.997455
1176
+ step:362 train loss:4.967745
1177
+ step:363 train loss:4.960323
1178
+ step:364 train loss:4.971955
1179
+ step:365 train loss:4.975463
1180
+ step:366 train loss:4.911398
1181
+ step:367 train loss:4.985394
1182
+ step:368 train loss:4.909604
1183
+ step:369 train loss:4.918077
1184
+ step:370 train loss:4.951027
1185
+ step:371 train loss:4.876147
1186
+ step:372 train loss:4.967766
1187
+ step:373 train loss:4.889700
1188
+ step:374 train loss:4.882738
1189
+ step:375 train loss:4.939146
1190
+ step:376 train loss:4.893357
1191
+ step:377 train loss:4.808064
1192
+ step:378 train loss:4.873933
1193
+ step:379 train loss:4.885363
1194
+ step:380 train loss:4.817454
1195
+ step:381 train loss:4.924068
1196
+ step:382 train loss:4.851533
1197
+ step:383 train loss:4.811456
1198
+ step:384 train loss:4.841870
1199
+ step:385 train loss:4.816648
1200
+ step:386 train loss:4.846949
1201
+ step:387 train loss:4.890012
1202
+ step:388 train loss:4.813462
1203
+ step:389 train loss:4.799527
1204
+ step:390 train loss:4.801789
1205
+ step:391 train loss:4.795315
1206
+ step:392 train loss:4.822580
1207
+ step:393 train loss:4.801601
1208
+ step:394 train loss:4.834167
1209
+ step:395 train loss:4.725907
1210
+ step:396 train loss:4.721747
1211
+ step:397 train loss:4.796023
1212
+ step:398 train loss:4.779165
1213
+ step:399 train loss:4.740551
1214
+ step:400 train loss:4.768107
1215
+ step:401 train loss:4.739222
1216
+ step:402 train loss:4.754579
1217
+ step:403 train loss:4.729427
1218
+ step:404 train loss:4.716895
1219
+ step:405 train loss:4.711387
1220
+ step:406 train loss:4.771140
1221
+ step:407 train loss:4.796683
1222
+ step:408 train loss:4.749212
1223
+ step:409 train loss:4.714596
1224
+ step:410 train loss:4.675590
1225
+ step:411 train loss:4.663490
1226
+ step:412 train loss:4.755675
1227
+ step:413 train loss:4.669036
1228
+ step:414 train loss:4.723219
1229
+ step:415 train loss:4.714604
1230
+ step:416 train loss:4.682702
1231
+ step:417 train loss:4.707096
1232
+ step:418 train loss:4.663315
1233
+ step:419 train loss:4.652900
1234
+ step:420 train loss:4.594244
1235
+ step:421 train loss:4.630994
1236
+ step:422 train loss:4.596540
1237
+ step:423 train loss:4.643395
1238
+ step:424 train loss:4.612567
1239
+ step:425 train loss:4.729231
1240
+ step:426 train loss:4.604239
1241
+ step:427 train loss:4.610860
1242
+ step:428 train loss:4.622014
1243
+ step:429 train loss:4.561036
1244
+ step:430 train loss:4.606919
1245
+ step:431 train loss:4.611240
1246
+ step:432 train loss:4.639371
1247
+ step:433 train loss:4.631077
1248
+ step:434 train loss:4.587337
1249
+ step:435 train loss:4.631926
1250
+ step:436 train loss:4.655809
1251
+ step:437 train loss:4.597250
1252
+ step:438 train loss:4.615482
1253
+ step:439 train loss:4.551104
1254
+ step:440 train loss:4.586331
1255
+ step:441 train loss:4.508221
1256
+ step:442 train loss:4.525724
1257
+ step:443 train loss:4.554875
1258
+ step:444 train loss:4.617531
1259
+ step:445 train loss:4.574759
1260
+ step:446 train loss:4.581038
1261
+ step:447 train loss:4.515603
1262
+ step:448 train loss:4.636861
1263
+ step:449 train loss:4.541403
1264
+ step:450 train loss:4.566509
1265
+ step:451 train loss:4.627400
1266
+ step:452 train loss:4.595576
1267
+ step:453 train loss:4.602299
1268
+ step:454 train loss:4.476817
1269
+ step:455 train loss:4.530272
1270
+ step:456 train loss:4.534359
1271
+ step:457 train loss:4.504605
1272
+ step:458 train loss:4.548941
1273
+ step:459 train loss:4.527772
1274
+ step:460 train loss:4.586617
1275
+ step:461 train loss:4.522080
1276
+ step:462 train loss:4.464002
1277
+ step:463 train loss:4.506139
1278
+ step:464 train loss:4.526798
1279
+ step:465 train loss:4.504621
1280
+ step:466 train loss:4.527721
1281
+ step:467 train loss:4.478045
1282
+ step:468 train loss:4.544846
1283
+ step:469 train loss:4.497848
1284
+ step:470 train loss:4.449895
1285
+ step:471 train loss:4.544323
1286
+ step:472 train loss:4.456493
1287
+ step:473 train loss:4.521075
1288
+ step:474 train loss:4.481557
1289
+ step:475 train loss:4.529571
1290
+ step:476 train loss:4.520624
1291
+ step:477 train loss:4.432263
1292
+ step:478 train loss:4.474338
1293
+ step:479 train loss:4.440193
1294
+ step:480 train loss:4.515778
1295
+ step:481 train loss:4.511002
1296
+ step:482 train loss:4.384820
1297
+ step:483 train loss:4.499825
1298
+ step:484 train loss:4.446632
1299
+ step:485 train loss:4.401309
1300
+ step:486 train loss:4.444818
1301
+ step:487 train loss:4.450336
1302
+ step:488 train loss:4.431090
1303
+ step:489 train loss:4.454013
1304
+ step:490 train loss:4.396312
1305
+ step:491 train loss:4.452537
1306
+ step:492 train loss:4.440995
1307
+ step:493 train loss:4.416666
1308
+ step:494 train loss:4.448147
1309
+ step:495 train loss:4.412883
1310
+ step:496 train loss:4.410392
1311
+ step:497 train loss:4.374882
1312
+ step:498 train loss:4.450910
1313
+ step:499 train loss:4.469098
1314
+ step:500 validation loss:4.393310
1315
+ step:500 train loss:4.427353
1316
+ step:501 train loss:4.419229
1317
+ step:502 train loss:4.460567
1318
+ step:503 train loss:4.383344
1319
+ step:504 train loss:4.449477
1320
+ step:505 train loss:4.409599
1321
+ step:506 train loss:4.376177
1322
+ step:507 train loss:4.421039
1323
+ step:508 train loss:4.417836
1324
+ step:509 train loss:4.451217
1325
+ step:510 train loss:4.332636
1326
+ step:511 train loss:4.376929
1327
+ step:512 train loss:4.378193
1328
+ step:513 train loss:4.394472
1329
+ step:514 train loss:4.481476
1330
+ step:515 train loss:4.366212
1331
+ step:516 train loss:4.480642
1332
+ step:517 train loss:4.395843
1333
+ step:518 train loss:4.357895
1334
+ step:519 train loss:4.470166
1335
+ step:520 train loss:4.342049
1336
+ step:521 train loss:4.376186
1337
+ step:522 train loss:4.410932
1338
+ step:523 train loss:4.422513
1339
+ step:524 train loss:4.330533
1340
+ step:525 train loss:4.334688
1341
+ step:526 train loss:4.413133
1342
+ step:527 train loss:4.349930
1343
+ step:528 train loss:4.371693
1344
+ step:529 train loss:4.403624
1345
+ step:530 train loss:4.371160
1346
+ step:531 train loss:4.368406
1347
+ step:532 train loss:4.355752
1348
+ step:533 train loss:4.326072
1349
+ step:534 train loss:4.375371
1350
+ step:535 train loss:4.398804
1351
+ step:536 train loss:4.424671
1352
+ step:537 train loss:4.314857
1353
+ step:538 train loss:4.279737
1354
+ step:539 train loss:4.421515
1355
+ step:540 train loss:4.444015
1356
+ step:541 train loss:4.340447
1357
+ step:542 train loss:4.317795
1358
+ step:543 train loss:4.343908
1359
+ step:544 train loss:4.354474
1360
+ step:545 train loss:4.340141
1361
+ step:546 train loss:4.325336
1362
+ step:547 train loss:4.360598
1363
+ step:548 train loss:4.240135
1364
+ step:549 train loss:4.352332
1365
+ step:550 train loss:4.289812
1366
+ step:551 train loss:4.335394
1367
+ step:552 train loss:4.442968
1368
+ step:553 train loss:4.346203
1369
+ step:554 train loss:4.334588
1370
+ step:555 train loss:4.380287
1371
+ step:556 train loss:4.314001
1372
+ step:557 train loss:4.288407
1373
+ step:558 train loss:4.281936
1374
+ step:559 train loss:4.330935
1375
+ step:560 train loss:4.406416
1376
+ step:561 train loss:4.281101
1377
+ step:562 train loss:4.282530
1378
+ step:563 train loss:4.315229
1379
+ step:564 train loss:4.325429
1380
+ step:565 train loss:4.290172
1381
+ step:566 train loss:4.366910
1382
+ step:567 train loss:4.302389
1383
+ step:568 train loss:4.393555
1384
+ step:569 train loss:4.323826
1385
+ step:570 train loss:4.268733
1386
+ step:571 train loss:4.291211
1387
+ step:572 train loss:4.250766
1388
+ step:573 train loss:4.324527
1389
+ step:574 train loss:4.399329
1390
+ step:575 train loss:4.265396
1391
+ step:576 train loss:4.325948
1392
+ step:577 train loss:4.306362
1393
+ step:578 train loss:4.303312
1394
+ step:579 train loss:4.357180
1395
+ step:580 train loss:4.259871
1396
+ step:581 train loss:4.335917
1397
+ step:582 train loss:4.352087
1398
+ step:583 train loss:4.307730
1399
+ step:584 train loss:4.294944
1400
+ step:585 train loss:4.297154
1401
+ step:586 train loss:4.271719
1402
+ step:587 train loss:4.404578
1403
+ step:588 train loss:4.254440
1404
+ step:589 train loss:4.364561
1405
+ step:590 train loss:4.338441
1406
+ step:591 train loss:4.238070
1407
+ step:592 train loss:4.283438
1408
+ step:593 train loss:4.271563
1409
+ step:594 train loss:4.231655
1410
+ step:595 train loss:4.390388
1411
+ step:596 train loss:4.253740
1412
+ step:597 train loss:4.278986
1413
+ step:598 train loss:4.294640
1414
+ step:599 train loss:4.274227
1415
+ step:600 train loss:4.262802
1416
+ step:601 train loss:4.259683
1417
+ step:602 train loss:4.273860
1418
+ step:603 train loss:4.292183
1419
+ step:604 train loss:4.274671
1420
+ step:605 train loss:4.314252
1421
+ step:606 train loss:4.258016
1422
+ step:607 train loss:4.297611
1423
+ step:608 train loss:4.240560
1424
+ step:609 train loss:4.239917
1425
+ step:610 train loss:4.290593
1426
+ step:611 train loss:4.219511
1427
+ step:612 train loss:4.252311
1428
+ step:613 train loss:4.215436
1429
+ step:614 train loss:4.234926
1430
+ step:615 train loss:4.276649
1431
+ step:616 train loss:4.224585
1432
+ step:617 train loss:4.245788
1433
+ step:618 train loss:4.254639
1434
+ step:619 train loss:4.264497
1435
+ step:620 train loss:4.289402
1436
+ step:621 train loss:4.264825
1437
+ step:622 train loss:4.282917
1438
+ step:623 train loss:4.289814
1439
+ step:624 train loss:4.254495
1440
+ step:625 train loss:4.284570
1441
+ step:626 train loss:4.264802
1442
+ step:627 train loss:4.245918
1443
+ step:628 train loss:4.265225
1444
+ step:629 train loss:4.172097
1445
+ step:630 train loss:4.243270
1446
+ step:631 train loss:4.233099
1447
+ step:632 train loss:4.213527
1448
+ step:633 train loss:4.221115
1449
+ step:634 train loss:4.263871
1450
+ step:635 train loss:4.218144
1451
+ step:636 train loss:4.241845
1452
+ step:637 train loss:4.159383
1453
+ step:638 train loss:4.167646
1454
+ step:639 train loss:4.269362
1455
+ step:640 train loss:4.204619
1456
+ step:641 train loss:4.218515
1457
+ step:642 train loss:4.309052
1458
+ step:643 train loss:4.177016
1459
+ step:644 train loss:4.250973
1460
+ step:645 train loss:4.225832
1461
+ step:646 train loss:4.222608
1462
+ step:647 train loss:4.244049
1463
+ step:648 train loss:4.328261
1464
+ step:649 train loss:4.239064
1465
+ step:650 train loss:4.219040
1466
+ step:651 train loss:4.215826
1467
+ step:652 train loss:4.157939
1468
+ step:653 train loss:4.194305
1469
+ step:654 train loss:4.164826
1470
+ step:655 train loss:4.268385
1471
+ step:656 train loss:4.198242
1472
+ step:657 train loss:4.243348
1473
+ step:658 train loss:4.195463
1474
+ step:659 train loss:4.255297
1475
+ step:660 train loss:4.254205
1476
+ step:661 train loss:4.278774
1477
+ step:662 train loss:4.254154
1478
+ step:663 train loss:4.259056
1479
+ step:664 train loss:4.172908
1480
+ step:665 train loss:4.156044
1481
+ step:666 train loss:4.225113
1482
+ step:667 train loss:4.228320
1483
+ step:668 train loss:4.238627
1484
+ step:669 train loss:4.209437
1485
+ step:670 train loss:4.223588
1486
+ step:671 train loss:4.203549
1487
+ step:672 train loss:4.191574
1488
+ step:673 train loss:4.290041
1489
+ step:674 train loss:4.182269
1490
+ step:675 train loss:4.184412
1491
+ step:676 train loss:4.221931
1492
+ step:677 train loss:4.190480
1493
+ step:678 train loss:4.166787
1494
+ step:679 train loss:4.204130
1495
+ step:680 train loss:4.188313
1496
+ step:681 train loss:4.213797
1497
+ step:682 train loss:4.139986
1498
+ step:683 train loss:4.218596
1499
+ step:684 train loss:4.253249
1500
+ step:685 train loss:4.201429
1501
+ step:686 train loss:4.238118
1502
+ step:687 train loss:4.256205
1503
+ step:688 train loss:4.119943
1504
+ step:689 train loss:4.175318
1505
+ step:690 train loss:4.191999
1506
+ step:691 train loss:4.175777
1507
+ step:692 train loss:4.238293
1508
+ step:693 train loss:4.171908
1509
+ step:694 train loss:4.187183
1510
+ step:695 train loss:4.166661
1511
+ step:696 train loss:4.154706
1512
+ step:697 train loss:4.201894
1513
+ step:698 train loss:4.149628
1514
+ step:699 train loss:4.198371
1515
+ step:700 train loss:4.188437
1516
+ step:701 train loss:4.142958
1517
+ step:702 train loss:4.162082
1518
+ step:703 train loss:4.147178
1519
+ step:704 train loss:4.101230
1520
+ step:705 train loss:4.171474
1521
+ step:706 train loss:4.069380
1522
+ step:707 train loss:4.131087
1523
+ step:708 train loss:4.188102
1524
+ step:709 train loss:4.180864
1525
+ step:710 train loss:4.155499
1526
+ step:711 train loss:4.163764
1527
+ step:712 train loss:4.165518
1528
+ step:713 train loss:4.112861
1529
+ step:714 train loss:4.185285
1530
+ step:715 train loss:4.065869
1531
+ step:716 train loss:4.252131
1532
+ step:717 train loss:4.157557
1533
+ step:718 train loss:4.181412
1534
+ step:719 train loss:4.194964
1535
+ step:720 train loss:4.189048
1536
+ step:721 train loss:4.131832
1537
+ step:722 train loss:4.205163
1538
+ step:723 train loss:4.182544
1539
+ step:724 train loss:4.182381
1540
+ step:725 train loss:4.131506
1541
+ step:726 train loss:4.115680
1542
+ step:727 train loss:4.171064
1543
+ step:728 train loss:4.143806
1544
+ step:729 train loss:4.136785
1545
+ step:730 train loss:4.193211
1546
+ step:731 train loss:4.197047
1547
+ step:732 train loss:4.182890
1548
+ step:733 train loss:4.206203
1549
+ step:734 train loss:4.135615
1550
+ step:735 train loss:4.237441
1551
+ step:736 train loss:4.178821
1552
+ step:737 train loss:4.142222
1553
+ step:738 train loss:4.187204
1554
+ step:739 train loss:4.158437
1555
+ step:740 train loss:4.232876
1556
+ step:741 train loss:4.187460
1557
+ step:742 train loss:4.149143
1558
+ step:743 train loss:4.113138
1559
+ step:744 train loss:4.180312
1560
+ step:745 train loss:4.057991
1561
+ step:746 train loss:4.116617
1562
+ step:747 train loss:4.158803
1563
+ step:748 train loss:4.107720
1564
+ step:749 train loss:4.159406
1565
+ step:750 validation loss:4.100407
1566
+ step:750 train loss:4.090472
1567
+ step:751 train loss:4.137776
1568
+ step:752 train loss:4.098166
1569
+ step:753 train loss:4.144568
1570
+ step:754 train loss:4.122446
1571
+ step:755 train loss:4.223899
1572
+ step:756 train loss:4.137798
1573
+ step:757 train loss:4.247702
1574
+ step:758 train loss:4.125173
1575
+ step:759 train loss:4.132003
1576
+ step:760 train loss:4.103979
1577
+ step:761 train loss:4.132847
1578
+ step:762 train loss:4.098803
1579
+ step:763 train loss:4.120800
1580
+ step:764 train loss:4.088158
1581
+ step:765 train loss:4.106368
1582
+ step:766 train loss:4.153855
1583
+ step:767 train loss:4.285463
1584
+ step:768 train loss:4.153776
1585
+ step:769 train loss:4.165555
1586
+ step:770 train loss:4.225049
1587
+ step:771 train loss:4.142545
1588
+ step:772 train loss:4.196192
1589
+ step:773 train loss:4.079477
1590
+ step:774 train loss:4.110145
1591
+ step:775 train loss:4.175649
1592
+ step:776 train loss:4.089301
1593
+ step:777 train loss:4.093771
1594
+ step:778 train loss:4.095689
1595
+ step:779 train loss:4.078810
1596
+ step:780 train loss:4.130891
1597
+ step:781 train loss:4.091107
1598
+ step:782 train loss:4.083716
1599
+ step:783 train loss:4.102267
1600
+ step:784 train loss:4.083166
1601
+ step:785 train loss:4.085423
1602
+ step:786 train loss:4.107359
1603
+ step:787 train loss:4.025120
1604
+ step:788 train loss:4.133759
1605
+ step:789 train loss:4.093121
1606
+ step:790 train loss:4.113019
1607
+ step:791 train loss:4.109143
1608
+ step:792 train loss:4.219778
1609
+ step:793 train loss:4.141148
1610
+ step:794 train loss:4.080012
1611
+ step:795 train loss:4.086285
1612
+ step:796 train loss:4.393638
1613
+ step:797 train loss:4.136972
1614
+ step:798 train loss:4.045201
1615
+ step:799 train loss:4.122879
1616
+ step:800 train loss:4.186136
1617
+ step:801 train loss:4.081709
1618
+ step:802 train loss:4.248178
1619
+ step:803 train loss:4.093216
1620
+ step:804 train loss:4.080031
1621
+ step:805 train loss:4.101005
1622
+ step:806 train loss:4.097311
1623
+ step:807 train loss:4.046363
1624
+ step:808 train loss:4.122826
1625
+ step:809 train loss:4.090263
1626
+ step:810 train loss:4.093955
1627
+ step:811 train loss:4.084640
1628
+ step:812 train loss:4.087770
1629
+ step:813 train loss:4.136417
1630
+ step:814 train loss:4.244318
1631
+ step:815 train loss:4.106221
1632
+ step:816 train loss:4.065707
1633
+ step:817 train loss:4.123710
1634
+ step:818 train loss:4.080160
1635
+ step:819 train loss:4.049644
1636
+ step:820 train loss:4.110887
1637
+ step:821 train loss:4.022964
1638
+ step:822 train loss:4.042496
1639
+ step:823 train loss:4.093456
1640
+ step:824 train loss:4.018581
1641
+ step:825 train loss:4.014795
1642
+ step:826 train loss:4.066572
1643
+ step:827 train loss:3.991488
1644
+ step:828 train loss:4.074113
1645
+ step:829 train loss:4.050081
1646
+ step:830 train loss:4.058598
1647
+ step:831 train loss:4.122734
1648
+ step:832 train loss:4.174010
1649
+ step:833 train loss:4.095309
1650
+ step:834 train loss:4.073218
1651
+ step:835 train loss:4.066260
1652
+ step:836 train loss:4.013987
1653
+ step:837 train loss:4.084777
1654
+ step:838 train loss:4.019686
1655
+ step:839 train loss:4.068089
1656
+ step:840 train loss:4.087542
1657
+ step:841 train loss:4.074968
1658
+ step:842 train loss:4.050657
1659
+ step:843 train loss:4.036591
1660
+ step:844 train loss:4.081662
1661
+ step:845 train loss:3.992584
1662
+ step:846 train loss:4.088906
1663
+ step:847 train loss:4.124359
1664
+ step:848 train loss:4.025726
1665
+ step:849 train loss:4.074891
1666
+ step:850 train loss:4.067529
1667
+ step:851 train loss:4.111156
1668
+ step:852 train loss:4.090915
1669
+ step:853 train loss:4.006748
1670
+ step:854 train loss:4.051981
1671
+ step:855 train loss:4.070954
1672
+ step:856 train loss:4.019452
1673
+ step:857 train loss:4.072253
1674
+ step:858 train loss:4.058503
1675
+ step:859 train loss:3.986559
1676
+ step:860 train loss:4.061744
1677
+ step:861 train loss:4.078319
1678
+ step:862 train loss:3.993880
1679
+ step:863 train loss:4.009288
1680
+ step:864 train loss:4.062699
1681
+ step:865 train loss:4.042477
1682
+ step:866 train loss:4.030247
1683
+ step:867 train loss:4.196805
1684
+ step:868 train loss:4.065006
1685
+ step:869 train loss:4.016693
1686
+ step:870 train loss:3.980330
1687
+ step:871 train loss:4.035506
1688
+ step:872 train loss:4.022592
1689
+ step:873 train loss:4.056123
1690
+ step:874 train loss:4.075579
1691
+ step:875 train loss:3.917429
1692
+ step:876 train loss:4.059820
1693
+ step:877 train loss:3.982265
1694
+ step:878 train loss:4.126727
1695
+ step:879 train loss:3.978958
1696
+ step:880 train loss:4.082157
1697
+ step:881 train loss:4.046810
1698
+ step:882 train loss:4.003259
1699
+ step:883 train loss:4.042023
1700
+ step:884 train loss:4.056682
1701
+ step:885 train loss:4.036390
1702
+ step:886 train loss:3.984952
1703
+ step:887 train loss:4.040345
1704
+ step:888 train loss:4.147490
1705
+ step:889 train loss:4.034448
1706
+ step:890 train loss:4.012470
1707
+ step:891 train loss:3.950345
1708
+ step:892 train loss:3.988950
1709
+ step:893 train loss:4.064858
1710
+ step:894 train loss:3.980763
1711
+ step:895 train loss:3.988190
1712
+ step:896 train loss:4.045144
1713
+ step:897 train loss:4.016442
1714
+ step:898 train loss:4.026119
1715
+ step:899 train loss:3.998734
1716
+ step:900 train loss:4.088009
1717
+ step:901 train loss:3.992893
1718
+ step:902 train loss:3.983998
1719
+ step:903 train loss:4.193089
1720
+ step:904 train loss:4.066815
1721
+ step:905 train loss:4.037220
1722
+ step:906 train loss:4.030595
1723
+ step:907 train loss:4.029167
1724
+ step:908 train loss:4.049891
1725
+ step:909 train loss:3.996425
1726
+ step:910 train loss:4.114166
1727
+ step:911 train loss:4.077013
1728
+ step:912 train loss:3.972678
1729
+ step:913 train loss:4.009423
1730
+ step:914 train loss:4.004431
1731
+ step:915 train loss:4.020821
1732
+ step:916 train loss:4.045393
1733
+ step:917 train loss:4.088541
1734
+ step:918 train loss:4.060804
1735
+ step:919 train loss:4.181206
1736
+ step:920 train loss:3.948839
1737
+ step:921 train loss:4.056776
1738
+ step:922 train loss:3.960792
1739
+ step:923 train loss:4.007132
1740
+ step:924 train loss:3.979956
1741
+ step:925 train loss:3.951149
1742
+ step:926 train loss:4.077478
+ step:927 train loss:3.944209
+ step:928 train loss:4.035848
+ step:929 train loss:4.028355
+ step:930 train loss:4.045132
+ step:931 train loss:4.035497
+ step:932 train loss:4.002022
+ step:933 train loss:4.088483
+ step:934 train loss:3.983762
+ step:935 train loss:4.110620
+ step:936 train loss:4.021106
+ step:937 train loss:4.022020
+ step:938 train loss:3.990609
+ step:939 train loss:3.925499
+ step:940 train loss:3.990597
+ step:941 train loss:3.970042
+ step:942 train loss:4.033924
+ step:943 train loss:3.955588
+ step:944 train loss:4.062877
+ step:945 train loss:3.994523
+ step:946 train loss:4.039502
+ step:947 train loss:4.139359
+ step:948 train loss:4.028047
+ step:949 train loss:3.975475
+ step:950 train loss:3.971011
+ step:951 train loss:3.969028
+ step:952 train loss:4.070521
+ step:953 train loss:3.983914
+ step:954 train loss:4.000311
+ step:955 train loss:3.981298
+ step:956 train loss:3.982191
+ step:957 train loss:4.010962
+ step:958 train loss:4.013579
+ step:959 train loss:4.050619
+ step:960 train loss:4.032233
+ step:961 train loss:4.047357
+ step:962 train loss:3.964910
+ step:963 train loss:3.960313
+ step:964 train loss:4.004961
+ step:965 train loss:3.951746
+ step:966 train loss:3.951941
+ step:967 train loss:3.974328
+ step:968 train loss:4.031517
+ step:969 train loss:3.966717
+ step:970 train loss:4.009393
+ step:971 train loss:3.947788
+ step:972 train loss:3.961646
+ step:973 train loss:3.967481
+ step:974 train loss:3.948026
+ step:975 train loss:4.070353
+ step:976 train loss:3.969415
+ step:977 train loss:4.032722
+ step:978 train loss:3.977338
+ step:979 train loss:3.955246
+ step:980 train loss:3.947230
+ step:981 train loss:3.984202
+ step:982 train loss:3.978875
+ step:983 train loss:3.961284
+ step:984 train loss:3.997459
+ step:985 train loss:3.978372
+ step:986 train loss:4.008345
+ step:987 train loss:4.056268
+ step:988 train loss:3.968272
+ step:989 train loss:3.978174
+ step:990 train loss:3.935039
+ step:991 train loss:3.942272
+ step:992 train loss:3.952811
+ step:993 train loss:3.970752
+ step:994 train loss:3.960445
+ step:995 train loss:3.942155
+ step:996 train loss:3.949069
+ step:997 train loss:3.968068
+ step:998 train loss:3.945670
+ step:999 train loss:3.963030
+ step:1000 validation loss:3.922949
+ step:1000 train loss:4.002028
+ step:1001 train loss:3.985914
+ step:1002 train loss:3.988024
+ step:1003 train loss:3.934859
+ step:1004 train loss:3.954869
+ step:1005 train loss:3.976978
+ step:1006 train loss:4.014203
+ step:1007 train loss:3.949594
+ step:1008 train loss:3.934643
+ step:1009 train loss:3.986087
+ step:1010 train loss:4.022973
+ step:1011 train loss:4.005935
+ step:1012 train loss:3.942850
+ step:1013 train loss:3.974207
+ step:1014 train loss:3.870000
+ step:1015 train loss:3.965679
+ step:1016 train loss:3.984982
+ step:1017 train loss:3.924944
+ step:1018 train loss:3.967738
+ step:1019 train loss:3.961305
+ step:1020 train loss:3.955120
+ step:1021 train loss:3.986247
+ step:1022 train loss:3.959225
+ step:1023 train loss:3.942993
+ step:1024 train loss:3.961884
+ step:1025 train loss:4.006050
+ step:1026 train loss:3.929019
+ step:1027 train loss:3.975284
+ step:1028 train loss:3.942400
+ step:1029 train loss:3.952156
+ step:1030 train loss:3.939455
+ step:1031 train loss:4.029231
+ step:1032 train loss:3.936928
+ step:1033 train loss:3.930429
+ step:1034 train loss:3.980088
+ step:1035 train loss:3.969071
+ step:1036 train loss:3.925973
+ step:1037 train loss:3.938655
+ step:1038 train loss:4.011632
+ step:1039 train loss:4.091969
+ step:1040 train loss:3.954931
+ step:1041 train loss:3.969803
+ step:1042 train loss:3.943537
+ step:1043 train loss:3.916167
+ step:1044 train loss:4.011185
+ step:1045 train loss:3.944720
+ step:1046 train loss:3.855110
+ step:1047 train loss:3.967314
+ step:1048 train loss:3.933725
+ step:1049 train loss:4.026780
+ step:1050 train loss:3.916263
+ step:1051 train loss:3.907289
+ step:1052 train loss:4.019715
+ step:1053 train loss:3.941101
+ step:1054 train loss:3.931720
+ step:1055 train loss:3.972822
+ step:1056 train loss:3.899542
+ step:1057 train loss:3.873503
+ step:1058 train loss:3.910534
+ step:1059 train loss:3.933813
+ step:1060 train loss:3.920648
+ step:1061 train loss:3.977629
+ step:1062 train loss:3.864970
+ step:1063 train loss:3.989268
+ step:1064 train loss:3.932365
+ step:1065 train loss:3.900620
+ step:1066 train loss:3.963389
+ step:1067 train loss:3.870997
+ step:1068 train loss:3.937557
+ step:1069 train loss:3.924787
+ step:1070 train loss:3.946663
+ step:1071 train loss:3.933115
+ step:1072 train loss:3.953652
+ step:1073 train loss:3.873840
+ step:1074 train loss:3.931653
+ step:1075 train loss:3.867524
+ step:1076 train loss:3.984814
+ step:1077 train loss:3.922936
+ step:1078 train loss:3.975248
+ step:1079 train loss:3.967391
+ step:1080 train loss:3.905135
+ step:1081 train loss:3.929991
+ step:1082 train loss:3.927953
+ step:1083 train loss:3.891830
+ step:1084 train loss:3.905551
+ step:1085 train loss:3.933024
+ step:1086 train loss:3.923809
+ step:1087 train loss:3.921296
+ step:1088 train loss:3.928435
+ step:1089 train loss:3.921442
+ step:1090 train loss:3.879214
+ step:1091 train loss:3.865694
+ step:1092 train loss:3.967604
+ step:1093 train loss:3.895097
+ step:1094 train loss:3.877093
+ step:1095 train loss:3.951813
+ step:1096 train loss:3.913936
+ step:1097 train loss:3.884236
+ step:1098 train loss:3.897169
+ step:1099 train loss:3.939652
+ step:1100 train loss:3.962886
+ step:1101 train loss:3.947030
+ step:1102 train loss:3.967351
+ step:1103 train loss:3.915780
+ step:1104 train loss:3.927171
+ step:1105 train loss:3.939467
+ step:1106 train loss:3.973014
+ step:1107 train loss:3.986967
+ step:1108 train loss:4.002523
+ step:1109 train loss:3.942834
+ step:1110 train loss:3.897256
+ step:1111 train loss:3.896074
+ step:1112 train loss:3.889287
+ step:1113 train loss:3.795183
+ step:1114 train loss:3.869312
+ step:1115 train loss:3.940021
+ step:1116 train loss:3.918025
+ step:1117 train loss:3.943326
+ step:1118 train loss:3.973754
+ step:1119 train loss:3.956108
+ step:1120 train loss:3.921861
+ step:1121 train loss:3.908311
+ step:1122 train loss:3.965761
+ step:1123 train loss:3.940778
+ step:1124 train loss:3.864900
+ step:1125 train loss:3.884663
+ step:1126 train loss:3.885263
+ step:1127 train loss:3.905897
+ step:1128 train loss:3.842694
+ step:1129 train loss:3.987027
+ step:1130 train loss:3.853091
+ step:1131 train loss:3.944433
+ step:1132 train loss:3.897341
+ step:1133 train loss:3.897770
+ step:1134 train loss:3.913785
+ step:1135 train loss:3.945745
+ step:1136 train loss:3.930482
+ step:1137 train loss:3.928375
+ step:1138 train loss:3.878631
+ step:1139 train loss:4.012078
+ step:1140 train loss:3.854665
+ step:1141 train loss:3.925866
+ step:1142 train loss:3.883603
+ step:1143 train loss:3.998202
+ step:1144 train loss:3.920395
+ step:1145 train loss:3.873887
+ step:1146 train loss:3.912208
+ step:1147 train loss:3.886749
+ step:1148 train loss:3.921994
+ step:1149 train loss:3.997519
+ step:1150 train loss:3.949852
+ step:1151 train loss:3.933699
+ step:1152 train loss:3.862453
+ step:1153 train loss:3.871907
+ step:1154 train loss:3.878570
+ step:1155 train loss:3.943954
+ step:1156 train loss:3.881603
+ step:1157 train loss:3.945322
+ step:1158 train loss:3.910732
+ step:1159 train loss:3.938544
+ step:1160 train loss:3.874466
+ step:1161 train loss:3.939105
+ step:1162 train loss:3.906281
+ step:1163 train loss:3.838763
+ step:1164 train loss:3.848461
+ step:1165 train loss:3.899048
+ step:1166 train loss:3.867635
+ step:1167 train loss:3.875611
+ step:1168 train loss:3.930719
+ step:1169 train loss:3.868754
+ step:1170 train loss:3.972005
+ step:1171 train loss:3.870772
+ step:1172 train loss:3.852270
+ step:1173 train loss:3.912057
+ step:1174 train loss:3.836881
+ step:1175 train loss:3.916032
+ step:1176 train loss:3.983089
+ step:1177 train loss:3.891183
+ step:1178 train loss:3.861855
+ step:1179 train loss:3.841957
+ step:1180 train loss:3.901367
+ step:1181 train loss:3.857945
+ step:1182 train loss:3.974981
+ step:1183 train loss:3.873095
+ step:1184 train loss:3.818384
+ step:1185 train loss:3.882956
+ step:1186 train loss:3.894314
+ step:1187 train loss:3.854965
+ step:1188 train loss:3.877975
+ step:1189 train loss:3.838130
+ step:1190 train loss:3.895848
+ step:1191 train loss:3.867215
+ step:1192 train loss:3.923374
+ step:1193 train loss:3.915555
+ step:1194 train loss:3.995477
+ step:1195 train loss:3.935253
+ step:1196 train loss:3.937064
+ step:1197 train loss:3.849087
+ step:1198 train loss:3.846049
+ step:1199 train loss:4.042292
+ step:1200 train loss:3.780253
+ step:1201 train loss:3.896569
+ step:1202 train loss:3.887328
+ step:1203 train loss:3.871069
+ step:1204 train loss:3.880103
+ step:1205 train loss:3.872984
+ step:1206 train loss:3.865698
+ step:1207 train loss:3.906325
+ step:1208 train loss:3.900407
+ step:1209 train loss:3.826255
+ step:1210 train loss:3.903646
+ step:1211 train loss:3.849854
+ step:1212 train loss:3.872662
+ step:1213 train loss:3.868860
+ step:1214 train loss:3.914253
+ step:1215 train loss:3.878576
+ step:1216 train loss:3.799879
+ step:1217 train loss:3.877932
+ step:1218 train loss:3.879938
+ step:1219 train loss:3.849298
+ step:1220 train loss:3.851941
+ step:1221 train loss:3.832316
+ step:1222 train loss:3.992810
+ step:1223 train loss:3.895532
+ step:1224 train loss:3.842948
+ step:1225 train loss:3.883440
+ step:1226 train loss:3.868147
+ step:1227 train loss:3.888015
+ step:1228 train loss:3.829001
+ step:1229 train loss:3.861384
+ step:1230 train loss:3.915865
+ step:1231 train loss:3.828366
+ step:1232 train loss:3.867975
+ step:1233 train loss:3.865742
+ step:1234 train loss:3.901035
+ step:1235 train loss:3.849055
+ step:1236 train loss:3.889338
+ step:1237 train loss:3.843271
+ step:1238 train loss:3.857498
+ step:1239 train loss:3.876526
+ step:1240 train loss:3.817823
+ step:1241 train loss:3.828878
+ step:1242 train loss:3.884361
+ step:1243 train loss:3.849646
+ step:1244 train loss:3.907674
+ step:1245 train loss:3.957650
+ step:1246 train loss:3.878843
+ step:1247 train loss:3.862557
+ step:1248 train loss:3.809530
+ step:1249 train loss:3.859677
+ step:1250 validation loss:3.826611
+ step:1250 train loss:3.893163
+ step:1251 train loss:3.877445
+ step:1252 train loss:3.814513
+ step:1253 train loss:3.824313
+ step:1254 train loss:3.822617
+ step:1255 train loss:3.839972
+ step:1256 train loss:3.884478
+ step:1257 train loss:3.918450
+ step:1258 train loss:3.850078
+ step:1259 train loss:3.827213
+ step:1260 train loss:3.938088
+ step:1261 train loss:3.953469
+ step:1262 train loss:3.854068
+ step:1263 train loss:3.841062
+ step:1264 train loss:3.865801
+ step:1265 train loss:3.860689
+ step:1266 train loss:3.890187
+ step:1267 train loss:3.865319
+ step:1268 train loss:3.833306
+ step:1269 train loss:3.852707
+ step:1270 train loss:3.756754
+ step:1271 train loss:3.825117
+ step:1272 train loss:3.802334
+ step:1273 train loss:3.925034
+ step:1274 train loss:3.835282
+ step:1275 train loss:3.875565
+ step:1276 train loss:3.878094
+ step:1277 train loss:3.889197
+ step:1278 train loss:3.807224
+ step:1279 train loss:3.875180
+ step:1280 train loss:3.846453
+ step:1281 train loss:3.868839
+ step:1282 train loss:3.841353
+ step:1283 train loss:3.883650
+ step:1284 train loss:3.892779
+ step:1285 train loss:3.851052
+ step:1286 train loss:3.829006
+ step:1287 train loss:3.808278
+ step:1288 train loss:3.884778
+ step:1289 train loss:3.942343
+ step:1290 train loss:3.822271
+ step:1291 train loss:3.849371
+ step:1292 train loss:3.873369
+ step:1293 train loss:3.802647
+ step:1294 train loss:3.833345
+ step:1295 train loss:3.913428
+ step:1296 train loss:3.873119
+ step:1297 train loss:3.811568
+ step:1298 train loss:3.916762
+ step:1299 train loss:3.887358
+ step:1300 train loss:3.812883
+ step:1301 train loss:3.899492
+ step:1302 train loss:3.821421
+ step:1303 train loss:3.865287
+ step:1304 train loss:3.911485
+ step:1305 train loss:3.852350
+ step:1306 train loss:3.900435
+ step:1307 train loss:3.810319
+ step:1308 train loss:3.804298
+ step:1309 train loss:3.841265
+ step:1310 train loss:3.795567
+ step:1311 train loss:3.831146
+ step:1312 train loss:3.901070
+ step:1313 train loss:3.787488
+ step:1314 train loss:3.846757
+ step:1315 train loss:3.840177
+ step:1316 train loss:3.770811
+ step:1317 train loss:3.803823
+ step:1318 train loss:3.891773
+ step:1319 train loss:3.900659
+ step:1320 train loss:3.816842
+ step:1321 train loss:3.858860
+ step:1322 train loss:3.899477
+ step:1323 train loss:3.870969
+ step:1324 train loss:3.952270
+ step:1325 train loss:3.842304
+ step:1326 train loss:3.873678
+ step:1327 train loss:3.916664
+ step:1328 train loss:3.773400
+ step:1329 train loss:3.807928
+ step:1330 train loss:3.890032
+ step:1331 train loss:3.761925
+ step:1332 train loss:3.895957
+ step:1333 train loss:3.831857
+ step:1334 train loss:3.872089
+ step:1335 train loss:3.898932
+ step:1336 train loss:3.873413
+ step:1337 train loss:3.864900
+ step:1338 train loss:3.928332
+ step:1339 train loss:3.820869
+ step:1340 train loss:3.873405
+ step:1341 train loss:3.884965
+ step:1342 train loss:3.890243
+ step:1343 train loss:3.732008
+ step:1344 train loss:3.943747
+ step:1345 train loss:3.923591
+ step:1346 train loss:3.863621
+ step:1347 train loss:3.847536
+ step:1348 train loss:3.785698
+ step:1349 train loss:3.805537
+ step:1350 train loss:3.812785
+ step:1351 train loss:3.828299
+ step:1352 train loss:3.872229
+ step:1353 train loss:3.823015
+ step:1354 train loss:3.870942
+ step:1355 train loss:3.865641
+ step:1356 train loss:3.820480
+ step:1357 train loss:3.841460
+ step:1358 train loss:3.792540
+ step:1359 train loss:3.849933
+ step:1360 train loss:4.029719
+ step:1361 train loss:3.840365
+ step:1362 train loss:3.825315
+ step:1363 train loss:3.825120
+ step:1364 train loss:3.766476
+ step:1365 train loss:3.814825
+ step:1366 train loss:3.804184
+ step:1367 train loss:3.792404
+ step:1368 train loss:3.809503
+ step:1369 train loss:3.841174
+ step:1370 train loss:3.861629
+ step:1371 train loss:3.873272
+ step:1372 train loss:3.774899
+ step:1373 train loss:3.875476
+ step:1374 train loss:3.927669
+ step:1375 train loss:3.801474
+ step:1376 train loss:3.858068
+ step:1377 train loss:3.868899
+ step:1378 train loss:3.834599
+ step:1379 train loss:3.822088
+ step:1380 train loss:3.857166
+ step:1381 train loss:3.860100
+ step:1382 train loss:3.823347
+ step:1383 train loss:3.760774
+ step:1384 train loss:3.873748
+ step:1385 train loss:3.825006
+ step:1386 train loss:3.781129
+ step:1387 train loss:3.855453
+ step:1388 train loss:3.835993
+ step:1389 train loss:3.808662
+ step:1390 train loss:3.836672
+ step:1391 train loss:3.827110
+ step:1392 train loss:3.858364
+ step:1393 train loss:3.919786
+ step:1394 train loss:3.811592
+ step:1395 train loss:3.820995
+ step:1396 train loss:3.859890
+ step:1397 train loss:3.878932
+ step:1398 train loss:3.841174
+ step:1399 train loss:3.839590
+ step:1400 train loss:3.797957
+ step:1401 train loss:3.804448
+ step:1402 train loss:3.844164
+ step:1403 train loss:3.802745
+ step:1404 train loss:3.761545
+ step:1405 train loss:3.807818
+ step:1406 train loss:3.861656
+ step:1407 train loss:3.807901
+ step:1408 train loss:3.764139
+ step:1409 train loss:3.851090
+ step:1410 train loss:3.800162
+ step:1411 train loss:3.890156
+ step:1412 train loss:3.837520
+ step:1413 train loss:3.812350
+ step:1414 train loss:3.849592
+ step:1415 train loss:3.799215
+ step:1416 train loss:3.856426
+ step:1417 train loss:3.769259
+ step:1418 train loss:3.812027
+ step:1419 train loss:3.808388
+ step:1420 train loss:3.818705
+ step:1421 train loss:3.830803
+ step:1422 train loss:3.888608
+ step:1423 train loss:3.867841
+ step:1424 train loss:3.777815
+ step:1425 train loss:3.804133
+ step:1426 train loss:3.826435
+ step:1427 train loss:3.749098
+ step:1428 train loss:3.834265
+ step:1429 train loss:3.815309
+ step:1430 train loss:3.766468
+ step:1431 train loss:3.832919
+ step:1432 train loss:3.837049
+ step:1433 train loss:3.793988
+ step:1434 train loss:3.747606
+ step:1435 train loss:3.838619
+ step:1436 train loss:3.777488
+ step:1437 train loss:3.778184
+ step:1438 train loss:3.773561
+ step:1439 train loss:3.749923
+ step:1440 train loss:3.817825
+ step:1441 train loss:3.880980
+ step:1442 train loss:3.812835
+ step:1443 train loss:3.748357
+ step:1444 train loss:3.772249
+ step:1445 train loss:3.792529
+ step:1446 train loss:3.829174
+ step:1447 train loss:3.793631
+ step:1448 train loss:3.772240
+ step:1449 train loss:3.851706
+ step:1450 train loss:3.803333
+ step:1451 train loss:3.746500
+ step:1452 train loss:3.806686
+ step:1453 train loss:3.805840
+ step:1454 train loss:3.790751
+ step:1455 train loss:3.743222
+ step:1456 train loss:3.783026
+ step:1457 train loss:3.802218
+ step:1458 train loss:3.871597
+ step:1459 train loss:3.763002
+ step:1460 train loss:3.770894
+ step:1461 train loss:3.878401
+ step:1462 train loss:3.803609
+ step:1463 train loss:3.778724
+ step:1464 train loss:3.799502
+ step:1465 train loss:3.751837
+ step:1466 train loss:3.847315
+ step:1467 train loss:3.806876
+ step:1468 train loss:3.783533
+ step:1469 train loss:3.802670
+ step:1470 train loss:3.777166
+ step:1471 train loss:3.775966
+ step:1472 train loss:3.765676
+ step:1473 train loss:3.760405
+ step:1474 train loss:3.741285
+ step:1475 train loss:3.807868
+ step:1476 train loss:3.818596
+ step:1477 train loss:3.802255
+ step:1478 train loss:3.782046
+ step:1479 train loss:3.759466
+ step:1480 train loss:3.756655
+ step:1481 train loss:3.767833
+ step:1482 train loss:3.828053
+ step:1483 train loss:3.782181
+ step:1484 train loss:3.826070
+ step:1485 train loss:3.854391
+ step:1486 train loss:3.789398
+ step:1487 train loss:3.769118
+ step:1488 train loss:3.746739
+ step:1489 train loss:3.767208
+ step:1490 train loss:3.841179
+ step:1491 train loss:3.780541
+ step:1492 train loss:3.788824
+ step:1493 train loss:3.798223
+ step:1494 train loss:3.792627
+ step:1495 train loss:3.732736
+ step:1496 train loss:3.807983
+ step:1497 train loss:3.756237
+ step:1498 train loss:3.745954
+ step:1499 train loss:3.737407
+ step:1500 validation loss:3.757109
+ step:1500 train loss:3.815202
+ step:1501 train loss:3.740230
+ step:1502 train loss:3.718124
+ step:1503 train loss:3.816290
+ step:1504 train loss:3.659916
+ step:1505 train loss:3.776286
+ step:1506 train loss:3.738159
+ step:1507 train loss:3.734453
+ step:1508 train loss:3.719521
+ step:1509 train loss:3.807832
+ step:1510 train loss:3.723871
+ step:1511 train loss:3.758124
+ step:1512 train loss:3.765553
+ step:1513 train loss:3.755434
+ step:1514 train loss:3.785947
+ step:1515 train loss:3.808322
+ step:1516 train loss:3.735939
+ step:1517 train loss:3.829880
+ step:1518 train loss:3.800495
+ step:1519 train loss:3.817586
+ step:1520 train loss:3.778425
+ step:1521 train loss:3.778063
+ step:1522 train loss:3.797825
+ step:1523 train loss:3.763740
+ step:1524 train loss:3.750288
+ step:1525 train loss:3.763456
+ step:1526 train loss:3.753563
+ step:1527 train loss:3.798655
+ step:1528 train loss:3.803994
+ step:1529 train loss:3.816316
+ step:1530 train loss:3.793533
+ step:1531 train loss:3.740641
+ step:1532 train loss:3.832451
+ step:1533 train loss:3.772871
+ step:1534 train loss:3.763093
+ step:1535 train loss:3.778946
+ step:1536 train loss:3.821423
+ step:1537 train loss:3.795830
+ step:1538 train loss:3.764590
+ step:1539 train loss:3.780171
+ step:1540 train loss:3.743726
+ step:1541 train loss:3.831665
+ step:1542 train loss:3.776404
+ step:1543 train loss:3.879735
+ step:1544 train loss:3.733360
+ step:1545 train loss:3.736825
+ step:1546 train loss:3.750626
+ step:1547 train loss:3.782934
+ step:1548 train loss:3.743047
+ step:1549 train loss:3.805847
+ step:1550 train loss:3.774482
+ step:1551 train loss:3.786910
+ step:1552 train loss:3.798052
+ step:1553 train loss:3.800518
+ step:1554 train loss:3.789695