wf8888884 committed
Commit a25887f · verified · 1 parent: 8d6bfe6

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. Area/checkpoint-450/adapter_model.safetensors +3 -0
  2. Area/checkpoint-450/optimizer.pt +3 -0
  3. Area/checkpoint-50/optimizer.pt +3 -0
  4. Area/checkpoint-550/optimizer.pt +3 -0
  5. Area/checkpoint-580/optimizer.pt +3 -0
  6. Area/dpo/checkpoint-260/adapter_model.safetensors +3 -0
  7. Area_SFT/checkpoint-580/added_tokens.json +3 -0
  8. Area_SFT/checkpoint-580/tokenizer.json +0 -0
  9. Area_SFT/checkpoint-580/trainer_state.json +903 -0
  10. Area_Time_SFT/README.md +62 -0
  11. Area_Time_SFT/adapter_config.json +34 -0
  12. Area_Time_SFT/added_tokens.json +3 -0
  13. Area_Time_SFT/all_results.json +8 -0
  14. Area_Time_SFT/checkpoint-100/README.md +202 -0
  15. Area_Time_SFT/checkpoint-100/adapter_config.json +34 -0
  16. Area_Time_SFT/checkpoint-100/added_tokens.json +3 -0
  17. Area_Time_SFT/checkpoint-100/special_tokens_map.json +36 -0
  18. Area_Time_SFT/checkpoint-100/tokenizer.json +0 -0
  19. Area_Time_SFT/checkpoint-100/tokenizer_config.json +59 -0
  20. Area_Time_SFT/checkpoint-100/trainer_state.json +183 -0
  21. Area_Time_SFT/checkpoint-200/README.md +202 -0
  22. Area_Time_SFT/checkpoint-200/adapter_config.json +34 -0
  23. Area_Time_SFT/checkpoint-200/added_tokens.json +3 -0
  24. Area_Time_SFT/checkpoint-200/special_tokens_map.json +36 -0
  25. Area_Time_SFT/checkpoint-200/tokenizer.json +0 -0
  26. Area_Time_SFT/checkpoint-200/tokenizer_config.json +59 -0
  27. Area_Time_SFT/checkpoint-200/trainer_state.json +333 -0
  28. Area_Time_SFT/checkpoint-300/README.md +202 -0
  29. Area_Time_SFT/checkpoint-300/adapter_config.json +34 -0
  30. Area_Time_SFT/checkpoint-300/added_tokens.json +3 -0
  31. Area_Time_SFT/checkpoint-300/special_tokens_map.json +36 -0
  32. Area_Time_SFT/checkpoint-300/tokenizer.json +0 -0
  33. Area_Time_SFT/checkpoint-300/tokenizer_config.json +59 -0
  34. Area_Time_SFT/checkpoint-300/trainer_state.json +483 -0
  35. Area_Time_SFT/checkpoint-400/README.md +202 -0
  36. Area_Time_SFT/checkpoint-400/adapter_config.json +34 -0
  37. Area_Time_SFT/checkpoint-400/added_tokens.json +3 -0
  38. Area_Time_SFT/checkpoint-400/special_tokens_map.json +36 -0
  39. Area_Time_SFT/checkpoint-400/tokenizer.json +0 -0
  40. Area_Time_SFT/checkpoint-400/tokenizer_config.json +59 -0
  41. Area_Time_SFT/checkpoint-400/trainer_state.json +633 -0
  42. Area_Time_SFT/checkpoint-500/README.md +202 -0
  43. Area_Time_SFT/checkpoint-500/adapter_config.json +34 -0
  44. Area_Time_SFT/checkpoint-500/added_tokens.json +3 -0
  45. Area_Time_SFT/checkpoint-500/special_tokens_map.json +36 -0
  46. Area_Time_SFT/checkpoint-500/tokenizer.json +0 -0
  47. Area_Time_SFT/checkpoint-500/tokenizer_config.json +59 -0
  48. Area_Time_SFT/checkpoint-500/trainer_state.json +783 -0
  49. Area_Time_SFT/checkpoint-540/README.md +202 -0
  50. Area_Time_SFT/checkpoint-540/adapter_config.json +34 -0
Area/checkpoint-450/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b925e48aa23adc2cdd9ca0b8044801906869dffccc4b879e6b4fe4083f1f78dc
+ size 80013120
Area/checkpoint-450/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:696714bb521e00079957b8e9e06bd7e9dd3a2193c7f39958b9567a52a129d4ab
+ size 160284754
Area/checkpoint-50/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e25e6943d497b99434bfb5ae5df805403baf57ac8392c79935192925336fb4ac
+ size 160284754
Area/checkpoint-550/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1168e6efefee2c5f638ea577a8f65eda7c86d9965236af1e88238af2d9cf82f2
+ size 160284754
Area/checkpoint-580/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08bcc2c624a7e85c903a923dd15e34c7b8ffbd2639db23c6abb9125133a47ece
+ size 160284754
Area/dpo/checkpoint-260/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eedc35fe5992cc3ff33851aa6fc5d4ac6b08e63168a97043b7c2de85d2636b84
+ size 80013120
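The large binary files in this commit are stored as Git LFS pointer files, each holding exactly the three `version` / `oid` / `size` lines shown in the diffs above. A minimal sketch of parsing such a pointer into its fields (the function name is hypothetical; the pointer text is copied from the `Area/dpo/checkpoint-260/adapter_model.safetensors` diff):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version/oid/size lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>"; split on the first space only.
        key, _, value = line.partition(" ")
        fields[key] = value
    # size is the byte count of the real object; convert for convenience.
    fields["size"] = int(fields["size"])
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:eedc35fe5992cc3ff33851aa6fc5d4ac6b08e63168a97043b7c2de85d2636b84
size 80013120
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 80013120
```

The `oid` is the SHA-256 of the actual file content, so a downloaded object can be verified by hashing it and comparing against this field.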
Area_SFT/checkpoint-580/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "[PAD]": 32000
+ }
Area_SFT/checkpoint-580/tokenizer.json ADDED
The diff for this file is too large to render.
Area_SFT/checkpoint-580/trainer_state.json ADDED
@@ -0,0 +1,903 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 19.502092050209207,
+ "eval_steps": 500,
+ "global_step": 580,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.33472803347280333,
+ "grad_norm": 3.9892160892486572,
+ "learning_rate": 8.620689655172415e-07,
+ "logits/chosen": -2.315223217010498,
+ "logits/rejected": -2.3654401302337646,
+ "logps/chosen": -65.86729431152344,
+ "logps/rejected": -77.53572845458984,
+ "loss": 0.6929,
+ "rewards/accuracies": 0.3499999940395355,
+ "rewards/chosen": 0.0023138518445193768,
+ "rewards/margins": -0.001122759305872023,
+ "rewards/rejected": 0.0034366101026535034,
+ "step": 10
+ },
+ {
+ "epoch": 0.6694560669456067,
+ "grad_norm": 3.5659756660461426,
+ "learning_rate": 1.724137931034483e-06,
+ "logits/chosen": -2.341399669647217,
+ "logits/rejected": -2.3567094802856445,
+ "logps/chosen": -66.60242462158203,
+ "logps/rejected": -69.70094299316406,
+ "loss": 0.6929,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -0.0013719359412789345,
+ "rewards/margins": -0.0035000313073396683,
+ "rewards/rejected": 0.002128095831722021,
+ "step": 20
+ },
+ {
+ "epoch": 1.00418410041841,
+ "grad_norm": 4.912586688995361,
+ "learning_rate": 2.5862068965517246e-06,
+ "logits/chosen": -2.3429622650146484,
+ "logits/rejected": -2.3658394813537598,
+ "logps/chosen": -71.6301040649414,
+ "logps/rejected": -78.41346740722656,
+ "loss": 0.6938,
+ "rewards/accuracies": 0.5375000238418579,
+ "rewards/chosen": 0.003577103139832616,
+ "rewards/margins": 0.00785654503852129,
+ "rewards/rejected": -0.004279441200196743,
+ "step": 30
+ },
+ {
+ "epoch": 1.3389121338912133,
+ "grad_norm": 4.810107707977295,
+ "learning_rate": 3.448275862068966e-06,
+ "logits/chosen": -2.3610458374023438,
+ "logits/rejected": -2.3885395526885986,
+ "logps/chosen": -66.8291244506836,
+ "logps/rejected": -62.15415573120117,
+ "loss": 0.6893,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": -6.734435737598687e-05,
+ "rewards/margins": 0.006865750066936016,
+ "rewards/rejected": -0.0069330958649516106,
+ "step": 40
+ },
+ {
+ "epoch": 1.6736401673640167,
+ "grad_norm": 4.670071125030518,
+ "learning_rate": 4.310344827586207e-06,
+ "logits/chosen": -2.304999351501465,
+ "logits/rejected": -2.335301399230957,
+ "logps/chosen": -75.09913635253906,
+ "logps/rejected": -77.72399139404297,
+ "loss": 0.6878,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": 0.003225918160751462,
+ "rewards/margins": 0.010454346425831318,
+ "rewards/rejected": -0.007228427566587925,
+ "step": 50
+ },
+ {
+ "epoch": 2.00836820083682,
+ "grad_norm": 4.2342000007629395,
+ "learning_rate": 4.999818897894192e-06,
+ "logits/chosen": -2.363574504852295,
+ "logits/rejected": -2.363882064819336,
+ "logps/chosen": -62.84125900268555,
+ "logps/rejected": -61.92932891845703,
+ "loss": 0.6855,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -0.0032769464887678623,
+ "rewards/margins": 0.02090405486524105,
+ "rewards/rejected": -0.024181004613637924,
+ "step": 60
+ },
+ {
+ "epoch": 2.3430962343096233,
+ "grad_norm": 4.369245529174805,
+ "learning_rate": 4.9934830787948756e-06,
+ "logits/chosen": -2.378016948699951,
+ "logits/rejected": -2.373137950897217,
+ "logps/chosen": -74.67327880859375,
+ "logps/rejected": -69.20399475097656,
+ "loss": 0.668,
+ "rewards/accuracies": 0.7875000238418579,
+ "rewards/chosen": -0.0003526444488670677,
+ "rewards/margins": 0.04865006357431412,
+ "rewards/rejected": -0.04900271072983742,
+ "step": 70
+ },
+ {
+ "epoch": 2.6778242677824267,
+ "grad_norm": 4.444687366485596,
+ "learning_rate": 4.978118375700895e-06,
+ "logits/chosen": -2.3403103351593018,
+ "logits/rejected": -2.370321273803711,
+ "logps/chosen": -77.29728698730469,
+ "logps/rejected": -85.79756164550781,
+ "loss": 0.6566,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": 0.0051120575517416,
+ "rewards/margins": 0.09415190666913986,
+ "rewards/rejected": -0.08903985470533371,
+ "step": 80
+ },
+ {
+ "epoch": 3.01255230125523,
+ "grad_norm": 4.876573085784912,
+ "learning_rate": 4.953780424089803e-06,
+ "logits/chosen": -2.3614611625671387,
+ "logits/rejected": -2.385697841644287,
+ "logps/chosen": -73.22442626953125,
+ "logps/rejected": -82.25682067871094,
+ "loss": 0.645,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -0.016868198290467262,
+ "rewards/margins": 0.10679063946008682,
+ "rewards/rejected": -0.12365883588790894,
+ "step": 90
+ },
+ {
+ "epoch": 3.3472803347280333,
+ "grad_norm": 4.355966567993164,
+ "learning_rate": 4.920557351506409e-06,
+ "logits/chosen": -2.323256254196167,
+ "logits/rejected": -2.341057300567627,
+ "logps/chosen": -78.37105560302734,
+ "logps/rejected": -86.8406982421875,
+ "loss": 0.6072,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -0.015012519434094429,
+ "rewards/margins": 0.20561759173870087,
+ "rewards/rejected": -0.22063009440898895,
+ "step": 100
+ },
+ {
+ "epoch": 3.7698744769874475,
+ "grad_norm": 4.361391067504883,
+ "learning_rate": 4.878569458453592e-06,
+ "logits/chosen": -2.3163838386535645,
+ "logits/rejected": -2.3566031455993652,
+ "logps/chosen": -83.33145904541016,
+ "logps/rejected": -96.48517608642578,
+ "loss": 0.5908,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": -0.08870697021484375,
+ "rewards/margins": 0.24879300594329834,
+ "rewards/rejected": -0.3374999761581421,
+ "step": 110
+ },
+ {
+ "epoch": 4.104602510460251,
+ "grad_norm": 4.315061569213867,
+ "learning_rate": 4.827968782785062e-06,
+ "logits/chosen": -2.3728129863739014,
+ "logits/rejected": -2.3889667987823486,
+ "logps/chosen": -73.0484619140625,
+ "logps/rejected": -73.4913558959961,
+ "loss": 0.5783,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -0.0605628564953804,
+ "rewards/margins": 0.2945060133934021,
+ "rewards/rejected": -0.3550689220428467,
+ "step": 120
+ },
+ {
+ "epoch": 4.439330543933054,
+ "grad_norm": 4.438860893249512,
+ "learning_rate": 4.7689385491773934e-06,
+ "logits/chosen": -2.3526523113250732,
+ "logits/rejected": -2.364795684814453,
+ "logps/chosen": -67.69630432128906,
+ "logps/rejected": -84.85731506347656,
+ "loss": 0.5338,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -0.1054786667227745,
+ "rewards/margins": 0.4161924421787262,
+ "rewards/rejected": -0.5216711759567261,
+ "step": 130
+ },
+ {
+ "epoch": 4.7740585774058575,
+ "grad_norm": 4.5405473709106445,
+ "learning_rate": 4.70169250567482e-06,
+ "logits/chosen": -2.3756489753723145,
+ "logits/rejected": -2.374919891357422,
+ "logps/chosen": -68.5466079711914,
+ "logps/rejected": -76.15412902832031,
+ "loss": 0.5215,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -0.16213981807231903,
+ "rewards/margins": 0.47565969824790955,
+ "rewards/rejected": -0.6377995610237122,
+ "step": 140
+ },
+ {
+ "epoch": 5.108786610878661,
+ "grad_norm": 4.596691608428955,
+ "learning_rate": 4.626474149709127e-06,
+ "logits/chosen": -2.428659439086914,
+ "logits/rejected": -2.4141571521759033,
+ "logps/chosen": -78.08479309082031,
+ "logps/rejected": -68.3617172241211,
+ "loss": 0.5019,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -0.20662447810173035,
+ "rewards/margins": 0.4026559889316559,
+ "rewards/rejected": -0.6092804670333862,
+ "step": 150
+ },
+ {
+ "epoch": 5.443514644351464,
+ "grad_norm": 4.364648818969727,
+ "learning_rate": 4.54355584639723e-06,
+ "logits/chosen": -2.408982992172241,
+ "logits/rejected": -2.4170727729797363,
+ "logps/chosen": -81.3556900024414,
+ "logps/rejected": -86.85897064208984,
+ "loss": 0.4586,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -0.23941104114055634,
+ "rewards/margins": 0.675674319267273,
+ "rewards/rejected": -0.9150853157043457,
+ "step": 160
+ },
+ {
+ "epoch": 5.7782426778242675,
+ "grad_norm": 5.241800308227539,
+ "learning_rate": 4.45323784230908e-06,
+ "logits/chosen": -2.4194908142089844,
+ "logits/rejected": -2.4498963356018066,
+ "logps/chosen": -62.32392120361328,
+ "logps/rejected": -76.39479064941406,
+ "loss": 0.4442,
+ "rewards/accuracies": 0.887499988079071,
+ "rewards/chosen": -0.26662638783454895,
+ "rewards/margins": 0.6662653088569641,
+ "rewards/rejected": -0.9328916668891907,
+ "step": 170
+ },
+ {
+ "epoch": 6.112970711297071,
+ "grad_norm": 4.73954439163208,
+ "learning_rate": 4.355847178277025e-06,
+ "logits/chosen": -2.4365036487579346,
+ "logits/rejected": -2.435439348220825,
+ "logps/chosen": -73.06513977050781,
+ "logps/rejected": -81.04569244384766,
+ "loss": 0.4355,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -0.37924182415008545,
+ "rewards/margins": 0.7773979902267456,
+ "rewards/rejected": -1.156639814376831,
+ "step": 180
+ },
+ {
+ "epoch": 6.447698744769874,
+ "grad_norm": 5.250921726226807,
+ "learning_rate": 4.2517365051833564e-06,
+ "logits/chosen": -2.387922525405884,
+ "logits/rejected": -2.3835678100585938,
+ "logps/chosen": -64.85784912109375,
+ "logps/rejected": -90.08439636230469,
+ "loss": 0.3719,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -0.42228370904922485,
+ "rewards/margins": 1.0562283992767334,
+ "rewards/rejected": -1.478512167930603,
+ "step": 190
+ },
+ {
+ "epoch": 6.7824267782426775,
+ "grad_norm": 5.088508129119873,
+ "learning_rate": 4.141282807014034e-06,
+ "logits/chosen": -2.376319169998169,
+ "logits/rejected": -2.3985953330993652,
+ "logps/chosen": -70.64585876464844,
+ "logps/rejected": -89.17048645019531,
+ "loss": 0.3829,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": -0.5233972072601318,
+ "rewards/margins": 1.1063960790634155,
+ "rewards/rejected": -1.629793405532837,
+ "step": 200
+ },
+ {
+ "epoch": 7.117154811715481,
+ "grad_norm": 4.6062092781066895,
+ "learning_rate": 4.024886035802432e-06,
+ "logits/chosen": -2.371851682662964,
+ "logits/rejected": -2.3844287395477295,
+ "logps/chosen": -74.63328552246094,
+ "logps/rejected": -97.81452178955078,
+ "loss": 0.3522,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -0.6278538703918457,
+ "rewards/margins": 1.2317354679107666,
+ "rewards/rejected": -1.8595889806747437,
+ "step": 210
+ },
+ {
+ "epoch": 7.451882845188284,
+ "grad_norm": 5.105669021606445,
+ "learning_rate": 3.9029676634059565e-06,
+ "logits/chosen": -2.4011385440826416,
+ "logits/rejected": -2.4039382934570312,
+ "logps/chosen": -75.92952728271484,
+ "logps/rejected": -78.41490936279297,
+ "loss": 0.3219,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -0.39645594358444214,
+ "rewards/margins": 1.2095177173614502,
+ "rewards/rejected": -1.6059738397598267,
+ "step": 220
+ },
+ {
+ "epoch": 7.786610878661088,
+ "grad_norm": 6.292915344238281,
+ "learning_rate": 3.7759691553595214e-06,
+ "logits/chosen": -2.3707780838012695,
+ "logits/rejected": -2.377169609069824,
+ "logps/chosen": -88.07064056396484,
+ "logps/rejected": -108.6225814819336,
+ "loss": 0.3041,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": -0.9827474355697632,
+ "rewards/margins": 1.3651618957519531,
+ "rewards/rejected": -2.3479092121124268,
+ "step": 230
+ },
+ {
+ "epoch": 8.121338912133892,
+ "grad_norm": 5.0669097900390625,
+ "learning_rate": 3.6443503723320837e-06,
+ "logits/chosen": -2.3608062267303467,
+ "logits/rejected": -2.3792402744293213,
+ "logps/chosen": -72.83047485351562,
+ "logps/rejected": -91.09341430664062,
+ "loss": 0.3065,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": -0.9334943890571594,
+ "rewards/margins": 1.3210034370422363,
+ "rewards/rejected": -2.25449800491333,
+ "step": 240
+ },
+ {
+ "epoch": 8.456066945606695,
+ "grad_norm": 5.0598931312561035,
+ "learning_rate": 3.508587904974522e-06,
+ "logits/chosen": -2.324855327606201,
+ "logits/rejected": -2.364541530609131,
+ "logps/chosen": -90.57644653320312,
+ "logps/rejected": -106.41752624511719,
+ "loss": 0.2498,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -0.8531273007392883,
+ "rewards/margins": 1.8315904140472412,
+ "rewards/rejected": -2.684717893600464,
+ "step": 250
+ },
+ {
+ "epoch": 8.790794979079498,
+ "grad_norm": 6.120776653289795,
+ "learning_rate": 3.3691733481883693e-06,
+ "logits/chosen": -2.3436760902404785,
+ "logits/rejected": -2.3720099925994873,
+ "logps/chosen": -86.95789337158203,
+ "logps/rejected": -102.34903717041016,
+ "loss": 0.2532,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -1.1573801040649414,
+ "rewards/margins": 1.7690637111663818,
+ "rewards/rejected": -2.9264438152313232,
+ "step": 260
+ },
+ {
+ "epoch": 9.125523012552302,
+ "grad_norm": 4.666015625,
+ "learning_rate": 3.226611521064278e-06,
+ "logits/chosen": -2.3132309913635254,
+ "logits/rejected": -2.309297800064087,
+ "logps/chosen": -78.139404296875,
+ "logps/rejected": -99.09760284423828,
+ "loss": 0.2314,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -1.0649818181991577,
+ "rewards/margins": 1.8774713277816772,
+ "rewards/rejected": -2.942453384399414,
+ "step": 270
+ },
+ {
+ "epoch": 9.460251046025105,
+ "grad_norm": 8.85567855834961,
+ "learning_rate": 3.0814186389357765e-06,
+ "logits/chosen": -2.3629987239837646,
+ "logits/rejected": -2.385927200317383,
+ "logps/chosen": -91.09283447265625,
+ "logps/rejected": -102.37603759765625,
+ "loss": 0.2142,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": -1.5404099225997925,
+ "rewards/margins": 2.121422290802002,
+ "rewards/rejected": -3.661832094192505,
+ "step": 280
+ },
+ {
+ "epoch": 9.794979079497908,
+ "grad_norm": 5.228074550628662,
+ "learning_rate": 2.9341204441673267e-06,
+ "logits/chosen": -2.356905221939087,
+ "logits/rejected": -2.3635311126708984,
+ "logps/chosen": -91.65778350830078,
+ "logps/rejected": -117.89949035644531,
+ "loss": 0.1881,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -1.6620346307754517,
+ "rewards/margins": 2.1766200065612793,
+ "rewards/rejected": -3.8386547565460205,
+ "step": 290
+ },
+ {
+ "epoch": 10.129707112970712,
+ "grad_norm": 5.115809440612793,
+ "learning_rate": 2.785250302445062e-06,
+ "logits/chosen": -2.2903695106506348,
+ "logits/rejected": -2.2926692962646484,
+ "logps/chosen": -104.5173110961914,
+ "logps/rejected": -123.13216400146484,
+ "loss": 0.1798,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": -1.7224146127700806,
+ "rewards/margins": 2.3892369270324707,
+ "rewards/rejected": -4.111651420593262,
+ "step": 300
+ },
+ {
+ "epoch": 10.464435146443515,
+ "grad_norm": 5.882064342498779,
+ "learning_rate": 2.6353472714635443e-06,
+ "logits/chosen": -2.2836384773254395,
+ "logits/rejected": -2.2969231605529785,
+ "logps/chosen": -88.8235855102539,
+ "logps/rejected": -119.67433166503906,
+ "loss": 0.1558,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -1.6937462091445923,
+ "rewards/margins": 2.4059743881225586,
+ "rewards/rejected": -4.0997209548950195,
+ "step": 310
+ },
+ {
+ "epoch": 10.799163179916318,
+ "grad_norm": 6.9003376960754395,
+ "learning_rate": 2.4849541490017868e-06,
+ "logits/chosen": -2.289567232131958,
+ "logits/rejected": -2.3216423988342285,
+ "logps/chosen": -90.58432006835938,
+ "logps/rejected": -118.13006591796875,
+ "loss": 0.1538,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -1.6574989557266235,
+ "rewards/margins": 2.9354054927825928,
+ "rewards/rejected": -4.592904567718506,
+ "step": 320
+ },
+ {
+ "epoch": 11.133891213389122,
+ "grad_norm": 4.916522979736328,
+ "learning_rate": 2.3346155074564712e-06,
+ "logits/chosen": -2.2699310779571533,
+ "logits/rejected": -2.3001017570495605,
+ "logps/chosen": -100.2576675415039,
+ "logps/rejected": -133.8759307861328,
+ "loss": 0.1373,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -2.174388885498047,
+ "rewards/margins": 3.038696050643921,
+ "rewards/rejected": -5.213086128234863,
+ "step": 330
+ },
+ {
+ "epoch": 11.468619246861925,
+ "grad_norm": 6.739722728729248,
+ "learning_rate": 2.184875721949277e-06,
+ "logits/chosen": -2.2740581035614014,
+ "logits/rejected": -2.315854549407959,
+ "logps/chosen": -83.28224182128906,
+ "logps/rejected": -107.7516098022461,
+ "loss": 0.1257,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": -1.777440071105957,
+ "rewards/margins": 2.704913377761841,
+ "rewards/rejected": -4.482353687286377,
+ "step": 340
+ },
+ {
+ "epoch": 11.803347280334728,
+ "grad_norm": 4.988001823425293,
+ "learning_rate": 2.0362769991485514e-06,
+ "logits/chosen": -2.2616047859191895,
+ "logits/rejected": -2.2596449851989746,
+ "logps/chosen": -107.07649230957031,
+ "logps/rejected": -139.80697631835938,
+ "loss": 0.1184,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -2.618734359741211,
+ "rewards/margins": 3.291966199874878,
+ "rewards/rejected": -5.910700798034668,
+ "step": 350
+ },
+ {
+ "epoch": 12.138075313807532,
+ "grad_norm": 4.956677436828613,
+ "learning_rate": 1.8893574139429226e-06,
+ "logits/chosen": -2.233889102935791,
+ "logits/rejected": -2.2601330280303955,
+ "logps/chosen": -95.82877349853516,
+ "logps/rejected": -138.9019775390625,
+ "loss": 0.1106,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -2.5194194316864014,
+ "rewards/margins": 3.470710039138794,
+ "rewards/rejected": -5.990128993988037,
+ "step": 360
+ },
+ {
+ "epoch": 12.472803347280335,
+ "grad_norm": 4.895273208618164,
+ "learning_rate": 1.744648961076068e-06,
+ "logits/chosen": -2.2324471473693848,
+ "logits/rejected": -2.233158588409424,
+ "logps/chosen": -117.90779113769531,
+ "logps/rejected": -141.53753662109375,
+ "loss": 0.0907,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -2.7019529342651367,
+ "rewards/margins": 3.4567368030548096,
+ "rewards/rejected": -6.158689975738525,
+ "step": 370
+ },
+ {
+ "epoch": 12.807531380753138,
+ "grad_norm": 5.789585590362549,
+ "learning_rate": 1.602675628797636e-06,
+ "logits/chosen": -2.2296676635742188,
+ "logits/rejected": -2.2535061836242676,
+ "logps/chosen": -117.69709777832031,
+ "logps/rejected": -150.61538696289062,
+ "loss": 0.0923,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.4674232006073,
+ "rewards/margins": 3.8480961322784424,
+ "rewards/rejected": -7.3155198097229,
+ "step": 380
+ },
+ {
+ "epoch": 13.142259414225942,
+ "grad_norm": 4.082385540008545,
+ "learning_rate": 1.4639515015056205e-06,
+ "logits/chosen": -2.232024908065796,
+ "logits/rejected": -2.235680103302002,
+ "logps/chosen": -96.60597229003906,
+ "logps/rejected": -130.7404022216797,
+ "loss": 0.0876,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": -2.816174268722534,
+ "rewards/margins": 3.2225749492645264,
+ "rewards/rejected": -6.038748741149902,
+ "step": 390
+ },
+ {
+ "epoch": 13.476987447698745,
+ "grad_norm": 4.423525333404541,
+ "learning_rate": 1.328978898250525e-06,
+ "logits/chosen": -2.2275261878967285,
+ "logits/rejected": -2.2222421169281006,
+ "logps/chosen": -107.16130065917969,
+ "logps/rejected": -148.48500061035156,
+ "loss": 0.0662,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.10858154296875,
+ "rewards/margins": 3.9508070945739746,
+ "rewards/rejected": -7.059388637542725,
+ "step": 400
+ },
+ {
+ "epoch": 13.811715481171548,
+ "grad_norm": 3.721898078918457,
+ "learning_rate": 1.198246553841744e-06,
+ "logits/chosen": -2.2333359718322754,
+ "logits/rejected": -2.2442851066589355,
+ "logps/chosen": -104.8399429321289,
+ "logps/rejected": -137.98049926757812,
+ "loss": 0.0808,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -3.3306171894073486,
+ "rewards/margins": 3.471170425415039,
+ "rewards/rejected": -6.80178689956665,
+ "step": 410
+ },
+ {
+ "epoch": 14.146443514644352,
+ "grad_norm": 4.411396026611328,
+ "learning_rate": 1.0722278491423998e-06,
+ "logits/chosen": -2.2033934593200684,
+ "logits/rejected": -2.206735610961914,
+ "logps/chosen": -122.04057312011719,
+ "logps/rejected": -139.2510528564453,
+ "loss": 0.0651,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.4570648670196533,
+ "rewards/margins": 3.5551300048828125,
+ "rewards/rejected": -7.012194633483887,
+ "step": 420
+ },
+ {
+ "epoch": 14.481171548117155,
+ "grad_norm": 4.514885902404785,
+ "learning_rate": 9.513790969606926e-07,
+ "logits/chosen": -2.1915841102600098,
+ "logits/rejected": -2.23836088180542,
+ "logps/chosen": -111.24171447753906,
+ "logps/rejected": -159.8766326904297,
+ "loss": 0.0609,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.719008207321167,
+ "rewards/margins": 4.095301628112793,
+ "rewards/rejected": -7.814309597015381,
+ "step": 430
+ },
+ {
+ "epoch": 14.815899581589958,
+ "grad_norm": 6.274470329284668,
+ "learning_rate": 8.361378897445643e-07,
+ "logits/chosen": -2.2278056144714355,
+ "logits/rejected": -2.2360167503356934,
+ "logps/chosen": -95.31124877929688,
+ "logps/rejected": -136.5842742919922,
+ "loss": 0.0624,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.1528682708740234,
+ "rewards/margins": 4.095580577850342,
+ "rewards/rejected": -7.248448848724365,
+ "step": 440
+ },
+ {
+ "epoch": 15.150627615062762,
+ "grad_norm": 4.49701452255249,
+ "learning_rate": 7.269215150626391e-07,
+ "logits/chosen": -2.196305513381958,
+ "logits/rejected": -2.2363815307617188,
+ "logps/chosen": -101.97003173828125,
+ "logps/rejected": -151.15646362304688,
+ "loss": 0.0513,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.562624454498291,
+ "rewards/margins": 4.104978084564209,
+ "rewards/rejected": -7.667603492736816,
+ "step": 450
+ },
+ {
+ "epoch": 15.485355648535565,
+ "grad_norm": 4.746140956878662,
+ "learning_rate": 6.241254446089942e-07,
+ "logits/chosen": -2.1973156929016113,
+ "logits/rejected": -2.217236042022705,
+ "logps/chosen": -108.36579895019531,
+ "logps/rejected": -146.45358276367188,
+ "loss": 0.0588,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -3.8717312812805176,
+ "rewards/margins": 3.9130451679229736,
+ "rewards/rejected": -7.784776210784912,
+ "step": 460
+ },
+ {
+ "epoch": 15.820083682008368,
+ "grad_norm": 2.910703182220459,
+ "learning_rate": 5.281219022030423e-07,
+ "logits/chosen": -2.1933655738830566,
+ "logits/rejected": -2.193134307861328,
+ "logps/chosen": -125.05366516113281,
+ "logps/rejected": -158.47085571289062,
+ "loss": 0.0484,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.9004642963409424,
+ "rewards/margins": 4.252579689025879,
+ "rewards/rejected": -8.153043746948242,
+ "step": 470
+ },
+ {
+ "epoch": 16.15481171548117,
+ "grad_norm": 2.814772367477417,
+ "learning_rate": 4.392585159698087e-07,
+ "logits/chosen": -2.1886072158813477,
+ "logits/rejected": -2.1937201023101807,
+ "logps/chosen": -113.6917724609375,
+ "logps/rejected": -160.83851623535156,
+ "loss": 0.0443,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.779496669769287,
+ "rewards/margins": 4.261423587799072,
+ "rewards/rejected": -8.04092025756836,
+ "step": 480
+ },
+ {
+ "epoch": 16.489539748953973,
+ "grad_norm": 3.579289197921753,
+ "learning_rate": 3.578570595810274e-07,
+ "logits/chosen": -2.19553542137146,
+ "logits/rejected": -2.1956517696380615,
+ "logps/chosen": -110.0953140258789,
+ "logps/rejected": -165.99652099609375,
+ "loss": 0.0483,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.7303287982940674,
+ "rewards/margins": 4.63196325302124,
+ "rewards/rejected": -8.36229133605957,
+ "step": 490
+ },
+ {
+ "epoch": 16.824267782426777,
+ "grad_norm": 4.428997039794922,
+ "learning_rate": 2.8421228711503127e-07,
+ "logits/chosen": -2.1704812049865723,
+ "logits/rejected": -2.183809280395508,
+ "logps/chosen": -99.66941833496094,
+ "logps/rejected": -152.3459930419922,
+ "loss": 0.0468,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -3.7393298149108887,
+ "rewards/margins": 4.549952030181885,
+ "rewards/rejected": -8.289281845092773,
+ "step": 500
+ },
+ {
+ "epoch": 17.15899581589958,
+ "grad_norm": 3.5141501426696777,
+ "learning_rate": 2.1859086575439225e-07,
+ "logits/chosen": -2.114220380783081,
+ "logits/rejected": -2.1453700065612793,
+ "logps/chosen": -119.66983795166016,
+ "logps/rejected": -161.91326904296875,
+ "loss": 0.0398,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -4.162126064300537,
+ "rewards/margins": 4.615090370178223,
+ "rewards/rejected": -8.777216911315918,
+ "step": 510
+ },
+ {
+ "epoch": 17.493723849372383,
+ "grad_norm": 3.1655192375183105,
+ "learning_rate": 1.6123041018599766e-07,
+ "logits/chosen": -2.1598916053771973,
+ "logits/rejected": -2.151259660720825,
+ "logps/chosen": -112.63690185546875,
783
+ "logps/rejected": -166.2643280029297,
784
+ "loss": 0.0436,
785
+ "rewards/accuracies": 1.0,
786
+ "rewards/chosen": -3.9895179271698,
787
+ "rewards/margins": 4.71376895904541,
788
+ "rewards/rejected": -8.703287124633789,
789
+ "step": 520
790
+ },
791
+ {
792
+ "epoch": 17.828451882845187,
793
+ "grad_norm": 3.882448673248291,
794
+ "learning_rate": 1.1233862220001168e-07,
795
+ "logits/chosen": -2.1259069442749023,
796
+ "logits/rejected": -2.1679906845092773,
797
+ "logps/chosen": -125.42464447021484,
798
+ "logps/rejected": -172.642822265625,
799
+ "loss": 0.0477,
800
+ "rewards/accuracies": 1.0,
801
+ "rewards/chosen": -4.517868995666504,
802
+ "rewards/margins": 4.534079551696777,
803
+ "rewards/rejected": -9.051949501037598,
804
+ "step": 530
805
+ },
806
+ {
807
+ "epoch": 18.16317991631799,
808
+ "grad_norm": 4.275852203369141,
809
+ "learning_rate": 7.209253860320897e-08,
810
+ "logits/chosen": -2.1740193367004395,
811
+ "logits/rejected": -2.1897895336151123,
812
+ "logps/chosen": -133.6866455078125,
813
+ "logps/rejected": -160.42288208007812,
814
+ "loss": 0.0408,
815
+ "rewards/accuracies": 1.0,
816
+ "rewards/chosen": -4.688433647155762,
817
+ "rewards/margins": 4.159676551818848,
818
+ "rewards/rejected": -8.848111152648926,
819
+ "step": 540
820
+ },
821
+ {
822
+ "epoch": 18.497907949790793,
823
+ "grad_norm": 3.586958646774292,
824
+ "learning_rate": 4.063789016999331e-08,
825
+ "logits/chosen": -2.157022476196289,
826
+ "logits/rejected": -2.179140567779541,
827
+ "logps/chosen": -122.80704498291016,
828
+ "logps/rejected": -170.018798828125,
829
+ "loss": 0.0423,
830
+ "rewards/accuracies": 1.0,
831
+ "rewards/chosen": -4.441340923309326,
832
+ "rewards/margins": 4.718876838684082,
833
+ "rewards/rejected": -9.16021728515625,
834
+ "step": 550
835
+ },
836
+ {
837
+ "epoch": 18.8326359832636,
838
+ "grad_norm": 2.9948108196258545,
839
+ "learning_rate": 1.808857395232788e-08,
840
+ "logits/chosen": -2.1356325149536133,
841
+ "logits/rejected": -2.1427738666534424,
842
+ "logps/chosen": -112.40225982666016,
843
+ "logps/rejected": -166.0186767578125,
844
+ "loss": 0.04,
845
+ "rewards/accuracies": 1.0,
846
+ "rewards/chosen": -4.3510541915893555,
847
+ "rewards/margins": 4.859889030456543,
848
+ "rewards/rejected": -9.210943222045898,
849
+ "step": 560
850
+ },
851
+ {
852
+ "epoch": 19.1673640167364,
853
+ "grad_norm": 3.9700310230255127,
854
+ "learning_rate": 4.526240859345499e-09,
855
+ "logits/chosen": -2.1602721214294434,
856
+ "logits/rejected": -2.168781042098999,
857
+ "logps/chosen": -125.03184509277344,
858
+ "logps/rejected": -174.86129760742188,
859
+ "loss": 0.041,
860
+ "rewards/accuracies": 1.0,
861
+ "rewards/chosen": -4.168228626251221,
862
+ "rewards/margins": 4.846875190734863,
863
+ "rewards/rejected": -9.015104293823242,
864
+ "step": 570
865
+ },
866
+ {
867
+ "epoch": 19.502092050209207,
868
+ "grad_norm": 3.226668119430542,
869
+ "learning_rate": 0.0,
870
+ "logits/chosen": -2.183656692504883,
871
+ "logits/rejected": -2.190368175506592,
872
+ "logps/chosen": -107.56755065917969,
873
+ "logps/rejected": -153.33026123046875,
874
+ "loss": 0.0408,
875
+ "rewards/accuracies": 0.987500011920929,
876
+ "rewards/chosen": -4.3311662673950195,
877
+ "rewards/margins": 4.147943019866943,
878
+ "rewards/rejected": -8.479108810424805,
879
+ "step": 580
880
+ }
881
+ ],
882
+ "logging_steps": 10,
883
+ "max_steps": 580,
884
+ "num_input_tokens_seen": 0,
885
+ "num_train_epochs": 20,
886
+ "save_steps": 50,
887
+ "stateful_callbacks": {
888
+ "TrainerControl": {
889
+ "args": {
890
+ "should_epoch_stop": false,
891
+ "should_evaluate": false,
892
+ "should_log": false,
893
+ "should_save": true,
894
+ "should_training_stop": true
895
+ },
896
+ "attributes": {}
897
+ }
898
+ },
899
+ "total_flos": 2.1306294447112192e+18,
900
+ "train_batch_size": 1,
901
+ "trial_name": null,
902
+ "trial_params": null
903
+ }
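The log entries above use DPO-style metrics, under which `rewards/margins` is the batch mean of the chosen-minus-rejected reward gap. A quick sanity check against the final logged entry (step 580), assuming those logging conventions; the identity holds only up to float32 rounding because each metric is averaged over the batch separately:

```python
# Values copied from the step-580 entry of the trainer_state.json log above.
entry = {
    "rewards/chosen": -4.3311662673950195,
    "rewards/rejected": -8.479108810424805,
    "rewards/margins": 4.147943019866943,
}

# margins should equal chosen - rejected, up to per-metric rounding.
derived_margin = entry["rewards/chosen"] - entry["rewards/rejected"]
assert abs(derived_margin - entry["rewards/margins"]) < 1e-5
```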
Area_Time_SFT/README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ base_model: ishorn5/RTLCoder-v1.1
+ library_name: peft
+ license: other
+ tags:
+ - llama-factory
+ - lora
+ - generated_from_trainer
+ model-index:
+ - name: Area_Time_SFT
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Area_Time_SFT
+
+ This model is a fine-tuned version of [ishorn5/RTLCoder-v1.1](https://huggingface.co/ishorn5/RTLCoder-v1.1) on the area_time dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 20.0
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.45.2
+ - Pytorch 2.4.1+cu124
+ - Datasets 2.21.0
+ - Tokenizers 0.20.0
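The `total_train_batch_size` listed in the hyperparameters above is not an independent knob: it is the product of the per-device batch size, the gradient-accumulation steps, and the device count.

```python
# Effective batch size implied by the hyperparameters in this model card.
train_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 8

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 64  # matches the reported value
```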
Area_Time_SFT/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "gate_proj",
+ "k_proj",
+ "up_proj",
+ "down_proj",
+ "o_proj",
+ "v_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
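With `use_rslora` false, the `r`/`lora_alpha` pair in this adapter config implies the standard LoRA scaling of `lora_alpha / r` applied to the low-rank update (with rsLoRA it would instead be `lora_alpha / sqrt(r)`). A minimal sketch of that relationship, not PEFT's own code:

```python
import json

# Subset of the adapter_config.json fields above that determine LoRA scaling.
config = json.loads('{"r": 8, "lora_alpha": 16, "use_rslora": false}')

# Standard LoRA applies delta_W = (alpha / r) * B @ A to each target module.
scaling = config["lora_alpha"] / config["r"]
assert scaling == 2.0
```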
Area_Time_SFT/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "[PAD]": 32000
+ }
Area_Time_SFT/all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 19.45945945945946,
+ "total_flos": 1.9727302677684552e+18,
+ "train_loss": 0.2973077946239048,
+ "train_runtime": 4131.3591,
+ "train_samples_per_second": 8.573,
+ "train_steps_per_second": 0.131
+ }
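The aggregate rates above are mutually consistent up to rounding of the logged values: samples/sec divided by steps/sec recovers the effective batch size of 64, and runtime times steps/sec recovers roughly the run's ~540 optimizer steps. A rough cross-check, not an exact identity:

```python
# Values copied from all_results.json above; tolerances are loose because
# the two rates are each rounded independently in the log.
train_runtime = 4131.3591
train_samples_per_second = 8.573
train_steps_per_second = 0.131

effective_batch = train_samples_per_second / train_steps_per_second
total_steps = train_runtime * train_steps_per_second

assert abs(effective_batch - 64) < 2   # ~65.4 vs. the configured 64
assert abs(total_steps - 540) < 5      # ~541 vs. the run's max_steps
```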
Area_Time_SFT/checkpoint-100/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: ishorn5/RTLCoder-v1.1
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
Area_Time_SFT/checkpoint-100/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "gate_proj",
+ "k_proj",
+ "up_proj",
+ "down_proj",
+ "o_proj",
+ "v_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
Area_Time_SFT/checkpoint-100/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "[PAD]": 32000
+ }
Area_Time_SFT/checkpoint-100/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
+ {
+ "additional_special_tokens": [
+ "<unk>",
+ "<s>",
+ "</s>",
+ "[PAD]"
+ ],
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
Area_Time_SFT/checkpoint-100/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
Area_Time_SFT/checkpoint-100/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "add_prefix_space": true,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32000": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<unk>",
+ "<s>",
+ "</s>",
+ "[PAD]"
+ ],
+ "bos_token": "<s>",
+ "chat_template": "{{ '<s>' }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'User: ' + content + '\n\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "legacy": true,
+ "model_max_length": 2048,
+ "pad_token": "[PAD]",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "split_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": true
+ }
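The `chat_template` field above is a Jinja template. A pure-Python mirror of its logic (a sketch for illustration, not the tokenizer's own `apply_chat_template`) shows the prompt format it produces:

```python
def render(messages):
    # Mirror of the Jinja chat_template: a leading <s>, an optional system
    # prompt followed by a blank line, then "User: ...\n\nAssistant:" turns,
    # with assistant replies terminated by </s>.
    out = "<s>"
    if messages and messages[0]["role"] == "system":
        out += messages[0]["content"] + "\n\n"
        messages = messages[1:]
    for m in messages:
        if m["role"] == "user":
            out += "User: " + m["content"] + "\n\nAssistant:"
        elif m["role"] == "assistant":
            out += m["content"] + "</s>"
    return out

# Hypothetical single-turn prompt, chosen for illustration.
prompt = render([{"role": "user", "content": "Write a 2:1 mux in Verilog."}])
```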
Area_Time_SFT/checkpoint-100/trainer_state.json ADDED
@@ -0,0 +1,183 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 3.6036036036036037,
+ "eval_steps": 500,
+ "global_step": 100,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.36036036036036034,
+ "grad_norm": 4.098762512207031,
+ "learning_rate": 9.259259259259259e-07,
+ "logits/chosen": -2.332144260406494,
+ "logits/rejected": -2.3385167121887207,
+ "logps/chosen": -80.89369201660156,
+ "logps/rejected": -70.11573791503906,
+ "loss": 0.6929,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 0.00037987352698110044,
+ "rewards/margins": 0.004227532539516687,
+ "rewards/rejected": -0.003847658634185791,
+ "step": 10
+ },
+ {
+ "epoch": 0.7207207207207207,
+ "grad_norm": 3.641324281692505,
+ "learning_rate": 1.8518518518518519e-06,
+ "logits/chosen": -2.323789119720459,
+ "logits/rejected": -2.351041793823242,
+ "logps/chosen": -73.2725601196289,
+ "logps/rejected": -81.80250549316406,
+ "loss": 0.6932,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": 0.0012983012711629272,
+ "rewards/margins": 0.004207946360111237,
+ "rewards/rejected": -0.0029096449725329876,
+ "step": 20
+ },
+ {
+ "epoch": 1.0810810810810811,
+ "grad_norm": 3.9276015758514404,
+ "learning_rate": 2.7777777777777783e-06,
+ "logits/chosen": -2.3353028297424316,
+ "logits/rejected": -2.3445916175842285,
+ "logps/chosen": -69.34381103515625,
+ "logps/rejected": -74.37530517578125,
+ "loss": 0.6941,
+ "rewards/accuracies": 0.4375,
+ "rewards/chosen": -0.01035231165587902,
+ "rewards/margins": -0.006389107555150986,
+ "rewards/rejected": -0.00396320316940546,
+ "step": 30
+ },
+ {
+ "epoch": 1.4414414414414414,
+ "grad_norm": 4.809000492095947,
+ "learning_rate": 3.7037037037037037e-06,
+ "logits/chosen": -2.343184232711792,
+ "logits/rejected": -2.360262393951416,
+ "logps/chosen": -77.91002655029297,
+ "logps/rejected": -76.27156066894531,
+ "loss": 0.6902,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": -0.008437180891633034,
+ "rewards/margins": 0.008391124196350574,
+ "rewards/rejected": -0.016828304156661034,
+ "step": 40
+ },
+ {
+ "epoch": 1.8018018018018018,
+ "grad_norm": 4.504117012023926,
+ "learning_rate": 4.62962962962963e-06,
+ "logits/chosen": -2.3394973278045654,
+ "logits/rejected": -2.3635268211364746,
+ "logps/chosen": -83.62376403808594,
+ "logps/rejected": -267.64569091796875,
+ "loss": 0.6851,
+ "rewards/accuracies": 0.48750001192092896,
+ "rewards/chosen": 0.01695835217833519,
+ "rewards/margins": 0.14291717112064362,
+ "rewards/rejected": -0.12595881521701813,
+ "step": 50
+ },
+ {
+ "epoch": 2.1621621621621623,
+ "grad_norm": 4.033559322357178,
+ "learning_rate": 4.998119881260576e-06,
+ "logits/chosen": -2.32966685295105,
+ "logits/rejected": -2.3370490074157715,
+ "logps/chosen": -78.54629516601562,
+ "logps/rejected": -82.67992401123047,
+ "loss": 0.6767,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.03485359251499176,
+ "rewards/margins": 0.035757362842559814,
+ "rewards/rejected": -0.07061095535755157,
+ "step": 60
+ },
+ {
+ "epoch": 2.5225225225225225,
+ "grad_norm": 4.979142189025879,
+ "learning_rate": 4.9866405060165044e-06,
+ "logits/chosen": -2.364291191101074,
+ "logits/rejected": -2.376107931137085,
+ "logps/chosen": -70.61842346191406,
+ "logps/rejected": -81.80282592773438,
+ "loss": 0.6636,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.06025966256856918,
+ "rewards/margins": 0.03742799907922745,
+ "rewards/rejected": -0.09768766909837723,
+ "step": 70
+ },
+ {
+ "epoch": 2.8828828828828827,
+ "grad_norm": 4.0428690910339355,
+ "learning_rate": 4.964774158361991e-06,
+ "logits/chosen": -2.3341965675354004,
+ "logits/rejected": -2.3440909385681152,
+ "logps/chosen": -86.3591537475586,
+ "logps/rejected": -77.45347595214844,
+ "loss": 0.6519,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -0.09531830251216888,
+ "rewards/margins": 0.09854079782962799,
+ "rewards/rejected": -0.19385910034179688,
+ "step": 80
+ },
+ {
+ "epoch": 3.2432432432432434,
+ "grad_norm": 4.31919527053833,
+ "learning_rate": 4.93261217644956e-06,
+ "logits/chosen": -2.351658821105957,
+ "logits/rejected": -2.3437318801879883,
+ "logps/chosen": -77.31346130371094,
+ "logps/rejected": -80.43277740478516,
+ "loss": 0.6243,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -0.1381106823682785,
+ "rewards/margins": 0.16527524590492249,
+ "rewards/rejected": -0.3033859133720398,
+ "step": 90
+ },
+ {
+ "epoch": 3.6036036036036037,
+ "grad_norm": 5.029748439788818,
+ "learning_rate": 4.8902889044347e-06,
+ "logits/chosen": -2.3354241847991943,
+ "logits/rejected": -2.358518600463867,
+ "logps/chosen": -75.03588104248047,
+ "logps/rejected": -86.44483947753906,
+ "loss": 0.6025,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.22239580750465393,
+ "rewards/margins": 0.1877792775630951,
+ "rewards/rejected": -0.41017502546310425,
+ "step": 100
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 540,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 20,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 3.6533669780363674e+17,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
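The learning rates logged in this trainer state match a cosine schedule with linear warmup over 10% of `max_steps` (54 of 540 steps), as configured for the run. A sketch reproducing two logged values, assuming the standard Transformers cosine-with-warmup formula:

```python
import math

# warmup_ratio 0.1 of max_steps 540 -> 54 linear-warmup steps.
BASE_LR, MAX_STEPS, WARMUP = 5e-6, 540, 54

def lr_at(step):
    """LR at a given optimizer step: linear warmup, then half-cosine decay."""
    if step < WARMUP:
        return BASE_LR * step / WARMUP
    progress = (step - WARMUP) / (MAX_STEPS - WARMUP)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

# Values logged at steps 10 and 60 in the trainer_state.json above.
assert abs(lr_at(10) - 9.259259259259259e-07) < 1e-9
assert abs(lr_at(60) - 4.998119881260576e-06) < 1e-9
```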
Area_Time_SFT/checkpoint-200/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: ishorn5/RTLCoder-v1.1
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
Area_Time_SFT/checkpoint-200/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "gate_proj",
+     "k_proj",
+     "up_proj",
+     "down_proj",
+     "o_proj",
+     "v_proj",
+     "q_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
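For reference, a minimal stdlib-only sketch (not part of this repository) of what the `lora_alpha` and `r` values in this adapter_config.json imply for the standard (non-rsLoRA) LoRA update, where the adapted weight is `W + (alpha / r) * B @ A`:

```python
import json

# Values copied from checkpoint-200/adapter_config.json above.
config = json.loads("""
{
  "lora_alpha": 16,
  "r": 8,
  "target_modules": ["gate_proj", "k_proj", "up_proj",
                     "down_proj", "o_proj", "v_proj", "q_proj"],
  "use_rslora": false
}
""")

# Standard LoRA scaling factor applied to the low-rank update B @ A.
scaling = config["lora_alpha"] / config["r"]
print(scaling)                        # 2.0
print(len(config["target_modules"]))  # 7 projection matrices adapted per layer
```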
Area_Time_SFT/checkpoint-200/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "[PAD]": 32000
+ }
Area_Time_SFT/checkpoint-200/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>",
+     "[PAD]"
+   ],
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
Area_Time_SFT/checkpoint-200/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
Area_Time_SFT/checkpoint-200/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "32000": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>",
+     "[PAD]"
+   ],
+   "bos_token": "<s>",
+   "chat_template": "{{ '<s>' }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'User: ' + content + '\n\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 2048,
+   "pad_token": "[PAD]",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "split_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": true
+ }
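The `chat_template` above is a Jinja string; the following Python re-implementation of its logic is an illustrative sketch (not the tokenizer's actual rendering path) showing the prompt format it produces:

```python
def render_chat(messages):
    """Mimic the Jinja chat_template from tokenizer_config.json:
    a leading '<s>', an optional system message followed by a blank line,
    'User: ...\n\nAssistant:' for user turns, and assistant replies
    terminated by '</s>'."""
    out = "<s>"
    if messages and messages[0]["role"] == "system":
        out += messages[0]["content"] + "\n\n"
        messages = messages[1:]
    for m in messages:
        if m["role"] == "user":
            out += "User: " + m["content"] + "\n\nAssistant:"
        elif m["role"] == "assistant":
            out += m["content"] + "</s>"
    return out

prompt = render_chat([{"role": "user", "content": "Write a Verilog adder."}])
print(prompt)
```

Note that `add_bos_token` is also true, so in actual tokenization the `<s>` in the template and the tokenizer's own BOS handling interact; this sketch only shows the string the template itself emits.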
Area_Time_SFT/checkpoint-200/trainer_state.json ADDED
@@ -0,0 +1,333 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 7.207207207207207,
+   "eval_steps": 500,
+   "global_step": 200,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.36036036036036034,
+       "grad_norm": 4.098762512207031,
+       "learning_rate": 9.259259259259259e-07,
+       "logits/chosen": -2.332144260406494,
+       "logits/rejected": -2.3385167121887207,
+       "logps/chosen": -80.89369201660156,
+       "logps/rejected": -70.11573791503906,
+       "loss": 0.6929,
+       "rewards/accuracies": 0.4625000059604645,
+       "rewards/chosen": 0.00037987352698110044,
+       "rewards/margins": 0.004227532539516687,
+       "rewards/rejected": -0.003847658634185791,
+       "step": 10
+     },
+     {
+       "epoch": 0.7207207207207207,
+       "grad_norm": 3.641324281692505,
+       "learning_rate": 1.8518518518518519e-06,
+       "logits/chosen": -2.323789119720459,
+       "logits/rejected": -2.351041793823242,
+       "logps/chosen": -73.2725601196289,
+       "logps/rejected": -81.80250549316406,
+       "loss": 0.6932,
+       "rewards/accuracies": 0.5874999761581421,
+       "rewards/chosen": 0.0012983012711629272,
+       "rewards/margins": 0.004207946360111237,
+       "rewards/rejected": -0.0029096449725329876,
+       "step": 20
+     },
+     {
+       "epoch": 1.0810810810810811,
+       "grad_norm": 3.9276015758514404,
+       "learning_rate": 2.7777777777777783e-06,
+       "logits/chosen": -2.3353028297424316,
+       "logits/rejected": -2.3445916175842285,
+       "logps/chosen": -69.34381103515625,
+       "logps/rejected": -74.37530517578125,
+       "loss": 0.6941,
+       "rewards/accuracies": 0.4375,
+       "rewards/chosen": -0.01035231165587902,
+       "rewards/margins": -0.006389107555150986,
+       "rewards/rejected": -0.00396320316940546,
+       "step": 30
+     },
+     {
+       "epoch": 1.4414414414414414,
+       "grad_norm": 4.809000492095947,
+       "learning_rate": 3.7037037037037037e-06,
+       "logits/chosen": -2.343184232711792,
+       "logits/rejected": -2.360262393951416,
+       "logps/chosen": -77.91002655029297,
+       "logps/rejected": -76.27156066894531,
+       "loss": 0.6902,
+       "rewards/accuracies": 0.5625,
+       "rewards/chosen": -0.008437180891633034,
+       "rewards/margins": 0.008391124196350574,
+       "rewards/rejected": -0.016828304156661034,
+       "step": 40
+     },
+     {
+       "epoch": 1.8018018018018018,
+       "grad_norm": 4.504117012023926,
+       "learning_rate": 4.62962962962963e-06,
+       "logits/chosen": -2.3394973278045654,
+       "logits/rejected": -2.3635268211364746,
+       "logps/chosen": -83.62376403808594,
+       "logps/rejected": -267.64569091796875,
+       "loss": 0.6851,
+       "rewards/accuracies": 0.48750001192092896,
+       "rewards/chosen": 0.01695835217833519,
+       "rewards/margins": 0.14291717112064362,
+       "rewards/rejected": -0.12595881521701813,
+       "step": 50
+     },
+     {
+       "epoch": 2.1621621621621623,
+       "grad_norm": 4.033559322357178,
+       "learning_rate": 4.998119881260576e-06,
+       "logits/chosen": -2.32966685295105,
+       "logits/rejected": -2.3370490074157715,
+       "logps/chosen": -78.54629516601562,
+       "logps/rejected": -82.67992401123047,
+       "loss": 0.6767,
+       "rewards/accuracies": 0.637499988079071,
+       "rewards/chosen": -0.03485359251499176,
+       "rewards/margins": 0.035757362842559814,
+       "rewards/rejected": -0.07061095535755157,
+       "step": 60
+     },
+     {
+       "epoch": 2.5225225225225225,
+       "grad_norm": 4.979142189025879,
+       "learning_rate": 4.9866405060165044e-06,
+       "logits/chosen": -2.364291191101074,
+       "logits/rejected": -2.376107931137085,
+       "logps/chosen": -70.61842346191406,
+       "logps/rejected": -81.80282592773438,
+       "loss": 0.6636,
+       "rewards/accuracies": 0.675000011920929,
+       "rewards/chosen": -0.06025966256856918,
+       "rewards/margins": 0.03742799907922745,
+       "rewards/rejected": -0.09768766909837723,
+       "step": 70
+     },
+     {
+       "epoch": 2.8828828828828827,
+       "grad_norm": 4.0428690910339355,
+       "learning_rate": 4.964774158361991e-06,
+       "logits/chosen": -2.3341965675354004,
+       "logits/rejected": -2.3440909385681152,
+       "logps/chosen": -86.3591537475586,
+       "logps/rejected": -77.45347595214844,
+       "loss": 0.6519,
+       "rewards/accuracies": 0.7124999761581421,
+       "rewards/chosen": -0.09531830251216888,
+       "rewards/margins": 0.09854079782962799,
+       "rewards/rejected": -0.19385910034179688,
+       "step": 80
+     },
+     {
+       "epoch": 3.2432432432432434,
+       "grad_norm": 4.31919527053833,
+       "learning_rate": 4.93261217644956e-06,
+       "logits/chosen": -2.351658821105957,
+       "logits/rejected": -2.3437318801879883,
+       "logps/chosen": -77.31346130371094,
+       "logps/rejected": -80.43277740478516,
+       "loss": 0.6243,
+       "rewards/accuracies": 0.800000011920929,
+       "rewards/chosen": -0.1381106823682785,
+       "rewards/margins": 0.16527524590492249,
+       "rewards/rejected": -0.3033859133720398,
+       "step": 90
+     },
+     {
+       "epoch": 3.6036036036036037,
+       "grad_norm": 5.029748439788818,
+       "learning_rate": 4.8902889044347e-06,
+       "logits/chosen": -2.3354241847991943,
+       "logits/rejected": -2.358518600463867,
+       "logps/chosen": -75.03588104248047,
+       "logps/rejected": -86.44483947753906,
+       "loss": 0.6025,
+       "rewards/accuracies": 0.699999988079071,
+       "rewards/chosen": -0.22239580750465393,
+       "rewards/margins": 0.1877792775630951,
+       "rewards/rejected": -0.41017502546310425,
+       "step": 100
+     },
+     {
+       "epoch": 3.963963963963964,
+       "grad_norm": 4.6208648681640625,
+       "learning_rate": 4.837981131305475e-06,
+       "logits/chosen": -2.3195366859436035,
+       "logits/rejected": -2.3129196166992188,
+       "logps/chosen": -72.09532928466797,
+       "logps/rejected": -73.18878936767578,
+       "loss": 0.5955,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.22240504622459412,
+       "rewards/margins": 0.22803232073783875,
+       "rewards/rejected": -0.45043739676475525,
+       "step": 110
+     },
+     {
+       "epoch": 4.324324324324325,
+       "grad_norm": 4.163040637969971,
+       "learning_rate": 4.775907352415367e-06,
+       "logits/chosen": -2.3427720069885254,
+       "logits/rejected": -2.3731276988983154,
+       "logps/chosen": -85.9415283203125,
+       "logps/rejected": -93.53765869140625,
+       "loss": 0.5506,
+       "rewards/accuracies": 0.8500000238418579,
+       "rewards/chosen": -0.24449148774147034,
+       "rewards/margins": 0.3563767373561859,
+       "rewards/rejected": -0.6008682250976562,
+       "step": 120
+     },
+     {
+       "epoch": 4.684684684684685,
+       "grad_norm": 4.228254795074463,
+       "learning_rate": 4.70432685680402e-06,
+       "logits/chosen": -2.336733341217041,
+       "logits/rejected": -2.3446521759033203,
+       "logps/chosen": -81.07231140136719,
+       "logps/rejected": -90.82849884033203,
+       "loss": 0.5248,
+       "rewards/accuracies": 0.8125,
+       "rewards/chosen": -0.005195322446525097,
+       "rewards/margins": 0.6936509609222412,
+       "rewards/rejected": -0.6988462209701538,
+       "step": 130
+     },
+     {
+       "epoch": 5.045045045045045,
+       "grad_norm": 4.454960346221924,
+       "learning_rate": 4.623538644118244e-06,
+       "logits/chosen": -2.3331754207611084,
+       "logits/rejected": -2.3434836864471436,
+       "logps/chosen": -83.67604064941406,
+       "logps/rejected": -82.92774200439453,
+       "loss": 0.5288,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.2716079652309418,
+       "rewards/margins": 0.4593236446380615,
+       "rewards/rejected": -0.7309316396713257,
+       "step": 140
+     },
+     {
+       "epoch": 5.405405405405405,
+       "grad_norm": 5.223482608795166,
+       "learning_rate": 4.533880175657419e-06,
+       "logits/chosen": -2.362809658050537,
+       "logits/rejected": -2.3657679557800293,
+       "logps/chosen": -73.20018768310547,
+       "logps/rejected": -85.37998962402344,
+       "loss": 0.4682,
+       "rewards/accuracies": 0.8500000238418579,
+       "rewards/chosen": -0.3221462368965149,
+       "rewards/margins": 0.5968301892280579,
+       "rewards/rejected": -0.9189764261245728,
+       "step": 150
+     },
+     {
+       "epoch": 5.7657657657657655,
+       "grad_norm": 4.905521869659424,
+       "learning_rate": 4.435725964760331e-06,
+       "logits/chosen": -2.3808655738830566,
+       "logits/rejected": -2.368286609649658,
+       "logps/chosen": -68.88943481445312,
+       "logps/rejected": -82.69029235839844,
+       "loss": 0.4586,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": -0.3172217011451721,
+       "rewards/margins": 0.7665462493896484,
+       "rewards/rejected": -1.0837678909301758,
+       "step": 160
+     },
+     {
+       "epoch": 6.126126126126126,
+       "grad_norm": 5.399628162384033,
+       "learning_rate": 4.329486012421531e-06,
+       "logits/chosen": -2.365935802459717,
+       "logits/rejected": -2.363004684448242,
+       "logps/chosen": -70.47642517089844,
+       "logps/rejected": -84.02542877197266,
+       "loss": 0.4462,
+       "rewards/accuracies": 0.8374999761581421,
+       "rewards/chosen": -0.45835933089256287,
+       "rewards/margins": 0.8438631892204285,
+       "rewards/rejected": -1.302222490310669,
+       "step": 170
+     },
+     {
+       "epoch": 6.486486486486487,
+       "grad_norm": 4.843445777893066,
+       "learning_rate": 4.215604094671835e-06,
+       "logits/chosen": -2.357231855392456,
+       "logits/rejected": -2.360239028930664,
+       "logps/chosen": -78.67561340332031,
+       "logps/rejected": -88.39659118652344,
+       "loss": 0.3976,
+       "rewards/accuracies": 0.9125000238418579,
+       "rewards/chosen": -0.4842923581600189,
+       "rewards/margins": 0.8027322888374329,
+       "rewards/rejected": -1.2870244979858398,
+       "step": 180
+     },
+     {
+       "epoch": 6.846846846846847,
+       "grad_norm": 4.972764015197754,
+       "learning_rate": 4.094555908876765e-06,
+       "logits/chosen": -2.3751468658447266,
+       "logits/rejected": -2.3993237018585205,
+       "logps/chosen": -73.63652038574219,
+       "logps/rejected": -278.0970458984375,
+       "loss": 0.3959,
+       "rewards/accuracies": 0.8500000238418579,
+       "rewards/chosen": -0.4291106164455414,
+       "rewards/margins": 0.9967883229255676,
+       "rewards/rejected": -1.4258991479873657,
+       "step": 190
+     },
+     {
+       "epoch": 7.207207207207207,
+       "grad_norm": 5.071193218231201,
+       "learning_rate": 3.966847086696045e-06,
+       "logits/chosen": -2.3572330474853516,
+       "logits/rejected": -2.357269763946533,
+       "logps/chosen": -84.92713928222656,
+       "logps/rejected": -98.15062713623047,
+       "loss": 0.3544,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": -0.5852295756340027,
+       "rewards/margins": 1.2983506917953491,
+       "rewards/rejected": -1.883580207824707,
+       "step": 200
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 540,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 20,
+   "save_steps": 100,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 7.302906796614615e+17,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
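A small stdlib-only sketch (illustrative, not project code) of how the `log_history` in a trainer_state.json like the one above can be summarized, here checking that the preference-training reward margin widens while the loss falls:

```python
import json

# Two abbreviated log_history entries, with values copied from the
# trainer_state.json above (steps 10 and 200).
state = json.loads("""
{
  "log_history": [
    {"step": 10,  "loss": 0.6929, "rewards/margins": 0.004227532539516687},
    {"step": 200, "loss": 0.3544, "rewards/margins": 1.2983506917953491}
  ]
}
""")

history = state["log_history"]
first, last = history[0], history[-1]

# The margin between chosen and rejected completions should grow as
# DPO-style training progresses, while the loss decreases.
margin_gain = last["rewards/margins"] - first["rewards/margins"]
loss_drop = first["loss"] - last["loss"]
print(f"steps {first['step']}->{last['step']}: "
      f"margin +{margin_gain:.3f}, loss -{loss_drop:.4f}")
```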
Area_Time_SFT/checkpoint-300/README.md ADDED
@@ -0,0 +1,202 @@
Area_Time_SFT/checkpoint-300/adapter_config.json ADDED
@@ -0,0 +1,34 @@
Area_Time_SFT/checkpoint-300/added_tokens.json ADDED
@@ -0,0 +1,3 @@
Area_Time_SFT/checkpoint-300/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
Area_Time_SFT/checkpoint-300/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
Area_Time_SFT/checkpoint-300/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
Area_Time_SFT/checkpoint-300/trainer_state.json ADDED
@@ -0,0 +1,483 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 10.81081081081081,
+   "eval_steps": 500,
+   "global_step": 300,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
163
+ "grad_norm": 4.6208648681640625,
164
+ "learning_rate": 4.837981131305475e-06,
165
+ "logits/chosen": -2.3195366859436035,
166
+ "logits/rejected": -2.3129196166992188,
167
+ "logps/chosen": -72.09532928466797,
168
+ "logps/rejected": -73.18878936767578,
169
+ "loss": 0.5955,
170
+ "rewards/accuracies": 0.875,
171
+ "rewards/chosen": -0.22240504622459412,
172
+ "rewards/margins": 0.22803232073783875,
173
+ "rewards/rejected": -0.45043739676475525,
174
+ "step": 110
175
+ },
176
+ {
177
+ "epoch": 4.324324324324325,
178
+ "grad_norm": 4.163040637969971,
179
+ "learning_rate": 4.775907352415367e-06,
180
+ "logits/chosen": -2.3427720069885254,
181
+ "logits/rejected": -2.3731276988983154,
182
+ "logps/chosen": -85.9415283203125,
183
+ "logps/rejected": -93.53765869140625,
184
+ "loss": 0.5506,
185
+ "rewards/accuracies": 0.8500000238418579,
186
+ "rewards/chosen": -0.24449148774147034,
187
+ "rewards/margins": 0.3563767373561859,
188
+ "rewards/rejected": -0.6008682250976562,
189
+ "step": 120
190
+ },
191
+ {
192
+ "epoch": 4.684684684684685,
193
+ "grad_norm": 4.228254795074463,
194
+ "learning_rate": 4.70432685680402e-06,
195
+ "logits/chosen": -2.336733341217041,
196
+ "logits/rejected": -2.3446521759033203,
197
+ "logps/chosen": -81.07231140136719,
198
+ "logps/rejected": -90.82849884033203,
199
+ "loss": 0.5248,
200
+ "rewards/accuracies": 0.8125,
201
+ "rewards/chosen": -0.005195322446525097,
202
+ "rewards/margins": 0.6936509609222412,
203
+ "rewards/rejected": -0.6988462209701538,
204
+ "step": 130
205
+ },
206
+ {
207
+ "epoch": 5.045045045045045,
208
+ "grad_norm": 4.454960346221924,
209
+ "learning_rate": 4.623538644118244e-06,
210
+ "logits/chosen": -2.3331754207611084,
211
+ "logits/rejected": -2.3434836864471436,
212
+ "logps/chosen": -83.67604064941406,
213
+ "logps/rejected": -82.92774200439453,
214
+ "loss": 0.5288,
215
+ "rewards/accuracies": 0.875,
216
+ "rewards/chosen": -0.2716079652309418,
217
+ "rewards/margins": 0.4593236446380615,
218
+ "rewards/rejected": -0.7309316396713257,
219
+ "step": 140
220
+ },
221
+ {
222
+ "epoch": 5.405405405405405,
223
+ "grad_norm": 5.223482608795166,
224
+ "learning_rate": 4.533880175657419e-06,
225
+ "logits/chosen": -2.362809658050537,
226
+ "logits/rejected": -2.3657679557800293,
227
+ "logps/chosen": -73.20018768310547,
228
+ "logps/rejected": -85.37998962402344,
229
+ "loss": 0.4682,
230
+ "rewards/accuracies": 0.8500000238418579,
231
+ "rewards/chosen": -0.3221462368965149,
232
+ "rewards/margins": 0.5968301892280579,
233
+ "rewards/rejected": -0.9189764261245728,
234
+ "step": 150
235
+ },
236
+ {
237
+ "epoch": 5.7657657657657655,
238
+ "grad_norm": 4.905521869659424,
239
+ "learning_rate": 4.435725964760331e-06,
240
+ "logits/chosen": -2.3808655738830566,
241
+ "logits/rejected": -2.368286609649658,
242
+ "logps/chosen": -68.88943481445312,
243
+ "logps/rejected": -82.69029235839844,
244
+ "loss": 0.4586,
245
+ "rewards/accuracies": 0.875,
246
+ "rewards/chosen": -0.3172217011451721,
247
+ "rewards/margins": 0.7665462493896484,
248
+ "rewards/rejected": -1.0837678909301758,
249
+ "step": 160
250
+ },
251
+ {
252
+ "epoch": 6.126126126126126,
253
+ "grad_norm": 5.399628162384033,
254
+ "learning_rate": 4.329486012421531e-06,
255
+ "logits/chosen": -2.365935802459717,
256
+ "logits/rejected": -2.363004684448242,
257
+ "logps/chosen": -70.47642517089844,
258
+ "logps/rejected": -84.02542877197266,
259
+ "loss": 0.4462,
260
+ "rewards/accuracies": 0.8374999761581421,
261
+ "rewards/chosen": -0.45835933089256287,
262
+ "rewards/margins": 0.8438631892204285,
263
+ "rewards/rejected": -1.302222490310669,
264
+ "step": 170
265
+ },
266
+ {
267
+ "epoch": 6.486486486486487,
268
+ "grad_norm": 4.843445777893066,
269
+ "learning_rate": 4.215604094671835e-06,
270
+ "logits/chosen": -2.357231855392456,
271
+ "logits/rejected": -2.360239028930664,
272
+ "logps/chosen": -78.67561340332031,
273
+ "logps/rejected": -88.39659118652344,
274
+ "loss": 0.3976,
275
+ "rewards/accuracies": 0.9125000238418579,
276
+ "rewards/chosen": -0.4842923581600189,
277
+ "rewards/margins": 0.8027322888374329,
278
+ "rewards/rejected": -1.2870244979858398,
279
+ "step": 180
280
+ },
281
+ {
282
+ "epoch": 6.846846846846847,
283
+ "grad_norm": 4.972764015197754,
284
+ "learning_rate": 4.094555908876765e-06,
285
+ "logits/chosen": -2.3751468658447266,
286
+ "logits/rejected": -2.3993237018585205,
287
+ "logps/chosen": -73.63652038574219,
288
+ "logps/rejected": -278.0970458984375,
289
+ "loss": 0.3959,
290
+ "rewards/accuracies": 0.8500000238418579,
291
+ "rewards/chosen": -0.4291106164455414,
292
+ "rewards/margins": 0.9967883229255676,
293
+ "rewards/rejected": -1.4258991479873657,
294
+ "step": 190
295
+ },
296
+ {
297
+ "epoch": 7.207207207207207,
298
+ "grad_norm": 5.071193218231201,
299
+ "learning_rate": 3.966847086696045e-06,
300
+ "logits/chosen": -2.3572330474853516,
301
+ "logits/rejected": -2.357269763946533,
302
+ "logps/chosen": -84.92713928222656,
303
+ "logps/rejected": -98.15062713623047,
304
+ "loss": 0.3544,
305
+ "rewards/accuracies": 0.9375,
306
+ "rewards/chosen": -0.5852295756340027,
307
+ "rewards/margins": 1.2983506917953491,
308
+ "rewards/rejected": -1.883580207824707,
309
+ "step": 200
310
+ },
311
+ {
312
+ "epoch": 7.5675675675675675,
313
+ "grad_norm": 5.1891655921936035,
314
+ "learning_rate": 3.833011082004229e-06,
315
+ "logits/chosen": -2.368424892425537,
316
+ "logits/rejected": -2.378568649291992,
317
+ "logps/chosen": -72.57874298095703,
318
+ "logps/rejected": -84.37443542480469,
319
+ "loss": 0.3421,
320
+ "rewards/accuracies": 0.8999999761581421,
321
+ "rewards/chosen": -0.48721614480018616,
322
+ "rewards/margins": 1.2057541608810425,
323
+ "rewards/rejected": -1.6929700374603271,
324
+ "step": 210
325
+ },
326
+ {
327
+ "epoch": 7.927927927927928,
328
+ "grad_norm": 5.771843433380127,
329
+ "learning_rate": 3.693606942594873e-06,
330
+ "logits/chosen": -2.3891513347625732,
331
+ "logits/rejected": -2.4053854942321777,
332
+ "logps/chosen": -75.97737121582031,
333
+ "logps/rejected": -97.49588012695312,
334
+ "loss": 0.3211,
335
+ "rewards/accuracies": 0.8374999761581421,
336
+ "rewards/chosen": -0.6163657903671265,
337
+ "rewards/margins": 1.1816037893295288,
338
+ "rewards/rejected": -1.7979698181152344,
339
+ "step": 220
340
+ },
341
+ {
342
+ "epoch": 8.288288288288289,
343
+ "grad_norm": 5.1563029289245605,
344
+ "learning_rate": 3.549216974976073e-06,
345
+ "logits/chosen": -2.4075605869293213,
346
+ "logits/rejected": -2.406411647796631,
347
+ "logps/chosen": -82.80142974853516,
348
+ "logps/rejected": -96.36463928222656,
349
+ "loss": 0.2848,
350
+ "rewards/accuracies": 0.9750000238418579,
351
+ "rewards/chosen": -0.8106307983398438,
352
+ "rewards/margins": 1.647127389907837,
353
+ "rewards/rejected": -2.4577584266662598,
354
+ "step": 230
355
+ },
356
+ {
357
+ "epoch": 8.64864864864865,
358
+ "grad_norm": 5.483398914337158,
359
+ "learning_rate": 3.400444312011776e-06,
360
+ "logits/chosen": -2.3797879219055176,
361
+ "logits/rejected": -2.362518787384033,
362
+ "logps/chosen": -82.14349365234375,
363
+ "logps/rejected": -97.63994598388672,
364
+ "loss": 0.278,
365
+ "rewards/accuracies": 0.949999988079071,
366
+ "rewards/chosen": -0.9469457864761353,
367
+ "rewards/margins": 1.488023281097412,
368
+ "rewards/rejected": -2.434968948364258,
369
+ "step": 240
370
+ },
371
+ {
372
+ "epoch": 9.00900900900901,
373
+ "grad_norm": 5.042275905609131,
374
+ "learning_rate": 3.2479103935691047e-06,
375
+ "logits/chosen": -2.3207201957702637,
376
+ "logits/rejected": -2.341810941696167,
377
+ "logps/chosen": -85.28227233886719,
378
+ "logps/rejected": -116.27372741699219,
379
+ "loss": 0.2494,
380
+ "rewards/accuracies": 0.9375,
381
+ "rewards/chosen": -1.0121912956237793,
382
+ "rewards/margins": 1.997532606124878,
383
+ "rewards/rejected": -3.0097243785858154,
384
+ "step": 250
385
+ },
386
+ {
387
+ "epoch": 9.36936936936937,
388
+ "grad_norm": 5.468939781188965,
389
+ "learning_rate": 3.092252370695298e-06,
390
+ "logits/chosen": -2.3408374786376953,
391
+ "logits/rejected": -2.366006851196289,
392
+ "logps/chosen": -72.05101013183594,
393
+ "logps/rejected": -102.21392822265625,
394
+ "loss": 0.2457,
395
+ "rewards/accuracies": 0.949999988079071,
396
+ "rewards/chosen": -1.0319160223007202,
397
+ "rewards/margins": 1.8719971179962158,
398
+ "rewards/rejected": -2.9039134979248047,
399
+ "step": 260
400
+ },
401
+ {
402
+ "epoch": 9.72972972972973,
403
+ "grad_norm": 6.745687007904053,
404
+ "learning_rate": 2.9341204441673267e-06,
405
+ "logits/chosen": -2.327451467514038,
406
+ "logits/rejected": -2.3463993072509766,
407
+ "logps/chosen": -86.53431701660156,
408
+ "logps/rejected": -116.40992736816406,
409
+ "loss": 0.2059,
410
+ "rewards/accuracies": 0.9750000238418579,
411
+ "rewards/chosen": -1.3356889486312866,
412
+ "rewards/margins": 1.9991543292999268,
413
+ "rewards/rejected": -3.334843397140503,
414
+ "step": 270
415
+ },
416
+ {
417
+ "epoch": 10.09009009009009,
418
+ "grad_norm": 5.230775833129883,
419
+ "learning_rate": 2.7741751485313295e-06,
420
+ "logits/chosen": -2.3630144596099854,
421
+ "logits/rejected": -2.3630847930908203,
422
+ "logps/chosen": -76.57563018798828,
423
+ "logps/rejected": -99.20953369140625,
424
+ "loss": 0.2034,
425
+ "rewards/accuracies": 0.925000011920929,
426
+ "rewards/chosen": -1.2317285537719727,
427
+ "rewards/margins": 1.8740953207015991,
428
+ "rewards/rejected": -3.1058237552642822,
429
+ "step": 280
430
+ },
431
+ {
432
+ "epoch": 10.45045045045045,
433
+ "grad_norm": 6.581757545471191,
434
+ "learning_rate": 2.6130845929767662e-06,
435
+ "logits/chosen": -2.3247475624084473,
436
+ "logits/rejected": -2.3450474739074707,
437
+ "logps/chosen": -83.9271240234375,
438
+ "logps/rejected": -109.54156494140625,
439
+ "loss": 0.174,
440
+ "rewards/accuracies": 0.987500011920929,
441
+ "rewards/chosen": -1.4542334079742432,
442
+ "rewards/margins": 2.3127167224884033,
443
+ "rewards/rejected": -3.7669498920440674,
444
+ "step": 290
445
+ },
446
+ {
447
+ "epoch": 10.81081081081081,
448
+ "grad_norm": 5.604727745056152,
449
+ "learning_rate": 2.4515216705704396e-06,
450
+ "logits/chosen": -2.279327869415283,
451
+ "logits/rejected": -2.319913387298584,
452
+ "logps/chosen": -78.63652801513672,
453
+ "logps/rejected": -115.9185562133789,
454
+ "loss": 0.1831,
455
+ "rewards/accuracies": 0.9624999761581421,
456
+ "rewards/chosen": -1.3030173778533936,
457
+ "rewards/margins": 2.5270209312438965,
458
+ "rewards/rejected": -3.830038547515869,
459
+ "step": 300
460
+ }
461
+ ],
462
+ "logging_steps": 10,
463
+ "max_steps": 540,
464
+ "num_input_tokens_seen": 0,
465
+ "num_train_epochs": 20,
466
+ "save_steps": 100,
467
+ "stateful_callbacks": {
468
+ "TrainerControl": {
469
+ "args": {
470
+ "should_epoch_stop": false,
471
+ "should_evaluate": false,
472
+ "should_log": false,
473
+ "should_save": true,
474
+ "should_training_stop": false
475
+ },
476
+ "attributes": {}
477
+ }
478
+ },
479
+ "total_flos": 1.0963305022661591e+18,
480
+ "train_batch_size": 1,
481
+ "trial_name": null,
482
+ "trial_params": null
483
+ }
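The `trainer_state.json` files above log one `log_history` entry per logging step, mixing training metrics (DPO reward margins, accuracies) with run metadata. A minimal sketch of pulling the reward-margin curve out of such a state dict (the helper name and usage path are illustrative, not part of the checkpoint):

```python
import json


def reward_margin_curve(state):
    """Return (step, rewards/margins) pairs from a Trainer state dict.

    Entries without the metric (e.g. final summary records) are skipped.
    """
    return [
        (entry["step"], entry["rewards/margins"])
        for entry in state.get("log_history", [])
        if "rewards/margins" in entry
    ]


# Example usage (path is illustrative):
# with open("Area_Time_SFT/checkpoint-400/trainer_state.json") as f:
#     curve = reward_margin_curve(json.load(f))
```

The same pattern works for `loss` or `rewards/accuracies`; only the key changes.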
Area_Time_SFT/checkpoint-400/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: ishorn5/RTLCoder-v1.1
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.12.0
Area_Time_SFT/checkpoint-400/adapter_config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
5
+ "bias": "none",
6
+ "fan_in_fan_out": false,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layer_replication": null,
10
+ "layers_pattern": null,
11
+ "layers_to_transform": null,
12
+ "loftq_config": {},
13
+ "lora_alpha": 16,
14
+ "lora_dropout": 0.0,
15
+ "megatron_config": null,
16
+ "megatron_core": "megatron.core",
17
+ "modules_to_save": null,
18
+ "peft_type": "LORA",
19
+ "r": 8,
20
+ "rank_pattern": {},
21
+ "revision": null,
22
+ "target_modules": [
23
+ "gate_proj",
24
+ "k_proj",
25
+ "up_proj",
26
+ "down_proj",
27
+ "o_proj",
28
+ "v_proj",
29
+ "q_proj"
30
+ ],
31
+ "task_type": "CAUSAL_LM",
32
+ "use_dora": false,
33
+ "use_rslora": false
34
+ }
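The `adapter_config.json` above describes a rank-8 LoRA adapter over seven attention/MLP projections of `ishorn5/RTLCoder-v1.1`. A small sanity-check sketch for such a config dict (the helper is hypothetical; the `lora_alpha / r` scaling factor is the standard LoRA convention):

```python
def summarize_lora_config(cfg):
    """Summarize a PEFT LoRA adapter_config.json dict.

    Assumes the standard LoRA field names shown above.
    """
    return {
        "base_model": cfg["base_model_name_or_path"],
        "rank": cfg["r"],
        # effective LoRA scaling applied to the adapter update: alpha / r
        "scaling": cfg["lora_alpha"] / cfg["r"],
        "n_target_modules": len(cfg["target_modules"]),
    }
```

For this checkpoint the summary would report rank 8 with scaling 16 / 8 = 2.0 across 7 target modules.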
Area_Time_SFT/checkpoint-400/added_tokens.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "[PAD]": 32000
3
+ }
Area_Time_SFT/checkpoint-400/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<unk>",
4
+ "<s>",
5
+ "</s>",
6
+ "[PAD]"
7
+ ],
8
+ "bos_token": {
9
+ "content": "<s>",
10
+ "lstrip": false,
11
+ "normalized": false,
12
+ "rstrip": false,
13
+ "single_word": false
14
+ },
15
+ "eos_token": {
16
+ "content": "</s>",
17
+ "lstrip": false,
18
+ "normalized": false,
19
+ "rstrip": false,
20
+ "single_word": false
21
+ },
22
+ "pad_token": {
23
+ "content": "[PAD]",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false
28
+ },
29
+ "unk_token": {
30
+ "content": "<unk>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false
35
+ }
36
+ }
Area_Time_SFT/checkpoint-400/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
Area_Time_SFT/checkpoint-400/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "add_prefix_space": true,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<unk>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ },
22
+ "2": {
23
+ "content": "</s>",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false,
28
+ "special": true
29
+ },
30
+ "32000": {
31
+ "content": "[PAD]",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false,
36
+ "special": true
37
+ }
38
+ },
39
+ "additional_special_tokens": [
40
+ "<unk>",
41
+ "<s>",
42
+ "</s>",
43
+ "[PAD]"
44
+ ],
45
+ "bos_token": "<s>",
46
+ "chat_template": "{{ '<s>' }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'User: ' + content + '\n\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}",
47
+ "clean_up_tokenization_spaces": false,
48
+ "eos_token": "</s>",
49
+ "legacy": true,
50
+ "model_max_length": 2048,
51
+ "pad_token": "[PAD]",
52
+ "padding_side": "right",
53
+ "sp_model_kwargs": {},
54
+ "spaces_between_special_tokens": false,
55
+ "split_special_tokens": false,
56
+ "tokenizer_class": "LlamaTokenizer",
57
+ "unk_token": "<unk>",
58
+ "use_default_system_prompt": true
59
+ }
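The `chat_template` field above is a Jinja template that prepends an optional system message and wraps turns as `User: ... Assistant:`. A hand-rolled Python sketch of the same rendering logic (for inspection only; in practice the tokenizer's own `apply_chat_template` should be used):

```python
def build_prompt(messages):
    """Mirror the Jinja chat_template from tokenizer_config.json.

    messages: list of {"role": ..., "content": ...} dicts.
    """
    prompt = "<s>"
    if messages and messages[0]["role"] == "system":
        prompt += messages[0]["content"] + "\n\n"
        messages = messages[1:]
    for message in messages:
        if message["role"] == "user":
            prompt += "User: " + message["content"] + "\n\nAssistant:"
        elif message["role"] == "assistant":
            prompt += message["content"] + "</s>"
    return prompt
```

Note the template emits `<s>` as literal text while `add_bos_token` is also true, so tokenizing the rendered string directly would duplicate the BOS token; `apply_chat_template` handles this by tokenizing without re-adding special tokens.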
Area_Time_SFT/checkpoint-400/trainer_state.json ADDED
@@ -0,0 +1,633 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 14.414414414414415,
5
+ "eval_steps": 500,
6
+ "global_step": 400,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.36036036036036034,
13
+ "grad_norm": 4.098762512207031,
14
+ "learning_rate": 9.259259259259259e-07,
15
+ "logits/chosen": -2.332144260406494,
16
+ "logits/rejected": -2.3385167121887207,
17
+ "logps/chosen": -80.89369201660156,
18
+ "logps/rejected": -70.11573791503906,
19
+ "loss": 0.6929,
20
+ "rewards/accuracies": 0.4625000059604645,
21
+ "rewards/chosen": 0.00037987352698110044,
22
+ "rewards/margins": 0.004227532539516687,
23
+ "rewards/rejected": -0.003847658634185791,
24
+ "step": 10
25
+ },
26
+ {
27
+ "epoch": 0.7207207207207207,
28
+ "grad_norm": 3.641324281692505,
29
+ "learning_rate": 1.8518518518518519e-06,
30
+ "logits/chosen": -2.323789119720459,
31
+ "logits/rejected": -2.351041793823242,
32
+ "logps/chosen": -73.2725601196289,
33
+ "logps/rejected": -81.80250549316406,
34
+ "loss": 0.6932,
35
+ "rewards/accuracies": 0.5874999761581421,
36
+ "rewards/chosen": 0.0012983012711629272,
37
+ "rewards/margins": 0.004207946360111237,
38
+ "rewards/rejected": -0.0029096449725329876,
39
+ "step": 20
40
+ },
41
+ {
42
+ "epoch": 1.0810810810810811,
43
+ "grad_norm": 3.9276015758514404,
44
+ "learning_rate": 2.7777777777777783e-06,
45
+ "logits/chosen": -2.3353028297424316,
46
+ "logits/rejected": -2.3445916175842285,
47
+ "logps/chosen": -69.34381103515625,
48
+ "logps/rejected": -74.37530517578125,
49
+ "loss": 0.6941,
50
+ "rewards/accuracies": 0.4375,
51
+ "rewards/chosen": -0.01035231165587902,
52
+ "rewards/margins": -0.006389107555150986,
53
+ "rewards/rejected": -0.00396320316940546,
54
+ "step": 30
55
+ },
56
+ {
57
+ "epoch": 1.4414414414414414,
58
+ "grad_norm": 4.809000492095947,
59
+ "learning_rate": 3.7037037037037037e-06,
60
+ "logits/chosen": -2.343184232711792,
61
+ "logits/rejected": -2.360262393951416,
62
+ "logps/chosen": -77.91002655029297,
63
+ "logps/rejected": -76.27156066894531,
64
+ "loss": 0.6902,
65
+ "rewards/accuracies": 0.5625,
66
+ "rewards/chosen": -0.008437180891633034,
67
+ "rewards/margins": 0.008391124196350574,
68
+ "rewards/rejected": -0.016828304156661034,
69
+ "step": 40
70
+ },
71
+ {
72
+ "epoch": 1.8018018018018018,
73
+ "grad_norm": 4.504117012023926,
74
+ "learning_rate": 4.62962962962963e-06,
75
+ "logits/chosen": -2.3394973278045654,
76
+ "logits/rejected": -2.3635268211364746,
77
+ "logps/chosen": -83.62376403808594,
78
+ "logps/rejected": -267.64569091796875,
79
+ "loss": 0.6851,
80
+ "rewards/accuracies": 0.48750001192092896,
81
+ "rewards/chosen": 0.01695835217833519,
82
+ "rewards/margins": 0.14291717112064362,
83
+ "rewards/rejected": -0.12595881521701813,
84
+ "step": 50
85
+ },
86
+ {
87
+ "epoch": 2.1621621621621623,
88
+ "grad_norm": 4.033559322357178,
89
+ "learning_rate": 4.998119881260576e-06,
90
+ "logits/chosen": -2.32966685295105,
91
+ "logits/rejected": -2.3370490074157715,
92
+ "logps/chosen": -78.54629516601562,
93
+ "logps/rejected": -82.67992401123047,
94
+ "loss": 0.6767,
95
+ "rewards/accuracies": 0.637499988079071,
96
+ "rewards/chosen": -0.03485359251499176,
97
+ "rewards/margins": 0.035757362842559814,
98
+ "rewards/rejected": -0.07061095535755157,
99
+ "step": 60
100
+ },
101
+ {
102
+ "epoch": 2.5225225225225225,
103
+ "grad_norm": 4.979142189025879,
104
+ "learning_rate": 4.9866405060165044e-06,
105
+ "logits/chosen": -2.364291191101074,
106
+ "logits/rejected": -2.376107931137085,
107
+ "logps/chosen": -70.61842346191406,
108
+ "logps/rejected": -81.80282592773438,
109
+ "loss": 0.6636,
110
+ "rewards/accuracies": 0.675000011920929,
111
+ "rewards/chosen": -0.06025966256856918,
112
+ "rewards/margins": 0.03742799907922745,
113
+ "rewards/rejected": -0.09768766909837723,
114
+ "step": 70
115
+ },
116
+ {
117
+ "epoch": 2.8828828828828827,
118
+ "grad_norm": 4.0428690910339355,
119
+ "learning_rate": 4.964774158361991e-06,
120
+ "logits/chosen": -2.3341965675354004,
121
+ "logits/rejected": -2.3440909385681152,
122
+ "logps/chosen": -86.3591537475586,
123
+ "logps/rejected": -77.45347595214844,
124
+ "loss": 0.6519,
125
+ "rewards/accuracies": 0.7124999761581421,
126
+ "rewards/chosen": -0.09531830251216888,
127
+ "rewards/margins": 0.09854079782962799,
128
+ "rewards/rejected": -0.19385910034179688,
129
+ "step": 80
130
+ },
131
+ {
132
+ "epoch": 3.2432432432432434,
133
+ "grad_norm": 4.31919527053833,
134
+ "learning_rate": 4.93261217644956e-06,
135
+ "logits/chosen": -2.351658821105957,
136
+ "logits/rejected": -2.3437318801879883,
137
+ "logps/chosen": -77.31346130371094,
138
+ "logps/rejected": -80.43277740478516,
139
+ "loss": 0.6243,
140
+ "rewards/accuracies": 0.800000011920929,
141
+ "rewards/chosen": -0.1381106823682785,
142
+ "rewards/margins": 0.16527524590492249,
+ "rewards/rejected": -0.3033859133720398,
+ "step": 90
+ },
+ {
+ "epoch": 3.6036036036036037,
+ "grad_norm": 5.029748439788818,
+ "learning_rate": 4.8902889044347e-06,
+ "logits/chosen": -2.3354241847991943,
+ "logits/rejected": -2.358518600463867,
+ "logps/chosen": -75.03588104248047,
+ "logps/rejected": -86.44483947753906,
+ "loss": 0.6025,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.22239580750465393,
+ "rewards/margins": 0.1877792775630951,
+ "rewards/rejected": -0.41017502546310425,
+ "step": 100
+ },
+ {
+ "epoch": 3.963963963963964,
+ "grad_norm": 4.6208648681640625,
+ "learning_rate": 4.837981131305475e-06,
+ "logits/chosen": -2.3195366859436035,
+ "logits/rejected": -2.3129196166992188,
+ "logps/chosen": -72.09532928466797,
+ "logps/rejected": -73.18878936767578,
+ "loss": 0.5955,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -0.22240504622459412,
+ "rewards/margins": 0.22803232073783875,
+ "rewards/rejected": -0.45043739676475525,
+ "step": 110
+ },
+ {
+ "epoch": 4.324324324324325,
+ "grad_norm": 4.163040637969971,
+ "learning_rate": 4.775907352415367e-06,
+ "logits/chosen": -2.3427720069885254,
+ "logits/rejected": -2.3731276988983154,
+ "logps/chosen": -85.9415283203125,
+ "logps/rejected": -93.53765869140625,
+ "loss": 0.5506,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -0.24449148774147034,
+ "rewards/margins": 0.3563767373561859,
+ "rewards/rejected": -0.6008682250976562,
+ "step": 120
+ },
+ {
+ "epoch": 4.684684684684685,
+ "grad_norm": 4.228254795074463,
+ "learning_rate": 4.70432685680402e-06,
+ "logits/chosen": -2.336733341217041,
+ "logits/rejected": -2.3446521759033203,
+ "logps/chosen": -81.07231140136719,
+ "logps/rejected": -90.82849884033203,
+ "loss": 0.5248,
+ "rewards/accuracies": 0.8125,
+ "rewards/chosen": -0.005195322446525097,
+ "rewards/margins": 0.6936509609222412,
+ "rewards/rejected": -0.6988462209701538,
+ "step": 130
+ },
+ {
+ "epoch": 5.045045045045045,
+ "grad_norm": 4.454960346221924,
+ "learning_rate": 4.623538644118244e-06,
+ "logits/chosen": -2.3331754207611084,
+ "logits/rejected": -2.3434836864471436,
+ "logps/chosen": -83.67604064941406,
+ "logps/rejected": -82.92774200439453,
+ "loss": 0.5288,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -0.2716079652309418,
+ "rewards/margins": 0.4593236446380615,
+ "rewards/rejected": -0.7309316396713257,
+ "step": 140
+ },
+ {
+ "epoch": 5.405405405405405,
+ "grad_norm": 5.223482608795166,
+ "learning_rate": 4.533880175657419e-06,
+ "logits/chosen": -2.362809658050537,
+ "logits/rejected": -2.3657679557800293,
+ "logps/chosen": -73.20018768310547,
+ "logps/rejected": -85.37998962402344,
+ "loss": 0.4682,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -0.3221462368965149,
+ "rewards/margins": 0.5968301892280579,
+ "rewards/rejected": -0.9189764261245728,
+ "step": 150
+ },
+ {
+ "epoch": 5.7657657657657655,
+ "grad_norm": 4.905521869659424,
+ "learning_rate": 4.435725964760331e-06,
+ "logits/chosen": -2.3808655738830566,
+ "logits/rejected": -2.368286609649658,
+ "logps/chosen": -68.88943481445312,
+ "logps/rejected": -82.69029235839844,
+ "loss": 0.4586,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -0.3172217011451721,
+ "rewards/margins": 0.7665462493896484,
+ "rewards/rejected": -1.0837678909301758,
+ "step": 160
+ },
+ {
+ "epoch": 6.126126126126126,
+ "grad_norm": 5.399628162384033,
+ "learning_rate": 4.329486012421531e-06,
+ "logits/chosen": -2.365935802459717,
+ "logits/rejected": -2.363004684448242,
+ "logps/chosen": -70.47642517089844,
+ "logps/rejected": -84.02542877197266,
+ "loss": 0.4462,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": -0.45835933089256287,
+ "rewards/margins": 0.8438631892204285,
+ "rewards/rejected": -1.302222490310669,
+ "step": 170
+ },
+ {
+ "epoch": 6.486486486486487,
+ "grad_norm": 4.843445777893066,
+ "learning_rate": 4.215604094671835e-06,
+ "logits/chosen": -2.357231855392456,
+ "logits/rejected": -2.360239028930664,
+ "logps/chosen": -78.67561340332031,
+ "logps/rejected": -88.39659118652344,
+ "loss": 0.3976,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": -0.4842923581600189,
+ "rewards/margins": 0.8027322888374329,
+ "rewards/rejected": -1.2870244979858398,
+ "step": 180
+ },
+ {
+ "epoch": 6.846846846846847,
+ "grad_norm": 4.972764015197754,
+ "learning_rate": 4.094555908876765e-06,
+ "logits/chosen": -2.3751468658447266,
+ "logits/rejected": -2.3993237018585205,
+ "logps/chosen": -73.63652038574219,
+ "logps/rejected": -278.0970458984375,
+ "loss": 0.3959,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -0.4291106164455414,
+ "rewards/margins": 0.9967883229255676,
+ "rewards/rejected": -1.4258991479873657,
+ "step": 190
+ },
+ {
+ "epoch": 7.207207207207207,
+ "grad_norm": 5.071193218231201,
+ "learning_rate": 3.966847086696045e-06,
+ "logits/chosen": -2.3572330474853516,
+ "logits/rejected": -2.357269763946533,
+ "logps/chosen": -84.92713928222656,
+ "logps/rejected": -98.15062713623047,
+ "loss": 0.3544,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": -0.5852295756340027,
+ "rewards/margins": 1.2983506917953491,
+ "rewards/rejected": -1.883580207824707,
+ "step": 200
+ },
+ {
+ "epoch": 7.5675675675675675,
+ "grad_norm": 5.1891655921936035,
+ "learning_rate": 3.833011082004229e-06,
+ "logits/chosen": -2.368424892425537,
+ "logits/rejected": -2.378568649291992,
+ "logps/chosen": -72.57874298095703,
+ "logps/rejected": -84.37443542480469,
+ "loss": 0.3421,
+ "rewards/accuracies": 0.8999999761581421,
+ "rewards/chosen": -0.48721614480018616,
+ "rewards/margins": 1.2057541608810425,
+ "rewards/rejected": -1.6929700374603271,
+ "step": 210
+ },
+ {
+ "epoch": 7.927927927927928,
+ "grad_norm": 5.771843433380127,
+ "learning_rate": 3.693606942594873e-06,
+ "logits/chosen": -2.3891513347625732,
+ "logits/rejected": -2.4053854942321777,
+ "logps/chosen": -75.97737121582031,
+ "logps/rejected": -97.49588012695312,
+ "loss": 0.3211,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": -0.6163657903671265,
+ "rewards/margins": 1.1816037893295288,
+ "rewards/rejected": -1.7979698181152344,
+ "step": 220
+ },
+ {
+ "epoch": 8.288288288288289,
+ "grad_norm": 5.1563029289245605,
+ "learning_rate": 3.549216974976073e-06,
+ "logits/chosen": -2.4075605869293213,
+ "logits/rejected": -2.406411647796631,
+ "logps/chosen": -82.80142974853516,
+ "logps/rejected": -96.36463928222656,
+ "loss": 0.2848,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -0.8106307983398438,
+ "rewards/margins": 1.647127389907837,
+ "rewards/rejected": -2.4577584266662598,
+ "step": 230
+ },
+ {
+ "epoch": 8.64864864864865,
+ "grad_norm": 5.483398914337158,
+ "learning_rate": 3.400444312011776e-06,
+ "logits/chosen": -2.3797879219055176,
+ "logits/rejected": -2.362518787384033,
+ "logps/chosen": -82.14349365234375,
+ "logps/rejected": -97.63994598388672,
+ "loss": 0.278,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": -0.9469457864761353,
+ "rewards/margins": 1.488023281097412,
+ "rewards/rejected": -2.434968948364258,
+ "step": 240
+ },
+ {
+ "epoch": 9.00900900900901,
+ "grad_norm": 5.042275905609131,
+ "learning_rate": 3.2479103935691047e-06,
+ "logits/chosen": -2.3207201957702637,
+ "logits/rejected": -2.341810941696167,
+ "logps/chosen": -85.28227233886719,
+ "logps/rejected": -116.27372741699219,
+ "loss": 0.2494,
+ "rewards/accuracies": 0.9375,
+ "rewards/chosen": -1.0121912956237793,
+ "rewards/margins": 1.997532606124878,
+ "rewards/rejected": -3.0097243785858154,
+ "step": 250
+ },
+ {
+ "epoch": 9.36936936936937,
+ "grad_norm": 5.468939781188965,
+ "learning_rate": 3.092252370695298e-06,
+ "logits/chosen": -2.3408374786376953,
+ "logits/rejected": -2.366006851196289,
+ "logps/chosen": -72.05101013183594,
+ "logps/rejected": -102.21392822265625,
+ "loss": 0.2457,
+ "rewards/accuracies": 0.949999988079071,
+ "rewards/chosen": -1.0319160223007202,
+ "rewards/margins": 1.8719971179962158,
+ "rewards/rejected": -2.9039134979248047,
+ "step": 260
+ },
+ {
+ "epoch": 9.72972972972973,
+ "grad_norm": 6.745687007904053,
+ "learning_rate": 2.9341204441673267e-06,
+ "logits/chosen": -2.327451467514038,
+ "logits/rejected": -2.3463993072509766,
+ "logps/chosen": -86.53431701660156,
+ "logps/rejected": -116.40992736816406,
+ "loss": 0.2059,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -1.3356889486312866,
+ "rewards/margins": 1.9991543292999268,
+ "rewards/rejected": -3.334843397140503,
+ "step": 270
+ },
+ {
+ "epoch": 10.09009009009009,
+ "grad_norm": 5.230775833129883,
+ "learning_rate": 2.7741751485313295e-06,
+ "logits/chosen": -2.3630144596099854,
+ "logits/rejected": -2.3630847930908203,
+ "logps/chosen": -76.57563018798828,
+ "logps/rejected": -99.20953369140625,
+ "loss": 0.2034,
+ "rewards/accuracies": 0.925000011920929,
+ "rewards/chosen": -1.2317285537719727,
+ "rewards/margins": 1.8740953207015991,
+ "rewards/rejected": -3.1058237552642822,
+ "step": 280
+ },
+ {
+ "epoch": 10.45045045045045,
+ "grad_norm": 6.581757545471191,
+ "learning_rate": 2.6130845929767662e-06,
+ "logits/chosen": -2.3247475624084473,
+ "logits/rejected": -2.3450474739074707,
+ "logps/chosen": -83.9271240234375,
+ "logps/rejected": -109.54156494140625,
+ "loss": 0.174,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -1.4542334079742432,
+ "rewards/margins": 2.3127167224884033,
+ "rewards/rejected": -3.7669498920440674,
+ "step": 290
+ },
+ {
+ "epoch": 10.81081081081081,
+ "grad_norm": 5.604727745056152,
+ "learning_rate": 2.4515216705704396e-06,
+ "logits/chosen": -2.279327869415283,
+ "logits/rejected": -2.319913387298584,
+ "logps/chosen": -78.63652801513672,
+ "logps/rejected": -115.9185562133789,
+ "loss": 0.1831,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": -1.3030173778533936,
+ "rewards/margins": 2.5270209312438965,
+ "rewards/rejected": -3.830038547515869,
+ "step": 300
+ },
+ {
+ "epoch": 11.17117117117117,
+ "grad_norm": 4.8863606452941895,
+ "learning_rate": 2.290161247507733e-06,
+ "logits/chosen": -2.273766040802002,
+ "logits/rejected": -2.3243603706359863,
+ "logps/chosen": -90.69010925292969,
+ "logps/rejected": -131.49423217773438,
+ "loss": 0.1513,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -1.566362738609314,
+ "rewards/margins": 3.1073012351989746,
+ "rewards/rejected": -4.673664093017578,
+ "step": 310
+ },
+ {
+ "epoch": 11.531531531531531,
+ "grad_norm": 5.772294521331787,
+ "learning_rate": 2.129677344121879e-06,
+ "logits/chosen": -2.302643299102783,
+ "logits/rejected": -2.3125722408294678,
+ "logps/chosen": -78.91960144042969,
+ "logps/rejected": -103.53559875488281,
+ "loss": 0.1624,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -1.6784160137176514,
+ "rewards/margins": 2.4029908180236816,
+ "rewards/rejected": -4.081407070159912,
+ "step": 320
+ },
+ {
+ "epoch": 11.891891891891891,
+ "grad_norm": 5.915937423706055,
+ "learning_rate": 1.970740319426474e-06,
+ "logits/chosen": -2.275726795196533,
+ "logits/rejected": -2.302337169647217,
+ "logps/chosen": -99.52557373046875,
+ "logps/rejected": -122.73197174072266,
+ "loss": 0.1348,
+ "rewards/accuracies": 0.987500011920929,
+ "rewards/chosen": -1.9647445678710938,
+ "rewards/margins": 2.8115408420562744,
+ "rewards/rejected": -4.776285648345947,
+ "step": 330
+ },
+ {
+ "epoch": 12.252252252252251,
+ "grad_norm": 5.65620231628418,
+ "learning_rate": 1.8140140709517467e-06,
+ "logits/chosen": -2.274402379989624,
+ "logits/rejected": -2.2855653762817383,
+ "logps/chosen": -86.69510650634766,
+ "logps/rejected": -116.1146469116211,
+ "loss": 0.1366,
+ "rewards/accuracies": 0.9624999761581421,
+ "rewards/chosen": -1.855790376663208,
+ "rewards/margins": 2.536457061767578,
+ "rewards/rejected": -4.392247200012207,
+ "step": 340
+ },
+ {
+ "epoch": 12.612612612612612,
+ "grad_norm": 4.642848491668701,
+ "learning_rate": 1.6601532615711452e-06,
+ "logits/chosen": -2.2652974128723145,
+ "logits/rejected": -2.285008192062378,
+ "logps/chosen": -89.00364685058594,
+ "logps/rejected": -126.0003890991211,
+ "loss": 0.1216,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -1.8776098489761353,
+ "rewards/margins": 2.8089137077331543,
+ "rewards/rejected": -4.6865234375,
+ "step": 350
+ },
+ {
+ "epoch": 12.972972972972974,
+ "grad_norm": 4.52380895614624,
+ "learning_rate": 1.509800584902108e-06,
+ "logits/chosen": -2.263986349105835,
+ "logits/rejected": -2.2855420112609863,
+ "logps/chosen": -91.05010986328125,
+ "logps/rejected": -133.76361083984375,
+ "loss": 0.1076,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -2.4504590034484863,
+ "rewards/margins": 3.495802402496338,
+ "rewards/rejected": -5.946260929107666,
+ "step": 360
+ },
+ {
+ "epoch": 13.333333333333334,
+ "grad_norm": 4.304037094116211,
+ "learning_rate": 1.3635840807037487e-06,
+ "logits/chosen": -2.261019229888916,
+ "logits/rejected": -2.264559268951416,
+ "logps/chosen": -93.27009582519531,
+ "logps/rejected": -118.05653381347656,
+ "loss": 0.1072,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -2.1757514476776123,
+ "rewards/margins": 3.157127857208252,
+ "rewards/rejected": -5.332879066467285,
+ "step": 370
+ },
+ {
+ "epoch": 13.693693693693694,
+ "grad_norm": 5.501009941101074,
+ "learning_rate": 1.2221145114853172e-06,
+ "logits/chosen": -2.211054563522339,
+ "logits/rejected": -2.22572660446167,
+ "logps/chosen": -90.1929702758789,
+ "logps/rejected": -138.2399139404297,
+ "loss": 0.0889,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -2.5342886447906494,
+ "rewards/margins": 3.2334110736846924,
+ "rewards/rejected": -5.767699241638184,
+ "step": 380
+ },
+ {
+ "epoch": 14.054054054054054,
+ "grad_norm": 5.037735939025879,
+ "learning_rate": 1.085982811283654e-06,
+ "logits/chosen": -2.2411131858825684,
+ "logits/rejected": -2.261753559112549,
+ "logps/chosen": -98.27137756347656,
+ "logps/rejected": -134.04779052734375,
+ "loss": 0.0971,
+ "rewards/accuracies": 1.0,
+ "rewards/chosen": -2.593488931655884,
+ "rewards/margins": 3.3826117515563965,
+ "rewards/rejected": -5.976100444793701,
+ "step": 390
+ },
+ {
+ "epoch": 14.414414414414415,
+ "grad_norm": 5.258338451385498,
+ "learning_rate": 9.557576172663577e-07,
+ "logits/chosen": -2.244196653366089,
+ "logits/rejected": -2.2605862617492676,
+ "logps/chosen": -93.3480224609375,
+ "logps/rejected": -145.70272827148438,
+ "loss": 0.0913,
+ "rewards/accuracies": 0.9750000238418579,
+ "rewards/chosen": -2.5507616996765137,
+ "rewards/margins": 3.9265968799591064,
+ "rewards/rejected": -6.477358341217041,
+ "step": 400
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 540,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 20,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 1.4615028896534364e+18,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
Area_Time_SFT/checkpoint-500/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: ishorn5/RTLCoder-v1.1
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
Area_Time_SFT/checkpoint-500/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "gate_proj",
+ "k_proj",
+ "up_proj",
+ "down_proj",
+ "o_proj",
+ "v_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
Area_Time_SFT/checkpoint-500/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "[PAD]": 32000
+ }
Area_Time_SFT/checkpoint-500/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
+ {
+ "additional_special_tokens": [
+ "<unk>",
+ "<s>",
+ "</s>",
+ "[PAD]"
+ ],
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
Area_Time_SFT/checkpoint-500/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
Area_Time_SFT/checkpoint-500/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "add_prefix_space": true,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32000": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<unk>",
+ "<s>",
+ "</s>",
+ "[PAD]"
+ ],
+ "bos_token": "<s>",
+ "chat_template": "{{ '<s>' }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'User: ' + content + '\n\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "legacy": true,
+ "model_max_length": 2048,
+ "pad_token": "[PAD]",
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "split_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": true
+ }
Area_Time_SFT/checkpoint-500/trainer_state.json ADDED
@@ -0,0 +1,783 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 18.01801801801802,
+ "eval_steps": 500,
+ "global_step": 500,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.36036036036036034,
+ "grad_norm": 4.098762512207031,
+ "learning_rate": 9.259259259259259e-07,
+ "logits/chosen": -2.332144260406494,
+ "logits/rejected": -2.3385167121887207,
+ "logps/chosen": -80.89369201660156,
+ "logps/rejected": -70.11573791503906,
+ "loss": 0.6929,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 0.00037987352698110044,
+ "rewards/margins": 0.004227532539516687,
+ "rewards/rejected": -0.003847658634185791,
+ "step": 10
+ },
+ {
+ "epoch": 0.7207207207207207,
+ "grad_norm": 3.641324281692505,
+ "learning_rate": 1.8518518518518519e-06,
+ "logits/chosen": -2.323789119720459,
+ "logits/rejected": -2.351041793823242,
+ "logps/chosen": -73.2725601196289,
+ "logps/rejected": -81.80250549316406,
+ "loss": 0.6932,
+ "rewards/accuracies": 0.5874999761581421,
+ "rewards/chosen": 0.0012983012711629272,
+ "rewards/margins": 0.004207946360111237,
+ "rewards/rejected": -0.0029096449725329876,
+ "step": 20
+ },
+ {
+ "epoch": 1.0810810810810811,
+ "grad_norm": 3.9276015758514404,
+ "learning_rate": 2.7777777777777783e-06,
+ "logits/chosen": -2.3353028297424316,
+ "logits/rejected": -2.3445916175842285,
+ "logps/chosen": -69.34381103515625,
+ "logps/rejected": -74.37530517578125,
+ "loss": 0.6941,
+ "rewards/accuracies": 0.4375,
+ "rewards/chosen": -0.01035231165587902,
+ "rewards/margins": -0.006389107555150986,
+ "rewards/rejected": -0.00396320316940546,
+ "step": 30
+ },
+ {
+ "epoch": 1.4414414414414414,
+ "grad_norm": 4.809000492095947,
+ "learning_rate": 3.7037037037037037e-06,
+ "logits/chosen": -2.343184232711792,
+ "logits/rejected": -2.360262393951416,
+ "logps/chosen": -77.91002655029297,
+ "logps/rejected": -76.27156066894531,
+ "loss": 0.6902,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": -0.008437180891633034,
+ "rewards/margins": 0.008391124196350574,
+ "rewards/rejected": -0.016828304156661034,
+ "step": 40
+ },
+ {
+ "epoch": 1.8018018018018018,
+ "grad_norm": 4.504117012023926,
+ "learning_rate": 4.62962962962963e-06,
+ "logits/chosen": -2.3394973278045654,
+ "logits/rejected": -2.3635268211364746,
+ "logps/chosen": -83.62376403808594,
+ "logps/rejected": -267.64569091796875,
+ "loss": 0.6851,
+ "rewards/accuracies": 0.48750001192092896,
+ "rewards/chosen": 0.01695835217833519,
+ "rewards/margins": 0.14291717112064362,
+ "rewards/rejected": -0.12595881521701813,
+ "step": 50
+ },
+ {
+ "epoch": 2.1621621621621623,
+ "grad_norm": 4.033559322357178,
+ "learning_rate": 4.998119881260576e-06,
+ "logits/chosen": -2.32966685295105,
+ "logits/rejected": -2.3370490074157715,
+ "logps/chosen": -78.54629516601562,
+ "logps/rejected": -82.67992401123047,
+ "loss": 0.6767,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -0.03485359251499176,
+ "rewards/margins": 0.035757362842559814,
+ "rewards/rejected": -0.07061095535755157,
+ "step": 60
+ },
+ {
+ "epoch": 2.5225225225225225,
+ "grad_norm": 4.979142189025879,
+ "learning_rate": 4.9866405060165044e-06,
+ "logits/chosen": -2.364291191101074,
+ "logits/rejected": -2.376107931137085,
+ "logps/chosen": -70.61842346191406,
+ "logps/rejected": -81.80282592773438,
+ "loss": 0.6636,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.06025966256856918,
+ "rewards/margins": 0.03742799907922745,
+ "rewards/rejected": -0.09768766909837723,
+ "step": 70
+ },
+ {
+ "epoch": 2.8828828828828827,
+ "grad_norm": 4.0428690910339355,
+ "learning_rate": 4.964774158361991e-06,
+ "logits/chosen": -2.3341965675354004,
+ "logits/rejected": -2.3440909385681152,
+ "logps/chosen": -86.3591537475586,
+ "logps/rejected": -77.45347595214844,
+ "loss": 0.6519,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -0.09531830251216888,
+ "rewards/margins": 0.09854079782962799,
+ "rewards/rejected": -0.19385910034179688,
+ "step": 80
+ },
+ {
+ "epoch": 3.2432432432432434,
+ "grad_norm": 4.31919527053833,
+ "learning_rate": 4.93261217644956e-06,
+ "logits/chosen": -2.351658821105957,
+ "logits/rejected": -2.3437318801879883,
+ "logps/chosen": -77.31346130371094,
+ "logps/rejected": -80.43277740478516,
+ "loss": 0.6243,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -0.1381106823682785,
+ "rewards/margins": 0.16527524590492249,
+ "rewards/rejected": -0.3033859133720398,
+ "step": 90
+ },
+ {
+ "epoch": 3.6036036036036037,
+ "grad_norm": 5.029748439788818,
+ "learning_rate": 4.8902889044347e-06,
+ "logits/chosen": -2.3354241847991943,
+ "logits/rejected": -2.358518600463867,
+ "logps/chosen": -75.03588104248047,
+ "logps/rejected": -86.44483947753906,
+ "loss": 0.6025,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.22239580750465393,
+ "rewards/margins": 0.1877792775630951,
+ "rewards/rejected": -0.41017502546310425,
+ "step": 100
+ },
+ {
+ "epoch": 3.963963963963964,
+ "grad_norm": 4.6208648681640625,
+ "learning_rate": 4.837981131305475e-06,
+ "logits/chosen": -2.3195366859436035,
+ "logits/rejected": -2.3129196166992188,
+ "logps/chosen": -72.09532928466797,
+ "logps/rejected": -73.18878936767578,
+ "loss": 0.5955,
+ "rewards/accuracies": 0.875,
+ "rewards/chosen": -0.22240504622459412,
+ "rewards/margins": 0.22803232073783875,
+ "rewards/rejected": -0.45043739676475525,
+ "step": 110
+ },
+ {
+ "epoch": 4.324324324324325,
+ "grad_norm": 4.163040637969971,
+ "learning_rate": 4.775907352415367e-06,
+ "logits/chosen": -2.3427720069885254,
+ "logits/rejected": -2.3731276988983154,
+ "logps/chosen": -85.9415283203125,
+ "logps/rejected": -93.53765869140625,
+ "loss": 0.5506,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -0.24449148774147034,
+ "rewards/margins": 0.3563767373561859,
+ "rewards/rejected": -0.6008682250976562,
+ "step": 120
+ },
+ {
+ "epoch": 4.684684684684685,
193
+ "grad_norm": 4.228254795074463,
194
+ "learning_rate": 4.70432685680402e-06,
195
+ "logits/chosen": -2.336733341217041,
196
+ "logits/rejected": -2.3446521759033203,
197
+ "logps/chosen": -81.07231140136719,
198
+ "logps/rejected": -90.82849884033203,
199
+ "loss": 0.5248,
200
+ "rewards/accuracies": 0.8125,
201
+ "rewards/chosen": -0.005195322446525097,
202
+ "rewards/margins": 0.6936509609222412,
203
+ "rewards/rejected": -0.6988462209701538,
204
+ "step": 130
205
+ },
206
+ {
207
+ "epoch": 5.045045045045045,
208
+ "grad_norm": 4.454960346221924,
209
+ "learning_rate": 4.623538644118244e-06,
210
+ "logits/chosen": -2.3331754207611084,
211
+ "logits/rejected": -2.3434836864471436,
212
+ "logps/chosen": -83.67604064941406,
213
+ "logps/rejected": -82.92774200439453,
214
+ "loss": 0.5288,
215
+ "rewards/accuracies": 0.875,
216
+ "rewards/chosen": -0.2716079652309418,
217
+ "rewards/margins": 0.4593236446380615,
218
+ "rewards/rejected": -0.7309316396713257,
219
+ "step": 140
220
+ },
221
+ {
222
+ "epoch": 5.405405405405405,
223
+ "grad_norm": 5.223482608795166,
224
+ "learning_rate": 4.533880175657419e-06,
225
+ "logits/chosen": -2.362809658050537,
226
+ "logits/rejected": -2.3657679557800293,
227
+ "logps/chosen": -73.20018768310547,
228
+ "logps/rejected": -85.37998962402344,
229
+ "loss": 0.4682,
230
+ "rewards/accuracies": 0.8500000238418579,
231
+ "rewards/chosen": -0.3221462368965149,
232
+ "rewards/margins": 0.5968301892280579,
233
+ "rewards/rejected": -0.9189764261245728,
234
+ "step": 150
235
+ },
236
+ {
237
+ "epoch": 5.7657657657657655,
238
+ "grad_norm": 4.905521869659424,
239
+ "learning_rate": 4.435725964760331e-06,
240
+ "logits/chosen": -2.3808655738830566,
241
+ "logits/rejected": -2.368286609649658,
242
+ "logps/chosen": -68.88943481445312,
243
+ "logps/rejected": -82.69029235839844,
244
+ "loss": 0.4586,
245
+ "rewards/accuracies": 0.875,
246
+ "rewards/chosen": -0.3172217011451721,
247
+ "rewards/margins": 0.7665462493896484,
248
+ "rewards/rejected": -1.0837678909301758,
249
+ "step": 160
250
+ },
251
+ {
252
+ "epoch": 6.126126126126126,
253
+ "grad_norm": 5.399628162384033,
254
+ "learning_rate": 4.329486012421531e-06,
255
+ "logits/chosen": -2.365935802459717,
256
+ "logits/rejected": -2.363004684448242,
257
+ "logps/chosen": -70.47642517089844,
258
+ "logps/rejected": -84.02542877197266,
259
+ "loss": 0.4462,
260
+ "rewards/accuracies": 0.8374999761581421,
261
+ "rewards/chosen": -0.45835933089256287,
262
+ "rewards/margins": 0.8438631892204285,
263
+ "rewards/rejected": -1.302222490310669,
264
+ "step": 170
265
+ },
266
+ {
267
+ "epoch": 6.486486486486487,
268
+ "grad_norm": 4.843445777893066,
269
+ "learning_rate": 4.215604094671835e-06,
270
+ "logits/chosen": -2.357231855392456,
271
+ "logits/rejected": -2.360239028930664,
272
+ "logps/chosen": -78.67561340332031,
273
+ "logps/rejected": -88.39659118652344,
274
+ "loss": 0.3976,
275
+ "rewards/accuracies": 0.9125000238418579,
276
+ "rewards/chosen": -0.4842923581600189,
277
+ "rewards/margins": 0.8027322888374329,
278
+ "rewards/rejected": -1.2870244979858398,
279
+ "step": 180
280
+ },
281
+ {
282
+ "epoch": 6.846846846846847,
283
+ "grad_norm": 4.972764015197754,
284
+ "learning_rate": 4.094555908876765e-06,
285
+ "logits/chosen": -2.3751468658447266,
286
+ "logits/rejected": -2.3993237018585205,
287
+ "logps/chosen": -73.63652038574219,
288
+ "logps/rejected": -278.0970458984375,
289
+ "loss": 0.3959,
290
+ "rewards/accuracies": 0.8500000238418579,
291
+ "rewards/chosen": -0.4291106164455414,
292
+ "rewards/margins": 0.9967883229255676,
293
+ "rewards/rejected": -1.4258991479873657,
294
+ "step": 190
295
+ },
296
+ {
297
+ "epoch": 7.207207207207207,
298
+ "grad_norm": 5.071193218231201,
299
+ "learning_rate": 3.966847086696045e-06,
300
+ "logits/chosen": -2.3572330474853516,
301
+ "logits/rejected": -2.357269763946533,
302
+ "logps/chosen": -84.92713928222656,
303
+ "logps/rejected": -98.15062713623047,
304
+ "loss": 0.3544,
305
+ "rewards/accuracies": 0.9375,
306
+ "rewards/chosen": -0.5852295756340027,
307
+ "rewards/margins": 1.2983506917953491,
308
+ "rewards/rejected": -1.883580207824707,
309
+ "step": 200
310
+ },
311
+ {
312
+ "epoch": 7.5675675675675675,
313
+ "grad_norm": 5.1891655921936035,
314
+ "learning_rate": 3.833011082004229e-06,
315
+ "logits/chosen": -2.368424892425537,
316
+ "logits/rejected": -2.378568649291992,
317
+ "logps/chosen": -72.57874298095703,
318
+ "logps/rejected": -84.37443542480469,
319
+ "loss": 0.3421,
320
+ "rewards/accuracies": 0.8999999761581421,
321
+ "rewards/chosen": -0.48721614480018616,
322
+ "rewards/margins": 1.2057541608810425,
323
+ "rewards/rejected": -1.6929700374603271,
324
+ "step": 210
325
+ },
326
+ {
327
+ "epoch": 7.927927927927928,
328
+ "grad_norm": 5.771843433380127,
329
+ "learning_rate": 3.693606942594873e-06,
330
+ "logits/chosen": -2.3891513347625732,
331
+ "logits/rejected": -2.4053854942321777,
332
+ "logps/chosen": -75.97737121582031,
333
+ "logps/rejected": -97.49588012695312,
334
+ "loss": 0.3211,
335
+ "rewards/accuracies": 0.8374999761581421,
336
+ "rewards/chosen": -0.6163657903671265,
337
+ "rewards/margins": 1.1816037893295288,
338
+ "rewards/rejected": -1.7979698181152344,
339
+ "step": 220
340
+ },
341
+ {
342
+ "epoch": 8.288288288288289,
343
+ "grad_norm": 5.1563029289245605,
344
+ "learning_rate": 3.549216974976073e-06,
345
+ "logits/chosen": -2.4075605869293213,
346
+ "logits/rejected": -2.406411647796631,
347
+ "logps/chosen": -82.80142974853516,
348
+ "logps/rejected": -96.36463928222656,
349
+ "loss": 0.2848,
350
+ "rewards/accuracies": 0.9750000238418579,
351
+ "rewards/chosen": -0.8106307983398438,
352
+ "rewards/margins": 1.647127389907837,
353
+ "rewards/rejected": -2.4577584266662598,
354
+ "step": 230
355
+ },
356
+ {
357
+ "epoch": 8.64864864864865,
358
+ "grad_norm": 5.483398914337158,
359
+ "learning_rate": 3.400444312011776e-06,
360
+ "logits/chosen": -2.3797879219055176,
361
+ "logits/rejected": -2.362518787384033,
362
+ "logps/chosen": -82.14349365234375,
363
+ "logps/rejected": -97.63994598388672,
364
+ "loss": 0.278,
365
+ "rewards/accuracies": 0.949999988079071,
366
+ "rewards/chosen": -0.9469457864761353,
367
+ "rewards/margins": 1.488023281097412,
368
+ "rewards/rejected": -2.434968948364258,
369
+ "step": 240
370
+ },
371
+ {
372
+ "epoch": 9.00900900900901,
373
+ "grad_norm": 5.042275905609131,
374
+ "learning_rate": 3.2479103935691047e-06,
375
+ "logits/chosen": -2.3207201957702637,
376
+ "logits/rejected": -2.341810941696167,
377
+ "logps/chosen": -85.28227233886719,
378
+ "logps/rejected": -116.27372741699219,
379
+ "loss": 0.2494,
380
+ "rewards/accuracies": 0.9375,
381
+ "rewards/chosen": -1.0121912956237793,
382
+ "rewards/margins": 1.997532606124878,
383
+ "rewards/rejected": -3.0097243785858154,
384
+ "step": 250
385
+ },
386
+ {
387
+ "epoch": 9.36936936936937,
388
+ "grad_norm": 5.468939781188965,
389
+ "learning_rate": 3.092252370695298e-06,
390
+ "logits/chosen": -2.3408374786376953,
391
+ "logits/rejected": -2.366006851196289,
392
+ "logps/chosen": -72.05101013183594,
393
+ "logps/rejected": -102.21392822265625,
394
+ "loss": 0.2457,
395
+ "rewards/accuracies": 0.949999988079071,
396
+ "rewards/chosen": -1.0319160223007202,
397
+ "rewards/margins": 1.8719971179962158,
398
+ "rewards/rejected": -2.9039134979248047,
399
+ "step": 260
400
+ },
401
+ {
402
+ "epoch": 9.72972972972973,
403
+ "grad_norm": 6.745687007904053,
404
+ "learning_rate": 2.9341204441673267e-06,
405
+ "logits/chosen": -2.327451467514038,
406
+ "logits/rejected": -2.3463993072509766,
407
+ "logps/chosen": -86.53431701660156,
408
+ "logps/rejected": -116.40992736816406,
409
+ "loss": 0.2059,
410
+ "rewards/accuracies": 0.9750000238418579,
411
+ "rewards/chosen": -1.3356889486312866,
412
+ "rewards/margins": 1.9991543292999268,
413
+ "rewards/rejected": -3.334843397140503,
414
+ "step": 270
415
+ },
416
+ {
417
+ "epoch": 10.09009009009009,
418
+ "grad_norm": 5.230775833129883,
419
+ "learning_rate": 2.7741751485313295e-06,
420
+ "logits/chosen": -2.3630144596099854,
421
+ "logits/rejected": -2.3630847930908203,
422
+ "logps/chosen": -76.57563018798828,
423
+ "logps/rejected": -99.20953369140625,
424
+ "loss": 0.2034,
425
+ "rewards/accuracies": 0.925000011920929,
426
+ "rewards/chosen": -1.2317285537719727,
427
+ "rewards/margins": 1.8740953207015991,
428
+ "rewards/rejected": -3.1058237552642822,
429
+ "step": 280
430
+ },
431
+ {
432
+ "epoch": 10.45045045045045,
433
+ "grad_norm": 6.581757545471191,
434
+ "learning_rate": 2.6130845929767662e-06,
435
+ "logits/chosen": -2.3247475624084473,
436
+ "logits/rejected": -2.3450474739074707,
437
+ "logps/chosen": -83.9271240234375,
438
+ "logps/rejected": -109.54156494140625,
439
+ "loss": 0.174,
440
+ "rewards/accuracies": 0.987500011920929,
441
+ "rewards/chosen": -1.4542334079742432,
442
+ "rewards/margins": 2.3127167224884033,
443
+ "rewards/rejected": -3.7669498920440674,
444
+ "step": 290
445
+ },
446
+ {
447
+ "epoch": 10.81081081081081,
448
+ "grad_norm": 5.604727745056152,
449
+ "learning_rate": 2.4515216705704396e-06,
450
+ "logits/chosen": -2.279327869415283,
451
+ "logits/rejected": -2.319913387298584,
452
+ "logps/chosen": -78.63652801513672,
453
+ "logps/rejected": -115.9185562133789,
454
+ "loss": 0.1831,
455
+ "rewards/accuracies": 0.9624999761581421,
456
+ "rewards/chosen": -1.3030173778533936,
457
+ "rewards/margins": 2.5270209312438965,
458
+ "rewards/rejected": -3.830038547515869,
459
+ "step": 300
460
+ },
461
+ {
462
+ "epoch": 11.17117117117117,
463
+ "grad_norm": 4.8863606452941895,
464
+ "learning_rate": 2.290161247507733e-06,
465
+ "logits/chosen": -2.273766040802002,
466
+ "logits/rejected": -2.3243603706359863,
467
+ "logps/chosen": -90.69010925292969,
468
+ "logps/rejected": -131.49423217773438,
469
+ "loss": 0.1513,
470
+ "rewards/accuracies": 0.987500011920929,
471
+ "rewards/chosen": -1.566362738609314,
472
+ "rewards/margins": 3.1073012351989746,
473
+ "rewards/rejected": -4.673664093017578,
474
+ "step": 310
475
+ },
476
+ {
477
+ "epoch": 11.531531531531531,
478
+ "grad_norm": 5.772294521331787,
479
+ "learning_rate": 2.129677344121879e-06,
480
+ "logits/chosen": -2.302643299102783,
481
+ "logits/rejected": -2.3125722408294678,
482
+ "logps/chosen": -78.91960144042969,
483
+ "logps/rejected": -103.53559875488281,
484
+ "loss": 0.1624,
485
+ "rewards/accuracies": 0.987500011920929,
486
+ "rewards/chosen": -1.6784160137176514,
487
+ "rewards/margins": 2.4029908180236816,
488
+ "rewards/rejected": -4.081407070159912,
489
+ "step": 320
490
+ },
491
+ {
492
+ "epoch": 11.891891891891891,
493
+ "grad_norm": 5.915937423706055,
494
+ "learning_rate": 1.970740319426474e-06,
495
+ "logits/chosen": -2.275726795196533,
496
+ "logits/rejected": -2.302337169647217,
497
+ "logps/chosen": -99.52557373046875,
498
+ "logps/rejected": -122.73197174072266,
499
+ "loss": 0.1348,
500
+ "rewards/accuracies": 0.987500011920929,
501
+ "rewards/chosen": -1.9647445678710938,
502
+ "rewards/margins": 2.8115408420562744,
503
+ "rewards/rejected": -4.776285648345947,
504
+ "step": 330
505
+ },
506
+ {
507
+ "epoch": 12.252252252252251,
508
+ "grad_norm": 5.65620231628418,
509
+ "learning_rate": 1.8140140709517467e-06,
510
+ "logits/chosen": -2.274402379989624,
511
+ "logits/rejected": -2.2855653762817383,
512
+ "logps/chosen": -86.69510650634766,
513
+ "logps/rejected": -116.1146469116211,
514
+ "loss": 0.1366,
515
+ "rewards/accuracies": 0.9624999761581421,
516
+ "rewards/chosen": -1.855790376663208,
517
+ "rewards/margins": 2.536457061767578,
518
+ "rewards/rejected": -4.392247200012207,
519
+ "step": 340
520
+ },
521
+ {
522
+ "epoch": 12.612612612612612,
523
+ "grad_norm": 4.642848491668701,
524
+ "learning_rate": 1.6601532615711452e-06,
525
+ "logits/chosen": -2.2652974128723145,
526
+ "logits/rejected": -2.285008192062378,
527
+ "logps/chosen": -89.00364685058594,
528
+ "logps/rejected": -126.0003890991211,
529
+ "loss": 0.1216,
530
+ "rewards/accuracies": 0.9750000238418579,
531
+ "rewards/chosen": -1.8776098489761353,
532
+ "rewards/margins": 2.8089137077331543,
533
+ "rewards/rejected": -4.6865234375,
534
+ "step": 350
535
+ },
536
+ {
537
+ "epoch": 12.972972972972974,
538
+ "grad_norm": 4.52380895614624,
539
+ "learning_rate": 1.509800584902108e-06,
540
+ "logits/chosen": -2.263986349105835,
541
+ "logits/rejected": -2.2855420112609863,
542
+ "logps/chosen": -91.05010986328125,
543
+ "logps/rejected": -133.76361083984375,
544
+ "loss": 0.1076,
545
+ "rewards/accuracies": 1.0,
546
+ "rewards/chosen": -2.4504590034484863,
547
+ "rewards/margins": 3.495802402496338,
548
+ "rewards/rejected": -5.946260929107666,
549
+ "step": 360
550
+ },
551
+ {
552
+ "epoch": 13.333333333333334,
553
+ "grad_norm": 4.304037094116211,
554
+ "learning_rate": 1.3635840807037487e-06,
555
+ "logits/chosen": -2.261019229888916,
556
+ "logits/rejected": -2.264559268951416,
557
+ "logps/chosen": -93.27009582519531,
558
+ "logps/rejected": -118.05653381347656,
559
+ "loss": 0.1072,
560
+ "rewards/accuracies": 0.9750000238418579,
561
+ "rewards/chosen": -2.1757514476776123,
562
+ "rewards/margins": 3.157127857208252,
563
+ "rewards/rejected": -5.332879066467285,
564
+ "step": 370
565
+ },
566
+ {
567
+ "epoch": 13.693693693693694,
568
+ "grad_norm": 5.501009941101074,
569
+ "learning_rate": 1.2221145114853172e-06,
570
+ "logits/chosen": -2.211054563522339,
571
+ "logits/rejected": -2.22572660446167,
572
+ "logps/chosen": -90.1929702758789,
573
+ "logps/rejected": -138.2399139404297,
574
+ "loss": 0.0889,
575
+ "rewards/accuracies": 1.0,
576
+ "rewards/chosen": -2.5342886447906494,
577
+ "rewards/margins": 3.2334110736846924,
578
+ "rewards/rejected": -5.767699241638184,
579
+ "step": 380
580
+ },
581
+ {
582
+ "epoch": 14.054054054054054,
583
+ "grad_norm": 5.037735939025879,
584
+ "learning_rate": 1.085982811283654e-06,
585
+ "logits/chosen": -2.2411131858825684,
586
+ "logits/rejected": -2.261753559112549,
587
+ "logps/chosen": -98.27137756347656,
588
+ "logps/rejected": -134.04779052734375,
589
+ "loss": 0.0971,
590
+ "rewards/accuracies": 1.0,
591
+ "rewards/chosen": -2.593488931655884,
592
+ "rewards/margins": 3.3826117515563965,
593
+ "rewards/rejected": -5.976100444793701,
594
+ "step": 390
595
+ },
596
+ {
597
+ "epoch": 14.414414414414415,
598
+ "grad_norm": 5.258338451385498,
599
+ "learning_rate": 9.557576172663577e-07,
600
+ "logits/chosen": -2.244196653366089,
601
+ "logits/rejected": -2.2605862617492676,
602
+ "logps/chosen": -93.3480224609375,
603
+ "logps/rejected": -145.70272827148438,
604
+ "loss": 0.0913,
605
+ "rewards/accuracies": 0.9750000238418579,
606
+ "rewards/chosen": -2.5507616996765137,
607
+ "rewards/margins": 3.9265968799591064,
608
+ "rewards/rejected": -6.477358341217041,
609
+ "step": 400
610
+ },
611
+ {
612
+ "epoch": 14.774774774774775,
613
+ "grad_norm": 4.47573184967041,
614
+ "learning_rate": 8.319828944714508e-07,
615
+ "logits/chosen": -2.26932954788208,
616
+ "logits/rejected": -2.274758815765381,
617
+ "logps/chosen": -102.62467956542969,
618
+ "logps/rejected": -132.93116760253906,
619
+ "loss": 0.0805,
620
+ "rewards/accuracies": 0.9750000238418579,
621
+ "rewards/chosen": -2.376272201538086,
622
+ "rewards/margins": 3.278709888458252,
623
+ "rewards/rejected": -5.654982089996338,
624
+ "step": 410
625
+ },
626
+ {
627
+ "epoch": 15.135135135135135,
628
+ "grad_norm": 4.006404399871826,
629
+ "learning_rate": 7.151756636052529e-07,
630
+ "logits/chosen": -2.240022659301758,
631
+ "logits/rejected": -2.2303688526153564,
632
+ "logps/chosen": -112.4423828125,
633
+ "logps/rejected": -143.22732543945312,
634
+ "loss": 0.0796,
635
+ "rewards/accuracies": 0.987500011920929,
636
+ "rewards/chosen": -2.971478223800659,
637
+ "rewards/margins": 3.4153618812561035,
638
+ "rewards/rejected": -6.3868408203125,
639
+ "step": 420
640
+ },
641
+ {
642
+ "epoch": 15.495495495495495,
643
+ "grad_norm": 4.521777629852295,
644
+ "learning_rate": 6.058238413897052e-07,
645
+ "logits/chosen": -2.1890573501586914,
646
+ "logits/rejected": -2.241264820098877,
647
+ "logps/chosen": -111.05755615234375,
648
+ "logps/rejected": -146.14109802246094,
649
+ "loss": 0.0724,
650
+ "rewards/accuracies": 1.0,
651
+ "rewards/chosen": -3.0282044410705566,
652
+ "rewards/margins": 3.6133179664611816,
653
+ "rewards/rejected": -6.641521453857422,
654
+ "step": 430
655
+ },
656
+ {
657
+ "epoch": 15.855855855855856,
658
+ "grad_norm": 4.783228397369385,
659
+ "learning_rate": 5.043842024802675e-07,
660
+ "logits/chosen": -2.1972153186798096,
661
+ "logits/rejected": -2.192469358444214,
662
+ "logps/chosen": -102.45082092285156,
663
+ "logps/rejected": -138.3446502685547,
664
+ "loss": 0.0723,
665
+ "rewards/accuracies": 1.0,
666
+ "rewards/chosen": -2.7581396102905273,
667
+ "rewards/margins": 3.8694820404052734,
668
+ "rewards/rejected": -6.627622127532959,
669
+ "step": 440
670
+ },
671
+ {
672
+ "epoch": 16.216216216216218,
673
+ "grad_norm": 4.521092891693115,
674
+ "learning_rate": 4.1128047146765936e-07,
675
+ "logits/chosen": -2.1869163513183594,
676
+ "logits/rejected": -2.182262897491455,
677
+ "logps/chosen": -104.68853759765625,
678
+ "logps/rejected": -145.3485870361328,
679
+ "loss": 0.075,
680
+ "rewards/accuracies": 0.987500011920929,
681
+ "rewards/chosen": -2.978736400604248,
682
+ "rewards/margins": 3.886040449142456,
683
+ "rewards/rejected": -6.864776611328125,
684
+ "step": 450
685
+ },
686
+ {
687
+ "epoch": 16.576576576576578,
688
+ "grad_norm": 4.271122932434082,
689
+ "learning_rate": 3.269015529333805e-07,
690
+ "logits/chosen": -2.2191543579101562,
691
+ "logits/rejected": -2.2385966777801514,
692
+ "logps/chosen": -91.767578125,
693
+ "logps/rejected": -135.73141479492188,
694
+ "loss": 0.0701,
695
+ "rewards/accuracies": 0.987500011920929,
696
+ "rewards/chosen": -2.994217872619629,
697
+ "rewards/margins": 3.757829189300537,
698
+ "rewards/rejected": -6.752047061920166,
699
+ "step": 460
700
+ },
701
+ {
702
+ "epoch": 16.936936936936938,
703
+ "grad_norm": 4.7400712966918945,
704
+ "learning_rate": 2.515999069522676e-07,
705
+ "logits/chosen": -2.174427032470703,
706
+ "logits/rejected": -2.211961507797241,
707
+ "logps/chosen": -100.193603515625,
708
+ "logps/rejected": -139.8927001953125,
709
+ "loss": 0.0682,
710
+ "rewards/accuracies": 0.9750000238418579,
711
+ "rewards/chosen": -3.3388049602508545,
712
+ "rewards/margins": 3.942704677581787,
713
+ "rewards/rejected": -7.281510353088379,
714
+ "step": 470
715
+ },
716
+ {
717
+ "epoch": 17.2972972972973,
718
+ "grad_norm": 4.059471130371094,
719
+ "learning_rate": 1.8569007682777417e-07,
720
+ "logits/chosen": -2.172400951385498,
721
+ "logits/rejected": -2.2151694297790527,
722
+ "logps/chosen": -109.02021789550781,
723
+ "logps/rejected": -408.2373962402344,
724
+ "loss": 0.0601,
725
+ "rewards/accuracies": 1.0,
726
+ "rewards/chosen": -3.6841540336608887,
727
+ "rewards/margins": 10.677932739257812,
728
+ "rewards/rejected": -14.362088203430176,
729
+ "step": 480
730
+ },
731
+ {
732
+ "epoch": 17.65765765765766,
733
+ "grad_norm": 3.554147243499756,
734
+ "learning_rate": 1.2944737520980883e-07,
735
+ "logits/chosen": -2.2005763053894043,
736
+ "logits/rejected": -2.234318494796753,
737
+ "logps/chosen": -105.55352783203125,
738
+ "logps/rejected": -156.26739501953125,
739
+ "loss": 0.0667,
740
+ "rewards/accuracies": 0.987500011920929,
741
+ "rewards/chosen": -3.3424956798553467,
742
+ "rewards/margins": 3.9150681495666504,
743
+ "rewards/rejected": -7.257563591003418,
744
+ "step": 490
745
+ },
746
+ {
747
+ "epoch": 18.01801801801802,
748
+ "grad_norm": 4.700870990753174,
749
+ "learning_rate": 8.310673408334496e-08,
750
+ "logits/chosen": -2.2091004848480225,
751
+ "logits/rejected": -2.2198190689086914,
752
+ "logps/chosen": -123.39643859863281,
753
+ "logps/rejected": -159.49868774414062,
754
+ "loss": 0.0679,
755
+ "rewards/accuracies": 1.0,
756
+ "rewards/chosen": -3.437526226043701,
757
+ "rewards/margins": 3.751330852508545,
758
+ "rewards/rejected": -7.188857078552246,
759
+ "step": 500
760
+ }
761
+ ],
762
+ "logging_steps": 10,
763
+ "max_steps": 540,
764
+ "num_input_tokens_seen": 0,
765
+ "num_train_epochs": 20,
766
+ "save_steps": 100,
767
+ "stateful_callbacks": {
768
+ "TrainerControl": {
769
+ "args": {
770
+ "should_epoch_stop": false,
771
+ "should_evaluate": false,
772
+ "should_log": false,
773
+ "should_save": true,
774
+ "should_training_stop": false
775
+ },
776
+ "attributes": {}
777
+ }
778
+ },
779
+ "total_flos": 1.826528764968829e+18,
780
+ "train_batch_size": 1,
781
+ "trial_name": null,
782
+ "trial_params": null
783
+ }
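The reward fields logged in the trainer state above follow the usual DPO logging convention that `rewards/margins` equals `rewards/chosen` minus `rewards/rejected` (this relationship is inferred from the logged values, not stated in the file itself). A quick sanity check on the step-60 entry, with the numbers copied verbatim from the log:

```python
import math

# Values copied from the step-60 entry of the trainer state above.
chosen = -0.03485359251499176
rejected = -0.07061095535755157
margin = 0.035757362842559814

# The logged margin should match chosen - rejected up to float rounding.
assert math.isclose(chosen - rejected, margin, rel_tol=1e-6)
print("margin check passed")  # → margin check passed
```

The same identity holds for the other logged steps, so the three reward columns are redundant but convenient for eyeballing training progress.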
Area_Time_SFT/checkpoint-540/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: ishorn5/RTLCoder-v1.1
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
Area_Time_SFT/checkpoint-540/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0.0,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "gate_proj",
+ "k_proj",
+ "up_proj",
+ "down_proj",
+ "o_proj",
+ "v_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
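The `adapter_config.json` above fully determines the shape of the LoRA adapter: rank 8, alpha 16, and seven attention/MLP projection targets on the `ishorn5/RTLCoder-v1.1` base model. A minimal sketch of inspecting such a config with only the standard library (the JSON is inlined here for illustration; in practice you would read the file from the checkpoint directory):

```python
import json

# Subset of the adapter_config.json shown above, inlined for illustration.
config_text = """
{
  "base_model_name_or_path": "ishorn5/RTLCoder-v1.1",
  "peft_type": "LORA",
  "r": 8,
  "lora_alpha": 16,
  "lora_dropout": 0.0,
  "target_modules": ["gate_proj", "k_proj", "up_proj", "down_proj",
                     "o_proj", "v_proj", "q_proj"],
  "task_type": "CAUSAL_LM"
}
"""
cfg = json.loads(config_text)

# LoRA scales its low-rank update by alpha / r, so this adapter uses 16 / 8 = 2.0.
scaling = cfg["lora_alpha"] / cfg["r"]
print(cfg["r"], scaling, len(cfg["target_modules"]))  # → 8 2.0 7
```

Because the adapter targets all of `q_proj`/`k_proj`/`v_proj`/`o_proj` plus the three MLP projections, it adapts every linear layer PEFT commonly exposes in LLaMA-style blocks while keeping the trainable parameter count small (rank 8).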