Lansechen committed · Commit 05e47f4 · verified · 1 Parent(s): 25b743d

Model save
README.md ADDED
@@ -0,0 +1,68 @@
---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: Qwen2.5-7B-Open-R1-GRPO-math-lighteval
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---

# Model Card for Qwen2.5-7B-Open-R1-GRPO-math-lighteval

This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Lansechen/Qwen2.5-7B-Open-R1-GRPO-math-lighteval", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenran1995-the-chinese-university-of-hong-kong/huggingface/runs/q96trpqe)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.16.0
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu121
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
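GRPO, as cited above, drops the learned value baseline of PPO and instead scores each sampled completion against the other completions drawn for the same prompt. The following is a minimal sketch of that group-relative advantage step only, not TRL's implementation; the reward values are made up for illustration.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-4):
    """Normalize each reward against its own group's mean and std.

    This group-relative baseline is what GRPO uses in place of a critic:
    completions that beat their group's average get positive advantage.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, a group of sampled completions scored by the reward function
rewards = [0.0, 1.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)
```

In the training log below, `reward` is the sum of the accuracy and format rewards per group, and `reward_std` is the within-group spread that this normalization divides by.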
all_results.json ADDED
@@ -0,0 +1,8 @@
{
    "total_flos": 0.0,
    "train_loss": 0.023034477807496758,
    "train_runtime": 35029.9588,
    "train_samples": 7500,
    "train_samples_per_second": 0.428,
    "train_steps_per_second": 0.004
}
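The reported throughput figures are internally consistent: with 7,500 training samples and the roughly two epochs recorded in the trainer state (`epoch` ≈ 1.985), the runtime implies the stated samples-per-second. A quick check:

```python
train_samples = 7500
epochs = 2            # trainer state reports epoch ~1.985, i.e. two scheduled passes
train_runtime = 35029.9588  # seconds, from all_results.json

# samples processed per second over the whole run
samples_per_second = train_samples * epochs / train_runtime
print(round(samples_per_second, 3))  # matches the reported 0.428
```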
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
    "bos_token_id": 151643,
    "eos_token_id": 151643,
    "max_new_tokens": 2048,
    "transformers_version": "4.49.0"
}
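These defaults are applied automatically by `model.generate()` when the checkpoint is loaded. A small stdlib-only sketch of what the file encodes (the JSON literal below mirrors the shipped config):

```python
import json

raw = """
{
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.49.0"
}
"""
cfg = json.loads(raw)
# BOS and EOS share a single id, as in the Qwen2.5 base checkpoints:
# generation stops on token 151643 or after 2048 new tokens.
print(cfg["eos_token_id"], cfg["max_new_tokens"])
```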
train_results.json ADDED
@@ -0,0 +1,8 @@
{
    "total_flos": 0.0,
    "train_loss": 0.023034477807496758,
    "train_runtime": 35029.9588,
    "train_samples": 7500,
    "train_samples_per_second": 0.428,
    "train_steps_per_second": 0.004
}
trainer_state.json ADDED
@@ -0,0 +1,1898 @@
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.9850746268656716,
  "eval_steps": 100,
  "global_step": 132,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "clip_ratio": 0.0,
      "completion_length": 468.4821586608887,
      "epoch": 0.014925373134328358,
      "grad_norm": 0.6381782293319702,
      "learning_rate": 7.142857142857142e-08,
      "loss": 0.0029,
      "num_tokens": 546936.0,
      "reward": 0.27120537124574184,
      "reward_std": 0.39265505224466324,
      "rewards/accuracy_reward": 0.20647320989519358,
      "rewards/format_reward": 0.06473214365541935,
      "step": 1
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 471.2355079650879,
      "epoch": 0.029850746268656716,
      "grad_norm": 0.47330012917518616,
      "learning_rate": 1.4285714285714285e-07,
      "loss": 0.0029,
      "num_tokens": 1100635.0,
      "reward": 0.29352679662406445,
      "reward_std": 0.3840556889772415,
      "rewards/accuracy_reward": 0.22767856996506453,
      "rewards/format_reward": 0.06584821548312902,
      "step": 2
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 503.05247497558594,
      "epoch": 0.04477611940298507,
      "grad_norm": 0.40551432967185974,
      "learning_rate": 2.1428571428571426e-07,
      "loss": -0.0092,
      "num_tokens": 1698274.0,
      "reward": 0.24888394214212894,
      "reward_std": 0.36480507254600525,
      "rewards/accuracy_reward": 0.20424107275903225,
      "rewards/format_reward": 0.04464285809081048,
      "step": 3
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 526.8214530944824,
      "epoch": 0.05970149253731343,
      "grad_norm": 0.4093160331249237,
      "learning_rate": 2.857142857142857e-07,
      "loss": 0.0128,
      "num_tokens": 2299818.0,
      "reward": 0.25781251303851604,
      "reward_std": 0.35233646258711815,
      "rewards/accuracy_reward": 0.1997767873108387,
      "rewards/format_reward": 0.05803571571595967,
      "step": 4
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 512.1830596923828,
      "epoch": 0.07462686567164178,
      "grad_norm": 0.5151273012161255,
      "learning_rate": 3.5714285714285716e-07,
      "loss": 0.0079,
      "num_tokens": 2896238.0,
      "reward": 0.2667410895228386,
      "reward_std": 0.36962299421429634,
      "rewards/accuracy_reward": 0.18750000279396772,
      "rewards/format_reward": 0.07924107019789517,
      "step": 5
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 459.7879638671875,
      "epoch": 0.08955223880597014,
      "grad_norm": 0.5247090458869934,
      "learning_rate": 4.285714285714285e-07,
      "loss": 0.0056,
      "num_tokens": 3432696.0,
      "reward": 0.2645089402794838,
      "reward_std": 0.3864951953291893,
      "rewards/accuracy_reward": 0.21316964365541935,
      "rewards/format_reward": 0.0513392862631008,
      "step": 6
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 485.81252670288086,
      "epoch": 0.1044776119402985,
      "grad_norm": 0.7675074338912964,
      "learning_rate": 5e-07,
      "loss": 0.0057,
      "num_tokens": 3997032.0,
      "reward": 0.2901785857975483,
      "reward_std": 0.38608507812023163,
      "rewards/accuracy_reward": 0.203125,
      "rewards/format_reward": 0.08705357182770967,
      "step": 7
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 503.35828399658203,
      "epoch": 0.11940298507462686,
      "grad_norm": 0.5886371731758118,
      "learning_rate": 5.714285714285714e-07,
      "loss": -0.0036,
      "num_tokens": 4565689.0,
      "reward": 0.3314732313156128,
      "reward_std": 0.4302351512014866,
      "rewards/accuracy_reward": 0.21205356996506453,
      "rewards/format_reward": 0.11941964458674192,
      "step": 8
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 454.2031440734863,
      "epoch": 0.13432835820895522,
      "grad_norm": 1.0013484954833984,
      "learning_rate": 6.428571428571429e-07,
      "loss": -0.0081,
      "num_tokens": 5097991.0,
      "reward": 0.3560268022119999,
      "reward_std": 0.4213982783257961,
      "rewards/accuracy_reward": 0.22321428451687098,
      "rewards/format_reward": 0.1328125,
      "step": 9
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 437.2924270629883,
      "epoch": 0.14925373134328357,
      "grad_norm": 1.295032262802124,
      "learning_rate": 7.142857142857143e-07,
      "loss": 0.0078,
      "num_tokens": 5614445.0,
      "reward": 0.4207589514553547,
      "reward_std": 0.49341823533177376,
      "rewards/accuracy_reward": 0.2254464291036129,
      "rewards/format_reward": 0.19531249906867743,
      "step": 10
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 522.3638687133789,
      "epoch": 0.16417910447761194,
      "grad_norm": 4.296813011169434,
      "learning_rate": 7.857142857142856e-07,
      "loss": 0.0091,
      "num_tokens": 6212539.0,
      "reward": 0.4508928842842579,
      "reward_std": 0.48767242580652237,
      "rewards/accuracy_reward": 0.203125,
      "rewards/format_reward": 0.2477678582072258,
      "step": 11
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 484.03127670288086,
      "epoch": 0.1791044776119403,
      "grad_norm": 0.9652361869812012,
      "learning_rate": 8.57142857142857e-07,
      "loss": 0.0202,
      "num_tokens": 6776311.0,
      "reward": 0.525669664144516,
      "reward_std": 0.5405469685792923,
      "rewards/accuracy_reward": 0.2198660708963871,
      "rewards/format_reward": 0.3058035746216774,
      "step": 12
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 491.166316986084,
      "epoch": 0.19402985074626866,
      "grad_norm": 0.43946775794029236,
      "learning_rate": 9.285714285714285e-07,
      "loss": 0.0234,
      "num_tokens": 7339580.0,
      "reward": 0.7756696790456772,
      "reward_std": 0.5723891109228134,
      "rewards/accuracy_reward": 0.2388392873108387,
      "rewards/format_reward": 0.5368303470313549,
      "step": 13
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 453.806941986084,
      "epoch": 0.208955223880597,
      "grad_norm": 1.8365284204483032,
      "learning_rate": 1e-06,
      "loss": 0.0215,
      "num_tokens": 7870063.0,
      "reward": 0.8973214775323868,
      "reward_std": 0.5517464429140091,
      "rewards/accuracy_reward": 0.2645089253783226,
      "rewards/format_reward": 0.6328125074505806,
      "step": 14
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 442.7131881713867,
      "epoch": 0.22388059701492538,
      "grad_norm": 0.4182904362678528,
      "learning_rate": 9.998286624877785e-07,
      "loss": 0.0323,
      "num_tokens": 8381238.0,
      "reward": 0.9241071864962578,
      "reward_std": 0.5212078019976616,
      "rewards/accuracy_reward": 0.2198660708963871,
      "rewards/format_reward": 0.7042410671710968,
      "step": 15
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 489.45203399658203,
      "epoch": 0.23880597014925373,
      "grad_norm": 0.4200661778450012,
      "learning_rate": 9.99314767377287e-07,
      "loss": 0.0269,
      "num_tokens": 8943155.0,
      "reward": 1.010044701397419,
      "reward_std": 0.5311430767178535,
      "rewards/accuracy_reward": 0.2633928544819355,
      "rewards/format_reward": 0.746651791036129,
      "step": 16
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 437.2343864440918,
      "epoch": 0.2537313432835821,
      "grad_norm": 0.35031601786613464,
      "learning_rate": 9.98458666866564e-07,
      "loss": -0.0231,
      "num_tokens": 9484653.0,
      "reward": 1.1908482760190964,
      "reward_std": 0.4788440987467766,
      "rewards/accuracy_reward": 0.30691963993012905,
      "rewards/format_reward": 0.8839285671710968,
      "step": 17
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 466.06028747558594,
      "epoch": 0.26865671641791045,
      "grad_norm": 0.31605255603790283,
      "learning_rate": 9.972609476841365e-07,
      "loss": 0.031,
      "num_tokens": 10038427.0,
      "reward": 1.2421875596046448,
      "reward_std": 0.4797636419534683,
      "rewards/accuracy_reward": 0.34263392724096775,
      "rewards/format_reward": 0.8995535746216774,
      "step": 18
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 492.533504486084,
      "epoch": 0.2835820895522388,
      "grad_norm": 0.3311372399330139,
      "learning_rate": 9.957224306869053e-07,
      "loss": 0.0183,
      "num_tokens": 10604529.0,
      "reward": 1.3191964626312256,
      "reward_std": 0.4729856923222542,
      "rewards/accuracy_reward": 0.4151785634458065,
      "rewards/format_reward": 0.9040178507566452,
      "step": 19
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 479.89734649658203,
      "epoch": 0.29850746268656714,
      "grad_norm": 0.339732825756073,
      "learning_rate": 9.938441702975689e-07,
      "loss": 0.0107,
      "num_tokens": 11161797.0,
      "reward": 1.3560268729925156,
      "reward_std": 0.47978325933218,
      "rewards/accuracy_reward": 0.4374999962747097,
      "rewards/format_reward": 0.918526791036129,
      "step": 20
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 552.010066986084,
      "epoch": 0.31343283582089554,
      "grad_norm": 0.23753176629543304,
      "learning_rate": 9.916274537819773e-07,
      "loss": 0.0283,
      "num_tokens": 11780870.0,
      "reward": 1.4386161267757416,
      "reward_std": 0.46573930606245995,
      "rewards/accuracy_reward": 0.5189732164144516,
      "rewards/format_reward": 0.9196428656578064,
      "step": 21
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 487.6372947692871,
      "epoch": 0.3283582089552239,
      "grad_norm": 0.2946343719959259,
      "learning_rate": 9.890738003669027e-07,
      "loss": 0.0361,
      "num_tokens": 12347257.0,
      "reward": 1.491071492433548,
      "reward_std": 0.42494936659932137,
      "rewards/accuracy_reward": 0.5357142835855484,
      "rewards/format_reward": 0.9553571343421936,
      "step": 22
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 479.8527069091797,
      "epoch": 0.34328358208955223,
      "grad_norm": 0.2775673568248749,
      "learning_rate": 9.861849601988383e-07,
      "loss": 0.013,
      "num_tokens": 12907781.0,
      "reward": 1.545758992433548,
      "reward_std": 0.39185454323887825,
      "rewards/accuracy_reward": 0.6004464291036129,
      "rewards/format_reward": 0.9453125074505806,
      "step": 23
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 570.8906555175781,
      "epoch": 0.3582089552238806,
      "grad_norm": 0.2118677943944931,
      "learning_rate": 9.82962913144534e-07,
      "loss": 0.0149,
      "num_tokens": 13555419.0,
      "reward": 1.547991156578064,
      "reward_std": 0.369963688775897,
      "rewards/accuracy_reward": 0.6071428544819355,
      "rewards/format_reward": 0.940848208963871,
      "step": 24
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 533.5781478881836,
      "epoch": 0.373134328358209,
      "grad_norm": 0.18608489632606506,
      "learning_rate": 9.794098674340966e-07,
      "loss": 0.011,
      "num_tokens": 14148665.0,
      "reward": 1.6908482909202576,
      "reward_std": 0.2971882149577141,
      "rewards/accuracy_reward": 0.7209821492433548,
      "rewards/format_reward": 0.9698660746216774,
      "step": 25
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 579.241096496582,
      "epoch": 0.3880597014925373,
      "grad_norm": 0.19908326864242554,
      "learning_rate": 9.755282581475767e-07,
      "loss": 0.0489,
      "num_tokens": 14805265.0,
      "reward": 1.5781250596046448,
      "reward_std": 0.3244300540536642,
      "rewards/accuracy_reward": 0.6238839328289032,
      "rewards/format_reward": 0.9542410746216774,
      "step": 26
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 601.6830520629883,
      "epoch": 0.40298507462686567,
      "grad_norm": 0.16485629975795746,
      "learning_rate": 9.713207455460892e-07,
      "loss": 0.0507,
      "num_tokens": 15471061.0,
      "reward": 1.6294643580913544,
      "reward_std": 0.259660467505455,
      "rewards/accuracy_reward": 0.6629464402794838,
      "rewards/format_reward": 0.9665178656578064,
      "step": 27
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 574.0223426818848,
      "epoch": 0.417910447761194,
      "grad_norm": 0.1541140377521515,
      "learning_rate": 9.667902132486008e-07,
      "loss": 0.0453,
      "num_tokens": 16104777.0,
      "reward": 1.6495536267757416,
      "reward_std": 0.25614376924932003,
      "rewards/accuracy_reward": 0.684151791036129,
      "rewards/format_reward": 0.9654017835855484,
      "step": 28
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 663.6596221923828,
      "epoch": 0.43283582089552236,
      "grad_norm": 0.14992094039916992,
      "learning_rate": 9.619397662556433e-07,
      "loss": 0.0387,
      "num_tokens": 16838656.0,
      "reward": 1.617187574505806,
      "reward_std": 0.25877264328300953,
      "rewards/accuracy_reward": 0.6529017873108387,
      "rewards/format_reward": 0.9642857164144516,
      "step": 29
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 632.5580749511719,
      "epoch": 0.44776119402985076,
      "grad_norm": 0.1843111515045166,
      "learning_rate": 9.567727288213004e-07,
      "loss": 0.0511,
      "num_tokens": 17528148.0,
      "reward": 1.7154018729925156,
      "reward_std": 0.23728702031075954,
      "rewards/accuracy_reward": 0.7410714328289032,
      "rewards/format_reward": 0.9743303582072258,
      "step": 30
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 628.217658996582,
      "epoch": 0.4626865671641791,
      "grad_norm": 0.11407707631587982,
      "learning_rate": 9.512926421749303e-07,
      "loss": 0.0391,
      "num_tokens": 18214823.0,
      "reward": 1.6651786416769028,
      "reward_std": 0.18615373829379678,
      "rewards/accuracy_reward": 0.6841517835855484,
      "rewards/format_reward": 0.9810267835855484,
      "step": 31
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 595.1272659301758,
      "epoch": 0.47761194029850745,
      "grad_norm": 0.12041105329990387,
      "learning_rate": 9.455032620941839e-07,
      "loss": 0.0291,
      "num_tokens": 18876793.0,
      "reward": 1.7310268729925156,
      "reward_std": 0.1854203026741743,
      "rewards/accuracy_reward": 0.7500000074505806,
      "rewards/format_reward": 0.9810267835855484,
      "step": 32
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 664.4553909301758,
      "epoch": 0.4925373134328358,
      "grad_norm": 0.15564288198947906,
      "learning_rate": 9.394085563309826e-07,
      "loss": 0.0552,
      "num_tokens": 19597657.0,
      "reward": 1.6562500596046448,
      "reward_std": 0.2201189510524273,
      "rewards/accuracy_reward": 0.680803582072258,
      "rewards/format_reward": 0.9754464253783226,
      "step": 33
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 683.2623062133789,
      "epoch": 0.5074626865671642,
      "grad_norm": 0.11131128668785095,
      "learning_rate": 9.330127018922193e-07,
      "loss": 0.044,
      "num_tokens": 20340292.0,
      "reward": 1.672991156578064,
      "reward_std": 0.2345678173005581,
      "rewards/accuracy_reward": 0.6941964291036129,
      "rewards/format_reward": 0.978794626891613,
      "step": 34
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 643.5792694091797,
      "epoch": 0.5223880597014925,
      "grad_norm": 0.10805458575487137,
      "learning_rate": 9.26320082177046e-07,
      "loss": 0.0302,
      "num_tokens": 21056931.0,
      "reward": 1.6930804401636124,
      "reward_std": 0.2090724390000105,
      "rewards/accuracy_reward": 0.7064732126891613,
      "rewards/format_reward": 0.9866071343421936,
      "step": 35
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 671.2388725280762,
      "epoch": 0.5373134328358209,
      "grad_norm": 0.11671151965856552,
      "learning_rate": 9.19335283972712e-07,
      "loss": 0.0517,
      "num_tokens": 21801897.0,
      "reward": 1.6685268580913544,
      "reward_std": 0.23604967631399632,
      "rewards/accuracy_reward": 0.6908482164144516,
      "rewards/format_reward": 0.9776785671710968,
      "step": 36
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 627.1127471923828,
      "epoch": 0.5522388059701493,
      "grad_norm": 0.11113366484642029,
      "learning_rate": 9.120630943110077e-07,
      "loss": 0.0203,
      "num_tokens": 22491214.0,
      "reward": 1.7321429252624512,
      "reward_std": 0.17728407308459282,
      "rewards/accuracy_reward": 0.7477678507566452,
      "rewards/format_reward": 0.984375,
      "step": 37
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 648.8895416259766,
      "epoch": 0.5671641791044776,
      "grad_norm": 0.09676128625869751,
      "learning_rate": 9.045084971874737e-07,
      "loss": 0.0296,
      "num_tokens": 23218403.0,
      "reward": 1.7366072237491608,
      "reward_std": 0.16557986475527287,
      "rewards/accuracy_reward": 0.745535708963871,
      "rewards/format_reward": 0.9910714104771614,
      "step": 38
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 621.256721496582,
      "epoch": 0.582089552238806,
      "grad_norm": 0.10444407165050507,
      "learning_rate": 8.966766701456176e-07,
      "loss": 0.0293,
      "num_tokens": 23901713.0,
      "reward": 1.6908483058214188,
      "reward_std": 0.18806083872914314,
      "rewards/accuracy_reward": 0.7109375,
      "rewards/format_reward": 0.979910708963871,
      "step": 39
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 713.9565048217773,
      "epoch": 0.5970149253731343,
      "grad_norm": 0.09341216832399368,
      "learning_rate": 8.885729807284854e-07,
      "loss": 0.0245,
      "num_tokens": 24665114.0,
      "reward": 1.6953125596046448,
      "reward_std": 0.1814118754118681,
      "rewards/accuracy_reward": 0.7131696492433548,
      "rewards/format_reward": 0.9821428507566452,
      "step": 40
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 651.0513610839844,
      "epoch": 0.6119402985074627,
      "grad_norm": 0.1323300451040268,
      "learning_rate": 8.802029828000155e-07,
      "loss": 0.0425,
      "num_tokens": 25389064.0,
      "reward": 1.6674107909202576,
      "reward_std": 0.2174726091325283,
      "rewards/accuracy_reward": 0.6841517835855484,
      "rewards/format_reward": 0.9832589253783226,
      "step": 41
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 607.2377471923828,
      "epoch": 0.6268656716417911,
      "grad_norm": 0.11196275800466537,
      "learning_rate": 8.71572412738697e-07,
      "loss": 0.0262,
      "num_tokens": 26055629.0,
      "reward": 1.8258929252624512,
      "reward_std": 0.15602111723273993,
      "rewards/accuracy_reward": 0.834821417927742,
      "rewards/format_reward": 0.991071417927742,
      "step": 42
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 627.4364242553711,
      "epoch": 0.6417910447761194,
      "grad_norm": 0.10694889724254608,
      "learning_rate": 8.626871855061437e-07,
      "loss": 0.0058,
      "num_tokens": 26753428.0,
      "reward": 1.7589286416769028,
      "reward_std": 0.15414638444781303,
      "rewards/accuracy_reward": 0.7645089328289032,
      "rewards/format_reward": 0.9944196343421936,
      "step": 43
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 601.2935638427734,
      "epoch": 0.6567164179104478,
      "grad_norm": 0.09472611546516418,
      "learning_rate": 8.535533905932737e-07,
      "loss": 0.0191,
      "num_tokens": 27419099.0,
      "reward": 1.7455357760190964,
      "reward_std": 0.15037838742136955,
      "rewards/accuracy_reward": 0.7544642835855484,
      "rewards/format_reward": 0.9910714253783226,
      "step": 44
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 621.0245742797852,
      "epoch": 0.6716417910447762,
      "grad_norm": 0.10363104939460754,
      "learning_rate": 8.441772878468769e-07,
      "loss": 0.034,
      "num_tokens": 28101745.0,
      "reward": 1.743303656578064,
      "reward_std": 0.1812288984656334,
      "rewards/accuracy_reward": 0.756696417927742,
      "rewards/format_reward": 0.9866071343421936,
      "step": 45
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 713.8482437133789,
      "epoch": 0.6865671641791045,
      "grad_norm": 0.10190044343471527,
      "learning_rate": 8.34565303179429e-07,
      "loss": 0.0563,
      "num_tokens": 28875145.0,
      "reward": 1.6640625894069672,
      "reward_std": 0.2248408328741789,
      "rewards/accuracy_reward": 0.6886160746216774,
      "rewards/format_reward": 0.9754464253783226,
      "step": 46
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 670.1551666259766,
      "epoch": 0.7014925373134329,
      "grad_norm": 0.11430803686380386,
      "learning_rate": 8.247240241650917e-07,
      "loss": 0.0096,
      "num_tokens": 29591660.0,
      "reward": 1.7377232760190964,
      "reward_std": 0.1653135661035776,
      "rewards/accuracy_reward": 0.7455357052385807,
      "rewards/format_reward": 0.9921874850988388,
      "step": 47
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 653.9408874511719,
      "epoch": 0.7164179104477612,
      "grad_norm": 0.12349370867013931,
      "learning_rate": 8.146601955249187e-07,
      "loss": 0.0196,
      "num_tokens": 30304407.0,
      "reward": 1.7377232909202576,
      "reward_std": 0.19286326505243778,
      "rewards/accuracy_reward": 0.7511160746216774,
      "rewards/format_reward": 0.9866071343421936,
      "step": 48
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 667.8393173217773,
      "epoch": 0.7313432835820896,
      "grad_norm": 0.10884106904268265,
      "learning_rate": 8.043807145043603e-07,
      "loss": 0.0286,
      "num_tokens": 31039191.0,
      "reward": 1.7377233058214188,
      "reward_std": 0.1800588108599186,
      "rewards/accuracy_reward": 0.746651791036129,
      "rewards/format_reward": 0.991071417927742,
      "step": 49
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 632.0323944091797,
      "epoch": 0.746268656716418,
      "grad_norm": 0.10950858145952225,
      "learning_rate": 7.938926261462365e-07,
      "loss": 0.0221,
      "num_tokens": 31743644.0,
      "reward": 1.7098215073347092,
      "reward_std": 0.15341119468212128,
      "rewards/accuracy_reward": 0.71875,
      "rewards/format_reward": 0.9910714104771614,
      "step": 50
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 633.6294937133789,
      "epoch": 0.7611940298507462,
      "grad_norm": 0.12094567716121674,
      "learning_rate": 7.832031184624164e-07,
      "loss": 0.0344,
      "num_tokens": 32449160.0,
      "reward": 1.7287947237491608,
      "reward_std": 0.184023879468441,
      "rewards/accuracy_reward": 0.7366071343421936,
      "rewards/format_reward": 0.9921874850988388,
      "step": 51
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 659.0848617553711,
      "epoch": 0.7761194029850746,
      "grad_norm": 0.10622338950634003,
      "learning_rate": 7.723195175075135e-07,
      "loss": 0.0386,
      "num_tokens": 33168348.0,
      "reward": 1.6964286416769028,
      "reward_std": 0.17664530966430902,
      "rewards/accuracy_reward": 0.717633917927742,
      "rewards/format_reward": 0.9787946417927742,
      "step": 52
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 671.2433319091797,
      "epoch": 0.7910447761194029,
      "grad_norm": 0.10682238638401031,
      "learning_rate": 7.612492823579744e-07,
      "loss": 0.0376,
      "num_tokens": 33897454.0,
      "reward": 1.671875074505806,
      "reward_std": 0.1950401533395052,
      "rewards/accuracy_reward": 0.6886160746216774,
      "rewards/format_reward": 0.983258917927742,
      "step": 53
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 606.1585159301758,
      "epoch": 0.8059701492537313,
      "grad_norm": 0.10302180051803589,
      "learning_rate": 7.5e-07,
      "loss": 0.0104,
      "num_tokens": 34575404.0,
      "reward": 1.7410715073347092,
      "reward_std": 0.16222931072115898,
      "rewards/accuracy_reward": 0.745535708963871,
      "rewards/format_reward": 0.9955357015132904,
      "step": 54
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 625.7221260070801,
      "epoch": 0.8208955223880597,
      "grad_norm": 0.10820939391851425,
      "learning_rate": 7.385793801298042e-07,
      "loss": 0.0312,
      "num_tokens": 35275171.0,
      "reward": 1.7243304252624512,
      "reward_std": 0.20950445905327797,
      "rewards/accuracy_reward": 0.7366071417927742,
      "rewards/format_reward": 0.9877232015132904,
      "step": 55
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 633.1205596923828,
      "epoch": 0.835820895522388,
      "grad_norm": 0.13289578258991241,
      "learning_rate": 7.269952498697734e-07,
      "loss": 0.0142,
      "num_tokens": 35973503.0,
      "reward": 1.6941965073347092,
      "reward_std": 0.176253211684525,
      "rewards/accuracy_reward": 0.703125,
      "rewards/format_reward": 0.9910714253783226,
      "step": 56
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 625.6227951049805,
      "epoch": 0.8507462686567164,
      "grad_norm": 0.13948650658130646,
      "learning_rate": 7.152555484041475e-07,
      "loss": 0.0413,
      "num_tokens": 36657061.0,
      "reward": 1.7879465222358704,
      "reward_std": 0.15797736030071974,
      "rewards/accuracy_reward": 0.8035714253783226,
      "rewards/format_reward": 0.9843749925494194,
      "step": 57
    },
    {
      "clip_ratio": 0.0,
      "completion_length": 650.3381958007812,
      "epoch": 0.8656716417910447,
      "grad_norm": 0.2045215368270874,
      "learning_rate": 7.033683215379002e-07,
      "loss": 0.0374,
      "num_tokens": 37365732.0,
      "reward": 1.695312574505806,
      "reward_std": 0.17142421007156372,
      "rewards/accuracy_reward": 0.7120535671710968,
820
+ "rewards/format_reward": 0.983258917927742,
821
+ "step": 58
822
+ },
823
+ {
824
+ "clip_ratio": 0.0,
825
+ "completion_length": 641.5770378112793,
826
+ "epoch": 0.8805970149253731,
827
+ "grad_norm": 0.13572581112384796,
828
+ "learning_rate": 6.913417161825449e-07,
829
+ "loss": 0.0067,
830
+ "num_tokens": 38065993.0,
831
+ "reward": 1.7890625894069672,
832
+ "reward_std": 0.16594232060015202,
833
+ "rewards/accuracy_reward": 0.7946428656578064,
834
+ "rewards/format_reward": 0.9944196343421936,
835
+ "step": 59
836
+ },
837
+ {
838
+ "clip_ratio": 0.0,
839
+ "completion_length": 598.4498100280762,
840
+ "epoch": 0.8955223880597015,
841
+ "grad_norm": 0.118675097823143,
842
+ "learning_rate": 6.7918397477265e-07,
843
+ "loss": 0.0253,
844
+ "num_tokens": 38737236.0,
845
+ "reward": 1.7466518580913544,
846
+ "reward_std": 0.17820186354219913,
847
+ "rewards/accuracy_reward": 0.7522321492433548,
848
+ "rewards/format_reward": 0.9944196343421936,
849
+ "step": 60
850
+ },
851
+ {
852
+ "clip_ratio": 0.0,
853
+ "completion_length": 641.1663208007812,
854
+ "epoch": 0.9104477611940298,
855
+ "grad_norm": 0.08891814202070236,
856
+ "learning_rate": 6.669034296168854e-07,
857
+ "loss": 0.0319,
858
+ "num_tokens": 39453897.0,
859
+ "reward": 1.709821492433548,
860
+ "reward_std": 0.14528072997927666,
861
+ "rewards/accuracy_reward": 0.71875,
862
+ "rewards/format_reward": 0.9910714253783226,
863
+ "step": 61
864
+ },
865
+ {
866
+ "clip_ratio": 0.0,
867
+ "completion_length": 612.7120742797852,
868
+ "epoch": 0.9253731343283582,
869
+ "grad_norm": 0.18538488447666168,
870
+ "learning_rate": 6.545084971874736e-07,
871
+ "loss": 0.0169,
872
+ "num_tokens": 40128303.0,
873
+ "reward": 1.7712054252624512,
874
+ "reward_std": 0.16406728280708194,
875
+ "rewards/accuracy_reward": 0.7812499925494194,
876
+ "rewards/format_reward": 0.9899553433060646,
877
+ "step": 62
878
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 678.2556076049805,
+ "epoch": 0.9402985074626866,
+ "grad_norm": 0.1315467208623886,
+ "learning_rate": 6.420076723519614e-07,
+ "loss": 0.0448,
+ "num_tokens": 40865532.0,
+ "reward": 1.7254465073347092,
+ "reward_std": 0.2296902798116207,
+ "rewards/accuracy_reward": 0.7544642984867096,
+ "rewards/format_reward": 0.9709821417927742,
+ "step": 63
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 583.9475708007812,
+ "epoch": 0.9552238805970149,
+ "grad_norm": 0.1169319748878479,
+ "learning_rate": 6.294095225512604e-07,
+ "loss": 0.0223,
+ "num_tokens": 41525429.0,
+ "reward": 1.7522322088479996,
+ "reward_std": 0.18113385140895844,
+ "rewards/accuracy_reward": 0.7622767761349678,
+ "rewards/format_reward": 0.9899553507566452,
+ "step": 64
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 596.311408996582,
+ "epoch": 0.9701492537313433,
+ "grad_norm": 0.12513795495033264,
+ "learning_rate": 6.167226819279527e-07,
+ "loss": 0.0178,
+ "num_tokens": 42192916.0,
+ "reward": 1.7477679252624512,
+ "reward_std": 0.14999555610120296,
+ "rewards/accuracy_reward": 0.753348208963871,
+ "rewards/format_reward": 0.9944196343421936,
+ "step": 65
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 649.2395858764648,
+ "epoch": 0.9850746268656716,
+ "grad_norm": 0.6537142992019653,
+ "learning_rate": 6.039558454088795e-07,
+ "loss": 0.0283,
+ "num_tokens": 42898514.0,
+ "reward": 1.733258992433548,
+ "reward_std": 0.15077468007802963,
+ "rewards/accuracy_reward": 0.7455357164144516,
+ "rewards/format_reward": 0.9877232164144516,
+ "step": 66
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 617.8482437133789,
+ "epoch": 1.0149253731343284,
+ "grad_norm": 0.09364171326160431,
+ "learning_rate": 5.911177627460738e-07,
+ "loss": 0.0312,
+ "num_tokens": 43570490.0,
+ "reward": 1.7712054550647736,
+ "reward_std": 0.16040645446628332,
+ "rewards/accuracy_reward": 0.7924107164144516,
+ "rewards/format_reward": 0.9787946343421936,
+ "step": 67
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 664.5725784301758,
+ "epoch": 1.0298507462686568,
+ "grad_norm": 0.09059074521064758,
+ "learning_rate": 5.782172325201155e-07,
+ "loss": 0.04,
+ "num_tokens": 44298899.0,
+ "reward": 1.7410715073347092,
+ "reward_std": 0.1766328364610672,
+ "rewards/accuracy_reward": 0.7555803656578064,
+ "rewards/format_reward": 0.9854910671710968,
+ "step": 68
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 597.5100708007812,
+ "epoch": 1.044776119402985,
+ "grad_norm": 0.10916374623775482,
+ "learning_rate": 5.652630961100258e-07,
+ "loss": 0.0065,
+ "num_tokens": 44959604.0,
+ "reward": 1.7511161714792252,
+ "reward_std": 0.14327580528333783,
+ "rewards/accuracy_reward": 0.754464291036129,
+ "rewards/format_reward": 0.9966517835855484,
+ "step": 69
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 612.0524826049805,
+ "epoch": 1.0597014925373134,
+ "grad_norm": 0.11315930634737015,
+ "learning_rate": 5.522642316338268e-07,
+ "loss": 0.0129,
+ "num_tokens": 45635299.0,
+ "reward": 1.7700893580913544,
+ "reward_std": 0.17250201851129532,
+ "rewards/accuracy_reward": 0.776785708963871,
+ "rewards/format_reward": 0.9933035671710968,
+ "step": 70
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 597.2611808776855,
+ "epoch": 1.0746268656716418,
+ "grad_norm": 0.12196239084005356,
+ "learning_rate": 5.392295478639225e-07,
+ "loss": 0.0234,
+ "num_tokens": 46305437.0,
+ "reward": 1.7700893729925156,
+ "reward_std": 0.15816278103739023,
+ "rewards/accuracy_reward": 0.7756696417927742,
+ "rewards/format_reward": 0.9944196417927742,
+ "step": 71
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 642.2678756713867,
+ "epoch": 1.0895522388059702,
+ "grad_norm": 0.11447075009346008,
+ "learning_rate": 5.26167978121472e-07,
+ "loss": 0.0379,
+ "num_tokens": 47003701.0,
+ "reward": 1.75334832072258,
+ "reward_std": 0.17948786355555058,
+ "rewards/accuracy_reward": 0.7667410746216774,
+ "rewards/format_reward": 0.9866071343421936,
+ "step": 72
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 640.4107437133789,
+ "epoch": 1.1044776119402986,
+ "grad_norm": 0.11286026239395142,
+ "learning_rate": 5.130884741539366e-07,
+ "loss": 0.029,
+ "num_tokens": 47710189.0,
+ "reward": 1.6953125894069672,
+ "reward_std": 0.20528794033452868,
+ "rewards/accuracy_reward": 0.7042410671710968,
+ "rewards/format_reward": 0.991071417927742,
+ "step": 73
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 616.1919898986816,
+ "epoch": 1.1194029850746268,
+ "grad_norm": 0.11942121386528015,
+ "learning_rate": 5e-07,
+ "loss": 0.019,
+ "num_tokens": 48375529.0,
+ "reward": 1.7991072237491608,
+ "reward_std": 0.17037044698372483,
+ "rewards/accuracy_reward": 0.8058035746216774,
+ "rewards/format_reward": 0.9933035671710968,
+ "step": 74
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 626.8359680175781,
+ "epoch": 1.1343283582089552,
+ "grad_norm": 0.12714813649654388,
+ "learning_rate": 4.869115258460634e-07,
+ "loss": 0.0344,
+ "num_tokens": 49061558.0,
+ "reward": 1.7343750894069672,
+ "reward_std": 0.17231870535761118,
+ "rewards/accuracy_reward": 0.7488839253783226,
+ "rewards/format_reward": 0.9854910597205162,
+ "step": 75
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 593.3370780944824,
+ "epoch": 1.1492537313432836,
+ "grad_norm": 0.09834135323762894,
+ "learning_rate": 4.7383202187852804e-07,
+ "loss": 0.0158,
+ "num_tokens": 49720004.0,
+ "reward": 1.8203125894069672,
+ "reward_std": 0.12710504699498415,
+ "rewards/accuracy_reward": 0.8325892984867096,
+ "rewards/format_reward": 0.987723208963871,
+ "step": 76
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 532.5000267028809,
+ "epoch": 1.164179104477612,
+ "grad_norm": 0.10094507038593292,
+ "learning_rate": 4.6077045213607755e-07,
+ "loss": 0.0103,
+ "num_tokens": 50322556.0,
+ "reward": 1.845982238650322,
+ "reward_std": 0.10092606954276562,
+ "rewards/accuracy_reward": 0.8459821417927742,
+ "rewards/format_reward": 1.0,
+ "step": 77
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 619.5044860839844,
+ "epoch": 1.1791044776119404,
+ "grad_norm": 0.09271937608718872,
+ "learning_rate": 4.477357683661733e-07,
+ "loss": 0.0121,
+ "num_tokens": 51004440.0,
+ "reward": 1.712053656578064,
+ "reward_std": 0.13478131592273712,
+ "rewards/accuracy_reward": 0.7243303582072258,
+ "rewards/format_reward": 0.987723208963871,
+ "step": 78
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 661.3080673217773,
+ "epoch": 1.1940298507462686,
+ "grad_norm": 0.1302761286497116,
+ "learning_rate": 4.347369038899743e-07,
+ "loss": 0.0196,
+ "num_tokens": 51728116.0,
+ "reward": 1.7321429401636124,
+ "reward_std": 0.16369346249848604,
+ "rewards/accuracy_reward": 0.7399553507566452,
+ "rewards/format_reward": 0.9921875,
+ "step": 79
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 673.4252510070801,
+ "epoch": 1.208955223880597,
+ "grad_norm": 0.10427302122116089,
+ "learning_rate": 4.2178276747988444e-07,
+ "loss": 0.0256,
+ "num_tokens": 52457921.0,
+ "reward": 1.6930804401636124,
+ "reward_std": 0.13809803500771523,
+ "rewards/accuracy_reward": 0.7008928582072258,
+ "rewards/format_reward": 0.9921874925494194,
+ "step": 80
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 632.3080596923828,
+ "epoch": 1.2238805970149254,
+ "grad_norm": 0.09508039057254791,
+ "learning_rate": 4.0888223725392624e-07,
+ "loss": 0.0227,
+ "num_tokens": 53149381.0,
+ "reward": 1.742187574505806,
+ "reward_std": 0.147241884842515,
+ "rewards/accuracy_reward": 0.75,
+ "rewards/format_reward": 0.9921874850988388,
+ "step": 81
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 602.2265930175781,
+ "epoch": 1.2388059701492538,
+ "grad_norm": 0.09547509253025055,
+ "learning_rate": 3.960441545911204e-07,
+ "loss": 0.0134,
+ "num_tokens": 53826080.0,
+ "reward": 1.7991072088479996,
+ "reward_std": 0.1459453795105219,
+ "rewards/accuracy_reward": 0.8002232015132904,
+ "rewards/format_reward": 0.9988839253783226,
+ "step": 82
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 634.0826187133789,
+ "epoch": 1.2537313432835822,
+ "grad_norm": 0.0878840833902359,
+ "learning_rate": 3.8327731807204744e-07,
+ "loss": 0.0183,
+ "num_tokens": 54522986.0,
+ "reward": 1.7377232760190964,
+ "reward_std": 0.17751624016091228,
+ "rewards/accuracy_reward": 0.745535708963871,
+ "rewards/format_reward": 0.9921874925494194,
+ "step": 83
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 627.9698944091797,
+ "epoch": 1.2686567164179103,
+ "grad_norm": 0.1188497319817543,
+ "learning_rate": 3.7059047744873955e-07,
+ "loss": 0.0205,
+ "num_tokens": 55224271.0,
+ "reward": 1.7555804401636124,
+ "reward_std": 0.16282380744814873,
+ "rewards/accuracy_reward": 0.761160708963871,
+ "rewards/format_reward": 0.9944196417927742,
+ "step": 84
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 610.5402069091797,
+ "epoch": 1.2835820895522387,
+ "grad_norm": 0.0854121670126915,
+ "learning_rate": 3.5799232764803867e-07,
+ "loss": 0.0124,
+ "num_tokens": 55889819.0,
+ "reward": 1.8024554699659348,
+ "reward_std": 0.12497595604509115,
+ "rewards/accuracy_reward": 0.8035714328289032,
+ "rewards/format_reward": 0.9988839253783226,
+ "step": 85
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 610.6161003112793,
+ "epoch": 1.2985074626865671,
+ "grad_norm": 0.1087036058306694,
+ "learning_rate": 3.454915028125263e-07,
+ "loss": 0.0278,
+ "num_tokens": 56559627.0,
+ "reward": 1.710937574505806,
+ "reward_std": 0.15050204377621412,
+ "rewards/accuracy_reward": 0.71875,
+ "rewards/format_reward": 0.9921874925494194,
+ "step": 86
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 659.1116409301758,
+ "epoch": 1.3134328358208955,
+ "grad_norm": 0.09039624035358429,
+ "learning_rate": 3.330965703831146e-07,
+ "loss": 0.0385,
+ "num_tokens": 57282351.0,
+ "reward": 1.7176340222358704,
+ "reward_std": 0.16426212899386883,
+ "rewards/accuracy_reward": 0.7321428582072258,
+ "rewards/format_reward": 0.9854910671710968,
+ "step": 87
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 578.8861999511719,
+ "epoch": 1.328358208955224,
+ "grad_norm": 0.09018764644861221,
+ "learning_rate": 3.2081602522734985e-07,
+ "loss": 0.0131,
+ "num_tokens": 57939417.0,
+ "reward": 1.85491082072258,
+ "reward_std": 0.11039695050567389,
+ "rewards/accuracy_reward": 0.8560267835855484,
+ "rewards/format_reward": 0.9988839253783226,
+ "step": 88
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 578.7924346923828,
+ "epoch": 1.3432835820895521,
+ "grad_norm": 0.12793904542922974,
+ "learning_rate": 3.086582838174551e-07,
+ "loss": 0.0431,
+ "num_tokens": 58583903.0,
+ "reward": 1.77678582072258,
+ "reward_std": 0.1774543970823288,
+ "rewards/accuracy_reward": 0.7834821417927742,
+ "rewards/format_reward": 0.9933035597205162,
+ "step": 89
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 635.9888610839844,
+ "epoch": 1.3582089552238805,
+ "grad_norm": 0.10590506345033646,
+ "learning_rate": 2.9663167846209996e-07,
+ "loss": 0.0308,
+ "num_tokens": 59286877.0,
+ "reward": 1.705357238650322,
+ "reward_std": 0.16925656888633966,
+ "rewards/accuracy_reward": 0.71875,
+ "rewards/format_reward": 0.9866071417927742,
+ "step": 90
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 559.2232398986816,
+ "epoch": 1.373134328358209,
+ "grad_norm": 0.11253120750188828,
+ "learning_rate": 2.847444515958523e-07,
+ "loss": 0.0138,
+ "num_tokens": 59920069.0,
+ "reward": 1.8337054401636124,
+ "reward_std": 0.1393702062778175,
+ "rewards/accuracy_reward": 0.8370535671710968,
+ "rewards/format_reward": 0.9966517761349678,
+ "step": 91
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 627.5044937133789,
+ "epoch": 1.3880597014925373,
+ "grad_norm": 0.10160024464130402,
+ "learning_rate": 2.730047501302266e-07,
+ "loss": 0.0299,
+ "num_tokens": 60608377.0,
+ "reward": 1.7879465222358704,
+ "reward_std": 0.1563611626625061,
+ "rewards/accuracy_reward": 0.8024553507566452,
+ "rewards/format_reward": 0.9854910671710968,
+ "step": 92
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 625.7399826049805,
+ "epoch": 1.4029850746268657,
+ "grad_norm": 0.09554547071456909,
+ "learning_rate": 2.6142061987019574e-07,
+ "loss": 0.0153,
+ "num_tokens": 61288944.0,
+ "reward": 1.783482238650322,
+ "reward_std": 0.1566515825688839,
+ "rewards/accuracy_reward": 0.7890625,
+ "rewards/format_reward": 0.9944196417927742,
+ "step": 93
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 581.5256996154785,
+ "epoch": 1.417910447761194,
+ "grad_norm": 0.08792009949684143,
+ "learning_rate": 2.500000000000001e-07,
+ "loss": 0.0273,
+ "num_tokens": 61952575.0,
+ "reward": 1.7310268878936768,
+ "reward_std": 0.14925330318510532,
+ "rewards/accuracy_reward": 0.7377232164144516,
+ "rewards/format_reward": 0.9933035671710968,
+ "step": 94
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 628.5502471923828,
+ "epoch": 1.4328358208955223,
+ "grad_norm": 0.12849754095077515,
+ "learning_rate": 2.387507176420256e-07,
+ "loss": 0.0301,
+ "num_tokens": 62647396.0,
+ "reward": 1.7087054401636124,
+ "reward_std": 0.1948286723345518,
+ "rewards/accuracy_reward": 0.7310267835855484,
+ "rewards/format_reward": 0.9776785597205162,
+ "step": 95
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 645.8973579406738,
+ "epoch": 1.4477611940298507,
+ "grad_norm": 0.30540189146995544,
+ "learning_rate": 2.2768048249248644e-07,
+ "loss": 0.0235,
+ "num_tokens": 63355344.0,
+ "reward": 1.7477679252624512,
+ "reward_std": 0.16450455971062183,
+ "rewards/accuracy_reward": 0.7533482164144516,
+ "rewards/format_reward": 0.9944196343421936,
+ "step": 96
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 581.8024864196777,
+ "epoch": 1.462686567164179,
+ "grad_norm": 0.09905046969652176,
+ "learning_rate": 2.167968815375837e-07,
+ "loss": 0.0151,
+ "num_tokens": 64012431.0,
+ "reward": 1.7756697088479996,
+ "reward_std": 0.13652249239385128,
+ "rewards/accuracy_reward": 0.78125,
+ "rewards/format_reward": 0.9944196417927742,
+ "step": 97
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 658.9587326049805,
+ "epoch": 1.4776119402985075,
+ "grad_norm": 0.10181548446416855,
+ "learning_rate": 2.0610737385376348e-07,
+ "loss": 0.0381,
+ "num_tokens": 64744754.0,
+ "reward": 1.699776828289032,
+ "reward_std": 0.19275180995464325,
+ "rewards/accuracy_reward": 0.7098214253783226,
+ "rewards/format_reward": 0.9899553507566452,
+ "step": 98
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 672.4922180175781,
+ "epoch": 1.4925373134328357,
+ "grad_norm": 0.10780028253793716,
+ "learning_rate": 1.9561928549563966e-07,
+ "loss": 0.0107,
+ "num_tokens": 65485395.0,
+ "reward": 1.633928656578064,
+ "reward_std": 0.1676093488931656,
+ "rewards/accuracy_reward": 0.6350446492433548,
+ "rewards/format_reward": 0.9988839253783226,
+ "step": 99
+ },
+ {
+ "epoch": 1.5074626865671643,
+ "grad_norm": 0.09826894104480743,
+ "learning_rate": 1.8533980447508135e-07,
+ "loss": 0.0277,
+ "step": 100
+ },
+ {
+ "epoch": 1.5074626865671643,
+ "eval_clip_ratio": 0.0,
+ "eval_completion_length": 608.394744617313,
+ "eval_loss": 0.02166515402495861,
+ "eval_num_tokens": 66141067.0,
+ "eval_reward": 1.7128642008291277,
+ "eval_reward_std": 0.17956038917201525,
+ "eval_rewards/accuracy_reward": 0.7203461689323021,
+ "eval_rewards/format_reward": 0.99251795181349,
+ "eval_runtime": 7500.3981,
+ "eval_samples_per_second": 0.667,
+ "eval_steps_per_second": 0.006,
+ "step": 100
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 610.5853996276855,
+ "epoch": 1.5223880597014925,
+ "grad_norm": 0.09803326427936554,
+ "learning_rate": 1.7527597583490823e-07,
+ "loss": 0.0144,
+ "num_tokens": 66825204.0,
+ "reward": 1.758370615541935,
+ "reward_std": 0.14415120193734765,
+ "rewards/accuracy_reward": 0.7678571417927742,
+ "rewards/format_reward": 0.9905133880674839,
+ "step": 101
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 589.2846298217773,
+ "epoch": 1.537313432835821,
+ "grad_norm": 0.09236069023609161,
+ "learning_rate": 1.6543469682057104e-07,
+ "loss": 0.023,
+ "num_tokens": 67489387.0,
+ "reward": 1.7488840073347092,
+ "reward_std": 0.12999783549457788,
+ "rewards/accuracy_reward": 0.7578124925494194,
+ "rewards/format_reward": 0.9910714253783226,
+ "step": 102
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 631.8917617797852,
+ "epoch": 1.5522388059701493,
+ "grad_norm": 0.09945357590913773,
+ "learning_rate": 1.5582271215312293e-07,
+ "loss": 0.0348,
+ "num_tokens": 68182122.0,
+ "reward": 1.7488840073347092,
+ "reward_std": 0.14755096472799778,
+ "rewards/accuracy_reward": 0.7589285746216774,
+ "rewards/format_reward": 0.9899553433060646,
+ "step": 103
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 585.780158996582,
+ "epoch": 1.5671641791044775,
+ "grad_norm": 0.1044122576713562,
+ "learning_rate": 1.4644660940672627e-07,
+ "loss": 0.0259,
+ "num_tokens": 68843053.0,
+ "reward": 1.765625074505806,
+ "reward_std": 0.15429440513253212,
+ "rewards/accuracy_reward": 0.7834821417927742,
+ "rewards/format_reward": 0.9821428582072258,
+ "step": 104
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 621.1797103881836,
+ "epoch": 1.582089552238806,
+ "grad_norm": 0.11087001115083694,
+ "learning_rate": 1.3731281449385628e-07,
+ "loss": 0.0144,
+ "num_tokens": 69536934.0,
+ "reward": 1.7779018729925156,
+ "reward_std": 0.1481210682541132,
+ "rewards/accuracy_reward": 0.7823660671710968,
+ "rewards/format_reward": 0.995535708963871,
+ "step": 105
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 660.2511444091797,
+ "epoch": 1.5970149253731343,
+ "grad_norm": 0.09636900573968887,
+ "learning_rate": 1.284275872613028e-07,
+ "loss": 0.0152,
+ "num_tokens": 70259407.0,
+ "reward": 1.7109375894069672,
+ "reward_std": 0.17005854472517967,
+ "rewards/accuracy_reward": 0.7220982164144516,
+ "rewards/format_reward": 0.9888392761349678,
+ "step": 106
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 674.9152069091797,
+ "epoch": 1.6119402985074627,
+ "grad_norm": 0.10703866928815842,
+ "learning_rate": 1.1979701719998454e-07,
+ "loss": 0.0272,
+ "num_tokens": 70998891.0,
+ "reward": 1.7343750894069672,
+ "reward_std": 0.18828168138861656,
+ "rewards/accuracy_reward": 0.7444196343421936,
+ "rewards/format_reward": 0.9899553433060646,
+ "step": 107
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 604.8225708007812,
+ "epoch": 1.626865671641791,
+ "grad_norm": 0.107986681163311,
+ "learning_rate": 1.1142701927151454e-07,
+ "loss": 0.0242,
+ "num_tokens": 71660740.0,
+ "reward": 1.805803656578064,
+ "reward_std": 0.1751266662031412,
+ "rewards/accuracy_reward": 0.8125000074505806,
+ "rewards/format_reward": 0.9933035522699356,
+ "step": 108
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 639.0335083007812,
+ "epoch": 1.6417910447761193,
+ "grad_norm": 0.09717655181884766,
+ "learning_rate": 1.0332332985438247e-07,
+ "loss": 0.0221,
+ "num_tokens": 72367578.0,
+ "reward": 1.7700893580913544,
+ "reward_std": 0.16510999109596014,
+ "rewards/accuracy_reward": 0.7779017835855484,
+ "rewards/format_reward": 0.9921874850988388,
+ "step": 109
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 596.9654197692871,
+ "epoch": 1.6567164179104479,
+ "grad_norm": 0.09363789856433868,
+ "learning_rate": 9.549150281252632e-08,
+ "loss": 0.0057,
+ "num_tokens": 73028371.0,
+ "reward": 1.7689733058214188,
+ "reward_std": 0.1158353891223669,
+ "rewards/accuracy_reward": 0.7745535671710968,
+ "rewards/format_reward": 0.9944196417927742,
+ "step": 110
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 646.8337478637695,
+ "epoch": 1.671641791044776,
+ "grad_norm": 0.10138875991106033,
+ "learning_rate": 8.793690568899215e-08,
+ "loss": 0.0239,
+ "num_tokens": 73731510.0,
+ "reward": 1.7578125894069672,
+ "reward_std": 0.15553954057395458,
+ "rewards/accuracy_reward": 0.762276791036129,
+ "rewards/format_reward": 0.995535708963871,
+ "step": 111
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 651.131721496582,
+ "epoch": 1.6865671641791045,
+ "grad_norm": 0.08721350878477097,
+ "learning_rate": 8.066471602728803e-08,
+ "loss": 0.0261,
+ "num_tokens": 74432588.0,
+ "reward": 1.7589286416769028,
+ "reward_std": 0.14256859384477139,
+ "rewards/accuracy_reward": 0.7712053507566452,
+ "rewards/format_reward": 0.9877232164144516,
+ "step": 112
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 607.5703277587891,
+ "epoch": 1.7014925373134329,
+ "grad_norm": 0.10357489436864853,
+ "learning_rate": 7.36799178229539e-08,
+ "loss": 0.0318,
+ "num_tokens": 75104779.0,
+ "reward": 1.7220982760190964,
+ "reward_std": 0.16208719182759523,
+ "rewards/accuracy_reward": 0.7377232164144516,
+ "rewards/format_reward": 0.984375,
+ "step": 113
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 582.2377395629883,
+ "epoch": 1.716417910447761,
+ "grad_norm": 0.09729992598295212,
+ "learning_rate": 6.698729810778064e-08,
+ "loss": 0.015,
+ "num_tokens": 75758304.0,
+ "reward": 1.7935268580913544,
+ "reward_std": 0.1367027312517166,
+ "rewards/accuracy_reward": 0.7957589328289032,
+ "rewards/format_reward": 0.9977678582072258,
+ "step": 114
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 644.0134124755859,
+ "epoch": 1.7313432835820897,
+ "grad_norm": 0.10120779275894165,
+ "learning_rate": 6.059144366901736e-08,
+ "loss": 0.0228,
+ "num_tokens": 76463884.0,
+ "reward": 1.7354911714792252,
+ "reward_std": 0.14026323426514864,
+ "rewards/accuracy_reward": 0.7488839253783226,
+ "rewards/format_reward": 0.9866071343421936,
+ "step": 115
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 577.9464569091797,
+ "epoch": 1.7462686567164178,
+ "grad_norm": 0.08987674117088318,
+ "learning_rate": 5.44967379058161e-08,
+ "loss": 0.0337,
+ "num_tokens": 77102308.0,
+ "reward": 1.7555804550647736,
+ "reward_std": 0.1271761180832982,
+ "rewards/accuracy_reward": 0.762276791036129,
+ "rewards/format_reward": 0.9933035597205162,
+ "step": 116
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 644.6161041259766,
+ "epoch": 1.7611940298507462,
+ "grad_norm": 0.11021538078784943,
+ "learning_rate": 4.870735782506979e-08,
+ "loss": 0.0105,
+ "num_tokens": 77835332.0,
+ "reward": 1.7142857909202576,
+ "reward_std": 0.18193747848272324,
+ "rewards/accuracy_reward": 0.7165178582072258,
+ "rewards/format_reward": 0.9977678507566452,
+ "step": 117
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 640.1942176818848,
+ "epoch": 1.7761194029850746,
+ "grad_norm": 0.12281131744384766,
+ "learning_rate": 4.322727117869951e-08,
+ "loss": 0.0103,
+ "num_tokens": 78535354.0,
+ "reward": 1.7522322237491608,
+ "reward_std": 0.15158831980079412,
+ "rewards/accuracy_reward": 0.7555803507566452,
+ "rewards/format_reward": 0.9966517835855484,
+ "step": 118
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 620.7723617553711,
+ "epoch": 1.7910447761194028,
+ "grad_norm": 0.09940478950738907,
+ "learning_rate": 3.806023374435663e-08,
+ "loss": 0.0351,
+ "num_tokens": 79216974.0,
+ "reward": 1.7254465073347092,
+ "reward_std": 0.16859996505081654,
+ "rewards/accuracy_reward": 0.7366071492433548,
+ "rewards/format_reward": 0.9888392835855484,
+ "step": 119
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 641.8460083007812,
+ "epoch": 1.8059701492537314,
+ "grad_norm": 0.09752647578716278,
+ "learning_rate": 3.3209786751399184e-08,
+ "loss": 0.0242,
+ "num_tokens": 79931012.0,
+ "reward": 1.7723215073347092,
+ "reward_std": 0.17135184817016125,
+ "rewards/accuracy_reward": 0.7845982164144516,
+ "rewards/format_reward": 0.9877232164144516,
+ "step": 120
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 605.100471496582,
+ "epoch": 1.8208955223880596,
+ "grad_norm": 0.09651315957307816,
+ "learning_rate": 2.8679254453910785e-08,
+ "loss": 0.025,
+ "num_tokens": 80603486.0,
+ "reward": 1.7366072237491608,
+ "reward_std": 0.14862325228750706,
+ "rewards/accuracy_reward": 0.7410714328289032,
+ "rewards/format_reward": 0.9955357164144516,
+ "step": 121
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 651.4765853881836,
+ "epoch": 1.835820895522388,
+ "grad_norm": 0.09396825730800629,
+ "learning_rate": 2.4471741852423233e-08,
+ "loss": 0.0172,
+ "num_tokens": 81334001.0,
+ "reward": 1.7354911714792252,
+ "reward_std": 0.16738921124488115,
+ "rewards/accuracy_reward": 0.7511160746216774,
+ "rewards/format_reward": 0.9843749850988388,
+ "step": 122
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 638.3326187133789,
+ "epoch": 1.8507462686567164,
+ "grad_norm": 0.1153474748134613,
+ "learning_rate": 2.0590132565903473e-08,
+ "loss": 0.0411,
+ "num_tokens": 82038987.0,
+ "reward": 1.7734375894069672,
+ "reward_std": 0.2178222257643938,
+ "rewards/accuracy_reward": 0.7868303507566452,
+ "rewards/format_reward": 0.9866071417927742,
+ "step": 123
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 599.5000228881836,
+ "epoch": 1.8656716417910446,
+ "grad_norm": 0.10678599029779434,
+ "learning_rate": 1.7037086855465898e-08,
+ "loss": 0.0201,
+ "num_tokens": 82700603.0,
+ "reward": 1.7354911714792252,
+ "reward_std": 0.1632389174774289,
+ "rewards/accuracy_reward": 0.7421874925494194,
+ "rewards/format_reward": 0.9933035597205162,
+ "step": 124
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 644.1205596923828,
+ "epoch": 1.8805970149253732,
+ "grad_norm": 0.08962120860815048,
+ "learning_rate": 1.3815039801161722e-08,
+ "loss": 0.0185,
+ "num_tokens": 83404503.0,
+ "reward": 1.750000074505806,
+ "reward_std": 0.15682824235409498,
+ "rewards/accuracy_reward": 0.7566964253783226,
+ "rewards/format_reward": 0.9933035671710968,
+ "step": 125
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 565.2109603881836,
+ "epoch": 1.8955223880597014,
+ "grad_norm": 0.10807648301124573,
+ "learning_rate": 1.0926199633097154e-08,
1775
+ "loss": 0.0042,
1776
+ "num_tokens": 84037108.0,
1777
+ "reward": 1.8046875894069672,
1778
+ "reward_std": 0.13849271647632122,
1779
+ "rewards/accuracy_reward": 0.8080357164144516,
1780
+ "rewards/format_reward": 0.9966517835855484,
1781
+ "step": 126
1782
+ },
1783
+ {
1784
+ "clip_ratio": 0.0,
1785
+ "completion_length": 633.560302734375,
1786
+ "epoch": 1.9104477611940298,
1787
+ "grad_norm": 0.10242209583520889,
1788
+ "learning_rate": 8.372546218022746e-09,
1789
+ "loss": 0.0202,
1790
+ "num_tokens": 84741714.0,
1791
+ "reward": 1.7120536714792252,
1792
+ "reward_std": 0.158757739700377,
1793
+ "rewards/accuracy_reward": 0.7187500074505806,
1794
+ "rewards/format_reward": 0.9933035671710968,
1795
+ "step": 127
1796
+ },
1797
+ {
1798
+ "clip_ratio": 0.0,
1799
+ "completion_length": 597.8248062133789,
1800
+ "epoch": 1.9253731343283582,
1801
+ "grad_norm": 0.09526298940181732,
1802
+ "learning_rate": 6.15582970243117e-09,
1803
+ "loss": 0.0091,
1804
+ "num_tokens": 85407733.0,
1805
+ "reward": 1.8069197237491608,
1806
+ "reward_std": 0.11288107186555862,
1807
+ "rewards/accuracy_reward": 0.8069196492433548,
1808
+ "rewards/format_reward": 1.0,
1809
+ "step": 128
1810
+ },
1811
+ {
1812
+ "clip_ratio": 0.0,
1813
+ "completion_length": 644.9765853881836,
1814
+ "epoch": 1.9402985074626866,
1815
+ "grad_norm": 0.09915687888860703,
1816
+ "learning_rate": 4.277569313094809e-09,
1817
+ "loss": 0.0311,
1818
+ "num_tokens": 86108960.0,
1819
+ "reward": 1.7622768729925156,
1820
+ "reward_std": 0.1561026033014059,
1821
+ "rewards/accuracy_reward": 0.7756696343421936,
1822
+ "rewards/format_reward": 0.9866071343421936,
1823
+ "step": 129
1824
+ },
1825
+ {
1826
+ "clip_ratio": 0.0,
1827
+ "completion_length": 646.7611846923828,
1828
+ "epoch": 1.955223880597015,
1829
+ "grad_norm": 0.09854050725698471,
1830
+ "learning_rate": 2.739052315863355e-09,
1831
+ "loss": 0.007,
1832
+ "num_tokens": 86830202.0,
1833
+ "reward": 1.6919643580913544,
1834
+ "reward_std": 0.14856339246034622,
1835
+ "rewards/accuracy_reward": 0.6953125074505806,
1836
+ "rewards/format_reward": 0.9966517835855484,
1837
+ "step": 130
1838
+ },
1839
+ {
1840
+ "clip_ratio": 0.0,
1841
+ "completion_length": 597.3638725280762,
1842
+ "epoch": 1.9701492537313432,
1843
+ "grad_norm": 0.11233126372098923,
1844
+ "learning_rate": 1.541333133436018e-09,
1845
+ "loss": -0.002,
1846
+ "num_tokens": 87492608.0,
1847
+ "reward": 1.7845982909202576,
1848
+ "reward_std": 0.14027062244713306,
1849
+ "rewards/accuracy_reward": 0.7868303582072258,
1850
+ "rewards/format_reward": 0.9977678507566452,
1851
+ "step": 131
1852
+ },
1853
+ {
1854
+ "clip_ratio": 0.0,
1855
+ "completion_length": 627.9294013977051,
1856
+ "epoch": 1.9850746268656716,
1857
+ "grad_norm": 0.10069043189287186,
1858
+ "learning_rate": 6.852326227130833e-10,
1859
+ "loss": 0.0303,
1860
+ "num_tokens": 88193234.0,
1861
+ "reward": 1.7500000894069672,
1862
+ "reward_std": 0.14365093689411879,
1863
+ "rewards/accuracy_reward": 0.7555803582072258,
1864
+ "rewards/format_reward": 0.9944196343421936,
1865
+ "step": 132
1866
+ },
1867
+ {
1868
+ "epoch": 1.9850746268656716,
1869
+ "step": 132,
1870
+ "total_flos": 0.0,
1871
+ "train_loss": 0.023034477807496758,
1872
+ "train_runtime": 35029.9588,
1873
+ "train_samples_per_second": 0.428,
1874
+ "train_steps_per_second": 0.004
1875
+ }
1876
+ ],
1877
+ "logging_steps": 1,
1878
+ "max_steps": 134,
1879
+ "num_input_tokens_seen": 0,
1880
+ "num_train_epochs": 2,
1881
+ "save_steps": 500,
1882
+ "stateful_callbacks": {
1883
+ "TrainerControl": {
1884
+ "args": {
1885
+ "should_epoch_stop": false,
1886
+ "should_evaluate": false,
1887
+ "should_log": false,
1888
+ "should_save": true,
1889
+ "should_training_stop": false
1890
+ },
1891
+ "attributes": {}
1892
+ }
1893
+ },
1894
+ "total_flos": 0.0,
1895
+ "train_batch_size": 16,
1896
+ "trial_name": null,
1897
+ "trial_params": null
1898
+ }