Lansechen committed on
Commit 844318c · verified · 1 Parent(s): 421f27d

Model save
README.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ base_model: Qwen/Qwen2.5-7B
+ library_name: transformers
+ model_name: Qwen2.5-7B-Open-R1-GRPO-math-lighteval-cosine
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-7B-Open-R1-GRPO-math-lighteval-cosine
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="Lansechen/Qwen2.5-7B-Open-R1-GRPO-math-lighteval-cosine", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenran1995-the-chinese-university-of-hong-kong/huggingface/runs/kgymgtyl)
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.16.0
+ - Transformers: 4.49.0
+ - Pytorch: 2.5.1+cu121
+ - Datasets: 3.3.1
+ - Tokenizers: 0.21.0
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+ title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+ author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+ year = 2024,
+ eprint = {arXiv:2402.03300},
+ }
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
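The model card above names GRPO as the training method. At its core, GRPO drops the learned value baseline of PPO-style methods: several completions are sampled for each prompt, and each completion's reward is standardized against its group's mean and standard deviation to form the advantage. A minimal sketch of that normalization step in plain Python (the `eps` stabilizer and the 4-sample group are illustrative assumptions, not values read from this run):

```python
def group_relative_advantages(rewards, eps=1e-4):
    """Standardize per-completion rewards within one prompt's group.

    GRPO uses these normalized scores as advantages in place of a
    critic's value estimate. `eps` guards against a zero standard
    deviation (illustrative choice).
    """
    mu = sum(rewards) / len(rewards)
    std = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mu) / (std + eps) for r in rewards]

# One group of 4 completions sampled for the same prompt:
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Completions scoring above the group mean receive positive advantages and are reinforced; those below receive negative ones, so the baseline comes for free from the group itself.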
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 0.02113412496052633,
+ "train_runtime": 52235.1468,
+ "train_samples": 7500,
+ "train_samples_per_second": 0.287,
+ "train_steps_per_second": 0.003
+ }
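The reported throughput is internally consistent with the other figures if the run covered two epochs: the Hugging Face Trainer multiplies `train_samples` by the configured number of epochs before dividing by the runtime, and trainer_state.json logs a final epoch of ~1.99. A quick check (the `num_train_epochs = 2` value is an assumption inferred from that logged epoch, not stated in this file):

```python
# Figures copied from all_results.json; num_train_epochs is assumed.
train_runtime = 52235.1468   # seconds
train_samples = 7500
num_train_epochs = 2

samples_per_second = num_train_epochs * train_samples / train_runtime
print(round(samples_per_second, 3))  # → 0.287, matching the report
```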
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "max_new_tokens": 2048,
+ "transformers_version": "4.49.0"
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 0.02113412496052633,
+ "train_runtime": 52235.1468,
+ "train_samples": 7500,
+ "train_samples_per_second": 0.287,
+ "train_steps_per_second": 0.003
+ }
trainer_state.json ADDED
@@ -0,0 +1,2030 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 1.9850746268656716,
+ "eval_steps": 100,
+ "global_step": 132,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 468.4821586608887,
+ "epoch": 0.014925373134328358,
+ "grad_norm": 0.5261219143867493,
+ "learning_rate": 7.142857142857142e-08,
+ "loss": -0.0272,
+ "num_tokens": 546936.0,
+ "reward": 0.14720686484361067,
+ "reward_std": 0.6725633442401886,
+ "rewards/accuracy_reward": 0.20535713713616133,
+ "rewards/cosine_scaled_reward": -0.122882429510355,
+ "rewards/format_reward": 0.06473214365541935,
+ "step": 1
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 471.2355079650879,
+ "epoch": 0.029850746268656716,
+ "grad_norm": 0.41708439588546753,
+ "learning_rate": 1.4285714285714285e-07,
+ "loss": -0.0173,
+ "num_tokens": 1100635.0,
+ "reward": 0.20620827795937657,
+ "reward_std": 0.6912260502576828,
+ "rewards/accuracy_reward": 0.22991071362048388,
+ "rewards/cosine_scaled_reward": -0.08955066278576851,
+ "rewards/format_reward": 0.06584821548312902,
+ "step": 2
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 477.5357322692871,
+ "epoch": 0.04477611940298507,
+ "grad_norm": 0.4044201970100403,
+ "learning_rate": 2.1428571428571426e-07,
+ "loss": -0.0228,
+ "num_tokens": 1675411.0,
+ "reward": 0.22738021425902843,
+ "reward_std": 0.7466238886117935,
+ "rewards/accuracy_reward": 0.24330357275903225,
+ "rewards/cosine_scaled_reward": -0.08065551635809243,
+ "rewards/format_reward": 0.06473214365541935,
+ "step": 3
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 541.2355117797852,
+ "epoch": 0.05970149253731343,
+ "grad_norm": 0.794707715511322,
+ "learning_rate": 2.857142857142857e-07,
+ "loss": -0.041,
+ "num_tokens": 2289870.0,
+ "reward": 0.11700092989485711,
+ "reward_std": 0.6238975562155247,
+ "rewards/accuracy_reward": 0.19084821175783873,
+ "rewards/cosine_scaled_reward": -0.13188300124602392,
+ "rewards/format_reward": 0.05803571501746774,
+ "step": 4
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 530.3102874755859,
+ "epoch": 0.07462686567164178,
+ "grad_norm": 0.4885825216770172,
+ "learning_rate": 3.5714285714285716e-07,
+ "loss": 0.0208,
+ "num_tokens": 2902532.0,
+ "reward": 0.1085223974660039,
+ "reward_std": 0.618420671671629,
+ "rewards/accuracy_reward": 0.17075893166474998,
+ "rewards/cosine_scaled_reward": -0.14929011272033677,
+ "rewards/format_reward": 0.08705357159487903,
+ "step": 5
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 451.17859268188477,
+ "epoch": 0.08955223880597014,
+ "grad_norm": 0.5443016290664673,
+ "learning_rate": 4.285714285714285e-07,
+ "loss": -0.002,
+ "num_tokens": 3431276.0,
+ "reward": 0.20862307911738753,
+ "reward_std": 0.6792935952544212,
+ "rewards/accuracy_reward": 0.22879464365541935,
+ "rewards/cosine_scaled_reward": -0.10052871843799949,
+ "rewards/format_reward": 0.0803571434225887,
+ "step": 6
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 488.42748260498047,
+ "epoch": 0.1044776119402985,
+ "grad_norm": 0.50808185338974,
+ "learning_rate": 5e-07,
+ "loss": 0.0121,
+ "num_tokens": 3997955.0,
+ "reward": 0.18962736055254936,
+ "reward_std": 0.6948749274015427,
+ "rewards/accuracy_reward": 0.2075892873108387,
+ "rewards/cosine_scaled_reward": -0.11952443420886993,
+ "rewards/format_reward": 0.10156249906867743,
+ "step": 7
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 509.26230239868164,
+ "epoch": 0.11940298507462686,
+ "grad_norm": 0.4024961292743683,
+ "learning_rate": 5.714285714285714e-07,
+ "loss": -0.0025,
+ "num_tokens": 4571902.0,
+ "reward": 0.2240892630070448,
+ "reward_std": 0.711250901222229,
+ "rewards/accuracy_reward": 0.2232142835855484,
+ "rewards/cosine_scaled_reward": -0.09957145689986646,
+ "rewards/format_reward": 0.10044642817229033,
+ "step": 8
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 457.17078399658203,
+ "epoch": 0.13432835820895522,
+ "grad_norm": 0.4010670483112335,
+ "learning_rate": 6.428571428571429e-07,
+ "loss": -0.0008,
+ "num_tokens": 5106863.0,
+ "reward": 0.23711032513529062,
+ "reward_std": 0.7162540927529335,
+ "rewards/accuracy_reward": 0.2254464291036129,
+ "rewards/cosine_scaled_reward": -0.09324682882288471,
+ "rewards/format_reward": 0.10491071362048388,
+ "step": 9
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 447.43528747558594,
+ "epoch": 0.14925373134328357,
+ "grad_norm": 1.9963996410369873,
+ "learning_rate": 7.142857142857143e-07,
+ "loss": -0.0009,
+ "num_tokens": 5632405.0,
+ "reward": 0.30765493400394917,
+ "reward_std": 0.7187102138996124,
+ "rewards/accuracy_reward": 0.22879464365541935,
+ "rewards/cosine_scaled_reward": -0.10529151372611523,
+ "rewards/format_reward": 0.18415178498253226,
+ "step": 10
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 529.4140815734863,
+ "epoch": 0.16417910447761194,
+ "grad_norm": 1.5985616445541382,
+ "learning_rate": 7.857142857142856e-07,
+ "loss": -0.0056,
+ "num_tokens": 6236816.0,
+ "reward": 0.34489849023520947,
+ "reward_std": 0.7811058536171913,
+ "rewards/accuracy_reward": 0.2131696417927742,
+ "rewards/cosine_scaled_reward": -0.10153009975329041,
+ "rewards/format_reward": 0.23325892724096775,
+ "step": 11
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 495.9085006713867,
+ "epoch": 0.1791044776119403,
+ "grad_norm": 0.925566554069519,
+ "learning_rate": 8.57142857142857e-07,
+ "loss": -0.0093,
+ "num_tokens": 6811230.0,
+ "reward": 0.3949438240379095,
+ "reward_std": 0.7836679667234421,
+ "rewards/accuracy_reward": 0.2399553582072258,
+ "rewards/cosine_scaled_reward": -0.07045797364844475,
+ "rewards/format_reward": 0.2254464253783226,
+ "step": 12
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 517.7366333007812,
+ "epoch": 0.19402985074626866,
+ "grad_norm": 0.5331482887268066,
+ "learning_rate": 9.285714285714285e-07,
+ "loss": 0.0139,
+ "num_tokens": 7398306.0,
+ "reward": 0.6416976638138294,
+ "reward_std": 0.8412953615188599,
+ "rewards/accuracy_reward": 0.2790178544819355,
+ "rewards/cosine_scaled_reward": -0.05026664771139622,
+ "rewards/format_reward": 0.4129464291036129,
+ "step": 13
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 459.45761489868164,
+ "epoch": 0.208955223880597,
+ "grad_norm": 1.6511790752410889,
+ "learning_rate": 1e-06,
+ "loss": -0.0051,
+ "num_tokens": 7933852.0,
+ "reward": 0.6904712095856667,
+ "reward_std": 0.8300624415278435,
+ "rewards/accuracy_reward": 0.2589285708963871,
+ "rewards/cosine_scaled_reward": -0.07738597225397825,
+ "rewards/format_reward": 0.5089285783469677,
+ "step": 14
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 445.76007080078125,
+ "epoch": 0.22388059701492538,
+ "grad_norm": 1.0562385320663452,
+ "learning_rate": 9.998286624877785e-07,
+ "loss": -0.0069,
+ "num_tokens": 8447757.0,
+ "reward": 0.785442516207695,
+ "reward_std": 0.8137651458382607,
+ "rewards/accuracy_reward": 0.2645089328289032,
+ "rewards/cosine_scaled_reward": -0.0638878676109016,
+ "rewards/format_reward": 0.5848214291036129,
+ "step": 15
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 519.2634162902832,
+ "epoch": 0.23880597014925373,
+ "grad_norm": 0.5114538073539734,
+ "learning_rate": 9.99314767377287e-07,
+ "loss": -0.0076,
+ "num_tokens": 9036385.0,
+ "reward": 0.9658294171094894,
+ "reward_std": 0.8675010874867439,
+ "rewards/accuracy_reward": 0.31696428544819355,
+ "rewards/cosine_scaled_reward": 0.0015436606481671333,
+ "rewards/format_reward": 0.647321417927742,
+ "step": 16
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 468.31921768188477,
+ "epoch": 0.2537313432835821,
+ "grad_norm": 0.332457035779953,
+ "learning_rate": 9.98458666866564e-07,
+ "loss": 0.0202,
+ "num_tokens": 9605735.0,
+ "reward": 1.2345628887414932,
+ "reward_std": 0.8789637982845306,
+ "rewards/accuracy_reward": 0.3939732164144516,
+ "rewards/cosine_scaled_reward": 0.07161642531355028,
+ "rewards/format_reward": 0.7689732164144516,
+ "step": 17
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 495.0089569091797,
+ "epoch": 0.26865671641791045,
+ "grad_norm": 0.6736157536506653,
+ "learning_rate": 9.972609476841365e-07,
+ "loss": -0.0093,
+ "num_tokens": 10185447.0,
+ "reward": 1.3518186658620834,
+ "reward_std": 0.856958419084549,
+ "rewards/accuracy_reward": 0.4162946417927742,
+ "rewards/cosine_scaled_reward": 0.10181858949363232,
+ "rewards/format_reward": 0.8337053582072258,
+ "step": 18
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 551.4609642028809,
+ "epoch": 0.2835820895522388,
+ "grad_norm": 0.415921688079834,
+ "learning_rate": 9.957224306869053e-07,
+ "loss": 0.0506,
+ "num_tokens": 10804348.0,
+ "reward": 1.4847271889448166,
+ "reward_std": 0.8531165644526482,
+ "rewards/accuracy_reward": 0.4810267873108387,
+ "rewards/cosine_scaled_reward": 0.17222709371708333,
+ "rewards/format_reward": 0.831473208963871,
+ "step": 19
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 525.5926551818848,
+ "epoch": 0.29850746268656714,
+ "grad_norm": 0.4402889609336853,
+ "learning_rate": 9.938441702975689e-07,
+ "loss": 0.0101,
+ "num_tokens": 11402559.0,
+ "reward": 1.6339532285928726,
+ "reward_std": 0.8665541037917137,
+ "rewards/accuracy_reward": 0.5569196380674839,
+ "rewards/cosine_scaled_reward": 0.234399588778615,
+ "rewards/format_reward": 0.8426339253783226,
+ "step": 20
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 579.1317176818848,
+ "epoch": 0.31343283582089554,
+ "grad_norm": 0.26198580861091614,
+ "learning_rate": 9.916274537819773e-07,
+ "loss": 0.0268,
+ "num_tokens": 12045933.0,
+ "reward": 1.7713945358991623,
+ "reward_std": 0.7743762731552124,
+ "rewards/accuracy_reward": 0.5993303582072258,
+ "rewards/cosine_scaled_reward": 0.29929623380303383,
+ "rewards/format_reward": 0.8727678582072258,
+ "step": 21
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 531.8270225524902,
+ "epoch": 0.3283582089552239,
+ "grad_norm": 0.23108772933483124,
+ "learning_rate": 9.890738003669027e-07,
+ "loss": 0.0361,
+ "num_tokens": 12651914.0,
+ "reward": 1.882157564163208,
+ "reward_std": 0.7610342055559158,
+ "rewards/accuracy_reward": 0.6462053582072258,
+ "rewards/cosine_scaled_reward": 0.3196575213223696,
+ "rewards/format_reward": 0.9162946343421936,
+ "step": 22
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 520.4475708007812,
+ "epoch": 0.34328358208955223,
+ "grad_norm": 0.28641998767852783,
+ "learning_rate": 9.861849601988383e-07,
+ "loss": -0.0001,
+ "num_tokens": 13248811.0,
+ "reward": 1.8569733500480652,
+ "reward_std": 0.7132585346698761,
+ "rewards/accuracy_reward": 0.6428571492433548,
+ "rewards/cosine_scaled_reward": 0.30898220650851727,
+ "rewards/format_reward": 0.9051339253783226,
+ "step": 23
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 607.8270263671875,
+ "epoch": 0.3582089552238806,
+ "grad_norm": 0.2447742223739624,
+ "learning_rate": 9.82962913144534e-07,
+ "loss": 0.0428,
+ "num_tokens": 13929544.0,
+ "reward": 1.9759656339883804,
+ "reward_std": 0.6598386131227016,
+ "rewards/accuracy_reward": 0.6741071417927742,
+ "rewards/cosine_scaled_reward": 0.3933762777596712,
+ "rewards/format_reward": 0.9084821417927742,
+ "step": 24
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 549.8973388671875,
+ "epoch": 0.373134328358209,
+ "grad_norm": 0.21301080286502838,
+ "learning_rate": 9.794098674340966e-07,
+ "loss": -0.0086,
+ "num_tokens": 14537412.0,
+ "reward": 2.1477435529232025,
+ "reward_std": 0.5154926143586636,
+ "rewards/accuracy_reward": 0.7589285671710968,
+ "rewards/cosine_scaled_reward": 0.4524309542030096,
+ "rewards/format_reward": 0.9363839402794838,
+ "step": 25
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 610.3136367797852,
+ "epoch": 0.3880597014925373,
+ "grad_norm": 0.24111290276050568,
+ "learning_rate": 9.755282581475767e-07,
+ "loss": 0.0084,
+ "num_tokens": 15221853.0,
+ "reward": 2.011464387178421,
+ "reward_std": 0.5517045110464096,
+ "rewards/accuracy_reward": 0.6785714328289032,
+ "rewards/cosine_scaled_reward": 0.38981253653764725,
+ "rewards/format_reward": 0.9430803656578064,
+ "step": 26
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 628.5591735839844,
+ "epoch": 0.40298507462686567,
+ "grad_norm": 0.178500697016716,
+ "learning_rate": 9.713207455460892e-07,
+ "loss": 0.0453,
+ "num_tokens": 15911730.0,
+ "reward": 2.0012327134609222,
+ "reward_std": 0.5107778459787369,
+ "rewards/accuracy_reward": 0.671875,
+ "rewards/cosine_scaled_reward": 0.3717683330178261,
+ "rewards/format_reward": 0.9575892835855484,
+ "step": 27
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 595.2087249755859,
+ "epoch": 0.417910447761194,
+ "grad_norm": 0.21335452795028687,
+ "learning_rate": 9.667902132486008e-07,
+ "loss": -0.0053,
+ "num_tokens": 16564429.0,
+ "reward": 2.099992021918297,
+ "reward_std": 0.464496249333024,
+ "rewards/accuracy_reward": 0.7075892873108387,
+ "rewards/cosine_scaled_reward": 0.43034904822707176,
+ "rewards/format_reward": 0.9620535671710968,
+ "step": 28
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 685.9821701049805,
+ "epoch": 0.43283582089552236,
+ "grad_norm": 0.19047509133815765,
+ "learning_rate": 9.619397662556433e-07,
+ "loss": -0.0067,
+ "num_tokens": 17318309.0,
+ "reward": 2.014256924390793,
+ "reward_std": 0.5083763264119625,
+ "rewards/accuracy_reward": 0.6674107126891613,
+ "rewards/cosine_scaled_reward": 0.3847925327718258,
+ "rewards/format_reward": 0.9620535746216774,
+ "step": 29
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 666.8728103637695,
+ "epoch": 0.44776119402985076,
+ "grad_norm": 0.17837414145469666,
+ "learning_rate": 9.567727288213004e-07,
+ "loss": 0.0278,
+ "num_tokens": 18038547.0,
+ "reward": 2.189936801791191,
+ "reward_std": 0.49968117475509644,
+ "rewards/accuracy_reward": 0.7332589328289032,
+ "rewards/cosine_scaled_reward": 0.4812314659357071,
+ "rewards/format_reward": 0.9754464328289032,
+ "step": 30
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 665.0145416259766,
+ "epoch": 0.4626865671641791,
+ "grad_norm": 0.21215161681175232,
+ "learning_rate": 9.512926421749303e-07,
+ "loss": 0.0113,
+ "num_tokens": 18758192.0,
+ "reward": 2.1073115468025208,
+ "reward_std": 0.3873421475291252,
+ "rewards/accuracy_reward": 0.700892873108387,
+ "rewards/cosine_scaled_reward": 0.4309721440076828,
+ "rewards/format_reward": 0.975446417927742,
+ "step": 31
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 633.1138763427734,
+ "epoch": 0.47761194029850745,
+ "grad_norm": 0.1608922779560089,
+ "learning_rate": 9.455032620941839e-07,
+ "loss": 0.0103,
+ "num_tokens": 19454198.0,
+ "reward": 2.229922592639923,
+ "reward_std": 0.4106268659234047,
+ "rewards/accuracy_reward": 0.753348208963871,
+ "rewards/cosine_scaled_reward": 0.5011278428137302,
+ "rewards/format_reward": 0.9754464253783226,
+ "step": 32
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 714.6283645629883,
+ "epoch": 0.4925373134328358,
+ "grad_norm": 0.1497848778963089,
+ "learning_rate": 9.394085563309826e-07,
+ "loss": 0.0303,
+ "num_tokens": 20220017.0,
+ "reward": 2.096575230360031,
+ "reward_std": 0.4976784512400627,
+ "rewards/accuracy_reward": 0.6897321492433548,
+ "rewards/cosine_scaled_reward": 0.43362870812416077,
+ "rewards/format_reward": 0.9732142761349678,
+ "step": 33
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 738.8002471923828,
+ "epoch": 0.5074626865671642,
+ "grad_norm": 0.1632772833108902,
+ "learning_rate": 9.330127018922193e-07,
+ "loss": 0.0345,
+ "num_tokens": 21012414.0,
+ "reward": 2.112109124660492,
+ "reward_std": 0.5067962445318699,
+ "rewards/accuracy_reward": 0.6997767873108387,
+ "rewards/cosine_scaled_reward": 0.4391179643571377,
+ "rewards/format_reward": 0.9732142761349678,
+ "step": 34
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 704.3192291259766,
+ "epoch": 0.5223880597014925,
+ "grad_norm": 0.14925751090049744,
+ "learning_rate": 9.26320082177046e-07,
+ "loss": 0.0209,
+ "num_tokens": 21783476.0,
+ "reward": 2.198708087205887,
+ "reward_std": 0.4495688285678625,
+ "rewards/accuracy_reward": 0.734375,
+ "rewards/cosine_scaled_reward": 0.4888865761458874,
+ "rewards/format_reward": 0.9754464328289032,
+ "step": 35
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 741.2444610595703,
+ "epoch": 0.5373134328358209,
+ "grad_norm": 0.2386803925037384,
+ "learning_rate": 9.19335283972712e-07,
+ "loss": 0.0318,
+ "num_tokens": 22591167.0,
+ "reward": 2.112916797399521,
+ "reward_std": 0.4805358611047268,
+ "rewards/accuracy_reward": 0.7020089216530323,
+ "rewards/cosine_scaled_reward": 0.44997028447687626,
+ "rewards/format_reward": 0.9609374850988388,
+ "step": 36
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 709.0658721923828,
+ "epoch": 0.5522388059701493,
+ "grad_norm": 0.5252798199653625,
+ "learning_rate": 9.120630943110077e-07,
+ "loss": 0.0082,
+ "num_tokens": 23353914.0,
+ "reward": 2.2200856059789658,
+ "reward_std": 0.40633704140782356,
+ "rewards/accuracy_reward": 0.7410714328289032,
+ "rewards/cosine_scaled_reward": 0.4968712218105793,
+ "rewards/format_reward": 0.9821428582072258,
+ "step": 37
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 731.2042617797852,
+ "epoch": 0.5671641791044776,
+ "grad_norm": 0.23690421879291534,
+ "learning_rate": 9.045084971874737e-07,
+ "loss": 0.0175,
+ "num_tokens": 24154857.0,
+ "reward": 2.22691310942173,
+ "reward_std": 0.41700269654393196,
+ "rewards/accuracy_reward": 0.7410714253783226,
+ "rewards/cosine_scaled_reward": 0.5014665201306343,
+ "rewards/format_reward": 0.9843749850988388,
+ "step": 38
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 739.1239242553711,
+ "epoch": 0.582089552238806,
+ "grad_norm": 1.277125358581543,
+ "learning_rate": 8.966766701456176e-07,
+ "loss": 0.0124,
+ "num_tokens": 24943776.0,
+ "reward": 2.0895985513925552,
+ "reward_std": 0.43780123069882393,
+ "rewards/accuracy_reward": 0.6863839216530323,
+ "rewards/cosine_scaled_reward": 0.4233037494122982,
+ "rewards/format_reward": 0.979910708963871,
+ "step": 39
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 840.8571701049805,
+ "epoch": 0.5970149253731343,
+ "grad_norm": 0.24747931957244873,
+ "learning_rate": 8.885729807284854e-07,
+ "loss": 0.0112,
+ "num_tokens": 25820880.0,
+ "reward": 2.133217602968216,
+ "reward_std": 0.4699713662266731,
+ "rewards/accuracy_reward": 0.709821417927742,
+ "rewards/cosine_scaled_reward": 0.4535300172865391,
+ "rewards/format_reward": 0.9698660597205162,
+ "step": 40
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 772.5960083007812,
+ "epoch": 0.6119402985074627,
+ "grad_norm": 0.3448956608772278,
+ "learning_rate": 8.802029828000155e-07,
+ "loss": 0.0288,
+ "num_tokens": 26653734.0,
+ "reward": 2.087254598736763,
+ "reward_std": 0.45059962198138237,
+ "rewards/accuracy_reward": 0.684151791036129,
+ "rewards/cosine_scaled_reward": 0.43212054669857025,
+ "rewards/format_reward": 0.9709821417927742,
+ "step": 41
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 738.7221298217773,
+ "epoch": 0.6268656716417911,
+ "grad_norm": 0.18860669434070587,
+ "learning_rate": 8.71572412738697e-07,
+ "loss": 0.0237,
+ "num_tokens": 27438109.0,
+ "reward": 2.371794670820236,
+ "reward_std": 0.41881701350212097,
+ "rewards/accuracy_reward": 0.8091517835855484,
+ "rewards/cosine_scaled_reward": 0.5838480927050114,
+ "rewards/format_reward": 0.9787946343421936,
+ "step": 42
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 772.6094131469727,
+ "epoch": 0.6417910447761194,
+ "grad_norm": 0.18746379017829895,
+ "learning_rate": 8.626871855061437e-07,
+ "loss": 0.0157,
+ "num_tokens": 28265983.0,
+ "reward": 2.2571807503700256,
+ "reward_std": 0.39931730553507805,
+ "rewards/accuracy_reward": 0.7533482164144516,
+ "rewards/cosine_scaled_reward": 0.5205735377967358,
+ "rewards/format_reward": 0.9832589253783226,
+ "step": 43
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 735.1373062133789,
+ "epoch": 0.6567164179104478,
+ "grad_norm": 0.16102778911590576,
+ "learning_rate": 8.535533905932737e-07,
+ "loss": 0.026,
+ "num_tokens": 29051578.0,
+ "reward": 2.2478812634944916,
+ "reward_std": 0.45085589960217476,
+ "rewards/accuracy_reward": 0.7555803582072258,
+ "rewards/cosine_scaled_reward": 0.5112740248441696,
+ "rewards/format_reward": 0.9810267761349678,
+ "step": 44
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 769.5279388427734,
+ "epoch": 0.6716417910447762,
+ "grad_norm": 0.14632996916770935,
+ "learning_rate": 8.441772878468769e-07,
+ "loss": 0.0325,
+ "num_tokens": 29867283.0,
+ "reward": 2.278545081615448,
+ "reward_std": 0.37804416939616203,
+ "rewards/accuracy_reward": 0.7723214253783226,
+ "rewards/cosine_scaled_reward": 0.5229645892977715,
+ "rewards/format_reward": 0.983258917927742,
+ "step": 45
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 877.544677734375,
+ "epoch": 0.6865671641791045,
+ "grad_norm": 0.2075255811214447,
+ "learning_rate": 8.34565303179429e-07,
+ "loss": 0.0387,
+ "num_tokens": 30787355.0,
+ "reward": 2.0822473019361496,
+ "reward_std": 0.4727121517062187,
+ "rewards/accuracy_reward": 0.6863839253783226,
+ "rewards/cosine_scaled_reward": 0.4315776005387306,
+ "rewards/format_reward": 0.9642857164144516,
+ "step": 46
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 832.0123138427734,
+ "epoch": 0.7014925373134329,
+ "grad_norm": 0.1394934505224228,
+ "learning_rate": 8.247240241650917e-07,
+ "loss": 0.0034,
+ "num_tokens": 31648894.0,
+ "reward": 2.255901038646698,
+ "reward_std": 0.3795453645288944,
+ "rewards/accuracy_reward": 0.7533482164144516,
+ "rewards/cosine_scaled_reward": 0.520409844815731,
+ "rewards/format_reward": 0.9821428507566452,
+ "step": 47
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 818.0067443847656,
+ "epoch": 0.7164179104477612,
+ "grad_norm": 0.15266141295433044,
+ "learning_rate": 8.146601955249187e-07,
+ "loss": 0.0231,
+ "num_tokens": 32508644.0,
+ "reward": 2.250428795814514,
+ "reward_std": 0.4384920671582222,
+ "rewards/accuracy_reward": 0.765625,
+ "rewards/cosine_scaled_reward": 0.5060090012848377,
+ "rewards/format_reward": 0.9787946417927742,
+ "step": 48
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 852.2611999511719,
+ "epoch": 0.7313432835820896,
+ "grad_norm": 0.14692984521389008,
+ "learning_rate": 8.043807145043603e-07,
+ "loss": 0.0131,
+ "num_tokens": 33408670.0,
+ "reward": 2.257953464984894,
+ "reward_std": 0.4461590237915516,
+ "rewards/accuracy_reward": 0.7527472451329231,
+ "rewards/cosine_scaled_reward": 0.523578368127346,
+ "rewards/format_reward": 0.9888392835855484,
+ "step": 49
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 818.233283996582,
+ "epoch": 0.746268656716418,
+ "grad_norm": 0.14716172218322754,
+ "learning_rate": 7.938926261462365e-07,
+ "loss": 0.0287,
+ "num_tokens": 34279959.0,
+ "reward": 2.1656472980976105,
+ "reward_std": 0.4217447005212307,
+ "rewards/accuracy_reward": 0.704241082072258,
+ "rewards/cosine_scaled_reward": 0.47591499611735344,
+ "rewards/format_reward": 0.9854910671710968,
+ "step": 50
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 827.4855422973633,
+ "epoch": 0.7611940298507462,
+ "grad_norm": 0.20014236867427826,
+ "learning_rate": 7.832031184624164e-07,
+ "loss": 0.036,
+ "num_tokens": 35159170.0,
+ "reward": 2.2312643826007843,
+ "reward_std": 0.44411566108465195,
771
+ "rewards/accuracy_reward": 0.7366071417927742,
772
+ "rewards/cosine_scaled_reward": 0.5136303901672363,
773
+ "rewards/format_reward": 0.9810267761349678,
774
+ "step": 51
775
+ },
776
+ {
777
+ "clip_ratio": 0.0,
778
+ "completion_length": 849.912971496582,
779
+ "epoch": 0.7761194029850746,
780
+ "grad_norm": 0.1414945125579834,
781
+ "learning_rate": 7.723195175075135e-07,
782
+ "loss": 0.0295,
783
+ "num_tokens": 36049340.0,
784
+ "reward": 2.1217743158340454,
785
+ "reward_std": 0.389321930706501,
786
+ "rewards/accuracy_reward": 0.690848208963871,
787
+ "rewards/cosine_scaled_reward": 0.45659567788243294,
788
+ "rewards/format_reward": 0.9743303507566452,
789
+ "step": 52
790
+ },
791
+ {
792
+ "clip_ratio": 0.0,
793
+ "completion_length": 835.043571472168,
794
+ "epoch": 0.7910447761194029,
795
+ "grad_norm": 0.15856719017028809,
796
+ "learning_rate": 7.612492823579744e-07,
797
+ "loss": 0.0191,
798
+ "num_tokens": 36925211.0,
799
+ "reward": 2.0988520830869675,
800
+ "reward_std": 0.46310891956090927,
801
+ "rewards/accuracy_reward": 0.6763392761349678,
802
+ "rewards/cosine_scaled_reward": 0.4381376765668392,
803
+ "rewards/format_reward": 0.984375,
804
+ "step": 53
805
+ },
806
+ {
807
+ "clip_ratio": 0.0,
808
+ "completion_length": 815.3058395385742,
809
+ "epoch": 0.8059701492537313,
810
+ "grad_norm": 0.14549441635608673,
811
+ "learning_rate": 7.5e-07,
812
+ "loss": 0.032,
813
+ "num_tokens": 37790557.0,
814
+ "reward": 2.1466605812311172,
815
+ "reward_std": 0.44885101169347763,
816
+ "rewards/accuracy_reward": 0.706473208963871,
817
+ "rewards/cosine_scaled_reward": 0.4546961672604084,
818
+ "rewards/format_reward": 0.9854910671710968,
819
+ "step": 54
820
+ },
821
+ {
822
+ "clip_ratio": 0.0,
823
+ "completion_length": 844.6250305175781,
824
+ "epoch": 0.8208955223880597,
825
+ "grad_norm": 0.14582230150699615,
826
+ "learning_rate": 7.385793801298042e-07,
827
+ "loss": 0.0222,
828
+ "num_tokens": 38686461.0,
829
+ "reward": 2.2503548711538315,
830
+ "reward_std": 0.44255904480814934,
831
+ "rewards/accuracy_reward": 0.7600446343421936,
832
+ "rewards/cosine_scaled_reward": 0.519327986985445,
833
+ "rewards/format_reward": 0.9787946343421936,
834
+ "step": 55
835
+ },
836
+ {
837
+ "clip_ratio": 0.0,
838
+ "completion_length": 835.8382110595703,
839
+ "epoch": 0.835820895522388,
840
+ "grad_norm": 0.17814846336841583,
841
+ "learning_rate": 7.269952498697734e-07,
842
+ "loss": 0.0278,
843
+ "num_tokens": 39566428.0,
844
+ "reward": 2.1854068338871,
845
+ "reward_std": 0.46856704354286194,
846
+ "rewards/accuracy_reward": 0.7209821417927742,
847
+ "rewards/cosine_scaled_reward": 0.47670139744877815,
848
+ "rewards/format_reward": 0.987723208963871,
849
+ "step": 56
850
+ },
851
+ {
852
+ "clip_ratio": 0.0,
853
+ "completion_length": 812.9855270385742,
854
+ "epoch": 0.8507462686567164,
855
+ "grad_norm": 0.23100529611110687,
856
+ "learning_rate": 7.152555484041475e-07,
857
+ "loss": 0.0233,
858
+ "num_tokens": 40417863.0,
859
+ "reward": 2.307561933994293,
860
+ "reward_std": 0.39695313945412636,
861
+ "rewards/accuracy_reward": 0.7845982015132904,
862
+ "rewards/cosine_scaled_reward": 0.5318922027945518,
863
+ "rewards/format_reward": 0.9910714253783226,
864
+ "step": 57
865
+ },
866
+ {
867
+ "clip_ratio": 0.0,
868
+ "completion_length": 852.9955749511719,
869
+ "epoch": 0.8656716417910447,
870
+ "grad_norm": 0.1441306471824646,
871
+ "learning_rate": 7.033683215379002e-07,
872
+ "loss": 0.0299,
873
+ "num_tokens": 41308115.0,
874
+ "reward": 2.1960265040397644,
875
+ "reward_std": 0.37129098176956177,
876
+ "rewards/accuracy_reward": 0.7176339328289032,
877
+ "rewards/cosine_scaled_reward": 0.499597892165184,
878
+ "rewards/format_reward": 0.9787946343421936,
879
+ "step": 58
880
+ },
881
+ {
882
+ "clip_ratio": 0.0,
883
+ "completion_length": 841.684196472168,
884
+ "epoch": 0.8805970149253731,
885
+ "grad_norm": 0.14134341478347778,
886
+ "learning_rate": 6.913417161825449e-07,
887
+ "loss": 0.0325,
888
+ "num_tokens": 42187672.0,
889
+ "reward": 2.3595363944768906,
890
+ "reward_std": 0.4285864755511284,
891
+ "rewards/accuracy_reward": 0.8058035597205162,
892
+ "rewards/cosine_scaled_reward": 0.5693577714264393,
893
+ "rewards/format_reward": 0.9843749850988388,
894
+ "step": 59
895
+ },
896
+ {
897
+ "clip_ratio": 0.0,
898
+ "completion_length": 806.7199096679688,
899
+ "epoch": 0.8955223880597015,
900
+ "grad_norm": 0.2022542655467987,
901
+ "learning_rate": 6.7918397477265e-07,
902
+ "loss": 0.0281,
903
+ "num_tokens": 43045525.0,
904
+ "reward": 2.208475172519684,
905
+ "reward_std": 0.4270087294280529,
906
+ "rewards/accuracy_reward": 0.7332589328289032,
907
+ "rewards/cosine_scaled_reward": 0.4841447100043297,
908
+ "rewards/format_reward": 0.9910714253783226,
909
+ "step": 60
910
+ },
911
+ {
912
+ "clip_ratio": 0.0,
913
+ "completion_length": 840.762321472168,
914
+ "epoch": 0.9104477611940298,
915
+ "grad_norm": 0.16908100247383118,
916
+ "learning_rate": 6.669034296168854e-07,
917
+ "loss": 0.0378,
918
+ "num_tokens": 43941024.0,
919
+ "reward": 2.198526903986931,
920
+ "reward_std": 0.3748279809951782,
921
+ "rewards/accuracy_reward": 0.7265625,
922
+ "rewards/cosine_scaled_reward": 0.4864732697606087,
923
+ "rewards/format_reward": 0.9854910597205162,
924
+ "step": 61
925
+ },
926
+ {
927
+ "clip_ratio": 0.0,
928
+ "completion_length": 813.1105346679688,
929
+ "epoch": 0.9253731343283582,
930
+ "grad_norm": 0.2283850759267807,
931
+ "learning_rate": 6.545084971874736e-07,
932
+ "loss": 0.0432,
933
+ "num_tokens": 44794987.0,
934
+ "reward": 2.3305827528238297,
935
+ "reward_std": 0.4148641601204872,
936
+ "rewards/accuracy_reward": 0.7845982164144516,
937
+ "rewards/cosine_scaled_reward": 0.5549130644649267,
938
+ "rewards/format_reward": 0.9910714328289032,
939
+ "step": 62
940
+ },
941
+ {
942
+ "clip_ratio": 0.0,
943
+ "completion_length": 890.0435791015625,
944
+ "epoch": 0.9402985074626866,
945
+ "grad_norm": 0.14631207287311554,
946
+ "learning_rate": 6.420076723519614e-07,
947
+ "loss": 0.0483,
948
+ "num_tokens": 45721978.0,
949
+ "reward": 2.20485882461071,
950
+ "reward_std": 0.45313265547156334,
951
+ "rewards/accuracy_reward": 0.7321428507566452,
952
+ "rewards/cosine_scaled_reward": 0.49838550947606564,
953
+ "rewards/format_reward": 0.9743303507566452,
954
+ "step": 63
955
+ },
956
+ {
957
+ "clip_ratio": 0.0,
958
+ "completion_length": 802.3672256469727,
959
+ "epoch": 0.9552238805970149,
960
+ "grad_norm": 0.24495986104011536,
961
+ "learning_rate": 6.294095225512604e-07,
962
+ "loss": 0.0333,
963
+ "num_tokens": 46577579.0,
964
+ "reward": 2.271100014448166,
965
+ "reward_std": 0.44345359317958355,
966
+ "rewards/accuracy_reward": 0.768973208963871,
967
+ "rewards/cosine_scaled_reward": 0.5132873728871346,
968
+ "rewards/format_reward": 0.9888392761349678,
969
+ "step": 64
970
+ },
971
+ {
972
+ "clip_ratio": 0.0,
973
+ "completion_length": 833.2678909301758,
974
+ "epoch": 0.9701492537313433,
975
+ "grad_norm": 0.22237442433834076,
976
+ "learning_rate": 6.167226819279527e-07,
977
+ "loss": 0.0192,
978
+ "num_tokens": 47457379.0,
979
+ "reward": 2.232301279902458,
980
+ "reward_std": 0.40537629649043083,
981
+ "rewards/accuracy_reward": 0.7332589402794838,
982
+ "rewards/cosine_scaled_reward": 0.5168994888663292,
983
+ "rewards/format_reward": 0.9821428507566452,
984
+ "step": 65
985
+ },
986
+ {
987
+ "clip_ratio": 0.0,
988
+ "completion_length": 859.0405197143555,
989
+ "epoch": 0.9850746268656716,
990
+ "grad_norm": 1.1952835321426392,
991
+ "learning_rate": 6.039558454088795e-07,
992
+ "loss": 0.0479,
993
+ "num_tokens": 48351995.0,
994
+ "reward": 2.171072855591774,
995
+ "reward_std": 0.41515132039785385,
996
+ "rewards/accuracy_reward": 0.7053571417927742,
997
+ "rewards/cosine_scaled_reward": 0.4869209751486778,
998
+ "rewards/format_reward": 0.9787946343421936,
999
+ "step": 66
1000
+ },
1001
+ {
1002
+ "clip_ratio": 0.0,
1003
+ "completion_length": 822.8929061889648,
1004
+ "epoch": 1.0149253731343284,
1005
+ "grad_norm": 0.1507645845413208,
1006
+ "learning_rate": 5.911177627460738e-07,
1007
+ "loss": 0.033,
1008
+ "num_tokens": 49207691.0,
1009
+ "reward": 2.3084839433431625,
1010
+ "reward_std": 0.42283207178115845,
1011
+ "rewards/accuracy_reward": 0.7756696417927742,
1012
+ "rewards/cosine_scaled_reward": 0.5473231822252274,
1013
+ "rewards/format_reward": 0.9854910671710968,
1014
+ "step": 67
1015
+ },
1016
+ {
1017
+ "clip_ratio": 0.0,
1018
+ "completion_length": 863.9565124511719,
1019
+ "epoch": 1.0298507462686568,
1020
+ "grad_norm": 0.1485546976327896,
1021
+ "learning_rate": 5.782172325201155e-07,
1022
+ "loss": 0.0507,
1023
+ "num_tokens": 50114748.0,
1024
+ "reward": 2.2060565650463104,
1025
+ "reward_std": 0.4834662191569805,
1026
+ "rewards/accuracy_reward": 0.7433035746216774,
1027
+ "rewards/cosine_scaled_reward": 0.4906546622514725,
1028
+ "rewards/format_reward": 0.9720982164144516,
1029
+ "step": 68
1030
+ },
1031
+ {
1032
+ "clip_ratio": 0.0,
1033
+ "completion_length": 830.6830825805664,
1034
+ "epoch": 1.044776119402985,
1035
+ "grad_norm": 0.1332515925168991,
1036
+ "learning_rate": 5.652630961100258e-07,
1037
+ "loss": 0.0425,
1038
+ "num_tokens": 50984376.0,
1039
+ "reward": 2.2528974413871765,
1040
+ "reward_std": 0.38001761958003044,
1041
+ "rewards/accuracy_reward": 0.7511160746216774,
1042
+ "rewards/cosine_scaled_reward": 0.5140580758452415,
1043
+ "rewards/format_reward": 0.987723208963871,
1044
+ "step": 69
1045
+ },
1046
+ {
1047
+ "clip_ratio": 0.0,
1048
+ "completion_length": 807.5033874511719,
1049
+ "epoch": 1.0597014925373134,
1050
+ "grad_norm": 0.16610166430473328,
1051
+ "learning_rate": 5.522642316338268e-07,
1052
+ "loss": 0.0156,
1053
+ "num_tokens": 51835195.0,
1054
+ "reward": 2.312391608953476,
1055
+ "reward_std": 0.39653103426098824,
1056
+ "rewards/accuracy_reward": 0.7734375,
1057
+ "rewards/cosine_scaled_reward": 0.5445343889296055,
1058
+ "rewards/format_reward": 0.994419626891613,
1059
+ "step": 70
1060
+ },
1061
+ {
1062
+ "clip_ratio": 0.0,
1063
+ "completion_length": 787.1373138427734,
1064
+ "epoch": 1.0746268656716418,
1065
+ "grad_norm": 0.14138761162757874,
1066
+ "learning_rate": 5.392295478639225e-07,
1067
+ "loss": 0.035,
1068
+ "num_tokens": 52675462.0,
1069
+ "reward": 2.3058966398239136,
1070
+ "reward_std": 0.3873091973364353,
1071
+ "rewards/accuracy_reward": 0.772321417927742,
1072
+ "rewards/cosine_scaled_reward": 0.5447358340024948,
1073
+ "rewards/format_reward": 0.9888392761349678,
1074
+ "step": 71
1075
+ },
1076
+ {
1077
+ "clip_ratio": 0.0,
1078
+ "completion_length": 821.1105346679688,
1079
+ "epoch": 1.0895522388059702,
1080
+ "grad_norm": 0.230118989944458,
1081
+ "learning_rate": 5.26167978121472e-07,
1082
+ "loss": 0.0327,
1083
+ "num_tokens": 53533969.0,
1084
+ "reward": 2.2972765266895294,
1085
+ "reward_std": 0.4094788581132889,
1086
+ "rewards/accuracy_reward": 0.761160708963871,
1087
+ "rewards/cosine_scaled_reward": 0.5517406836152077,
1088
+ "rewards/format_reward": 0.9843749850988388,
1089
+ "step": 72
1090
+ },
1091
+ {
1092
+ "clip_ratio": 0.0,
1093
+ "completion_length": 826.6942367553711,
1094
+ "epoch": 1.1044776119402986,
1095
+ "grad_norm": 0.14363016188144684,
1096
+ "learning_rate": 5.130884741539366e-07,
1097
+ "loss": 0.0255,
1098
+ "num_tokens": 54407367.0,
1099
+ "reward": 2.1353407949209213,
1100
+ "reward_std": 0.4409247748553753,
1101
+ "rewards/accuracy_reward": 0.6830357164144516,
1102
+ "rewards/cosine_scaled_reward": 0.4712782185524702,
1103
+ "rewards/format_reward": 0.9810267761349678,
1104
+ "step": 73
1105
+ },
1106
+ {
1107
+ "clip_ratio": 0.0,
1108
+ "completion_length": 797.6071853637695,
1109
+ "epoch": 1.1194029850746268,
1110
+ "grad_norm": 0.1634291261434555,
1111
+ "learning_rate": 5e-07,
1112
+ "loss": 0.0129,
1113
+ "num_tokens": 55235255.0,
1114
+ "reward": 2.33771675825119,
1115
+ "reward_std": 0.438749760389328,
1116
+ "rewards/accuracy_reward": 0.7845982164144516,
1117
+ "rewards/cosine_scaled_reward": 0.5676273554563522,
1118
+ "rewards/format_reward": 0.9854910671710968,
1119
+ "step": 74
1120
+ },
1121
+ {
1122
+ "clip_ratio": 0.0,
1123
+ "completion_length": 798.5502548217773,
1124
+ "epoch": 1.1343283582089552,
1125
+ "grad_norm": 0.14468881487846375,
1126
+ "learning_rate": 4.869115258460634e-07,
1127
+ "loss": 0.0281,
1128
+ "num_tokens": 56075140.0,
1129
+ "reward": 2.2596937716007233,
1130
+ "reward_std": 0.40830419957637787,
1131
+ "rewards/accuracy_reward": 0.7477678656578064,
1132
+ "rewards/cosine_scaled_reward": 0.524202574044466,
1133
+ "rewards/format_reward": 0.9877232015132904,
1134
+ "step": 75
1135
+ },
1136
+ {
1137
+ "clip_ratio": 0.0,
1138
+ "completion_length": 777.0413284301758,
1139
+ "epoch": 1.1492537313432836,
1140
+ "grad_norm": 0.2769474387168884,
1141
+ "learning_rate": 4.7383202187852804e-07,
1142
+ "loss": 0.0288,
1143
+ "num_tokens": 56898185.0,
1144
+ "reward": 2.376899868249893,
1145
+ "reward_std": 0.384668942540884,
1146
+ "rewards/accuracy_reward": 0.8013392761349678,
1147
+ "rewards/cosine_scaled_reward": 0.5900693982839584,
1148
+ "rewards/format_reward": 0.9854910671710968,
1149
+ "step": 76
1150
+ },
1151
+ {
1152
+ "clip_ratio": 0.0,
1153
+ "completion_length": 703.7042694091797,
1154
+ "epoch": 1.164179104477612,
1155
+ "grad_norm": 0.23909108340740204,
1156
+ "learning_rate": 4.6077045213607755e-07,
1157
+ "loss": 0.0117,
1158
+ "num_tokens": 57654136.0,
1159
+ "reward": 2.448263019323349,
1160
+ "reward_std": 0.3043863233178854,
1161
+ "rewards/accuracy_reward": 0.828125,
1162
+ "rewards/cosine_scaled_reward": 0.6246022097766399,
1163
+ "rewards/format_reward": 0.995535708963871,
1164
+ "step": 77
1165
+ },
1166
+ {
1167
+ "clip_ratio": 0.0,
1168
+ "completion_length": 818.4219131469727,
1169
+ "epoch": 1.1791044776119404,
1170
+ "grad_norm": 0.13757474720478058,
1171
+ "learning_rate": 4.477357683661733e-07,
1172
+ "loss": 0.0271,
1173
+ "num_tokens": 58514250.0,
1174
+ "reward": 2.1977421790361404,
1175
+ "reward_std": 0.3669319860637188,
1176
+ "rewards/accuracy_reward": 0.723214291036129,
1177
+ "rewards/cosine_scaled_reward": 0.490152794867754,
1178
+ "rewards/format_reward": 0.9843749925494194,
1179
+ "step": 78
1180
+ },
1181
+ {
1182
+ "clip_ratio": 0.0,
1183
+ "completion_length": 828.7768096923828,
1184
+ "epoch": 1.1940298507462686,
1185
+ "grad_norm": 0.356913685798645,
1186
+ "learning_rate": 4.347369038899743e-07,
1187
+ "loss": 0.0155,
1188
+ "num_tokens": 59387978.0,
1189
+ "reward": 2.2681883424520493,
1190
+ "reward_std": 0.40200271271169186,
1191
+ "rewards/accuracy_reward": 0.7399553544819355,
1192
+ "rewards/cosine_scaled_reward": 0.5371615076437593,
1193
+ "rewards/format_reward": 0.991071417927742,
1194
+ "step": 79
1195
+ },
1196
+ {
1197
+ "clip_ratio": 0.0,
1198
+ "completion_length": 856.8393173217773,
1199
+ "epoch": 1.208955223880597,
1200
+ "grad_norm": 0.1846323013305664,
1201
+ "learning_rate": 4.2178276747988444e-07,
1202
+ "loss": 0.0249,
1203
+ "num_tokens": 60282122.0,
1204
+ "reward": 2.0796916633844376,
1205
+ "reward_std": 0.43989887088537216,
1206
+ "rewards/accuracy_reward": 0.6674107164144516,
1207
+ "rewards/cosine_scaled_reward": 0.4267898350954056,
1208
+ "rewards/format_reward": 0.9854910597205162,
1209
+ "step": 80
1210
+ },
1211
+ {
1212
+ "clip_ratio": 0.0,
1213
+ "completion_length": 815.146240234375,
1214
+ "epoch": 1.2238805970149254,
1215
+ "grad_norm": 0.12916217744350433,
1216
+ "learning_rate": 4.0888223725392624e-07,
1217
+ "loss": 0.0224,
1218
+ "num_tokens": 61137405.0,
1219
+ "reward": 2.2786046862602234,
1220
+ "reward_std": 0.37079325318336487,
1221
+ "rewards/accuracy_reward": 0.7477678582072258,
1222
+ "rewards/cosine_scaled_reward": 0.5375330746173859,
1223
+ "rewards/format_reward": 0.9933035671710968,
1224
+ "step": 81
1225
+ },
1226
+ {
1227
+ "clip_ratio": 0.0,
1228
+ "completion_length": 789.4765930175781,
1229
+ "epoch": 1.2388059701492538,
1230
+ "grad_norm": 0.1601138859987259,
1231
+ "learning_rate": 3.960441545911204e-07,
1232
+ "loss": 0.0192,
1233
+ "num_tokens": 61981880.0,
1234
+ "reward": 2.3246329575777054,
1235
+ "reward_std": 0.3963399939239025,
1236
+ "rewards/accuracy_reward": 0.777901791036129,
1237
+ "rewards/cosine_scaled_reward": 0.5567756779491901,
1238
+ "rewards/format_reward": 0.9899553582072258,
1239
+ "step": 82
1240
+ },
1241
+ {
1242
+ "clip_ratio": 0.0,
1243
+ "completion_length": 806.366096496582,
1244
+ "epoch": 1.2537313432835822,
1245
+ "grad_norm": 1.0955898761749268,
1246
+ "learning_rate": 3.8327731807204744e-07,
1247
+ "loss": 0.007,
1248
+ "num_tokens": 62833152.0,
1249
+ "reward": 2.258734792470932,
1250
+ "reward_std": 0.41654712706804276,
1251
+ "rewards/accuracy_reward": 0.734375,
1252
+ "rewards/cosine_scaled_reward": 0.5388686545193195,
1253
+ "rewards/format_reward": 0.9854910671710968,
1254
+ "step": 83
1255
+ },
1256
+ {
1257
+ "clip_ratio": 0.0,
1258
+ "completion_length": 799.4464569091797,
1259
+ "epoch": 1.2686567164179103,
1260
+ "grad_norm": 0.14970937371253967,
1261
+ "learning_rate": 3.7059047744873955e-07,
1262
+ "loss": 0.0191,
1263
+ "num_tokens": 63688080.0,
1264
+ "reward": 2.2213496565818787,
1265
+ "reward_std": 0.4365269783884287,
1266
+ "rewards/accuracy_reward": 0.7332589253783226,
1267
+ "rewards/cosine_scaled_reward": 0.5014834739267826,
1268
+ "rewards/format_reward": 0.9866071343421936,
1269
+ "step": 84
1270
+ },
1271
+ {
1272
+ "clip_ratio": 0.0,
1273
+ "completion_length": 799.4777145385742,
1274
+ "epoch": 1.2835820895522387,
1275
+ "grad_norm": 0.1490371972322464,
1276
+ "learning_rate": 3.5799232764803867e-07,
1277
+ "loss": 0.0427,
1278
+ "num_tokens": 64522916.0,
1279
+ "reward": 2.300745904445648,
1280
+ "reward_std": 0.3617209382355213,
1281
+ "rewards/accuracy_reward": 0.7745535746216774,
1282
+ "rewards/cosine_scaled_reward": 0.5395850613713264,
1283
+ "rewards/format_reward": 0.9866071417927742,
1284
+ "step": 85
1285
+ },
1286
+ {
1287
+ "clip_ratio": 0.0,
1288
+ "completion_length": 803.349365234375,
1289
+ "epoch": 1.2985074626865671,
1290
+ "grad_norm": 0.13647782802581787,
1291
+ "learning_rate": 3.454915028125263e-07,
1292
+ "loss": 0.0167,
1293
+ "num_tokens": 65365413.0,
1294
+ "reward": 2.1816782504320145,
1295
+ "reward_std": 0.3722137622535229,
1296
+ "rewards/accuracy_reward": 0.7020089253783226,
1297
+ "rewards/cosine_scaled_reward": 0.4908299520611763,
1298
+ "rewards/format_reward": 0.9888392835855484,
1299
+ "step": 86
1300
+ },
1301
+ {
1302
+ "clip_ratio": 0.0,
1303
+ "completion_length": 824.3370971679688,
1304
+ "epoch": 1.3134328358208955,
1305
+ "grad_norm": 0.141972154378891,
1306
+ "learning_rate": 3.330965703831146e-07,
1307
+ "loss": 0.0188,
1308
+ "num_tokens": 66236179.0,
1309
+ "reward": 2.2222750931978226,
1310
+ "reward_std": 0.3877013325691223,
1311
+ "rewards/accuracy_reward": 0.7246737629175186,
1312
+ "rewards/cosine_scaled_reward": 0.5180339068174362,
1313
+ "rewards/format_reward": 0.9866071343421936,
1314
+ "step": 87
1315
+ },
1316
+ {
1317
+ "clip_ratio": 0.0,
1318
+ "completion_length": 768.4710159301758,
1319
+ "epoch": 1.328358208955224,
1320
+ "grad_norm": 0.2339029610157013,
1321
+ "learning_rate": 3.2081602522734985e-07,
1322
+ "loss": 0.0365,
1323
+ "num_tokens": 67063113.0,
1324
+ "reward": 2.526910215616226,
1325
+ "reward_std": 0.34235746040940285,
1326
+ "rewards/accuracy_reward": 0.8671875,
1327
+ "rewards/cosine_scaled_reward": 0.6719993054866791,
1328
+ "rewards/format_reward": 0.9877232015132904,
1329
+ "step": 88
1330
+ },
1331
+ {
1332
+ "clip_ratio": 0.0,
1333
+ "completion_length": 745.8627548217773,
1334
+ "epoch": 1.3432835820895521,
1335
+ "grad_norm": 0.15348723530769348,
1336
+ "learning_rate": 3.086582838174551e-07,
1337
+ "loss": 0.0302,
1338
+ "num_tokens": 67857294.0,
1339
+ "reward": 2.3677200973033905,
1340
+ "reward_std": 0.3587967976927757,
1341
+ "rewards/accuracy_reward": 0.7879464253783226,
1342
+ "rewards/cosine_scaled_reward": 0.5853539742529392,
1343
+ "rewards/format_reward": 0.9944196343421936,
1344
+ "step": 89
1345
+ },
1346
+ {
1347
+ "clip_ratio": 0.0,
1348
+ "completion_length": 805.3437805175781,
1349
+ "epoch": 1.3582089552238805,
1350
+ "grad_norm": 0.15230360627174377,
1351
+ "learning_rate": 2.9663167846209996e-07,
1352
+ "loss": 0.0109,
1353
+ "num_tokens": 68712010.0,
1354
+ "reward": 2.1709432005882263,
1355
+ "reward_std": 0.34247344359755516,
1356
+ "rewards/accuracy_reward": 0.7120535746216774,
1357
+ "rewards/cosine_scaled_reward": 0.46893422678112984,
1358
+ "rewards/format_reward": 0.9899553507566452,
1359
+ "step": 90
1360
+ },
1361
+ {
1362
+ "clip_ratio": 0.0,
1363
+ "completion_length": 766.2344055175781,
1364
+ "epoch": 1.373134328358209,
1365
+ "grad_norm": 0.20056165754795074,
1366
+ "learning_rate": 2.847444515958523e-07,
1367
+ "loss": 0.0529,
1368
+ "num_tokens": 69530684.0,
1369
+ "reward": 2.444439873099327,
1370
+ "reward_std": 0.4391251541674137,
1371
+ "rewards/accuracy_reward": 0.8180803507566452,
1372
+ "rewards/cosine_scaled_reward": 0.6375201046466827,
1373
+ "rewards/format_reward": 0.9888392835855484,
1374
+ "step": 91
1375
+ },
1376
+ {
1377
+ "clip_ratio": 0.0,
1378
+ "completion_length": 806.084846496582,
1379
+ "epoch": 1.3880597014925373,
1380
+ "grad_norm": 0.2504553198814392,
1381
+ "learning_rate": 2.730047501302266e-07,
1382
+ "loss": 0.0276,
1383
+ "num_tokens": 70379000.0,
1384
+ "reward": 2.3064000606536865,
1385
+ "reward_std": 0.41721983440220356,
1386
+ "rewards/accuracy_reward": 0.768973208963871,
1387
+ "rewards/cosine_scaled_reward": 0.5485874190926552,
1388
+ "rewards/format_reward": 0.9888392761349678,
1389
+ "step": 92
1390
+ },
1391
+ {
1392
+ "clip_ratio": 0.0,
1393
+ "completion_length": 805.863883972168,
1394
+ "epoch": 1.4029850746268657,
1395
+ "grad_norm": 0.19941268861293793,
1396
+ "learning_rate": 2.6142061987019574e-07,
1397
+ "loss": 0.0203,
1398
+ "num_tokens": 71220958.0,
1399
+ "reward": 2.357511520385742,
1400
+ "reward_std": 0.42761751636862755,
1401
+ "rewards/accuracy_reward": 0.7868303507566452,
1402
+ "rewards/cosine_scaled_reward": 0.5740292370319366,
1403
+ "rewards/format_reward": 0.9966517761349678,
1404
+ "step": 93
1405
+ },
1406
+ {
1407
+ "clip_ratio": 0.0,
1408
+ "completion_length": 777.7678833007812,
1409
+ "epoch": 1.417910447761194,
1410
+ "grad_norm": 0.2115296721458435,
1411
+ "learning_rate": 2.500000000000001e-07,
1412
+ "loss": 0.0212,
1413
+ "num_tokens": 72060422.0,
1414
+ "reward": 2.2628036439418793,
1415
+ "reward_std": 0.3773616813123226,
1416
+ "rewards/accuracy_reward": 0.7399553582072258,
1417
+ "rewards/cosine_scaled_reward": 0.5284285433590412,
1418
+ "rewards/format_reward": 0.9944196343421936,
1419
+ "step": 94
1420
+ },
1421
+ {
1422
+ "clip_ratio": 0.0,
1423
+ "completion_length": 809.7109680175781,
1424
+ "epoch": 1.4328358208955223,
1425
+ "grad_norm": 0.20344114303588867,
1426
+ "learning_rate": 2.387507176420256e-07,
1427
+ "loss": 0.0388,
1428
+ "num_tokens": 72917563.0,
1429
+ "reward": 2.2071495205163956,
1430
+ "reward_std": 0.43392882496118546,
1431
+ "rewards/accuracy_reward": 0.7165178507566452,
1432
+ "rewards/cosine_scaled_reward": 0.5129529945552349,
1433
+ "rewards/format_reward": 0.9776785671710968,
1434
+ "step": 95
1435
+ },
1436
+ {
1437
+ "clip_ratio": 0.0,
1438
+ "completion_length": 825.3906707763672,
1439
+ "epoch": 1.4477611940298507,
1440
+ "grad_norm": 0.17310675978660583,
1441
+ "learning_rate": 2.2768048249248644e-07,
1442
+ "loss": 0.0233,
1443
+ "num_tokens": 73786337.0,
1444
+ "reward": 2.265718474984169,
1445
+ "reward_std": 0.42188060469925404,
1446
+ "rewards/accuracy_reward": 0.753348208963871,
1447
+ "rewards/cosine_scaled_reward": 0.5212987046688795,
1448
+ "rewards/format_reward": 0.9910714328289032,
1449
+ "step": 96
1450
+ },
1451
+ {
1452
+ "clip_ratio": 0.0,
1453
+ "completion_length": 746.5346221923828,
1454
+ "epoch": 1.462686567164179,
1455
+ "grad_norm": 0.33199918270111084,
1456
+ "learning_rate": 2.167968815375837e-07,
1457
+ "loss": 0.0252,
1458
+ "num_tokens": 74591024.0,
1459
+ "reward": 2.3167020082473755,
1460
+ "reward_std": 0.3432948123663664,
1461
+ "rewards/accuracy_reward": 0.7790178582072258,
1462
+ "rewards/cosine_scaled_reward": 0.5477286390960217,
1463
+ "rewards/format_reward": 0.9899553433060646,
1464
+ "step": 97
1465
+ },
1466
+ {
1467
+ "clip_ratio": 0.0,
1468
+ "completion_length": 844.3359756469727,
1469
+ "epoch": 1.4776119402985075,
1470
+ "grad_norm": 0.26303336024284363,
1471
+ "learning_rate": 2.0610737385376348e-07,
1472
+ "loss": 0.0209,
1473
+ "num_tokens": 75489445.0,
1474
+ "reward": 2.130746826529503,
1475
+ "reward_std": 0.4488871730864048,
1476
+ "rewards/accuracy_reward": 0.6897321343421936,
1477
+ "rewards/cosine_scaled_reward": 0.4510592333972454,
1478
+ "rewards/format_reward": 0.9899553507566452,
1479
+ "step": 98
1480
+ },
1481
+ {
1482
+ "clip_ratio": 0.0,
1483
+ "completion_length": 856.8058471679688,
1484
+ "epoch": 1.4925373134328357,
1485
+ "grad_norm": 0.14609511196613312,
1486
+ "learning_rate": 1.9561928549563966e-07,
1487
+ "loss": 0.0271,
1488
+ "num_tokens": 76395231.0,
1489
+ "reward": 1.9894811660051346,
1490
+ "reward_std": 0.4427115470170975,
1491
+ "rewards/accuracy_reward": 0.6227678619325161,
1492
+ "rewards/cosine_scaled_reward": 0.3700613994151354,
1493
+ "rewards/format_reward": 0.9966517761349678,
1494
+ "step": 99
1495
+ },
1496
+ {
1497
+ "epoch": 1.5074626865671643,
1498
+ "grad_norm": 0.17610162496566772,
1499
+ "learning_rate": 1.8533980447508135e-07,
1500
+ "loss": 0.0242,
1501
+ "step": 100
1502
+ },
1503
+ {
1504
+ "epoch": 1.5074626865671643,
1505
+ "eval_clip_ratio": 0.0,
1506
+ "eval_completion_length": 792.2322063765712,
1507
+ "eval_loss": 0.02554122917354107,
1508
+ "eval_num_tokens": 77204687.0,
1509
+ "eval_reward": 2.204988658095205,
1510
+ "eval_reward_std": 0.4329044972468355,
1511
+ "eval_rewards/accuracy_reward": 0.7148593376135693,
1512
+ "eval_rewards/cosine_scaled_reward": 0.5001551498081431,
1513
+ "eval_rewards/format_reward": 0.9899740553767987,
1514
+ "eval_runtime": 11721.4303,
1515
+ "eval_samples_per_second": 0.427,
1516
+ "eval_steps_per_second": 0.004,
1517
+ "step": 100
1518
+ },
1519
+ {
1520
+ "clip_ratio": 0.0,
1521
+ "completion_length": 788.0508155822754,
1522
+ "epoch": 1.5223880597014925,
1523
+ "grad_norm": 0.19114182889461517,
1524
+ "learning_rate": 1.7527597583490823e-07,
1525
+ "loss": 0.0328,
1526
+ "num_tokens": 78053058.0,
1527
+ "reward": 2.281766965985298,
1528
+ "reward_std": 0.4103840598836541,
1529
+ "rewards/accuracy_reward": 0.7534769810736179,
1530
+ "rewards/cosine_scaled_reward": 0.5412534717470407,
1531
+ "rewards/format_reward": 0.9905133806169033,
1532
+ "step": 101
1533
+ },
1534
+ {
1535
+ "clip_ratio": 0.0,
1536
+ "completion_length": 772.8002624511719,
1537
+ "epoch": 1.537313432835821,
1538
+ "grad_norm": 0.177150696516037,
1539
+ "learning_rate": 1.6543469682057104e-07,
1540
+ "loss": 0.0458,
1541
+ "num_tokens": 78881671.0,
1542
+ "reward": 2.2872008681297302,
1543
+ "reward_std": 0.409332113340497,
1544
+ "rewards/accuracy_reward": 0.7544642835855484,
1545
+ "rewards/cosine_scaled_reward": 0.5450132600963116,
1546
+ "rewards/format_reward": 0.9877232015132904,
1547
+ "step": 102
1548
+ },
1549
+ {
1550
+ "clip_ratio": 0.0,
1551
+ "completion_length": 813.3482437133789,
1552
+ "epoch": 1.5522388059701493,
1553
+ "grad_norm": 0.15676981210708618,
1554
+ "learning_rate": 1.5582271215312293e-07,
1555
+ "loss": 0.0244,
1556
+ "num_tokens": 79736991.0,
1557
+ "reward": 2.280416786670685,
1558
+ "reward_std": 0.37246554158627987,
1559
+ "rewards/accuracy_reward": 0.7511160597205162,
1560
+ "rewards/cosine_scaled_reward": 0.5471578016877174,
1561
+ "rewards/format_reward": 0.9821428582072258,
1562
+ "step": 103
1563
+ },
1564
+ {
1565
+ "clip_ratio": 0.0,
1566
+ "completion_length": 753.7221298217773,
1567
+ "epoch": 1.5671641791044775,
1568
+ "grad_norm": 0.14831306040287018,
1569
+ "learning_rate": 1.4644660940672627e-07,
1570
+ "loss": 0.046,
+ "num_tokens": 80548398.0,
+ "reward": 2.3254519551992416,
+ "reward_std": 0.386288670822978,
+ "rewards/accuracy_reward": 0.7823660671710968,
+ "rewards/cosine_scaled_reward": 0.5631750710308552,
+ "rewards/format_reward": 0.9799107015132904,
+ "step": 104
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 798.2891082763672,
+ "epoch": 1.582089552238806,
+ "grad_norm": 0.15447363257408142,
+ "learning_rate": 1.3731281449385628e-07,
+ "loss": 0.0151,
+ "num_tokens": 81400969.0,
+ "reward": 2.328332096338272,
+ "reward_std": 0.3908931314945221,
+ "rewards/accuracy_reward": 0.7779017835855484,
+ "rewards/cosine_scaled_reward": 0.5615909844636917,
+ "rewards/format_reward": 0.9888392835855484,
+ "step": 105
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 840.0145492553711,
+ "epoch": 1.5970149253731343,
+ "grad_norm": 0.196583554148674,
+ "learning_rate": 1.284275872613028e-07,
+ "loss": 0.0263,
+ "num_tokens": 82284510.0,
+ "reward": 2.143914580345154,
+ "reward_std": 0.4517398029565811,
+ "rewards/accuracy_reward": 0.6964285671710968,
+ "rewards/cosine_scaled_reward": 0.46199479326605797,
+ "rewards/format_reward": 0.9854910597205162,
+ "step": 106
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 857.6384353637695,
+ "epoch": 1.6119402985074627,
+ "grad_norm": 0.16375041007995605,
+ "learning_rate": 1.1979701719998454e-07,
+ "loss": 0.0386,
+ "num_tokens": 83187714.0,
+ "reward": 2.1951108425855637,
+ "reward_std": 0.5204542428255081,
+ "rewards/accuracy_reward": 0.7154017761349678,
+ "rewards/cosine_scaled_reward": 0.4931018613278866,
+ "rewards/format_reward": 0.9866071417927742,
+ "step": 107
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 783.974365234375,
+ "epoch": 1.626865671641791,
+ "grad_norm": 0.14117641746997833,
+ "learning_rate": 1.1142701927151454e-07,
+ "loss": 0.0151,
+ "num_tokens": 84010083.0,
+ "reward": 2.3524541556835175,
+ "reward_std": 0.43999602273106575,
+ "rewards/accuracy_reward": 0.7890625074505806,
+ "rewards/cosine_scaled_reward": 0.5745523162186146,
+ "rewards/format_reward": 0.9888392835855484,
+ "step": 108
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 813.5145492553711,
+ "epoch": 1.6417910447761193,
+ "grad_norm": 0.14856059849262238,
+ "learning_rate": 1.0332332985438247e-07,
+ "loss": 0.0141,
+ "num_tokens": 84873256.0,
+ "reward": 2.279180735349655,
+ "reward_std": 0.40090466663241386,
+ "rewards/accuracy_reward": 0.7522321492433548,
+ "rewards/cosine_scaled_reward": 0.538109190762043,
+ "rewards/format_reward": 0.9888392835855484,
+ "step": 109
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 770.613883972168,
+ "epoch": 1.6567164179104479,
+ "grad_norm": 0.1753540337085724,
+ "learning_rate": 9.549150281252632e-08,
+ "loss": 0.0173,
+ "num_tokens": 85689638.0,
+ "reward": 2.2676322162151337,
+ "reward_std": 0.36183078587055206,
+ "rewards/accuracy_reward": 0.7377232164144516,
+ "rewards/cosine_scaled_reward": 0.5377213880419731,
+ "rewards/format_reward": 0.9921874925494194,
+ "step": 110
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 821.4765930175781,
+ "epoch": 1.671641791044776,
+ "grad_norm": 0.15574952960014343,
+ "learning_rate": 8.793690568899215e-08,
+ "loss": 0.0436,
+ "num_tokens": 86549257.0,
+ "reward": 2.299679785966873,
+ "reward_std": 0.36411611922085285,
+ "rewards/accuracy_reward": 0.7656249925494194,
+ "rewards/cosine_scaled_reward": 0.5474475063383579,
+ "rewards/format_reward": 0.9866071417927742,
+ "step": 111
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 818.6038360595703,
+ "epoch": 1.6865671641791045,
+ "grad_norm": 0.15562310814857483,
+ "learning_rate": 8.066471602728803e-08,
+ "loss": 0.0308,
+ "num_tokens": 87400390.0,
+ "reward": 2.3183076828718185,
+ "reward_std": 0.29475370794534683,
+ "rewards/accuracy_reward": 0.7723214328289032,
+ "rewards/cosine_scaled_reward": 0.5616111867129803,
+ "rewards/format_reward": 0.9843749925494194,
+ "step": 112
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 774.2924423217773,
+ "epoch": 1.7014925373134329,
+ "grad_norm": 0.17490361630916595,
+ "learning_rate": 7.36799178229539e-08,
+ "loss": 0.0194,
+ "num_tokens": 88221964.0,
+ "reward": 2.239181011915207,
+ "reward_std": 0.3596949577331543,
+ "rewards/accuracy_reward": 0.7176339253783226,
+ "rewards/cosine_scaled_reward": 0.5315916165709496,
+ "rewards/format_reward": 0.9899553507566452,
+ "step": 113
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 774.7154388427734,
+ "epoch": 1.716417910447761,
+ "grad_norm": 0.244772270321846,
+ "learning_rate": 6.698729810778064e-08,
+ "loss": 0.022,
+ "num_tokens": 89047949.0,
+ "reward": 2.343540608882904,
+ "reward_std": 0.3898981437087059,
+ "rewards/accuracy_reward": 0.777901791036129,
+ "rewards/cosine_scaled_reward": 0.5756833851337433,
+ "rewards/format_reward": 0.9899553433060646,
+ "step": 114
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 840.091552734375,
+ "epoch": 1.7313432835820897,
+ "grad_norm": 0.14277122914791107,
+ "learning_rate": 6.059144366901736e-08,
+ "loss": 0.0168,
+ "num_tokens": 89929215.0,
+ "reward": 2.2201480120420456,
+ "reward_std": 0.3869924359023571,
+ "rewards/accuracy_reward": 0.7209821492433548,
+ "rewards/cosine_scaled_reward": 0.518138974905014,
+ "rewards/format_reward": 0.9810267761349678,
+ "step": 115
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 766.7477951049805,
+ "epoch": 1.7462686567164178,
+ "grad_norm": 0.26776042580604553,
+ "learning_rate": 5.44967379058161e-08,
+ "loss": 0.0413,
+ "num_tokens": 90736805.0,
+ "reward": 2.255069524049759,
+ "reward_std": 0.3386665191501379,
+ "rewards/accuracy_reward": 0.737723208963871,
+ "rewards/cosine_scaled_reward": 0.5307390131056309,
+ "rewards/format_reward": 0.9866071343421936,
+ "step": 116
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 829.6663284301758,
+ "epoch": 1.7611940298507462,
+ "grad_norm": 0.2277984321117401,
+ "learning_rate": 4.870735782506979e-08,
+ "loss": 0.0148,
+ "num_tokens": 91635634.0,
+ "reward": 2.1820897459983826,
+ "reward_std": 0.4220114853233099,
+ "rewards/accuracy_reward": 0.7087053582072258,
+ "rewards/cosine_scaled_reward": 0.4767325222492218,
+ "rewards/format_reward": 0.9966517761349678,
+ "step": 117
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 798.0044860839844,
+ "epoch": 1.7761194029850746,
+ "grad_norm": 0.18910035490989685,
+ "learning_rate": 4.322727117869951e-08,
+ "loss": 0.0163,
+ "num_tokens": 92477054.0,
+ "reward": 2.2727625370025635,
+ "reward_std": 0.4164229966700077,
+ "rewards/accuracy_reward": 0.7555803582072258,
+ "rewards/cosine_scaled_reward": 0.5261106304824352,
+ "rewards/format_reward": 0.991071417927742,
+ "step": 118
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 818.9799499511719,
+ "epoch": 1.7910447761194028,
+ "grad_norm": 0.18116088211536407,
+ "learning_rate": 3.806023374435663e-08,
+ "loss": 0.0207,
+ "num_tokens": 93336268.0,
+ "reward": 2.1996723413467407,
+ "reward_std": 0.4445042908191681,
+ "rewards/accuracy_reward": 0.7109375,
+ "rewards/cosine_scaled_reward": 0.5043596625328064,
+ "rewards/format_reward": 0.9843749925494194,
+ "step": 119
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 791.8013687133789,
+ "epoch": 1.8059701492537314,
+ "grad_norm": 0.16948001086711884,
+ "learning_rate": 3.3209786751399184e-08,
+ "loss": 0.0363,
+ "num_tokens": 94184666.0,
+ "reward": 2.3499678671360016,
+ "reward_std": 0.3877370711416006,
+ "rewards/accuracy_reward": 0.777901791036129,
+ "rewards/cosine_scaled_reward": 0.586574912071228,
+ "rewards/format_reward": 0.9854910671710968,
+ "step": 120
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 779.1897659301758,
+ "epoch": 1.8208955223880596,
+ "grad_norm": 0.15338537096977234,
+ "learning_rate": 2.8679254453910785e-08,
+ "loss": 0.0161,
+ "num_tokens": 95013124.0,
+ "reward": 2.2275805920362473,
+ "reward_std": 0.3441179431974888,
+ "rewards/accuracy_reward": 0.7109374925494194,
+ "rewards/cosine_scaled_reward": 0.5188751742243767,
+ "rewards/format_reward": 0.9977678507566452,
+ "step": 121
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 829.8382034301758,
+ "epoch": 1.835820895522388,
+ "grad_norm": 0.13179580867290497,
+ "learning_rate": 2.4471741852423233e-08,
+ "loss": 0.0287,
+ "num_tokens": 95903451.0,
+ "reward": 2.1736037135124207,
+ "reward_std": 0.4318021424114704,
+ "rewards/accuracy_reward": 0.7142857164144516,
+ "rewards/cosine_scaled_reward": 0.47717495635151863,
+ "rewards/format_reward": 0.9821428507566452,
+ "step": 122
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 806.5413360595703,
+ "epoch": 1.8507462686567164,
+ "grad_norm": 0.29391592741012573,
+ "learning_rate": 2.0590132565903473e-08,
+ "loss": 0.0311,
+ "num_tokens": 96759152.0,
+ "reward": 2.2691119611263275,
+ "reward_std": 0.4518252518028021,
+ "rewards/accuracy_reward": 0.7589285746216774,
+ "rewards/cosine_scaled_reward": 0.5313886553049088,
+ "rewards/format_reward": 0.9787946343421936,
+ "step": 123
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 779.5937881469727,
+ "epoch": 1.8656716417910446,
+ "grad_norm": 0.15708433091640472,
+ "learning_rate": 1.7037086855465898e-08,
+ "loss": 0.0199,
+ "num_tokens": 97582132.0,
+ "reward": 2.251932591199875,
+ "reward_std": 0.44112248346209526,
+ "rewards/accuracy_reward": 0.7421874925494194,
+ "rewards/cosine_scaled_reward": 0.5164414867758751,
+ "rewards/format_reward": 0.9933035597205162,
+ "step": 124
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 813.1272659301758,
+ "epoch": 1.8805970149253732,
+ "grad_norm": 0.1673169732093811,
+ "learning_rate": 1.3815039801161722e-08,
+ "loss": 0.0249,
+ "num_tokens": 98437462.0,
+ "reward": 2.205630913376808,
+ "reward_std": 0.42455647699534893,
+ "rewards/accuracy_reward": 0.7142857164144516,
+ "rewards/cosine_scaled_reward": 0.5025058649480343,
+ "rewards/format_reward": 0.9888392835855484,
+ "step": 125
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 741.6920013427734,
+ "epoch": 1.8955223880597014,
+ "grad_norm": 0.1612333208322525,
+ "learning_rate": 1.0926199633097154e-08,
+ "loss": 0.0216,
+ "num_tokens": 99228194.0,
+ "reward": 2.3898730278015137,
+ "reward_std": 0.3898141644895077,
+ "rewards/accuracy_reward": 0.7890625149011612,
+ "rewards/cosine_scaled_reward": 0.6075067967176437,
+ "rewards/format_reward": 0.9933035597205162,
+ "step": 126
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 789.6574096679688,
+ "epoch": 1.9104477611940298,
+ "grad_norm": 0.2855228781700134,
+ "learning_rate": 8.372546218022746e-09,
+ "loss": 0.0362,
+ "num_tokens": 100072663.0,
+ "reward": 2.211760714650154,
+ "reward_std": 0.3630409985780716,
+ "rewards/accuracy_reward": 0.7142857238650322,
+ "rewards/cosine_scaled_reward": 0.5108677893877029,
+ "rewards/format_reward": 0.9866071417927742,
+ "step": 127
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 793.2388687133789,
+ "epoch": 1.9253731343283582,
+ "grad_norm": 0.1790562868118286,
+ "learning_rate": 6.15582970243117e-09,
+ "loss": 0.0324,
+ "num_tokens": 100913773.0,
+ "reward": 2.358086109161377,
+ "reward_std": 0.3747531082481146,
+ "rewards/accuracy_reward": 0.7868303582072258,
+ "rewards/cosine_scaled_reward": 0.5824163034558296,
+ "rewards/format_reward": 0.9888392761349678,
+ "step": 128
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 805.0513763427734,
+ "epoch": 1.9402985074626866,
+ "grad_norm": 0.20580174028873444,
+ "learning_rate": 4.277569313094809e-09,
+ "loss": 0.0337,
+ "num_tokens": 101758427.0,
+ "reward": 2.3048039972782135,
+ "reward_std": 0.413201667368412,
+ "rewards/accuracy_reward": 0.7622767835855484,
+ "rewards/cosine_scaled_reward": 0.5503396540880203,
+ "rewards/format_reward": 0.9921874925494194,
+ "step": 129
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 825.1707992553711,
+ "epoch": 1.955223880597015,
+ "grad_norm": 0.1606811136007309,
+ "learning_rate": 2.739052315863355e-09,
+ "loss": 0.0124,
+ "num_tokens": 102639524.0,
+ "reward": 2.1286870390176773,
+ "reward_std": 0.40390729531645775,
+ "rewards/accuracy_reward": 0.6796875,
+ "rewards/cosine_scaled_reward": 0.45457978174090385,
+ "rewards/format_reward": 0.9944196343421936,
+ "step": 130
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 780.3303833007812,
+ "epoch": 1.9701492537313432,
+ "grad_norm": 0.1538384109735489,
+ "learning_rate": 1.541333133436018e-09,
+ "loss": 0.0104,
+ "num_tokens": 103465868.0,
+ "reward": 2.334537535905838,
+ "reward_std": 0.37909975461661816,
+ "rewards/accuracy_reward": 0.7723214328289032,
+ "rewards/cosine_scaled_reward": 0.5700285099446774,
+ "rewards/format_reward": 0.9921874925494194,
+ "step": 131
+ },
+ {
+ "clip_ratio": 0.0,
+ "completion_length": 807.1284713745117,
+ "epoch": 1.9850746268656716,
+ "grad_norm": 0.15763509273529053,
+ "learning_rate": 6.852326227130833e-10,
+ "loss": 0.0277,
+ "num_tokens": 104326476.0,
+ "reward": 2.2643921971321106,
+ "reward_std": 0.3899136632680893,
+ "rewards/accuracy_reward": 0.7410714328289032,
+ "rewards/cosine_scaled_reward": 0.5277849473059177,
+ "rewards/format_reward": 0.995535708963871,
+ "step": 132
+ },
+ {
+ "epoch": 1.9850746268656716,
+ "step": 132,
+ "total_flos": 0.0,
+ "train_loss": 0.02113412496052633,
+ "train_runtime": 52235.1468,
+ "train_samples_per_second": 0.287,
+ "train_steps_per_second": 0.003
+ }
+ ],
+ "logging_steps": 1,
+ "max_steps": 134,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 2,
+ "save_steps": 500,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 16,
+ "trial_name": null,
+ "trial_params": null
+ }