bimabk committed on
Commit f52aebd · verified · 1 Parent(s): 4cacedb

Upload task output 63b2db3d-a057-429f-9319-84e8338dbfb9

README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: None
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.15.1
adapter_config.json ADDED
@@ -0,0 +1,36 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": null,
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 256,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 128,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "qkv_proj",
+ "gate_up_proj",
+ "down_proj",
+ "o_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_rslora": false
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4ca99e8b2718ee22e8b79e961cc6b99c72ce364a268788e2c3335e891be5181
+ size 805341552
added_tokens.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "<|assistant|>": 32001,
+ "<|endoftext|>": 32000,
+ "<|end|>": 32007,
+ "<|placeholder1|>": 32002,
+ "<|placeholder2|>": 32003,
+ "<|placeholder3|>": 32004,
+ "<|placeholder4|>": 32005,
+ "<|placeholder5|>": 32008,
+ "<|placeholder6|>": 32009,
+ "<|system|>": 32006,
+ "<|user|>": 32010
+ }
loss.txt ADDED
@@ -0,0 +1 @@
+ 358,0.6298339366912842
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,132 @@
+ {
+ "add_bos_token": false,
+ "add_eos_token": false,
+ "add_prefix_space": null,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": false
+ },
+ "32000": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32001": {
+ "content": "<|assistant|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32002": {
+ "content": "<|placeholder1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32003": {
+ "content": "<|placeholder2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32004": {
+ "content": "<|placeholder3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32005": {
+ "content": "<|placeholder4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32006": {
+ "content": "<|system|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32007": {
+ "content": "<|end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32008": {
+ "content": "<|placeholder5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32009": {
+ "content": "<|placeholder6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "32010": {
+ "content": "<|user|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "chat_template": "{% for message in messages %}{% if message['role'] == 'system' %}{{'<|system|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'user' %}{{'<|user|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'assistant' %}{{'<|assistant|>\n' + message['content'] + '<|end|>\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "extra_special_tokens": {},
+ "legacy": false,
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "padding_side": "left",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
trainer_state.json ADDED
@@ -0,0 +1,1115 @@
+ {
+ "best_global_step": null,
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.99860529986053,
+ "eval_steps": 500,
+ "global_step": 358,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.01394700139470014,
+ "grad_norm": 8.443673133850098,
+ "learning_rate": 3.1067090740782183e-06,
+ "logits/chosen": 13.965856552124023,
+ "logits/rejected": 15.354721069335938,
+ "logps/chosen": -263.2093200683594,
+ "logps/rejected": -286.745361328125,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.3083333373069763,
+ "rewards/chosen": 0.001035079127177596,
+ "rewards/margins": 0.004346395842730999,
+ "rewards/rejected": -0.003311316715553403,
+ "step": 5
+ },
+ {
+ "epoch": 0.02789400278940028,
+ "grad_norm": 9.05903434753418,
+ "learning_rate": 6.99009541667599e-06,
+ "logits/chosen": 13.079957962036133,
+ "logits/rejected": 15.2965726852417,
+ "logps/chosen": -246.61495971679688,
+ "logps/rejected": -292.19561767578125,
+ "loss": 0.6512,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.004167428705841303,
+ "rewards/margins": 0.10341107845306396,
+ "rewards/rejected": -0.10757851600646973,
+ "step": 10
+ },
+ {
+ "epoch": 0.04184100418410042,
+ "grad_norm": 7.4054036140441895,
+ "learning_rate": 1.0873481759273766e-05,
+ "logits/chosen": 13.588569641113281,
+ "logits/rejected": 14.664776802062988,
+ "logps/chosen": -250.9985809326172,
+ "logps/rejected": -271.63397216796875,
+ "loss": 0.6306,
+ "rewards/accuracies": 0.6750000715255737,
+ "rewards/chosen": -0.07446910440921783,
+ "rewards/margins": 0.2632213234901428,
+ "rewards/rejected": -0.33769041299819946,
+ "step": 15
+ },
+ {
+ "epoch": 0.05578800557880056,
+ "grad_norm": 7.973468780517578,
+ "learning_rate": 1.4756868101871536e-05,
+ "logits/chosen": 10.776374816894531,
+ "logits/rejected": 12.654977798461914,
+ "logps/chosen": -219.0859375,
+ "logps/rejected": -279.8912048339844,
+ "loss": 0.5803,
+ "rewards/accuracies": 0.7083333730697632,
+ "rewards/chosen": -0.15387776494026184,
+ "rewards/margins": 0.5580573678016663,
+ "rewards/rejected": -0.7119351625442505,
+ "step": 20
+ },
+ {
+ "epoch": 0.0697350069735007,
+ "grad_norm": 8.493500709533691,
+ "learning_rate": 1.864025444446931e-05,
+ "logits/chosen": 11.028432846069336,
+ "logits/rejected": 12.664867401123047,
+ "logps/chosen": -269.8415832519531,
+ "logps/rejected": -324.97613525390625,
+ "loss": 0.6957,
+ "rewards/accuracies": 0.6833333969116211,
+ "rewards/chosen": -0.31670600175857544,
+ "rewards/margins": 0.7376400232315063,
+ "rewards/rejected": -1.0543458461761475,
+ "step": 25
+ },
+ {
+ "epoch": 0.08368200836820083,
+ "grad_norm": 6.57462739944458,
+ "learning_rate": 2.2523640787067085e-05,
+ "logits/chosen": 12.154653549194336,
+ "logits/rejected": 13.542457580566406,
+ "logps/chosen": -250.53341674804688,
+ "logps/rejected": -290.4168395996094,
+ "loss": 0.6559,
+ "rewards/accuracies": 0.6750000715255737,
+ "rewards/chosen": -0.11686629056930542,
+ "rewards/margins": 0.5811147093772888,
+ "rewards/rejected": -0.6979809999465942,
+ "step": 30
+ },
+ {
+ "epoch": 0.09762900976290098,
+ "grad_norm": 7.757661819458008,
+ "learning_rate": 2.6407027129664858e-05,
+ "logits/chosen": 11.735052108764648,
+ "logits/rejected": 12.871235847473145,
+ "logps/chosen": -196.1747589111328,
+ "logps/rejected": -223.5957794189453,
+ "loss": 0.6182,
+ "rewards/accuracies": 0.6333333253860474,
+ "rewards/chosen": -0.00016996636986732483,
+ "rewards/margins": 0.3299906551837921,
+ "rewards/rejected": -0.33016061782836914,
+ "step": 35
+ },
+ {
+ "epoch": 0.11157601115760112,
+ "grad_norm": 9.74497127532959,
+ "learning_rate": 2.7182958819900885e-05,
+ "logits/chosen": 10.667373657226562,
+ "logits/rejected": 12.847865104675293,
+ "logps/chosen": -208.2180633544922,
+ "logps/rejected": -302.39971923828125,
+ "loss": 0.5985,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.03828881308436394,
+ "rewards/margins": 0.43971744179725647,
+ "rewards/rejected": -0.4780062735080719,
+ "step": 40
+ },
+ {
+ "epoch": 0.12552301255230125,
+ "grad_norm": 9.502607345581055,
+ "learning_rate": 2.7179930095042222e-05,
+ "logits/chosen": 11.666415214538574,
+ "logits/rejected": 13.399192810058594,
+ "logps/chosen": -252.95516967773438,
+ "logps/rejected": -297.4845275878906,
+ "loss": 0.6683,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -0.2012624740600586,
+ "rewards/margins": 0.4417993426322937,
+ "rewards/rejected": -0.6430618166923523,
+ "step": 45
+ },
+ {
+ "epoch": 0.1394700139470014,
+ "grad_norm": 6.734437942504883,
+ "learning_rate": 2.717457231667877e-05,
+ "logits/chosen": 11.400947570800781,
+ "logits/rejected": 12.905207633972168,
+ "logps/chosen": -199.80596923828125,
+ "logps/rejected": -254.4221954345703,
+ "loss": 0.5718,
+ "rewards/accuracies": 0.6416667103767395,
+ "rewards/chosen": -0.022540345788002014,
+ "rewards/margins": 0.5749102830886841,
+ "rewards/rejected": -0.5974506139755249,
+ "step": 50
+ },
+ {
+ "epoch": 0.15341701534170155,
+ "grad_norm": 6.349542617797852,
+ "learning_rate": 2.7166886709384802e-05,
+ "logits/chosen": 9.220715522766113,
+ "logits/rejected": 10.65298080444336,
+ "logps/chosen": -187.50601196289062,
+ "logps/rejected": -232.32241821289062,
+ "loss": 0.6379,
+ "rewards/accuracies": 0.6583333015441895,
+ "rewards/chosen": -0.3121258318424225,
+ "rewards/margins": 0.4689728617668152,
+ "rewards/rejected": -0.7810987234115601,
+ "step": 55
+ },
+ {
+ "epoch": 0.16736401673640167,
+ "grad_norm": 6.804388046264648,
+ "learning_rate": 2.715687502978336e-05,
+ "logits/chosen": 10.728836059570312,
+ "logits/rejected": 11.548182487487793,
+ "logps/chosen": -220.2592010498047,
+ "logps/rejected": -248.6510009765625,
+ "loss": 0.5624,
+ "rewards/accuracies": 0.6916667222976685,
+ "rewards/chosen": -0.24719354510307312,
+ "rewards/margins": 0.5707910656929016,
+ "rewards/rejected": -0.8179847002029419,
+ "step": 60
+ },
+ {
+ "epoch": 0.18131101813110181,
+ "grad_norm": 8.308326721191406,
+ "learning_rate": 2.714453956614478e-05,
+ "logits/chosen": 9.632620811462402,
+ "logits/rejected": 10.766716003417969,
+ "logps/chosen": -234.59494018554688,
+ "logps/rejected": -246.2835235595703,
+ "loss": 0.6495,
+ "rewards/accuracies": 0.6500000357627869,
+ "rewards/chosen": -0.4380224347114563,
+ "rewards/margins": 0.505369246006012,
+ "rewards/rejected": -0.9433916211128235,
+ "step": 65
+ },
+ {
+ "epoch": 0.19525801952580196,
+ "grad_norm": 5.469903945922852,
+ "learning_rate": 2.7129883137863668e-05,
+ "logits/chosen": 10.187846183776855,
+ "logits/rejected": 11.750052452087402,
+ "logps/chosen": -201.72952270507812,
+ "logps/rejected": -254.0623779296875,
+ "loss": 0.6407,
+ "rewards/accuracies": 0.6166666746139526,
+ "rewards/chosen": -0.28986990451812744,
+ "rewards/margins": 0.4815855026245117,
+ "rewards/rejected": -0.7714553475379944,
+ "step": 70
+ },
+ {
+ "epoch": 0.20920502092050208,
+ "grad_norm": 6.822120189666748,
+ "learning_rate": 2.7112909094814497e-05,
+ "logits/chosen": 11.570058822631836,
+ "logits/rejected": 12.092260360717773,
+ "logps/chosen": -201.8399658203125,
+ "logps/rejected": -255.96774291992188,
+ "loss": 0.5926,
+ "rewards/accuracies": 0.6500000357627869,
+ "rewards/chosen": -0.17586487531661987,
+ "rewards/margins": 0.4662502706050873,
+ "rewards/rejected": -0.6421152353286743,
+ "step": 75
+ },
+ {
+ "epoch": 0.22315202231520223,
+ "grad_norm": 6.789912223815918,
+ "learning_rate": 2.7093621316585976e-05,
+ "logits/chosen": 9.428289413452148,
+ "logits/rejected": 12.114514350891113,
+ "logps/chosen": -199.76634216308594,
+ "logps/rejected": -259.3163146972656,
+ "loss": 0.6011,
+ "rewards/accuracies": 0.7083333730697632,
+ "rewards/chosen": -0.12371706962585449,
+ "rewards/margins": 0.6005428433418274,
+ "rewards/rejected": -0.7242598533630371,
+ "step": 80
+ },
+ {
+ "epoch": 0.23709902370990238,
+ "grad_norm": 8.106010437011719,
+ "learning_rate": 2.7072024211594312e-05,
+ "logits/chosen": 10.688722610473633,
+ "logits/rejected": 11.625075340270996,
+ "logps/chosen": -221.6556854248047,
+ "logps/rejected": -274.34808349609375,
+ "loss": 0.6676,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.237966850399971,
+ "rewards/margins": 0.4688500761985779,
+ "rewards/rejected": -0.7068168520927429,
+ "step": 85
+ },
+ {
+ "epoch": 0.2510460251046025,
+ "grad_norm": 7.263269901275635,
+ "learning_rate": 2.7048122716075636e-05,
+ "logits/chosen": 12.357452392578125,
+ "logits/rejected": 14.314582824707031,
+ "logps/chosen": -224.59451293945312,
+ "logps/rejected": -291.54840087890625,
+ "loss": 0.6033,
+ "rewards/accuracies": 0.6666667461395264,
+ "rewards/chosen": -0.07308916002511978,
+ "rewards/margins": 0.504932165145874,
+ "rewards/rejected": -0.5780213475227356,
+ "step": 90
+ },
+ {
+ "epoch": 0.2649930264993027,
+ "grad_norm": 8.925468444824219,
+ "learning_rate": 2.7021922292957776e-05,
+ "logits/chosen": 12.812413215637207,
+ "logits/rejected": 13.796060562133789,
+ "logps/chosen": -230.6768798828125,
+ "logps/rejected": -276.98828125,
+ "loss": 0.6054,
+ "rewards/accuracies": 0.6916667222976685,
+ "rewards/chosen": -0.03974637761712074,
+ "rewards/margins": 0.5660965442657471,
+ "rewards/rejected": -0.6058429479598999,
+ "step": 95
+ },
+ {
+ "epoch": 0.2789400278940028,
+ "grad_norm": 5.422610282897949,
+ "learning_rate": 2.6993428930611634e-05,
+ "logits/chosen": 12.078475952148438,
+ "logits/rejected": 14.003347396850586,
+ "logps/chosen": -229.6229705810547,
+ "logps/rejected": -276.1091613769531,
+ "loss": 0.5103,
+ "rewards/accuracies": 0.7916666269302368,
+ "rewards/chosen": 0.10478214919567108,
+ "rewards/margins": 0.8226801156997681,
+ "rewards/rejected": -0.7178980112075806,
+ "step": 100
+ },
+ {
+ "epoch": 0.2928870292887029,
+ "grad_norm": 7.437990188598633,
+ "learning_rate": 2.69626491414825e-05,
+ "logits/chosen": 11.215384483337402,
+ "logits/rejected": 14.319795608520508,
+ "logps/chosen": -253.2958221435547,
+ "logps/rejected": -353.27728271484375,
+ "loss": 0.4647,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -0.20068514347076416,
+ "rewards/margins": 1.1105215549468994,
+ "rewards/rejected": -1.3112064599990845,
+ "step": 105
+ },
+ {
+ "epoch": 0.3068340306834031,
+ "grad_norm": 4.438156604766846,
+ "learning_rate": 2.6929589960601567e-05,
+ "logits/chosen": 10.04852294921875,
+ "logits/rejected": 11.834890365600586,
+ "logps/chosen": -177.2984619140625,
+ "logps/rejected": -235.3513946533203,
+ "loss": 0.5941,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": -0.14255599677562714,
+ "rewards/margins": 1.0380289554595947,
+ "rewards/rejected": -1.1805849075317383,
+ "step": 110
+ },
+ {
+ "epoch": 0.3207810320781032,
+ "grad_norm": 5.81158971786499,
+ "learning_rate": 2.689425894397799e-05,
+ "logits/chosen": 11.626302719116211,
+ "logits/rejected": 13.641179084777832,
+ "logps/chosen": -258.5274353027344,
+ "logps/rejected": -290.1232604980469,
+ "loss": 0.7585,
+ "rewards/accuracies": 0.6916667222976685,
+ "rewards/chosen": -0.3435746729373932,
+ "rewards/margins": 0.7051381468772888,
+ "rewards/rejected": -1.0487128496170044,
+ "step": 115
+ },
+ {
+ "epoch": 0.33472803347280333,
+ "grad_norm": 6.80866003036499,
+ "learning_rate": 2.685666416687189e-05,
+ "logits/chosen": 12.343037605285645,
+ "logits/rejected": 14.431178092956543,
+ "logps/chosen": -240.32119750976562,
+ "logps/rejected": -306.0838623046875,
+ "loss": 0.595,
+ "rewards/accuracies": 0.7166666984558105,
+ "rewards/chosen": 0.007438424974679947,
+ "rewards/margins": 0.7371615171432495,
+ "rewards/rejected": -0.7297230362892151,
+ "step": 120
+ },
+ {
+ "epoch": 0.3486750348675035,
+ "grad_norm": 5.887704372406006,
+ "learning_rate": 2.6816814221948682e-05,
+ "logits/chosen": 12.360528945922852,
+ "logits/rejected": 13.354936599731445,
+ "logps/chosen": -229.65194702148438,
+ "logps/rejected": -286.4953918457031,
+ "loss": 0.5958,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": 0.04780165106058121,
+ "rewards/margins": 0.5758650898933411,
+ "rewards/rejected": -0.5280634164810181,
+ "step": 125
+ },
+ {
+ "epoch": 0.36262203626220363,
+ "grad_norm": 6.528600215911865,
+ "learning_rate": 2.6774718217315124e-05,
+ "logits/chosen": 12.22896671295166,
+ "logits/rejected": 14.30839729309082,
+ "logps/chosen": -258.0196838378906,
+ "logps/rejected": -312.25018310546875,
+ "loss": 0.5731,
+ "rewards/accuracies": 0.7166666984558105,
+ "rewards/chosen": -0.1742248833179474,
+ "rewards/margins": 0.7956671118736267,
+ "rewards/rejected": -0.9698920249938965,
+ "step": 130
+ },
+ {
+ "epoch": 0.37656903765690375,
+ "grad_norm": 4.8263702392578125,
+ "learning_rate": 2.6730385774437575e-05,
+ "logits/chosen": 11.001718521118164,
+ "logits/rejected": 12.369864463806152,
+ "logps/chosen": -186.54306030273438,
+ "logps/rejected": -229.1658935546875,
+ "loss": 0.5747,
+ "rewards/accuracies": 0.7333333492279053,
+ "rewards/chosen": -0.10876262187957764,
+ "rewards/margins": 0.783820390701294,
+ "rewards/rejected": -0.8925830721855164,
+ "step": 135
+ },
+ {
+ "epoch": 0.3905160390516039,
+ "grad_norm": 6.88525915145874,
+ "learning_rate": 2.668382702594289e-05,
+ "logits/chosen": 12.428079605102539,
+ "logits/rejected": 13.376548767089844,
+ "logps/chosen": -244.35336303710938,
+ "logps/rejected": -314.2310485839844,
+ "loss": 0.5727,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.19110675156116486,
+ "rewards/margins": 0.8208184242248535,
+ "rewards/rejected": -1.011925220489502,
+ "step": 140
+ },
+ {
+ "epoch": 0.40446304044630405,
+ "grad_norm": 5.504042148590088,
+ "learning_rate": 2.663505261330254e-05,
+ "logits/chosen": 11.734379768371582,
+ "logits/rejected": 13.473363876342773,
+ "logps/chosen": -204.04251098632812,
+ "logps/rejected": -258.86737060546875,
+ "loss": 0.6128,
+ "rewards/accuracies": 0.6666666865348816,
+ "rewards/chosen": -0.17432288825511932,
+ "rewards/margins": 0.4802930951118469,
+ "rewards/rejected": -0.6546159982681274,
+ "step": 145
+ },
+ {
+ "epoch": 0.41841004184100417,
+ "grad_norm": 6.70412540435791,
+ "learning_rate": 2.6584073684400373e-05,
+ "logits/chosen": 13.045954704284668,
+ "logits/rejected": 13.6720552444458,
+ "logps/chosen": -220.8832550048828,
+ "logps/rejected": -269.233642578125,
+ "loss": 0.6032,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -0.04318929463624954,
+ "rewards/margins": 0.618157684803009,
+ "rewards/rejected": -0.6613470315933228,
+ "step": 150
+ },
+ {
+ "epoch": 0.43235704323570434,
+ "grad_norm": 12.566320419311523,
+ "learning_rate": 2.653090189098466e-05,
+ "logits/chosen": 11.808868408203125,
+ "logits/rejected": 13.49694538116455,
+ "logps/chosen": -248.18212890625,
+ "logps/rejected": -286.3559265136719,
+ "loss": 0.6096,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -0.15974543988704681,
+ "rewards/margins": 0.6279758214950562,
+ "rewards/rejected": -0.7877213954925537,
+ "step": 155
+ },
+ {
+ "epoch": 0.44630404463040446,
+ "grad_norm": 6.7717509269714355,
+ "learning_rate": 2.647554938600497e-05,
+ "logits/chosen": 11.906153678894043,
+ "logits/rejected": 13.252202033996582,
+ "logps/chosen": -230.92593383789062,
+ "logps/rejected": -248.78488159179688,
+ "loss": 0.6474,
+ "rewards/accuracies": 0.6166666746139526,
+ "rewards/chosen": -0.20924147963523865,
+ "rewards/margins": 0.49998435378074646,
+ "rewards/rejected": -0.7092257738113403,
+ "step": 160
+ },
+ {
+ "epoch": 0.4602510460251046,
+ "grad_norm": 7.113363742828369,
+ "learning_rate": 2.6418028820834483e-05,
+ "logits/chosen": 11.409849166870117,
+ "logits/rejected": 13.500089645385742,
+ "logps/chosen": -242.9384765625,
+ "logps/rejected": -325.8098449707031,
+ "loss": 0.5703,
+ "rewards/accuracies": 0.7000000476837158,
+ "rewards/chosen": -0.4405299127101898,
+ "rewards/margins": 0.8166629076004028,
+ "rewards/rejected": -1.257192850112915,
+ "step": 165
+ },
+ {
+ "epoch": 0.47419804741980476,
+ "grad_norm": 4.8473429679870605,
+ "learning_rate": 2.6358353342378405e-05,
+ "logits/chosen": 8.604534149169922,
+ "logits/rejected": 11.609006881713867,
+ "logps/chosen": -185.0061492919922,
+ "logps/rejected": -256.83465576171875,
+ "loss": 0.4518,
+ "rewards/accuracies": 0.8166667222976685,
+ "rewards/chosen": -0.5300418734550476,
+ "rewards/margins": 1.1527934074401855,
+ "rewards/rejected": -1.682835340499878,
+ "step": 170
+ },
+ {
+ "epoch": 0.4881450488145049,
+ "grad_norm": 6.59335994720459,
+ "learning_rate": 2.6296536590069104e-05,
+ "logits/chosen": 9.987462997436523,
+ "logits/rejected": 11.572959899902344,
+ "logps/chosen": -253.36050415039062,
+ "logps/rejected": -306.0450439453125,
530
+ "loss": 0.7439,
531
+ "rewards/accuracies": 0.6666666269302368,
532
+ "rewards/chosen": -1.0800180435180664,
533
+ "rewards/margins": 0.8873661160469055,
534
+ "rewards/rejected": -1.9673839807510376,
535
+ "step": 175
536
+ },
537
+ {
538
+ "epoch": 0.502092050209205,
539
+ "grad_norm": 7.2572808265686035,
540
+ "learning_rate": 2.6232592692748676e-05,
541
+ "logits/chosen": 9.321784019470215,
542
+ "logits/rejected": 10.399984359741211,
543
+ "logps/chosen": -234.3979949951172,
544
+ "logps/rejected": -294.28961181640625,
545
+ "loss": 0.489,
546
+ "rewards/accuracies": 0.7916667461395264,
547
+ "rewards/chosen": -0.8451215624809265,
548
+ "rewards/margins": 1.1555068492889404,
549
+ "rewards/rejected": -2.0006284713745117,
550
+ "step": 180
551
+ },
552
+ {
553
+ "epoch": 0.5160390516039052,
554
+ "grad_norm": 6.049355506896973,
555
+ "learning_rate": 2.6166536265439664e-05,
556
+ "logits/chosen": 11.809592247009277,
557
+ "logits/rejected": 13.855669021606445,
558
+ "logps/chosen": -235.9927215576172,
559
+ "logps/rejected": -315.3490295410156,
560
+ "loss": 0.5868,
561
+ "rewards/accuracies": 0.6583333015441895,
562
+ "rewards/chosen": -0.6016494631767273,
563
+ "rewards/margins": 0.7112706899642944,
564
+ "rewards/rejected": -1.3129202127456665,
565
+ "step": 185
566
+ },
567
+ {
568
+ "epoch": 0.5299860529986054,
569
+ "grad_norm": 11.038729667663574,
570
+ "learning_rate": 2.609838240600464e-05,
571
+ "logits/chosen": 11.31627082824707,
572
+ "logits/rejected": 12.770919799804688,
573
+ "logps/chosen": -221.79501342773438,
574
+ "logps/rejected": -262.8851013183594,
575
+ "loss": 0.5937,
576
+ "rewards/accuracies": 0.6916667222976685,
577
+ "rewards/chosen": -0.5558231472969055,
578
+ "rewards/margins": 0.6753066778182983,
579
+ "rewards/rejected": -1.2311298847198486,
580
+ "step": 190
581
+ },
582
+ {
583
+ "epoch": 0.5439330543933054,
584
+ "grad_norm": 4.267746925354004,
585
+ "learning_rate": 2.602814669169543e-05,
586
+ "logits/chosen": 11.312234878540039,
587
+ "logits/rejected": 13.296246528625488,
588
+ "logps/chosen": -221.83663940429688,
589
+ "logps/rejected": -302.32623291015625,
590
+ "loss": 0.5809,
591
+ "rewards/accuracies": 0.7166666984558105,
592
+ "rewards/chosen": -0.5700381994247437,
593
+ "rewards/margins": 0.8077206611633301,
594
+ "rewards/rejected": -1.3777590990066528,
595
+ "step": 195
596
+ },
597
+ {
598
+ "epoch": 0.5578800557880056,
599
+ "grad_norm": 5.395613193511963,
600
+ "learning_rate": 2.5955845175592813e-05,
601
+ "logits/chosen": 11.410247802734375,
602
+ "logits/rejected": 13.515039443969727,
603
+ "logps/chosen": -232.266845703125,
604
+ "logps/rejected": -293.22467041015625,
605
+ "loss": 0.5553,
606
+ "rewards/accuracies": 0.7250000238418579,
607
+ "rewards/chosen": -0.5373277068138123,
608
+ "rewards/margins": 0.8851755857467651,
609
+ "rewards/rejected": -1.4225032329559326,
610
+ "step": 200
611
+ },
612
+ {
613
+ "epoch": 0.5718270571827058,
614
+ "grad_norm": 5.884355545043945,
615
+ "learning_rate": 2.5881494382937354e-05,
616
+ "logits/chosen": 11.658515930175781,
617
+ "logits/rejected": 13.375651359558105,
618
+ "logps/chosen": -212.52932739257812,
619
+ "logps/rejected": -286.71783447265625,
620
+ "loss": 0.5126,
621
+ "rewards/accuracies": 0.7250000238418579,
622
+ "rewards/chosen": -0.6171377897262573,
623
+ "rewards/margins": 1.1004387140274048,
624
+ "rewards/rejected": -1.7175763845443726,
625
+ "step": 205
626
+ },
627
+ {
628
+ "epoch": 0.5857740585774058,
629
+ "grad_norm": 8.411831855773926,
630
+ "learning_rate": 2.5805111307352483e-05,
631
+ "logits/chosen": 11.734652519226074,
632
+ "logits/rejected": 12.831028938293457,
633
+ "logps/chosen": -247.45736694335938,
634
+ "logps/rejected": -297.03485107421875,
635
+ "loss": 0.5907,
636
+ "rewards/accuracies": 0.7333332896232605,
637
+ "rewards/chosen": -0.6340610384941101,
638
+ "rewards/margins": 1.0966707468032837,
639
+ "rewards/rejected": -1.7307319641113281,
640
+ "step": 210
641
+ },
642
+ {
643
+ "epoch": 0.599721059972106,
644
+ "grad_norm": 6.429418087005615,
645
+ "learning_rate": 2.5726713406960365e-05,
646
+ "logits/chosen": 9.269506454467773,
647
+ "logits/rejected": 12.285856246948242,
648
+ "logps/chosen": -187.17591857910156,
649
+ "logps/rejected": -269.7003479003906,
650
+ "loss": 0.5349,
651
+ "rewards/accuracies": 0.7666667699813843,
652
+ "rewards/chosen": -0.5631422400474548,
653
+ "rewards/margins": 1.0805346965789795,
654
+ "rewards/rejected": -1.6436771154403687,
655
+ "step": 215
656
+ },
657
+ {
658
+ "epoch": 0.6136680613668062,
659
+ "grad_norm": 8.299524307250977,
660
+ "learning_rate": 2.5646318600391693e-05,
661
+ "logits/chosen": 10.429685592651367,
662
+ "logits/rejected": 11.87198543548584,
663
+ "logps/chosen": -234.75802612304688,
664
+ "logps/rejected": -284.33642578125,
665
+ "loss": 0.5775,
666
+ "rewards/accuracies": 0.6833333373069763,
667
+ "rewards/chosen": -0.5350970029830933,
668
+ "rewards/margins": 1.0643174648284912,
669
+ "rewards/rejected": -1.599414348602295,
670
+ "step": 220
671
+ },
672
+ {
673
+ "epoch": 0.6276150627615062,
674
+ "grad_norm": 3.9937915802001953,
675
+ "learning_rate": 2.556394526269021e-05,
676
+ "logits/chosen": 11.182074546813965,
677
+ "logits/rejected": 12.9561767578125,
678
+ "logps/chosen": -209.6283721923828,
679
+ "logps/rejected": -261.5816345214844,
680
+ "loss": 0.7124,
681
+ "rewards/accuracies": 0.625,
682
+ "rewards/chosen": -0.651893675327301,
683
+ "rewards/margins": 0.631694495677948,
684
+ "rewards/rejected": -1.283588171005249,
685
+ "step": 225
686
+ },
687
+ {
688
+ "epoch": 0.6415620641562064,
689
+ "grad_norm": 8.43625259399414,
690
+ "learning_rate": 2.5479612221112888e-05,
691
+ "logits/chosen": 11.575922966003418,
692
+ "logits/rejected": 12.958845138549805,
693
+ "logps/chosen": -241.3083953857422,
694
+ "logps/rejected": -291.8955993652344,
695
+ "loss": 0.6766,
696
+ "rewards/accuracies": 0.6666666269302368,
697
+ "rewards/chosen": -0.6825530529022217,
698
+ "rewards/margins": 0.6574732661247253,
699
+ "rewards/rejected": -1.3400263786315918,
700
+ "step": 230
701
+ },
702
+ {
703
+ "epoch": 0.6555090655509066,
704
+ "grad_norm": 4.0093817710876465,
705
+ "learning_rate": 2.5393338750826796e-05,
706
+ "logits/chosen": 12.776809692382812,
707
+ "logits/rejected": 14.406654357910156,
708
+ "logps/chosen": -229.2129669189453,
709
+ "logps/rejected": -300.2899475097656,
710
+ "loss": 0.5466,
711
+ "rewards/accuracies": 0.7000000476837158,
712
+ "rewards/chosen": -0.5617952346801758,
713
+ "rewards/margins": 0.752461850643158,
714
+ "rewards/rejected": -1.314257264137268,
715
+ "step": 235
716
+ },
717
+ {
718
+ "epoch": 0.6694560669456067,
719
+ "grad_norm": 6.296603679656982,
720
+ "learning_rate": 2.5305144570503554e-05,
721
+ "logits/chosen": 12.071104049682617,
722
+ "logits/rejected": 14.155171394348145,
723
+ "logps/chosen": -226.9972381591797,
724
+ "logps/rejected": -299.86993408203125,
725
+ "loss": 0.5546,
726
+ "rewards/accuracies": 0.7166667580604553,
727
+ "rewards/chosen": -0.5309011936187744,
728
+ "rewards/margins": 0.8765324354171753,
729
+ "rewards/rejected": -1.4074336290359497,
730
+ "step": 240
731
+ },
732
+ {
733
+ "epoch": 0.6834030683403068,
734
+ "grad_norm": 6.713784694671631,
735
+ "learning_rate": 2.5215049837812413e-05,
736
+ "logits/chosen": 11.046672821044922,
737
+ "logits/rejected": 13.329177856445312,
738
+ "logps/chosen": -222.3082275390625,
739
+ "logps/rejected": -302.4698181152344,
740
+ "loss": 0.4944,
741
+ "rewards/accuracies": 0.7583333253860474,
742
+ "rewards/chosen": -0.6500851511955261,
743
+ "rewards/margins": 1.216412901878357,
744
+ "rewards/rejected": -1.8664979934692383,
745
+ "step": 245
746
+ },
747
+ {
748
+ "epoch": 0.697350069735007,
749
+ "grad_norm": 3.64483904838562,
750
+ "learning_rate": 2.5123075144813044e-05,
751
+ "logits/chosen": 9.95425796508789,
752
+ "logits/rejected": 12.83178997039795,
753
+ "logps/chosen": -245.83987426757812,
754
+ "logps/rejected": -347.21807861328125,
755
+ "loss": 0.4497,
756
+ "rewards/accuracies": 0.8083333969116211,
757
+ "rewards/chosen": -0.7453610897064209,
758
+ "rewards/margins": 1.5202277898788452,
759
+ "rewards/rejected": -2.2655892372131348,
760
+ "step": 250
761
+ },
762
+ {
763
+ "epoch": 0.7112970711297071,
764
+ "grad_norm": 6.323943138122559,
765
+ "learning_rate": 2.5029241513248992e-05,
766
+ "logits/chosen": 10.718851089477539,
767
+ "logits/rejected": 11.787945747375488,
768
+ "logps/chosen": -243.6724853515625,
769
+ "logps/rejected": -275.3341064453125,
770
+ "loss": 0.7227,
771
+ "rewards/accuracies": 0.6499999761581421,
772
+ "rewards/chosen": -0.8060539364814758,
773
+ "rewards/margins": 0.8095922470092773,
774
+ "rewards/rejected": -1.6156460046768188,
775
+ "step": 255
776
+ },
777
+ {
778
+ "epoch": 0.7252440725244073,
779
+ "grad_norm": 7.24354887008667,
780
+ "learning_rate": 2.4933570389742975e-05,
781
+ "logits/chosen": 10.517350196838379,
782
+ "logits/rejected": 12.290593147277832,
783
+ "logps/chosen": -217.3165740966797,
784
+ "logps/rejected": -277.9285888671875,
785
+ "loss": 0.5759,
786
+ "rewards/accuracies": 0.7333332896232605,
787
+ "rewards/chosen": -0.4403415620326996,
788
+ "rewards/margins": 0.91583251953125,
789
+ "rewards/rejected": -1.3561739921569824,
790
+ "step": 260
791
+ },
792
+ {
793
+ "epoch": 0.7391910739191074,
794
+ "grad_norm": 5.851767063140869,
795
+ "learning_rate": 2.4836083640895016e-05,
796
+ "logits/chosen": 11.360766410827637,
797
+ "logits/rejected": 13.0468111038208,
798
+ "logps/chosen": -253.7987518310547,
799
+ "logps/rejected": -321.97967529296875,
800
+ "loss": 0.552,
801
+ "rewards/accuracies": 0.73333340883255,
802
+ "rewards/chosen": -0.37671542167663574,
803
+ "rewards/margins": 1.0203909873962402,
804
+ "rewards/rejected": -1.3971065282821655,
805
+ "step": 265
806
+ },
807
+ {
808
+ "epoch": 0.7531380753138075,
809
+ "grad_norm": 5.513597011566162,
810
+ "learning_rate": 2.473680354828461e-05,
811
+ "logits/chosen": 10.999284744262695,
812
+ "logits/rejected": 12.660871505737305,
813
+ "logps/chosen": -218.94735717773438,
814
+ "logps/rejected": -259.43133544921875,
815
+ "loss": 0.5662,
816
+ "rewards/accuracies": 0.7083333730697632,
817
+ "rewards/chosen": -0.4751812517642975,
818
+ "rewards/margins": 0.7894707322120667,
819
+ "rewards/rejected": -1.264651894569397,
820
+ "step": 270
821
+ },
822
+ {
823
+ "epoch": 0.7670850767085077,
824
+ "grad_norm": 5.750617504119873,
825
+ "learning_rate": 2.4635752803378063e-05,
826
+ "logits/chosen": 11.961132049560547,
827
+ "logits/rejected": 14.314129829406738,
828
+ "logps/chosen": -221.6836700439453,
829
+ "logps/rejected": -307.97467041015625,
830
+ "loss": 0.5598,
831
+ "rewards/accuracies": 0.7166666984558105,
832
+ "rewards/chosen": -0.42271748185157776,
833
+ "rewards/margins": 0.7973464727401733,
834
+ "rewards/rejected": -1.2200638055801392,
835
+ "step": 275
836
+ },
837
+ {
838
+ "epoch": 0.7810320781032078,
839
+ "grad_norm": 6.243062496185303,
840
+ "learning_rate": 2.453295450234211e-05,
841
+ "logits/chosen": 11.414163589477539,
842
+ "logits/rejected": 13.276272773742676,
843
+ "logps/chosen": -223.52529907226562,
844
+ "logps/rejected": -324.9430236816406,
845
+ "loss": 0.516,
846
+ "rewards/accuracies": 0.7833333015441895,
847
+ "rewards/chosen": -0.5223814249038696,
848
+ "rewards/margins": 1.0250855684280396,
849
+ "rewards/rejected": -1.5474669933319092,
850
+ "step": 280
851
+ },
852
+ {
853
+ "epoch": 0.7949790794979079,
854
+ "grad_norm": 7.279259204864502,
855
+ "learning_rate": 2.442843214076507e-05,
856
+ "logits/chosen": 11.585288047790527,
857
+ "logits/rejected": 12.664071083068848,
858
+ "logps/chosen": -244.99154663085938,
859
+ "logps/rejected": -254.84158325195312,
860
+ "loss": 0.5448,
861
+ "rewards/accuracies": 0.6916667222976685,
862
+ "rewards/chosen": -0.7385099530220032,
863
+ "rewards/margins": 0.9241247177124023,
864
+ "rewards/rejected": -1.6626346111297607,
865
+ "step": 285
866
+ },
867
+ {
868
+ "epoch": 0.8089260808926081,
869
+ "grad_norm": 4.353116512298584,
870
+ "learning_rate": 2.4322209608286686e-05,
871
+ "logits/chosen": 8.986726760864258,
872
+ "logits/rejected": 12.175418853759766,
873
+ "logps/chosen": -206.93594360351562,
874
+ "logps/rejected": -292.4324645996094,
875
+ "loss": 0.5075,
876
+ "rewards/accuracies": 0.7166666984558105,
877
+ "rewards/chosen": -0.7221769094467163,
878
+ "rewards/margins": 1.2281521558761597,
879
+ "rewards/rejected": -1.9503291845321655,
880
+ "step": 290
881
+ },
882
+ {
883
+ "epoch": 0.8228730822873083,
884
+ "grad_norm": 8.33502197265625,
885
+ "learning_rate": 2.421431118313789e-05,
886
+ "logits/chosen": 10.475484848022461,
887
+ "logits/rejected": 11.844476699829102,
888
+ "logps/chosen": -240.02206420898438,
889
+ "logps/rejected": -279.66064453125,
890
+ "loss": 0.7693,
891
+ "rewards/accuracies": 0.7250000238418579,
892
+ "rewards/chosen": -1.0663460493087769,
893
+ "rewards/margins": 0.940434455871582,
894
+ "rewards/rejected": -2.0067803859710693,
895
+ "step": 295
896
+ },
897
+ {
898
+ "epoch": 0.8368200836820083,
899
+ "grad_norm": 7.870750904083252,
900
+ "learning_rate": 2.41047615265918e-05,
901
+ "logits/chosen": 11.179079055786133,
902
+ "logits/rejected": 13.722526550292969,
903
+ "logps/chosen": -243.4619140625,
904
+ "logps/rejected": -338.6166687011719,
905
+ "loss": 0.5564,
906
+ "rewards/accuracies": 0.7333333492279053,
907
+ "rewards/chosen": -0.5845211744308472,
908
+ "rewards/margins": 1.252018690109253,
909
+ "rewards/rejected": -1.8365398645401,
910
+ "step": 300
911
+ },
912
+ {
913
+ "epoch": 0.8507670850767085,
914
+ "grad_norm": 5.815814018249512,
915
+ "learning_rate": 2.3993585677327107e-05,
916
+ "logits/chosen": 13.206197738647461,
917
+ "logits/rejected": 13.943652153015137,
918
+ "logps/chosen": -276.7415466308594,
919
+ "logps/rejected": -329.16949462890625,
920
+ "loss": 0.5341,
921
+ "rewards/accuracies": 0.7250000238418579,
922
+ "rewards/chosen": -0.4414283335208893,
923
+ "rewards/margins": 0.9674088358879089,
924
+ "rewards/rejected": -1.408837080001831,
925
+ "step": 305
926
+ },
927
+ {
928
+ "epoch": 0.8647140864714087,
929
+ "grad_norm": 8.953941345214844,
930
+ "learning_rate": 2.3880809045705262e-05,
931
+ "logits/chosen": 12.374226570129395,
932
+ "logits/rejected": 13.989924430847168,
933
+ "logps/chosen": -244.1103515625,
934
+ "logps/rejected": -294.4126281738281,
935
+ "loss": 0.5197,
936
+ "rewards/accuracies": 0.7416666746139526,
937
+ "rewards/chosen": -0.3111744523048401,
938
+ "rewards/margins": 1.076432228088379,
939
+ "rewards/rejected": -1.3876066207885742,
940
+ "step": 310
941
+ },
942
+ {
943
+ "epoch": 0.8786610878661087,
944
+ "grad_norm": 5.921610355377197,
945
+ "learning_rate": 2.3766457407962654e-05,
946
+ "logits/chosen": 11.312451362609863,
947
+ "logits/rejected": 13.527206420898438,
948
+ "logps/chosen": -234.7072296142578,
949
+ "logps/rejected": -290.8281555175781,
950
+ "loss": 0.6344,
951
+ "rewards/accuracies": 0.6583333611488342,
952
+ "rewards/chosen": -0.5945696234703064,
953
+ "rewards/margins": 0.7386828660964966,
954
+ "rewards/rejected": -1.3332524299621582,
955
+ "step": 315
956
+ },
957
+ {
958
+ "epoch": 0.8926080892608089,
959
+ "grad_norm": 7.662965774536133,
960
+ "learning_rate": 2.3650556900319204e-05,
961
+ "logits/chosen": 11.144214630126953,
962
+ "logits/rejected": 13.931811332702637,
963
+ "logps/chosen": -198.17422485351562,
964
+ "logps/rejected": -286.37152099609375,
965
+ "loss": 0.5295,
966
+ "rewards/accuracies": 0.7166666388511658,
967
+ "rewards/chosen": -0.3581700325012207,
968
+ "rewards/margins": 1.0556398630142212,
969
+ "rewards/rejected": -1.4138100147247314,
970
+ "step": 320
971
+ },
972
+ {
973
+ "epoch": 0.9065550906555091,
974
+ "grad_norm": 6.2693071365356445,
975
+ "learning_rate": 2.3533134013004666e-05,
976
+ "logits/chosen": 11.203069686889648,
977
+ "logits/rejected": 11.721672058105469,
978
+ "logps/chosen": -192.05380249023438,
979
+ "logps/rejected": -221.40042114257812,
980
+ "loss": 0.6338,
981
+ "rewards/accuracies": 0.7083333730697632,
982
+ "rewards/chosen": -0.56825852394104,
983
+ "rewards/margins": 0.6851181983947754,
984
+ "rewards/rejected": -1.253376841545105,
985
+ "step": 325
986
+ },
987
+ {
988
+ "epoch": 0.9205020920502092,
989
+ "grad_norm": 4.502702236175537,
990
+ "learning_rate": 2.341421558420403e-05,
991
+ "logits/chosen": 10.94641399383545,
992
+ "logits/rejected": 13.1921968460083,
993
+ "logps/chosen": -214.18911743164062,
994
+ "logps/rejected": -298.7375183105469,
995
+ "loss": 0.4812,
996
+ "rewards/accuracies": 0.7416666746139526,
997
+ "rewards/chosen": -0.6049180030822754,
998
+ "rewards/margins": 1.2558748722076416,
999
+ "rewards/rejected": -1.860793113708496,
1000
+ "step": 330
1001
+ },
1002
+ {
1003
+ "epoch": 0.9344490934449093,
1004
+ "grad_norm": 6.939857482910156,
1005
+ "learning_rate": 2.3293828793923365e-05,
1006
+ "logits/chosen": 11.944158554077148,
1007
+ "logits/rejected": 13.312121391296387,
1008
+ "logps/chosen": -258.62774658203125,
1009
+ "logps/rejected": -314.3907165527344,
1010
+ "loss": 0.619,
1011
+ "rewards/accuracies": 0.6583333015441895,
1012
+ "rewards/chosen": -0.5666269063949585,
1013
+ "rewards/margins": 1.0656505823135376,
1014
+ "rewards/rejected": -1.632277488708496,
1015
+ "step": 335
1016
+ },
1017
+ {
1018
+ "epoch": 0.9483960948396095,
1019
+ "grad_norm": 4.783708572387695,
1020
+ "learning_rate": 2.3172001157777566e-05,
1021
+ "logits/chosen": 11.259064674377441,
1022
+ "logits/rejected": 12.700227737426758,
1023
+ "logps/chosen": -222.77395629882812,
1024
+ "logps/rejected": -291.24884033203125,
1025
+ "loss": 0.6497,
1026
+ "rewards/accuracies": 0.658333420753479,
1027
+ "rewards/chosen": -0.5113255977630615,
1028
+ "rewards/margins": 0.866672158241272,
1029
+ "rewards/rejected": -1.377997636795044,
1030
+ "step": 340
1031
+ },
1032
+ {
1033
+ "epoch": 0.9623430962343096,
1034
+ "grad_norm": 4.230240821838379,
1035
+ "learning_rate": 2.3048760520701374e-05,
1036
+ "logits/chosen": 11.741998672485352,
1037
+ "logits/rejected": 13.120841979980469,
1038
+ "logps/chosen": -246.1442108154297,
1039
+ "logps/rejected": -311.65972900390625,
1040
+ "loss": 0.485,
1041
+ "rewards/accuracies": 0.7416666746139526,
1042
+ "rewards/chosen": -0.4931299090385437,
1043
+ "rewards/margins": 1.0040347576141357,
1044
+ "rewards/rejected": -1.4971646070480347,
1045
+ "step": 345
1046
+ },
1047
+ {
1048
+ "epoch": 0.9762900976290098,
1049
+ "grad_norm": 4.901912212371826,
1050
+ "learning_rate": 2.2924135050585152e-05,
1051
+ "logits/chosen": 11.310202598571777,
1052
+ "logits/rejected": 13.040916442871094,
1053
+ "logps/chosen": -223.5089111328125,
1054
+ "logps/rejected": -253.26953125,
1055
+ "loss": 0.5638,
1056
+ "rewards/accuracies": 0.7416666746139526,
1057
+ "rewards/chosen": -0.6479132771492004,
1058
+ "rewards/margins": 0.7726107835769653,
1059
+ "rewards/rejected": -1.4205242395401,
1060
+ "step": 350
1061
+ },
1062
+ {
1063
+ "epoch": 0.9902370990237099,
1064
+ "grad_norm": 4.653822898864746,
1065
+ "learning_rate": 2.2798153231836813e-05,
1066
+ "logits/chosen": 12.201348304748535,
1067
+ "logits/rejected": 13.601194381713867,
1068
+ "logps/chosen": -268.86328125,
1069
+ "logps/rejected": -299.11578369140625,
1070
+ "loss": 0.5367,
1071
+ "rewards/accuracies": 0.7416666746139526,
1072
+ "rewards/chosen": -0.5867849588394165,
1073
+ "rewards/margins": 1.1348934173583984,
1074
+ "rewards/rejected": -1.721678376197815,
1075
+ "step": 355
1076
+ },
1077
+ {
1078
+ "epoch": 0.99860529986053,
1079
+ "eval_logits/chosen": 11.91541862487793,
1080
+ "eval_logits/rejected": 12.808130264282227,
1081
+ "eval_logps/chosen": -229.4397430419922,
1082
+ "eval_logps/rejected": -286.59234619140625,
1083
+ "eval_loss": 0.6298339366912842,
1084
+ "eval_rewards/accuracies": 0.7099999785423279,
1085
+ "eval_rewards/chosen": -0.7923385500907898,
1086
+ "eval_rewards/margins": 0.9493054151535034,
1087
+ "eval_rewards/rejected": -1.7416436672210693,
1088
+ "eval_runtime": 24.5973,
1089
+ "eval_samples_per_second": 8.131,
1090
+ "eval_steps_per_second": 8.131,
1091
+ "step": 358
1092
+ }
1093
+ ],
1094
+ "logging_steps": 5,
1095
+ "max_steps": 1074,
1096
+ "num_input_tokens_seen": 0,
1097
+ "num_train_epochs": 3,
1098
+ "save_steps": 500,
1099
+ "stateful_callbacks": {
1100
+ "TrainerControl": {
1101
+ "args": {
1102
+ "should_epoch_stop": false,
1103
+ "should_evaluate": false,
1104
+ "should_log": false,
1105
+ "should_save": true,
1106
+ "should_training_stop": false
1107
+ },
1108
+ "attributes": {}
1109
+ }
1110
+ },
1111
+ "total_flos": 0.0,
1112
+ "train_batch_size": 12,
1113
+ "trial_name": null,
1114
+ "trial_params": null
1115
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ea11f63ac857a12c1f6cc99e62cd2c441d03650bbbe89deae1338518fdbfd2f
+ size 6520