NBAmine committed (verified)
Commit efc5659 · 1 Parent(s): ea82909

Training in progress, epoch 1, checkpoint
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ last-checkpoint/tokenizer.json filter=lfs diff=lfs merge=lfs -text
last-checkpoint/README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ base_model: mistralai/Mistral-Nemo-Instruct-2407
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:mistralai/Mistral-Nemo-Instruct-2407
+ - lora
+ - sft
+ - transformers
+ - trl
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.1
last-checkpoint/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": null,
+ "base_model_name_or_path": "mistralai/Mistral-Nemo-Instruct-2407",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.1",
+ "qalora_group_size": 16,
+ "r": 16,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "o_proj",
+ "k_proj",
+ "up_proj",
+ "v_proj",
+ "q_proj",
+ "down_proj",
+ "gate_proj"
+ ],
+ "target_parameters": null,
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
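The LoRA settings above (r=16 on all seven attention and MLP projections) can be sanity-checked against the ~228 MB `adapter_model.safetensors` added below. A minimal sketch, assuming the published Mistral-Nemo-Instruct-2407 dimensions (40 layers, hidden size 5120, 32 query heads and 8 KV heads with head_dim 128, MLP intermediate size 14336) — these numbers come from the base model's config, not from this commit:

```python
# Estimate trainable LoRA parameters for the adapter_config.json above.
# Model dimensions below are assumptions from Mistral-Nemo-Instruct-2407.
n_layers, hidden, inter = 40, 5120, 14336
head_dim, n_heads, n_kv_heads = 128, 32, 8
q_out = n_heads * head_dim      # 4096 (Nemo's head_dim is 128, not hidden/32)
kv_out = n_kv_heads * head_dim  # 1024
r = 16                          # "r": 16 in adapter_config.json

# A LoRA pair on a (fan_in x fan_out) weight adds r * (fan_in + fan_out) params.
shapes = {
    "q_proj": (hidden, q_out),
    "k_proj": (hidden, kv_out),
    "v_proj": (hidden, kv_out),
    "o_proj": (q_out, hidden),
    "gate_proj": (hidden, inter),
    "up_proj": (hidden, inter),
    "down_proj": (inter, hidden),
}
params = n_layers * sum(r * (fi + fo) for fi, fo in shapes.values())
print(params)      # 57016320 trainable parameters
print(params * 4)  # 228065280 bytes at fp32
```

At fp32 this lands within ~75 KB of the 228,140,600-byte safetensors file (the remainder is consistent with the safetensors header), which suggests the adapter weights were saved in fp32.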
last-checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5128d9dcd70414a36e31024f4ad5ec042101281c4c972fc6e1627cf56599a4f6
+ size 228140600
last-checkpoint/chat_template.jinja ADDED
@@ -0,0 +1,87 @@
+ {%- if messages[0]["role"] == "system" %}
+ {%- set system_message = messages[0]["content"] %}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- endif %}
+ {%- if not tools is defined %}
+ {%- set tools = none %}
+ {%- endif %}
+ {%- set user_messages = loop_messages | selectattr("role", "equalto", "user") | list %}
+
+ {#- This block checks for alternating user/assistant messages, skipping tool calling messages #}
+ {%- set ns = namespace() %}
+ {%- set ns.index = 0 %}
+ {%- for message in loop_messages %}
+ {%- if not (message.role == "tool" or message.role == "tool_results" or (message.tool_calls is defined and message.tool_calls is not none)) %}
+ {%- if (message["role"] == "user") != (ns.index % 2 == 0) %}
+ {{- raise_exception("After the optional system message, conversation roles must alternate user/assistant/user/assistant/...") }}
+ {%- endif %}
+ {%- set ns.index = ns.index + 1 %}
+ {%- endif %}
+ {%- endfor %}
+
+ {{- bos_token }}
+ {%- for message in loop_messages %}
+ {%- if message["role"] == "user" %}
+ {%- if tools is not none and (message == user_messages[-1]) %}
+ {{- "[AVAILABLE_TOOLS][" }}
+ {%- for tool in tools %}
+ {%- set tool = tool.function %}
+ {{- '{"type": "function", "function": {' }}
+ {%- for key, val in tool.items() if key != "return" %}
+ {%- if val is string %}
+ {{- '"' + key + '": "' + val + '"' }}
+ {%- else %}
+ {{- '"' + key + '": ' + val|tojson }}
+ {%- endif %}
+ {%- if not loop.last %}
+ {{- ", " }}
+ {%- endif %}
+ {%- endfor %}
+ {{- "}}" }}
+ {%- if not loop.last %}
+ {{- ", " }}
+ {%- else %}
+ {{- "]" }}
+ {%- endif %}
+ {%- endfor %}
+ {{- "[/AVAILABLE_TOOLS]" }}
+ {%- endif %}
+ {%- if loop.last and system_message is defined %}
+ {{- "[INST]" + system_message + "\n\n" + message["content"] + "[/INST]" }}
+ {%- else %}
+ {{- "[INST]" + message["content"] + "[/INST]" }}
+ {%- endif %}
+ {%- elif (message.tool_calls is defined and message.tool_calls is not none) %}
+ {{- "[TOOL_CALLS][" }}
+ {%- for tool_call in message.tool_calls %}
+ {%- set out = tool_call.function|tojson %}
+ {{- out[:-1] }}
+ {%- if not tool_call.id is defined or tool_call.id|length != 9 %}
+ {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
+ {%- endif %}
+ {{- ', "id": "' + tool_call.id + '"}' }}
+ {%- if not loop.last %}
+ {{- ", " }}
+ {%- else %}
+ {{- "]" + eos_token }}
+ {%- endif %}
+ {%- endfor %}
+ {%- elif message["role"] == "assistant" %}
+ {{- message["content"] + eos_token}}
+ {%- elif message["role"] == "tool_results" or message["role"] == "tool" %}
+ {%- if message.content is defined and message.content.content is defined %}
+ {%- set content = message.content.content %}
+ {%- else %}
+ {%- set content = message.content %}
+ {%- endif %}
+ {{- '[TOOL_RESULTS]{"content": ' + content|string + ", " }}
+ {%- if not message.tool_call_id is defined or message.tool_call_id|length != 9 %}
+ {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
+ {%- endif %}
+ {{- '"call_id": "' + message.tool_call_id + '"}[/TOOL_RESULTS]' }}
+ {%- else %}
+ {{- raise_exception("Only user and assistant roles are supported, with the exception of an initial optional system message!") }}
+ {%- endif %}
+ {%- endfor %}
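This template hard-fails on two structural mistakes: non-alternating user/assistant turns (tool traffic excepted) and tool-call IDs that are not exactly 9 characters. A small pre-flight check in Python, mirroring the template's logic — `validate_messages` is a hypothetical helper, not part of the checkpoint:

```python
def validate_messages(messages):
    """Mirror the chat template's structural checks before tokenization:
    after an optional system message, non-tool roles must alternate
    user/assistant, and tool-call IDs must be exactly 9 characters."""
    loop = messages[1:] if messages and messages[0]["role"] == "system" else messages
    index = 0
    for m in loop:
        tool_calls = m.get("tool_calls")
        if m["role"] in ("tool", "tool_results") or tool_calls:
            # Tool traffic is exempt from the alternation check,
            # but IDs must still be 9 characters long.
            for tc in tool_calls or []:
                if len(tc.get("id", "")) != 9:
                    raise ValueError("Tool call IDs should be alphanumeric strings with length 9!")
            if m["role"] in ("tool", "tool_results") and len(m.get("tool_call_id", "")) != 9:
                raise ValueError("Tool call IDs should be alphanumeric strings with length 9!")
            continue
        if (m["role"] == "user") != (index % 2 == 0):
            raise ValueError("After the optional system message, roles must alternate user/assistant")
        index += 1
    return True

validate_messages([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
])  # passes silently
```

Running this before `apply_chat_template` turns the template's render-time exceptions into earlier, easier-to-locate errors.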
last-checkpoint/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b29aeaf32b580836c4ab6ce3bb3b6341694319fadb2c700d01bdfa699d0c67c
+ size 116484839
last-checkpoint/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e2a25360b265ca8d0b891411b6f03807107a036c84312fe5f9c527c82dffde4
+ size 14709
last-checkpoint/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2784a2c99c69b1eeb46f85a93b50eab9ad7944681abfbfbe77fcff06d3d98c4
+ size 1383
last-checkpoint/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:235ac77d5afb578b9d394edc166238b7f00aecfd5e424e6f4eb719fa59ee4941
+ size 1465
last-checkpoint/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<unk>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
last-checkpoint/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b0240ce510f08e6c2041724e9043e33be9d251d1e4a4d94eb68cd47b954b61d2
+ size 17078292
last-checkpoint/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
last-checkpoint/trainer_state.json ADDED
@@ -0,0 +1,476 @@
+ {
+ "best_global_step": 438,
+ "best_metric": 1.2615772485733032,
+ "best_model_checkpoint": "./adapter-phase2/checkpoint-438",
+ "epoch": 1.0,
+ "eval_steps": 500,
+ "global_step": 438,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "entropy": 0.9881319830194115,
+ "epoch": 0.022857142857142857,
+ "grad_norm": 1.5965849161148071,
+ "learning_rate": 9.958904109589041e-06,
+ "loss": 1.6407,
+ "mean_token_accuracy": 0.6662811586633325,
+ "num_tokens": 15996.0,
+ "step": 10
+ },
+ {
+ "entropy": 1.2699928235262632,
+ "epoch": 0.045714285714285714,
+ "grad_norm": 1.3544626235961914,
+ "learning_rate": 9.913242009132421e-06,
+ "loss": 1.5705,
+ "mean_token_accuracy": 0.6635363385081291,
+ "num_tokens": 27684.0,
+ "step": 20
+ },
+ {
+ "entropy": 1.6406203664839267,
+ "epoch": 0.06857142857142857,
+ "grad_norm": 1.615886926651001,
+ "learning_rate": 9.8675799086758e-06,
+ "loss": 1.6826,
+ "mean_token_accuracy": 0.6459833424538374,
+ "num_tokens": 36498.0,
+ "step": 30
+ },
+ {
+ "entropy": 1.8021161697804928,
+ "epoch": 0.09142857142857143,
+ "grad_norm": 1.3791861534118652,
+ "learning_rate": 9.821917808219178e-06,
+ "loss": 1.6402,
+ "mean_token_accuracy": 0.6510010983794927,
+ "num_tokens": 43095.0,
+ "step": 40
+ },
+ {
+ "entropy": 1.8170176550745964,
+ "epoch": 0.11428571428571428,
+ "grad_norm": 1.978265643119812,
+ "learning_rate": 9.776255707762557e-06,
+ "loss": 1.595,
+ "mean_token_accuracy": 0.6443639542907477,
+ "num_tokens": 47981.0,
+ "step": 50
+ },
+ {
+ "entropy": 1.1217214532196522,
+ "epoch": 0.13714285714285715,
+ "grad_norm": 0.9291492700576782,
+ "learning_rate": 9.730593607305937e-06,
+ "loss": 1.1238,
+ "mean_token_accuracy": 0.748357442393899,
+ "num_tokens": 64374.0,
+ "step": 60
+ },
+ {
+ "entropy": 1.2191850792616605,
+ "epoch": 0.16,
+ "grad_norm": 0.946483314037323,
+ "learning_rate": 9.684931506849316e-06,
+ "loss": 1.1395,
+ "mean_token_accuracy": 0.7462461993098259,
+ "num_tokens": 76109.0,
+ "step": 70
+ },
+ {
+ "entropy": 1.3447685081511735,
+ "epoch": 0.18285714285714286,
+ "grad_norm": 1.2081618309020996,
+ "learning_rate": 9.639269406392696e-06,
+ "loss": 1.2818,
+ "mean_token_accuracy": 0.7141567166894675,
+ "num_tokens": 84921.0,
+ "step": 80
+ },
+ {
+ "entropy": 1.5163870930671692,
+ "epoch": 0.2057142857142857,
+ "grad_norm": 1.1962831020355225,
+ "learning_rate": 9.593607305936073e-06,
+ "loss": 1.4127,
+ "mean_token_accuracy": 0.6898491781204938,
+ "num_tokens": 91475.0,
+ "step": 90
+ },
+ {
+ "entropy": 1.494850355386734,
+ "epoch": 0.22857142857142856,
+ "grad_norm": 1.8051788806915283,
+ "learning_rate": 9.547945205479453e-06,
+ "loss": 1.3539,
+ "mean_token_accuracy": 0.7036820162087679,
+ "num_tokens": 96433.0,
+ "step": 100
+ },
+ {
+ "entropy": 0.9693804103881121,
+ "epoch": 0.25142857142857145,
+ "grad_norm": 0.9580877423286438,
+ "learning_rate": 9.502283105022831e-06,
+ "loss": 1.0186,
+ "mean_token_accuracy": 0.7684306014329195,
+ "num_tokens": 112394.0,
+ "step": 110
+ },
+ {
+ "entropy": 1.090338883176446,
+ "epoch": 0.2742857142857143,
+ "grad_norm": 0.8812002539634705,
+ "learning_rate": 9.456621004566212e-06,
+ "loss": 1.0132,
+ "mean_token_accuracy": 0.7664786655455827,
+ "num_tokens": 123947.0,
+ "step": 120
+ },
+ {
+ "entropy": 1.284313677251339,
+ "epoch": 0.29714285714285715,
+ "grad_norm": 1.1698031425476074,
+ "learning_rate": 9.41095890410959e-06,
+ "loss": 1.2496,
+ "mean_token_accuracy": 0.7140494808554649,
+ "num_tokens": 132309.0,
+ "step": 130
+ },
+ {
+ "entropy": 1.3905419509857893,
+ "epoch": 0.32,
+ "grad_norm": 1.206275463104248,
+ "learning_rate": 9.365296803652969e-06,
+ "loss": 1.2838,
+ "mean_token_accuracy": 0.7115083243697882,
+ "num_tokens": 138781.0,
+ "step": 140
+ },
+ {
+ "entropy": 1.3588767245411872,
+ "epoch": 0.34285714285714286,
+ "grad_norm": 1.989784598350525,
+ "learning_rate": 9.319634703196347e-06,
+ "loss": 1.1625,
+ "mean_token_accuracy": 0.7235500495880842,
+ "num_tokens": 143734.0,
+ "step": 150
+ },
+ {
+ "entropy": 0.8994554949924349,
+ "epoch": 0.3657142857142857,
+ "grad_norm": 1.00913405418396,
+ "learning_rate": 9.273972602739727e-06,
+ "loss": 0.9263,
+ "mean_token_accuracy": 0.7811420723795891,
+ "num_tokens": 159877.0,
+ "step": 160
+ },
+ {
+ "entropy": 1.0330400079488755,
+ "epoch": 0.38857142857142857,
+ "grad_norm": 1.0430705547332764,
+ "learning_rate": 9.228310502283106e-06,
+ "loss": 0.9874,
+ "mean_token_accuracy": 0.7636023428291082,
+ "num_tokens": 171343.0,
+ "step": 170
+ },
+ {
+ "entropy": 1.1962346132844686,
+ "epoch": 0.4114285714285714,
+ "grad_norm": 1.3926544189453125,
+ "learning_rate": 9.182648401826484e-06,
+ "loss": 1.1651,
+ "mean_token_accuracy": 0.7276976224035024,
+ "num_tokens": 179829.0,
+ "step": 180
+ },
+ {
+ "entropy": 1.2758624110370875,
+ "epoch": 0.4342857142857143,
+ "grad_norm": 1.4376598596572876,
+ "learning_rate": 9.136986301369863e-06,
+ "loss": 1.1398,
+ "mean_token_accuracy": 0.7325437176972628,
+ "num_tokens": 186366.0,
+ "step": 190
+ },
+ {
+ "entropy": 1.2853564880788326,
+ "epoch": 0.45714285714285713,
+ "grad_norm": 2.2004787921905518,
+ "learning_rate": 9.091324200913243e-06,
+ "loss": 1.1638,
+ "mean_token_accuracy": 0.7276943679898977,
+ "num_tokens": 191365.0,
+ "step": 200
+ },
+ {
+ "entropy": 0.866673743724823,
+ "epoch": 0.48,
+ "grad_norm": 1.0414421558380127,
+ "learning_rate": 9.045662100456622e-06,
+ "loss": 0.8423,
+ "mean_token_accuracy": 0.7982745975255966,
+ "num_tokens": 207506.0,
+ "step": 210
+ },
+ {
+ "entropy": 0.947331802919507,
+ "epoch": 0.5028571428571429,
+ "grad_norm": 1.0546936988830566,
+ "learning_rate": 9e-06,
+ "loss": 0.9144,
+ "mean_token_accuracy": 0.7818248048424721,
+ "num_tokens": 219079.0,
+ "step": 220
+ },
+ {
+ "entropy": 1.1076377972960472,
+ "epoch": 0.5257142857142857,
+ "grad_norm": 1.488133430480957,
+ "learning_rate": 8.954337899543379e-06,
+ "loss": 1.0699,
+ "mean_token_accuracy": 0.7396229557693005,
+ "num_tokens": 227798.0,
+ "step": 230
+ },
+ {
+ "entropy": 1.2146507527679204,
+ "epoch": 0.5485714285714286,
+ "grad_norm": 1.393114686012268,
+ "learning_rate": 8.908675799086759e-06,
+ "loss": 1.1319,
+ "mean_token_accuracy": 0.7367029923945665,
+ "num_tokens": 234590.0,
+ "step": 240
+ },
+ {
+ "entropy": 1.2185490131378174,
+ "epoch": 0.5714285714285714,
+ "grad_norm": 2.391268730163574,
+ "learning_rate": 8.863013698630137e-06,
+ "loss": 1.0764,
+ "mean_token_accuracy": 0.7414231035858393,
+ "num_tokens": 239719.0,
+ "step": 250
+ },
+ {
+ "entropy": 0.8054604699835182,
+ "epoch": 0.5942857142857143,
+ "grad_norm": 0.9834737181663513,
+ "learning_rate": 8.817351598173518e-06,
+ "loss": 0.8053,
+ "mean_token_accuracy": 0.7998832739889622,
+ "num_tokens": 256472.0,
+ "step": 260
+ },
+ {
+ "entropy": 0.9120684009045362,
+ "epoch": 0.6171428571428571,
+ "grad_norm": 1.0815778970718384,
+ "learning_rate": 8.771689497716896e-06,
+ "loss": 0.8559,
+ "mean_token_accuracy": 0.7897930487990379,
+ "num_tokens": 268303.0,
+ "step": 270
+ },
+ {
+ "entropy": 1.0642446961253882,
+ "epoch": 0.64,
+ "grad_norm": 1.4160140752792358,
+ "learning_rate": 8.726027397260275e-06,
+ "loss": 1.0196,
+ "mean_token_accuracy": 0.7551121093332768,
+ "num_tokens": 277099.0,
+ "step": 280
+ },
+ {
+ "entropy": 1.1535940799862145,
+ "epoch": 0.6628571428571428,
+ "grad_norm": 1.6847327947616577,
+ "learning_rate": 8.680365296803653e-06,
+ "loss": 1.0552,
+ "mean_token_accuracy": 0.7411838915199042,
+ "num_tokens": 283815.0,
+ "step": 290
+ },
+ {
+ "entropy": 1.1651097811758517,
+ "epoch": 0.6857142857142857,
+ "grad_norm": 2.285900592803955,
+ "learning_rate": 8.634703196347033e-06,
+ "loss": 1.0267,
+ "mean_token_accuracy": 0.7420264776796103,
+ "num_tokens": 288831.0,
+ "step": 300
+ },
+ {
+ "entropy": 0.7967551020905376,
+ "epoch": 0.7085714285714285,
+ "grad_norm": 1.1952354907989502,
+ "learning_rate": 8.589041095890412e-06,
+ "loss": 0.8013,
+ "mean_token_accuracy": 0.803380336984992,
+ "num_tokens": 305255.0,
+ "step": 310
+ },
+ {
+ "entropy": 0.9132619671523571,
+ "epoch": 0.7314285714285714,
+ "grad_norm": 1.2885327339172363,
+ "learning_rate": 8.54337899543379e-06,
+ "loss": 0.8516,
+ "mean_token_accuracy": 0.7869276314973831,
+ "num_tokens": 316828.0,
+ "step": 320
+ },
+ {
+ "entropy": 1.020271310582757,
+ "epoch": 0.7542857142857143,
+ "grad_norm": 1.46247398853302,
+ "learning_rate": 8.497716894977169e-06,
+ "loss": 0.9943,
+ "mean_token_accuracy": 0.7584239929914475,
+ "num_tokens": 325557.0,
+ "step": 330
+ },
+ {
+ "entropy": 1.1350981347262858,
+ "epoch": 0.7771428571428571,
+ "grad_norm": 1.8628100156784058,
+ "learning_rate": 8.45205479452055e-06,
+ "loss": 1.0646,
+ "mean_token_accuracy": 0.7359312813729048,
+ "num_tokens": 332224.0,
+ "step": 340
+ },
+ {
+ "entropy": 1.1172638952732086,
+ "epoch": 0.8,
+ "grad_norm": 2.362889289855957,
+ "learning_rate": 8.406392694063928e-06,
+ "loss": 0.974,
+ "mean_token_accuracy": 0.7591060597449542,
+ "num_tokens": 337193.0,
+ "step": 350
+ },
+ {
+ "entropy": 0.7974750218912959,
+ "epoch": 0.8228571428571428,
+ "grad_norm": 1.1142443418502808,
+ "learning_rate": 8.360730593607306e-06,
+ "loss": 0.7776,
+ "mean_token_accuracy": 0.8041313651949167,
+ "num_tokens": 353595.0,
+ "step": 360
+ },
+ {
+ "entropy": 0.8807439863681793,
+ "epoch": 0.8457142857142858,
+ "grad_norm": 1.4073342084884644,
+ "learning_rate": 8.315068493150685e-06,
+ "loss": 0.8367,
+ "mean_token_accuracy": 0.7917833589017391,
+ "num_tokens": 365258.0,
+ "step": 370
+ },
+ {
+ "entropy": 1.0210479736328124,
+ "epoch": 0.8685714285714285,
+ "grad_norm": 1.506361484527588,
+ "learning_rate": 8.269406392694065e-06,
+ "loss": 0.9998,
+ "mean_token_accuracy": 0.753864735737443,
+ "num_tokens": 373694.0,
+ "step": 380
+ },
+ {
+ "entropy": 1.0973432060331105,
+ "epoch": 0.8914285714285715,
+ "grad_norm": 1.8420379161834717,
+ "learning_rate": 8.223744292237444e-06,
+ "loss": 1.0189,
+ "mean_token_accuracy": 0.7553424458950758,
+ "num_tokens": 380272.0,
+ "step": 390
+ },
+ {
+ "entropy": 1.1227020058780908,
+ "epoch": 0.9142857142857143,
+ "grad_norm": 2.7073919773101807,
+ "learning_rate": 8.178082191780822e-06,
+ "loss": 0.939,
+ "mean_token_accuracy": 0.755080708488822,
+ "num_tokens": 385214.0,
+ "step": 400
+ },
+ {
+ "entropy": 0.7808640262112021,
+ "epoch": 0.9371428571428572,
+ "grad_norm": 1.3160523176193237,
+ "learning_rate": 8.1324200913242e-06,
+ "loss": 0.7749,
+ "mean_token_accuracy": 0.8022231217473745,
+ "num_tokens": 400292.0,
+ "step": 410
+ },
+ {
+ "entropy": 0.9076132765039802,
+ "epoch": 0.96,
+ "grad_norm": 1.5909351110458374,
+ "learning_rate": 8.08675799086758e-06,
+ "loss": 0.8487,
+ "mean_token_accuracy": 0.78284947052598,
+ "num_tokens": 410576.0,
+ "step": 420
+ },
+ {
+ "entropy": 1.03597099930048,
+ "epoch": 0.9828571428571429,
+ "grad_norm": 1.9207741022109985,
+ "learning_rate": 8.04109589041096e-06,
+ "loss": 0.9651,
+ "mean_token_accuracy": 0.7578862871974706,
+ "num_tokens": 417570.0,
+ "step": 430
+ },
+ {
+ "epoch": 1.0,
+ "eval_accuracy": 0.008388412892696859,
+ "eval_entropy": 0.9678071262063207,
+ "eval_loss": 1.2615772485733032,
+ "eval_mean_token_accuracy": 0.7336987892172971,
+ "eval_num_tokens": 421194.0,
+ "eval_runtime": 298.9526,
+ "eval_samples_per_second": 3.459,
+ "eval_steps_per_second": 0.866,
+ "step": 438
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 2190,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 5,
+ "save_steps": 500,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 2.940029330061312e+16,
+ "train_batch_size": 1,
+ "trial_name": null,
+ "trial_params": null
+ }
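`trainer_state.json` records a training log every 10 steps plus one end-of-epoch evaluation. A short sketch of pulling the loss curve and best checkpoint out of such a state file; the `state` dict below is a hand-excerpted subset of the entries shown above (in practice you would `json.load` the full `last-checkpoint/trainer_state.json`):

```python
import json

# Excerpt of the trainer state above; load the real file in practice.
state = {
    "best_global_step": 438,
    "best_metric": 1.2615772485733032,
    "log_history": [
        {"step": 10, "loss": 1.6407, "learning_rate": 9.958904109589041e-06},
        {"step": 420, "loss": 0.8487, "learning_rate": 8.08675799086758e-06},
        {"step": 438, "eval_loss": 1.2615772485733032, "epoch": 1.0},
    ],
}

# Training entries carry "loss"; evaluation entries carry "eval_loss".
train_logs = [e for e in state["log_history"] if "loss" in e]
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]
best_eval = min(eval_logs, key=lambda e: e["eval_loss"])

print(best_eval["step"], best_eval["eval_loss"])  # 438 1.2615772485733032
assert best_eval["eval_loss"] == state["best_metric"]
```

With `eval_steps` and `save_steps` both at 500 and only 438 steps per epoch, the single eval here came from an end-of-epoch trigger; the same parsing works unchanged as more epochs accumulate in `log_history`.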
last-checkpoint/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6397570b74b109fa363e6deebec3b410825df0b8fddd810637091c898cd86887
+ size 6353