JimmyYang2025 committed (verified) on commit 1a94f15 · 1 parent: 5063dbf

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: meta-llama/Llama-2-7b-hf
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+ ### Framework versions
+
+ - PEFT 0.15.1
adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/hpc2hdd/home/xyang346/llama2/llama/llama-2-7b-hf/7B",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "v_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_rslora": false
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b873794793689516c5f9b6fe9f19e2f6f1055585496e125bd7e1b64f80120ca
+ size 16794200
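The 16,794,200-byte adapter above is consistent with the LoRA settings in `adapter_config.json` (`r=8`, `target_modules=["q_proj", "v_proj"]`). A back-of-the-envelope check, assuming Llama-2-7B's published shape — 32 decoder layers, hidden size 4096, with `q_proj`/`v_proj` both 4096×4096 — which is not stated in this repo:

```python
# Rough size check for the LoRA adapter, using config values from this repo
# (r=8, two target modules per layer) and assumed Llama-2-7B dimensions.
hidden, layers, r, n_modules = 4096, 32, 8, 2

# Each adapted linear layer carries lora_A (r x hidden) and lora_B (hidden x r).
lora_params = layers * n_modules * (r * hidden + hidden * r)
bytes_fp32 = lora_params * 4  # bytes at 4 bytes/param (fp32)

print(lora_params)  # 4194304
print(bytes_fp32)   # 16777216 -- close to the 16,794,200-byte safetensors file
```

The small remaining gap (~17 KB) is plausibly safetensors header/metadata overhead.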
checkpoint-2500/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: /hpc2hdd/home/xyang346/llama2/llama/llama-2-7b-hf/7B
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+ ### Framework versions
+
+ - PEFT 0.15.1
checkpoint-2500/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/hpc2hdd/home/xyang346/llama2/llama/llama-2-7b-hf/7B",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "v_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_rslora": false
+ }
checkpoint-2500/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b873794793689516c5f9b6fe9f19e2f6f1055585496e125bd7e1b64f80120ca
+ size 16794200
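The three-line entries shown for the binary files in this commit are git-lfs pointer files, not the payloads themselves. A small sketch of how such a pointer can be parsed into its fields (the pointer text below is copied verbatim from the `adapter_model.safetensors` entry):

```python
# Parse a git-lfs pointer file into its key/value fields.
# Each line is "<key> <value>", so splitting on the first space suffices.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1b873794793689516c5f9b6fe9f19e2f6f1055585496e125bd7e1b64f80120ca
size 16794200
"""

fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
print(fields["size"])                  # 16794200
print(fields["oid"].split(":", 1)[0])  # sha256
```

The `oid` lets a client verify the downloaded payload against its SHA-256 digest, and `size` is the payload length in bytes.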
checkpoint-2500/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6241f46eca3946d363c6812f88a1ea5ab39248137406c04f0fb85e9eb9f5d1d
+ size 33662074
checkpoint-2500/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21e1e55b2a0c8bc18bd768d33a89237dea74f4ee907d13006c7a12b6ccb1453c
+ size 14244
checkpoint-2500/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:601ce08c79a2eaeca202eba2cb6e92b2fca131ce81001cf02dfa76571baaa630
+ size 988
checkpoint-2500/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5650b7a6656194ab46bbbbc7dc3ed0ee154453c5b8b70986b20c0ede2a874311
+ size 1064
checkpoint-2500/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
checkpoint-2500/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-2500/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
checkpoint-2500/tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "add_prefix_space": null,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
checkpoint-2500/trainer_state.json ADDED
@@ -0,0 +1,584 @@
+ {
+   "best_global_step": 2500,
+   "best_metric": 0.7841161489486694,
+   "best_model_checkpoint": "./llama2-m2/checkpoint-2500",
+   "epoch": 2.9767718880285887,
+   "eval_steps": 100,
+   "global_step": 2500,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.05955926146515783,
+       "grad_norm": 11.590325355529785,
+       "learning_rate": 4.9000000000000005e-06,
+       "loss": 3.0703,
+       "step": 50
+     },
+     {
+       "epoch": 0.11911852293031566,
+       "grad_norm": 6.117842197418213,
+       "learning_rate": 9.9e-06,
+       "loss": 2.3868,
+       "step": 100
+     },
+     {
+       "epoch": 0.11911852293031566,
+       "eval_loss": 1.5943528413772583,
+       "eval_runtime": 86.9995,
+       "eval_samples_per_second": 8.081,
+       "eval_steps_per_second": 2.023,
+       "step": 100
+     },
+     {
+       "epoch": 0.1786777843954735,
+       "grad_norm": 0.4276258051395416,
+       "learning_rate": 9.797269342159703e-06,
+       "loss": 1.1152,
+       "step": 150
+     },
+     {
+       "epoch": 0.23823704586063132,
+       "grad_norm": 0.34813204407691956,
+       "learning_rate": 9.590401323955318e-06,
+       "loss": 0.9458,
+       "step": 200
+     },
+     {
+       "epoch": 0.23823704586063132,
+       "eval_loss": 0.9201429486274719,
+       "eval_runtime": 86.8761,
+       "eval_samples_per_second": 8.092,
+       "eval_steps_per_second": 2.026,
+       "step": 200
+     },
+     {
+       "epoch": 0.29779630732578916,
+       "grad_norm": 0.31664666533470154,
+       "learning_rate": 9.383533305750931e-06,
+       "loss": 0.8754,
+       "step": 250
+     },
+     {
+       "epoch": 0.357355568790947,
+       "grad_norm": 0.3039833903312683,
+       "learning_rate": 9.176665287546546e-06,
+       "loss": 0.8236,
+       "step": 300
+     },
+     {
+       "epoch": 0.357355568790947,
+       "eval_loss": 0.8294563293457031,
+       "eval_runtime": 86.6916,
+       "eval_samples_per_second": 8.109,
+       "eval_steps_per_second": 2.03,
+       "step": 300
+     },
+     {
+       "epoch": 0.4169148302561048,
+       "grad_norm": 0.3569670021533966,
+       "learning_rate": 8.969797269342161e-06,
+       "loss": 0.7643,
+       "step": 350
+     },
+     {
+       "epoch": 0.47647409172126265,
+       "grad_norm": 0.4587797224521637,
+       "learning_rate": 8.762929251137776e-06,
+       "loss": 0.7638,
+       "step": 400
+     },
+     {
+       "epoch": 0.47647409172126265,
+       "eval_loss": 0.8007138967514038,
+       "eval_runtime": 86.7325,
+       "eval_samples_per_second": 8.105,
+       "eval_steps_per_second": 2.029,
+       "step": 400
+     },
+     {
+       "epoch": 0.5360333531864205,
+       "grad_norm": 0.2903870940208435,
+       "learning_rate": 8.556061232933389e-06,
+       "loss": 0.7505,
+       "step": 450
+     },
+     {
+       "epoch": 0.5955926146515783,
+       "grad_norm": 0.39271315932273865,
+       "learning_rate": 8.349193214729004e-06,
+       "loss": 0.7773,
+       "step": 500
+     },
+     {
+       "epoch": 0.5955926146515783,
+       "eval_loss": 0.7964405417442322,
+       "eval_runtime": 86.7496,
+       "eval_samples_per_second": 8.104,
+       "eval_steps_per_second": 2.029,
+       "step": 500
+     },
+     {
+       "epoch": 0.6551518761167362,
+       "grad_norm": 0.2611350119113922,
+       "learning_rate": 8.142325196524617e-06,
+       "loss": 0.7339,
+       "step": 550
+     },
+     {
+       "epoch": 0.714711137581894,
+       "grad_norm": 0.3096601665019989,
+       "learning_rate": 7.935457178320233e-06,
+       "loss": 0.7867,
+       "step": 600
+     },
+     {
+       "epoch": 0.714711137581894,
+       "eval_loss": 0.7935438752174377,
+       "eval_runtime": 86.8192,
+       "eval_samples_per_second": 8.097,
+       "eval_steps_per_second": 2.027,
+       "step": 600
+     },
+     {
+       "epoch": 0.7742703990470519,
+       "grad_norm": 0.28062084317207336,
+       "learning_rate": 7.728589160115847e-06,
+       "loss": 0.7642,
+       "step": 650
+     },
+     {
+       "epoch": 0.8338296605122096,
+       "grad_norm": 0.2916211783885956,
+       "learning_rate": 7.521721141911461e-06,
+       "loss": 0.7436,
+       "step": 700
+     },
+     {
+       "epoch": 0.8338296605122096,
+       "eval_loss": 0.7918882369995117,
+       "eval_runtime": 86.8706,
+       "eval_samples_per_second": 8.092,
+       "eval_steps_per_second": 2.026,
+       "step": 700
+     },
+     {
+       "epoch": 0.8933889219773675,
+       "grad_norm": 0.4260661005973816,
+       "learning_rate": 7.3148531237070755e-06,
+       "loss": 0.7944,
+       "step": 750
+     },
+     {
+       "epoch": 0.9529481834425253,
+       "grad_norm": 0.3311309218406677,
+       "learning_rate": 7.1079851055026895e-06,
+       "loss": 0.7618,
+       "step": 800
+     },
+     {
+       "epoch": 0.9529481834425253,
+       "eval_loss": 0.7905948758125305,
+       "eval_runtime": 86.7293,
+       "eval_samples_per_second": 8.106,
+       "eval_steps_per_second": 2.029,
+       "step": 800
+     },
+     {
+       "epoch": 1.0119118522930315,
+       "grad_norm": 0.33902204036712646,
+       "learning_rate": 6.901117087298304e-06,
+       "loss": 0.7565,
+       "step": 850
+     },
+     {
+       "epoch": 1.0714711137581894,
+       "grad_norm": 0.3156481981277466,
+       "learning_rate": 6.694249069093918e-06,
+       "loss": 0.7834,
+       "step": 900
+     },
+     {
+       "epoch": 1.0714711137581894,
+       "eval_loss": 0.789471447467804,
+       "eval_runtime": 86.7639,
+       "eval_samples_per_second": 8.102,
+       "eval_steps_per_second": 2.028,
+       "step": 900
+     },
+     {
+       "epoch": 1.1310303752233473,
+       "grad_norm": 0.29626569151878357,
+       "learning_rate": 6.487381050889533e-06,
+       "loss": 0.7636,
+       "step": 950
+     },
+     {
+       "epoch": 1.1905896366885051,
+       "grad_norm": 0.32058003544807434,
+       "learning_rate": 6.280513032685147e-06,
+       "loss": 0.7588,
+       "step": 1000
+     },
+     {
+       "epoch": 1.1905896366885051,
+       "eval_loss": 0.7887451648712158,
+       "eval_runtime": 86.8029,
+       "eval_samples_per_second": 8.099,
+       "eval_steps_per_second": 2.028,
+       "step": 1000
+     },
+     {
+       "epoch": 1.2501488981536628,
+       "grad_norm": 0.3029298484325409,
+       "learning_rate": 6.073645014480761e-06,
+       "loss": 0.7651,
+       "step": 1050
+     },
+     {
+       "epoch": 1.3097081596188207,
+       "grad_norm": 0.30075645446777344,
+       "learning_rate": 5.866776996276376e-06,
+       "loss": 0.747,
+       "step": 1100
+     },
+     {
+       "epoch": 1.3097081596188207,
+       "eval_loss": 0.7880399227142334,
+       "eval_runtime": 86.7707,
+       "eval_samples_per_second": 8.102,
+       "eval_steps_per_second": 2.028,
+       "step": 1100
+     },
+     {
+       "epoch": 1.3692674210839786,
+       "grad_norm": 0.30230703949928284,
+       "learning_rate": 5.659908978071991e-06,
+       "loss": 0.7694,
+       "step": 1150
+     },
+     {
+       "epoch": 1.4288266825491365,
+       "grad_norm": 0.2981889545917511,
+       "learning_rate": 5.453040959867605e-06,
+       "loss": 0.7546,
+       "step": 1200
+     },
+     {
+       "epoch": 1.4288266825491365,
+       "eval_loss": 0.7873143553733826,
+       "eval_runtime": 86.9249,
+       "eval_samples_per_second": 8.087,
+       "eval_steps_per_second": 2.025,
+       "step": 1200
+     },
+     {
+       "epoch": 1.4883859440142944,
+       "grad_norm": 0.33295580744743347,
+       "learning_rate": 5.246172941663219e-06,
+       "loss": 0.7356,
+       "step": 1250
+     },
+     {
+       "epoch": 1.547945205479452,
+       "grad_norm": 0.2881334125995636,
+       "learning_rate": 5.039304923458833e-06,
+       "loss": 0.7616,
+       "step": 1300
+     },
+     {
+       "epoch": 1.547945205479452,
+       "eval_loss": 0.7868330478668213,
+       "eval_runtime": 86.9371,
+       "eval_samples_per_second": 8.086,
+       "eval_steps_per_second": 2.024,
+       "step": 1300
+     },
+     {
+       "epoch": 1.60750446694461,
+       "grad_norm": 0.42549142241477966,
+       "learning_rate": 4.832436905254448e-06,
+       "loss": 0.7613,
+       "step": 1350
+     },
+     {
+       "epoch": 1.6670637284097678,
+       "grad_norm": 0.32537880539894104,
+       "learning_rate": 4.625568887050063e-06,
+       "loss": 0.777,
+       "step": 1400
+     },
+     {
+       "epoch": 1.6670637284097678,
+       "eval_loss": 0.7863583564758301,
+       "eval_runtime": 86.9105,
+       "eval_samples_per_second": 8.089,
+       "eval_steps_per_second": 2.025,
+       "step": 1400
+     },
+     {
+       "epoch": 1.7266229898749255,
+       "grad_norm": 0.31612130999565125,
+       "learning_rate": 4.418700868845677e-06,
+       "loss": 0.7123,
+       "step": 1450
+     },
+     {
+       "epoch": 1.7861822513400833,
+       "grad_norm": 0.39497706294059753,
+       "learning_rate": 4.211832850641292e-06,
+       "loss": 0.7999,
+       "step": 1500
+     },
+     {
+       "epoch": 1.7861822513400833,
+       "eval_loss": 0.7859570980072021,
+       "eval_runtime": 86.7739,
+       "eval_samples_per_second": 8.102,
+       "eval_steps_per_second": 2.028,
+       "step": 1500
+     },
+     {
+       "epoch": 1.8457415128052412,
+       "grad_norm": 0.3905975818634033,
+       "learning_rate": 4.004964832436906e-06,
+       "loss": 0.7105,
+       "step": 1550
+     },
+     {
+       "epoch": 1.905300774270399,
+       "grad_norm": 0.3420596718788147,
+       "learning_rate": 3.7980968142325196e-06,
+       "loss": 0.7735,
+       "step": 1600
+     },
+     {
+       "epoch": 1.905300774270399,
+       "eval_loss": 0.7855594754219055,
+       "eval_runtime": 86.9977,
+       "eval_samples_per_second": 8.081,
+       "eval_steps_per_second": 2.023,
+       "step": 1600
+     },
+     {
+       "epoch": 1.964860035735557,
+       "grad_norm": 0.2925880551338196,
+       "learning_rate": 3.5912287960281345e-06,
+       "loss": 0.7675,
+       "step": 1650
+     },
+     {
+       "epoch": 2.023823704586063,
+       "grad_norm": 0.42387983202934265,
+       "learning_rate": 3.3843607778237485e-06,
+       "loss": 0.7679,
+       "step": 1700
+     },
+     {
+       "epoch": 2.023823704586063,
+       "eval_loss": 0.7852116227149963,
+       "eval_runtime": 86.7932,
+       "eval_samples_per_second": 8.1,
+       "eval_steps_per_second": 2.028,
+       "step": 1700
+     },
+     {
+       "epoch": 2.083382966051221,
+       "grad_norm": 0.3012678325176239,
+       "learning_rate": 3.1774927596193634e-06,
+       "loss": 0.7529,
+       "step": 1750
+     },
+     {
+       "epoch": 2.1429422275163788,
+       "grad_norm": 0.3647378385066986,
+       "learning_rate": 2.9706247414149774e-06,
+       "loss": 0.7772,
+       "step": 1800
+     },
+     {
+       "epoch": 2.1429422275163788,
+       "eval_loss": 0.7850247025489807,
+       "eval_runtime": 86.7181,
+       "eval_samples_per_second": 8.107,
+       "eval_steps_per_second": 2.03,
+       "step": 1800
+     },
+     {
+       "epoch": 2.202501488981537,
+       "grad_norm": 0.30863115191459656,
+       "learning_rate": 2.763756723210592e-06,
+       "loss": 0.7485,
+       "step": 1850
+     },
+     {
+       "epoch": 2.2620607504466945,
+       "grad_norm": 0.3829723298549652,
+       "learning_rate": 2.5568887050062062e-06,
+       "loss": 0.7449,
+       "step": 1900
+     },
+     {
+       "epoch": 2.2620607504466945,
+       "eval_loss": 0.7847884893417358,
+       "eval_runtime": 86.725,
+       "eval_samples_per_second": 8.106,
+       "eval_steps_per_second": 2.029,
+       "step": 1900
+     },
+     {
+       "epoch": 2.321620011911852,
+       "grad_norm": 0.3733135759830475,
+       "learning_rate": 2.3500206868018207e-06,
+       "loss": 0.7508,
+       "step": 1950
+     },
+     {
+       "epoch": 2.3811792733770103,
+       "grad_norm": 0.37344199419021606,
+       "learning_rate": 2.143152668597435e-06,
+       "loss": 0.7509,
+       "step": 2000
+     },
+     {
+       "epoch": 2.3811792733770103,
+       "eval_loss": 0.7846249938011169,
+       "eval_runtime": 86.7361,
+       "eval_samples_per_second": 8.105,
+       "eval_steps_per_second": 2.029,
+       "step": 2000
+     },
+     {
+       "epoch": 2.440738534842168,
+       "grad_norm": 0.46035104990005493,
+       "learning_rate": 1.9362846503930496e-06,
+       "loss": 0.7901,
+       "step": 2050
+     },
+     {
+       "epoch": 2.5002977963073256,
+       "grad_norm": 0.31786802411079407,
+       "learning_rate": 1.7294166321886638e-06,
+       "loss": 0.7654,
+       "step": 2100
+     },
+     {
+       "epoch": 2.5002977963073256,
+       "eval_loss": 0.7844468951225281,
+       "eval_runtime": 86.7628,
+       "eval_samples_per_second": 8.103,
+       "eval_steps_per_second": 2.029,
+       "step": 2100
+     },
+     {
+       "epoch": 2.5598570577724837,
+       "grad_norm": 0.337811678647995,
+       "learning_rate": 1.5225486139842782e-06,
+       "loss": 0.7524,
+       "step": 2150
+     },
+     {
+       "epoch": 2.6194163192376414,
+       "grad_norm": 0.29232126474380493,
+       "learning_rate": 1.3156805957798926e-06,
+       "loss": 0.7279,
+       "step": 2200
+     },
+     {
+       "epoch": 2.6194163192376414,
+       "eval_loss": 0.7843312621116638,
+       "eval_runtime": 86.7533,
+       "eval_samples_per_second": 8.103,
+       "eval_steps_per_second": 2.029,
+       "step": 2200
+     },
+     {
+       "epoch": 2.678975580702799,
+       "grad_norm": 0.4377705454826355,
+       "learning_rate": 1.1088125775755069e-06,
+       "loss": 0.7593,
+       "step": 2250
+     },
+     {
+       "epoch": 2.738534842167957,
+       "grad_norm": 0.36447674036026,
+       "learning_rate": 9.019445593711212e-07,
+       "loss": 0.7523,
+       "step": 2300
+     },
+     {
+       "epoch": 2.738534842167957,
+       "eval_loss": 0.7842342257499695,
+       "eval_runtime": 86.8097,
+       "eval_samples_per_second": 8.098,
+       "eval_steps_per_second": 2.027,
+       "step": 2300
+     },
+     {
+       "epoch": 2.798094103633115,
+       "grad_norm": 0.38712531328201294,
+       "learning_rate": 6.950765411667356e-07,
+       "loss": 0.7347,
+       "step": 2350
+     },
+     {
+       "epoch": 2.857653365098273,
+       "grad_norm": 0.34733325242996216,
+       "learning_rate": 4.882085229623501e-07,
+       "loss": 0.7605,
+       "step": 2400
+     },
+     {
+       "epoch": 2.857653365098273,
+       "eval_loss": 0.7841441035270691,
+       "eval_runtime": 86.8339,
+       "eval_samples_per_second": 8.096,
+       "eval_steps_per_second": 2.027,
+       "step": 2400
+     },
+     {
+       "epoch": 2.9172126265634306,
+       "grad_norm": 0.3819723129272461,
+       "learning_rate": 2.8134050475796445e-07,
+       "loss": 0.7412,
+       "step": 2450
+ },
547
+ {
548
+ "epoch": 2.9767718880285887,
549
+ "grad_norm": 0.3409363329410553,
550
+ "learning_rate": 7.447248655357883e-08,
551
+ "loss": 0.7425,
552
+ "step": 2500
553
+ },
554
+ {
555
+ "epoch": 2.9767718880285887,
556
+ "eval_loss": 0.7841161489486694,
557
+ "eval_runtime": 87.2598,
558
+ "eval_samples_per_second": 8.056,
559
+ "eval_steps_per_second": 2.017,
560
+ "step": 2500
561
+ }
562
+ ],
563
+ "logging_steps": 50,
564
+ "max_steps": 2517,
565
+ "num_input_tokens_seen": 0,
566
+ "num_train_epochs": 3,
567
+ "save_steps": 100,
568
+ "stateful_callbacks": {
569
+ "TrainerControl": {
570
+ "args": {
571
+ "should_epoch_stop": false,
572
+ "should_evaluate": false,
573
+ "should_log": false,
574
+ "should_save": true,
575
+ "should_training_stop": false
576
+ },
577
+ "attributes": {}
578
+ }
579
+ },
580
+ "total_flos": 8.120601880087757e+17,
581
+ "train_batch_size": 4,
582
+ "trial_name": null,
583
+ "trial_params": null
584
+ }
checkpoint-2500/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b93397caa9376b8eea4ec6b2e017638fbcd88c220589b7cb21e7657564bf41c5
+ size 5304
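Binary artifacts in this commit are stored as Git LFS pointer files like the one above: three `key value` lines giving the spec version, the SHA-256 of the real object, and its byte size. A minimal sketch of reading such a pointer (the helper name `parse_lfs_pointer` is ours, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "size" is the byte count of the real object, stored as an integer.
    fields["size"] = int(fields["size"])
    return fields

# The training_args.bin pointer from above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b93397caa9376b8eea4ec6b2e017638fbcd88c220589b7cb21e7657564bf41c5
size 5304
"""
info = parse_lfs_pointer(pointer)
```

The `size` field is what the diff viewer reports for each ADDED binary; the actual bytes live in LFS storage, not in the git history.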
checkpoint-2517/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: /hpc2hdd/home/xyang346/llama2/llama/llama-2-7b-hf/7B
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.15.1
checkpoint-2517/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "/hpc2hdd/home/xyang346/llama2/llama/llama-2-7b-hf/7B",
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "q_proj",
+ "v_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_rslora": false
+ }
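With `r: 8` and only `q_proj`/`v_proj` as target modules, the adapter's size can be sanity-checked against the ~16.8 MB `adapter_model.safetensors` stored below. A rough back-of-the-envelope sketch; the Llama-2-7B shape assumptions (32 decoder layers, hidden size 4096, square q/v projections) are ours and not stated in this config:

```python
# Assumed Llama-2-7B dimensions (not part of adapter_config.json):
num_layers = 32      # decoder blocks
hidden = 4096        # q_proj / v_proj are hidden x hidden linears
r = 8                # LoRA rank, from adapter_config.json
target_modules = 2   # q_proj and v_proj

# Each adapted linear gains two low-rank factors: A (r x in) and B (out x r).
params_per_module = r * hidden + hidden * r
trainable_params = num_layers * target_modules * params_per_module

# At 4 bytes per parameter (fp32) this lands within ~0.1% of the
# 16,794,200-byte adapter_model.safetensors below (the remainder is
# safetensors header/metadata overhead).
approx_bytes = trainable_params * 4
```

This kind of check is a quick way to confirm that a PEFT checkpoint really contains only the adapter weights and not a full model copy.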
checkpoint-2517/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:369a6bf7ee643e3ba50fd638abe939141e191f2014624e286b7e466098b4ae14
+ size 16794200
checkpoint-2517/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:704dc3b8caddfde87937ed6b9d8684dc70a49ffa0de01ba1a197121afeb809ae
+ size 33662074
checkpoint-2517/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:828c7d933353ef81fc8594deed2134690a2af46b85bf3d554aa09aee813e6c1b
+ size 14244
checkpoint-2517/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb761d3cc1874f86b8b2f5c74b835c73b0addb9152f406fdd449327345f7b1e2
+ size 988
checkpoint-2517/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77ddfb35a3498eade0cc73c7b00f4616bcaf4ec172aaa78b0fcbe453f538bf41
+ size 1064
checkpoint-2517/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "</s>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-2517/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-2517/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
checkpoint-2517/tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "add_prefix_space": null,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "extra_special_tokens": {},
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
checkpoint-2517/trainer_state.json ADDED
@@ -0,0 +1,584 @@
+ {
+ "best_global_step": 2500,
+ "best_metric": 0.7841161489486694,
+ "best_model_checkpoint": "./llama2-m2/checkpoint-2500",
+ "epoch": 2.997022036926742,
+ "eval_steps": 100,
+ "global_step": 2517,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.05955926146515783,
+ "grad_norm": 11.590325355529785,
+ "learning_rate": 4.9000000000000005e-06,
+ "loss": 3.0703,
+ "step": 50
+ },
+ {
+ "epoch": 0.11911852293031566,
+ "grad_norm": 6.117842197418213,
+ "learning_rate": 9.9e-06,
+ "loss": 2.3868,
+ "step": 100
+ },
+ {
+ "epoch": 0.11911852293031566,
+ "eval_loss": 1.5943528413772583,
+ "eval_runtime": 86.9995,
+ "eval_samples_per_second": 8.081,
+ "eval_steps_per_second": 2.023,
+ "step": 100
+ },
+ {
+ "epoch": 0.1786777843954735,
+ "grad_norm": 0.4276258051395416,
+ "learning_rate": 9.797269342159703e-06,
+ "loss": 1.1152,
+ "step": 150
+ },
+ {
+ "epoch": 0.23823704586063132,
+ "grad_norm": 0.34813204407691956,
+ "learning_rate": 9.590401323955318e-06,
+ "loss": 0.9458,
+ "step": 200
+ },
+ {
+ "epoch": 0.23823704586063132,
+ "eval_loss": 0.9201429486274719,
+ "eval_runtime": 86.8761,
+ "eval_samples_per_second": 8.092,
+ "eval_steps_per_second": 2.026,
+ "step": 200
+ },
+ {
+ "epoch": 0.29779630732578916,
+ "grad_norm": 0.31664666533470154,
+ "learning_rate": 9.383533305750931e-06,
+ "loss": 0.8754,
+ "step": 250
+ },
+ {
+ "epoch": 0.357355568790947,
+ "grad_norm": 0.3039833903312683,
+ "learning_rate": 9.176665287546546e-06,
+ "loss": 0.8236,
+ "step": 300
+ },
+ {
+ "epoch": 0.357355568790947,
+ "eval_loss": 0.8294563293457031,
+ "eval_runtime": 86.6916,
+ "eval_samples_per_second": 8.109,
+ "eval_steps_per_second": 2.03,
+ "step": 300
+ },
+ {
+ "epoch": 0.4169148302561048,
+ "grad_norm": 0.3569670021533966,
+ "learning_rate": 8.969797269342161e-06,
+ "loss": 0.7643,
+ "step": 350
+ },
+ {
+ "epoch": 0.47647409172126265,
+ "grad_norm": 0.4587797224521637,
+ "learning_rate": 8.762929251137776e-06,
+ "loss": 0.7638,
+ "step": 400
+ },
+ {
+ "epoch": 0.47647409172126265,
+ "eval_loss": 0.8007138967514038,
+ "eval_runtime": 86.7325,
+ "eval_samples_per_second": 8.105,
+ "eval_steps_per_second": 2.029,
+ "step": 400
+ },
+ {
+ "epoch": 0.5360333531864205,
+ "grad_norm": 0.2903870940208435,
+ "learning_rate": 8.556061232933389e-06,
+ "loss": 0.7505,
+ "step": 450
+ },
+ {
+ "epoch": 0.5955926146515783,
+ "grad_norm": 0.39271315932273865,
+ "learning_rate": 8.349193214729004e-06,
+ "loss": 0.7773,
+ "step": 500
+ },
+ {
+ "epoch": 0.5955926146515783,
+ "eval_loss": 0.7964405417442322,
+ "eval_runtime": 86.7496,
+ "eval_samples_per_second": 8.104,
+ "eval_steps_per_second": 2.029,
+ "step": 500
+ },
+ {
+ "epoch": 0.6551518761167362,
+ "grad_norm": 0.2611350119113922,
+ "learning_rate": 8.142325196524617e-06,
+ "loss": 0.7339,
+ "step": 550
+ },
+ {
+ "epoch": 0.714711137581894,
+ "grad_norm": 0.3096601665019989,
+ "learning_rate": 7.935457178320233e-06,
+ "loss": 0.7867,
+ "step": 600
+ },
+ {
+ "epoch": 0.714711137581894,
+ "eval_loss": 0.7935438752174377,
+ "eval_runtime": 86.8192,
+ "eval_samples_per_second": 8.097,
+ "eval_steps_per_second": 2.027,
+ "step": 600
+ },
+ {
+ "epoch": 0.7742703990470519,
+ "grad_norm": 0.28062084317207336,
+ "learning_rate": 7.728589160115847e-06,
+ "loss": 0.7642,
+ "step": 650
+ },
+ {
+ "epoch": 0.8338296605122096,
+ "grad_norm": 0.2916211783885956,
+ "learning_rate": 7.521721141911461e-06,
+ "loss": 0.7436,
+ "step": 700
+ },
+ {
+ "epoch": 0.8338296605122096,
+ "eval_loss": 0.7918882369995117,
+ "eval_runtime": 86.8706,
+ "eval_samples_per_second": 8.092,
+ "eval_steps_per_second": 2.026,
+ "step": 700
+ },
+ {
+ "epoch": 0.8933889219773675,
+ "grad_norm": 0.4260661005973816,
+ "learning_rate": 7.3148531237070755e-06,
+ "loss": 0.7944,
+ "step": 750
+ },
+ {
+ "epoch": 0.9529481834425253,
+ "grad_norm": 0.3311309218406677,
+ "learning_rate": 7.1079851055026895e-06,
+ "loss": 0.7618,
+ "step": 800
+ },
+ {
+ "epoch": 0.9529481834425253,
+ "eval_loss": 0.7905948758125305,
+ "eval_runtime": 86.7293,
+ "eval_samples_per_second": 8.106,
+ "eval_steps_per_second": 2.029,
+ "step": 800
+ },
+ {
+ "epoch": 1.0119118522930315,
+ "grad_norm": 0.33902204036712646,
+ "learning_rate": 6.901117087298304e-06,
+ "loss": 0.7565,
+ "step": 850
+ },
+ {
+ "epoch": 1.0714711137581894,
+ "grad_norm": 0.3156481981277466,
+ "learning_rate": 6.694249069093918e-06,
+ "loss": 0.7834,
+ "step": 900
+ },
+ {
+ "epoch": 1.0714711137581894,
+ "eval_loss": 0.789471447467804,
+ "eval_runtime": 86.7639,
+ "eval_samples_per_second": 8.102,
+ "eval_steps_per_second": 2.028,
+ "step": 900
+ },
+ {
+ "epoch": 1.1310303752233473,
+ "grad_norm": 0.29626569151878357,
+ "learning_rate": 6.487381050889533e-06,
+ "loss": 0.7636,
+ "step": 950
+ },
+ {
+ "epoch": 1.1905896366885051,
+ "grad_norm": 0.32058003544807434,
+ "learning_rate": 6.280513032685147e-06,
+ "loss": 0.7588,
+ "step": 1000
+ },
+ {
+ "epoch": 1.1905896366885051,
+ "eval_loss": 0.7887451648712158,
+ "eval_runtime": 86.8029,
+ "eval_samples_per_second": 8.099,
+ "eval_steps_per_second": 2.028,
+ "step": 1000
+ },
+ {
+ "epoch": 1.2501488981536628,
+ "grad_norm": 0.3029298484325409,
+ "learning_rate": 6.073645014480761e-06,
+ "loss": 0.7651,
+ "step": 1050
+ },
+ {
+ "epoch": 1.3097081596188207,
+ "grad_norm": 0.30075645446777344,
+ "learning_rate": 5.866776996276376e-06,
+ "loss": 0.747,
+ "step": 1100
+ },
+ {
+ "epoch": 1.3097081596188207,
+ "eval_loss": 0.7880399227142334,
+ "eval_runtime": 86.7707,
+ "eval_samples_per_second": 8.102,
+ "eval_steps_per_second": 2.028,
+ "step": 1100
+ },
+ {
+ "epoch": 1.3692674210839786,
+ "grad_norm": 0.30230703949928284,
+ "learning_rate": 5.659908978071991e-06,
+ "loss": 0.7694,
+ "step": 1150
+ },
+ {
+ "epoch": 1.4288266825491365,
+ "grad_norm": 0.2981889545917511,
+ "learning_rate": 5.453040959867605e-06,
+ "loss": 0.7546,
+ "step": 1200
+ },
+ {
+ "epoch": 1.4288266825491365,
+ "eval_loss": 0.7873143553733826,
+ "eval_runtime": 86.9249,
+ "eval_samples_per_second": 8.087,
+ "eval_steps_per_second": 2.025,
+ "step": 1200
+ },
+ {
+ "epoch": 1.4883859440142944,
+ "grad_norm": 0.33295580744743347,
+ "learning_rate": 5.246172941663219e-06,
+ "loss": 0.7356,
+ "step": 1250
+ },
+ {
+ "epoch": 1.547945205479452,
+ "grad_norm": 0.2881334125995636,
+ "learning_rate": 5.039304923458833e-06,
+ "loss": 0.7616,
+ "step": 1300
+ },
+ {
+ "epoch": 1.547945205479452,
+ "eval_loss": 0.7868330478668213,
+ "eval_runtime": 86.9371,
+ "eval_samples_per_second": 8.086,
+ "eval_steps_per_second": 2.024,
+ "step": 1300
+ },
+ {
+ "epoch": 1.60750446694461,
+ "grad_norm": 0.42549142241477966,
+ "learning_rate": 4.832436905254448e-06,
+ "loss": 0.7613,
+ "step": 1350
+ },
+ {
+ "epoch": 1.6670637284097678,
+ "grad_norm": 0.32537880539894104,
+ "learning_rate": 4.625568887050063e-06,
+ "loss": 0.777,
+ "step": 1400
+ },
+ {
+ "epoch": 1.6670637284097678,
+ "eval_loss": 0.7863583564758301,
+ "eval_runtime": 86.9105,
+ "eval_samples_per_second": 8.089,
+ "eval_steps_per_second": 2.025,
+ "step": 1400
+ },
+ {
+ "epoch": 1.7266229898749255,
+ "grad_norm": 0.31612130999565125,
+ "learning_rate": 4.418700868845677e-06,
+ "loss": 0.7123,
+ "step": 1450
+ },
+ {
+ "epoch": 1.7861822513400833,
+ "grad_norm": 0.39497706294059753,
+ "learning_rate": 4.211832850641292e-06,
+ "loss": 0.7999,
+ "step": 1500
+ },
+ {
+ "epoch": 1.7861822513400833,
+ "eval_loss": 0.7859570980072021,
+ "eval_runtime": 86.7739,
+ "eval_samples_per_second": 8.102,
+ "eval_steps_per_second": 2.028,
+ "step": 1500
+ },
+ {
+ "epoch": 1.8457415128052412,
+ "grad_norm": 0.3905975818634033,
+ "learning_rate": 4.004964832436906e-06,
+ "loss": 0.7105,
+ "step": 1550
+ },
+ {
+ "epoch": 1.905300774270399,
+ "grad_norm": 0.3420596718788147,
+ "learning_rate": 3.7980968142325196e-06,
+ "loss": 0.7735,
+ "step": 1600
+ },
+ {
+ "epoch": 1.905300774270399,
+ "eval_loss": 0.7855594754219055,
+ "eval_runtime": 86.9977,
+ "eval_samples_per_second": 8.081,
+ "eval_steps_per_second": 2.023,
+ "step": 1600
+ },
+ {
+ "epoch": 1.964860035735557,
+ "grad_norm": 0.2925880551338196,
+ "learning_rate": 3.5912287960281345e-06,
+ "loss": 0.7675,
+ "step": 1650
+ },
+ {
+ "epoch": 2.023823704586063,
+ "grad_norm": 0.42387983202934265,
+ "learning_rate": 3.3843607778237485e-06,
+ "loss": 0.7679,
+ "step": 1700
+ },
+ {
+ "epoch": 2.023823704586063,
+ "eval_loss": 0.7852116227149963,
+ "eval_runtime": 86.7932,
+ "eval_samples_per_second": 8.1,
+ "eval_steps_per_second": 2.028,
+ "step": 1700
+ },
+ {
+ "epoch": 2.083382966051221,
+ "grad_norm": 0.3012678325176239,
+ "learning_rate": 3.1774927596193634e-06,
+ "loss": 0.7529,
+ "step": 1750
+ },
+ {
+ "epoch": 2.1429422275163788,
+ "grad_norm": 0.3647378385066986,
+ "learning_rate": 2.9706247414149774e-06,
+ "loss": 0.7772,
+ "step": 1800
+ },
+ {
+ "epoch": 2.1429422275163788,
+ "eval_loss": 0.7850247025489807,
+ "eval_runtime": 86.7181,
+ "eval_samples_per_second": 8.107,
+ "eval_steps_per_second": 2.03,
+ "step": 1800
+ },
+ {
+ "epoch": 2.202501488981537,
+ "grad_norm": 0.30863115191459656,
+ "learning_rate": 2.763756723210592e-06,
+ "loss": 0.7485,
+ "step": 1850
+ },
+ {
+ "epoch": 2.2620607504466945,
+ "grad_norm": 0.3829723298549652,
+ "learning_rate": 2.5568887050062062e-06,
+ "loss": 0.7449,
+ "step": 1900
+ },
+ {
+ "epoch": 2.2620607504466945,
+ "eval_loss": 0.7847884893417358,
+ "eval_runtime": 86.725,
+ "eval_samples_per_second": 8.106,
+ "eval_steps_per_second": 2.029,
+ "step": 1900
+ },
+ {
+ "epoch": 2.321620011911852,
+ "grad_norm": 0.3733135759830475,
+ "learning_rate": 2.3500206868018207e-06,
+ "loss": 0.7508,
+ "step": 1950
+ },
+ {
+ "epoch": 2.3811792733770103,
+ "grad_norm": 0.37344199419021606,
+ "learning_rate": 2.143152668597435e-06,
+ "loss": 0.7509,
+ "step": 2000
+ },
+ {
+ "epoch": 2.3811792733770103,
+ "eval_loss": 0.7846249938011169,
+ "eval_runtime": 86.7361,
+ "eval_samples_per_second": 8.105,
+ "eval_steps_per_second": 2.029,
+ "step": 2000
+ },
+ {
+ "epoch": 2.440738534842168,
+ "grad_norm": 0.46035104990005493,
+ "learning_rate": 1.9362846503930496e-06,
+ "loss": 0.7901,
+ "step": 2050
+ },
+ {
+ "epoch": 2.5002977963073256,
+ "grad_norm": 0.31786802411079407,
+ "learning_rate": 1.7294166321886638e-06,
+ "loss": 0.7654,
+ "step": 2100
+ },
+ {
+ "epoch": 2.5002977963073256,
+ "eval_loss": 0.7844468951225281,
+ "eval_runtime": 86.7628,
+ "eval_samples_per_second": 8.103,
+ "eval_steps_per_second": 2.029,
+ "step": 2100
+ },
+ {
+ "epoch": 2.5598570577724837,
+ "grad_norm": 0.337811678647995,
+ "learning_rate": 1.5225486139842782e-06,
+ "loss": 0.7524,
+ "step": 2150
+ },
+ {
+ "epoch": 2.6194163192376414,
+ "grad_norm": 0.29232126474380493,
+ "learning_rate": 1.3156805957798926e-06,
+ "loss": 0.7279,
+ "step": 2200
+ },
+ {
+ "epoch": 2.6194163192376414,
+ "eval_loss": 0.7843312621116638,
+ "eval_runtime": 86.7533,
+ "eval_samples_per_second": 8.103,
+ "eval_steps_per_second": 2.029,
+ "step": 2200
+ },
+ {
+ "epoch": 2.678975580702799,
+ "grad_norm": 0.4377705454826355,
+ "learning_rate": 1.1088125775755069e-06,
+ "loss": 0.7593,
+ "step": 2250
+ },
+ {
+ "epoch": 2.738534842167957,
+ "grad_norm": 0.36447674036026,
+ "learning_rate": 9.019445593711212e-07,
+ "loss": 0.7523,
+ "step": 2300
+ },
+ {
+ "epoch": 2.738534842167957,
+ "eval_loss": 0.7842342257499695,
+ "eval_runtime": 86.8097,
+ "eval_samples_per_second": 8.098,
+ "eval_steps_per_second": 2.027,
+ "step": 2300
+ },
+ {
+ "epoch": 2.798094103633115,
+ "grad_norm": 0.38712531328201294,
+ "learning_rate": 6.950765411667356e-07,
+ "loss": 0.7347,
+ "step": 2350
+ },
+ {
+ "epoch": 2.857653365098273,
+ "grad_norm": 0.34733325242996216,
+ "learning_rate": 4.882085229623501e-07,
+ "loss": 0.7605,
+ "step": 2400
+ },
+ {
+ "epoch": 2.857653365098273,
+ "eval_loss": 0.7841441035270691,
+ "eval_runtime": 86.8339,
+ "eval_samples_per_second": 8.096,
+ "eval_steps_per_second": 2.027,
+ "step": 2400
+ },
+ {
+ "epoch": 2.9172126265634306,
+ "grad_norm": 0.3819723129272461,
+ "learning_rate": 2.8134050475796445e-07,
+ "loss": 0.7412,
+ "step": 2450
+ },
+ {
+ "epoch": 2.9767718880285887,
+ "grad_norm": 0.3409363329410553,
+ "learning_rate": 7.447248655357883e-08,
+ "loss": 0.7425,
+ "step": 2500
+ },
+ {
+ "epoch": 2.9767718880285887,
+ "eval_loss": 0.7841161489486694,
+ "eval_runtime": 87.2598,
+ "eval_samples_per_second": 8.056,
+ "eval_steps_per_second": 2.017,
+ "step": 2500
+ }
+ ],
+ "logging_steps": 50,
+ "max_steps": 2517,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 3,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 8.17584683310121e+17,
+ "train_batch_size": 4,
+ "trial_name": null,
+ "trial_params": null
+ }
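The `log_history` above interleaves training records (`loss`, `grad_norm`, `learning_rate`, every `logging_steps: 50`) with evaluation records (`eval_loss` etc., every `eval_steps: 100`). A small sketch of recovering the best checkpoint from such a state dict; the helper name `best_eval` is ours, and the sample records are copied verbatim from the log above:

```python
def best_eval(state: dict) -> tuple:
    """Return (step, eval_loss) of the lowest eval_loss in log_history."""
    evals = [e for e in state["log_history"] if "eval_loss" in e]
    best = min(evals, key=lambda e: e["eval_loss"])
    return best["step"], best["eval_loss"]

# Three records copied from the trainer state above (two eval, one train).
state = {
    "log_history": [
        {"epoch": 2.857653365098273, "eval_loss": 0.7841441035270691, "step": 2400},
        {"epoch": 2.9767718880285887, "eval_loss": 0.7841161489486694, "step": 2500},
        {"epoch": 2.9767718880285887, "loss": 0.7425, "step": 2500},
    ]
}
step, loss = best_eval(state)
```

On the full log this reproduces the `best_global_step: 2500` / `best_metric: 0.7841161489486694` fields recorded at the top of the file.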
checkpoint-2517/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b93397caa9376b8eea4ec6b2e017638fbcd88c220589b7cb21e7657564bf41c5
+ size 5304
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "</s>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "add_prefix_space": null,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "extra_special_tokens": {},
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
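As in the checkpoint copies above, the root tokenizer config reuses `</s>` as the pad token, since the Llama-2 tokenizer ships no dedicated pad token. A quick sketch validating that convention from the JSON; the fragment is abbreviated from the file above:

```python
import json

# Abbreviated fragment of tokenizer_config.json from above.
config = json.loads("""
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "</s>",
  "unk_token": "<unk>",
  "tokenizer_class": "LlamaTokenizer"
}
""")

# Padding reuses the EOS token, so downstream code should mask pad
# positions out of the loss rather than treating them as real EOS.
assert config["pad_token"] == config["eos_token"]
```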