tbmod vwxyzjn committed on
Commit c4e407f · 0 Parent(s):

Duplicate from allenai/OLMo-2-1124-7B-Instruct


Co-authored-by: Shengyi Costa Huang <vwxyzjn@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,151 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model:
+ - allenai/OLMo-2-1124-7B-DPO
+ library_name: transformers
+ datasets:
+ - allenai/RLVR-GSM
+ ---
+
+ <img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px">
+
+ # OLMo-2-1124-7B-Instruct
+
+ ## NOTE: 1/3/2025 UPDATE:
+
+ Upon the initial release of the OLMo 2 models, we realized the post-trained models did not share the pre-tokenization logic that the base models use. As a result, we have trained new post-trained models. The new models are available under the same names as the original models, but we have made the old models available with the postfix "-preview". See [OLMo 2 Preview Post-trained Models](https://huggingface.co/collections/allenai/olmo-2-preview-post-trained-models-6762f662c660962e52de7c96) for the collection of the legacy models.
+
+ ## Release Documentation
+
+ OLMo 2 7B Instruct November 2024 is a post-trained variant of the [OLMo-2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture), further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
+ Tülu 3 is designed for state-of-the-art performance on a diverse range of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
+ Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
+
+ OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
+ These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
+ The core models released in this batch include the following:
+
+ | **Stage** | **OLMo 2 7B** | **OLMo 2 13B** |
+ |-----------|---------------|----------------|
+ | **Base Model** | [allenai/OLMo2-7B-1124](https://huggingface.co/allenai/OLMo2-7B-1124) | [allenai/OLMo-2-13B-1124](https://huggingface.co/allenai/OLMo-2-13B-1124) |
+ | **SFT** | [allenai/OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) | [allenai/OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) |
+ | **DPO** | [allenai/OLMo-2-1124-7B-DPO](https://huggingface.co/allenai/OLMo-2-1124-7B-DPO) | [allenai/OLMo-2-1124-13B-DPO](https://huggingface.co/allenai/OLMo-2-1124-13B-DPO) |
+ | **Final Models (RLVR)** | [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) | [allenai/OLMo-2-1124-13B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-13B-Instruct) |
+ | **Reward Model (RM)**| [allenai/OLMo-2-1124-7B-RM](https://huggingface.co/allenai/OLMo-2-1124-7B-RM) | [allenai/OLMo-2-1124-13B-RM](https://huggingface.co/allenai/OLMo-2-1124-13B-RM) |
+
+ ## Model description
+
+ - **Model type:** A model trained on a mix of publicly available, synthetic, and human-created datasets.
+ - **Language(s) (NLP):** Primarily English
+ - **License:** Apache 2.0
+ - **Finetuned from model:** allenai/OLMo-2-1124-7B-DPO
+
+ ### Model Sources
+
+ - **Project Page:** https://allenai.org/olmo
+ - **Repositories:**
+   - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
+   - Evaluation code: https://github.com/allenai/olmes
+   - Further fine-tuning code: https://github.com/allenai/open-instruct
+ - **Paper:** https://arxiv.org/abs/2501.00656
+ - **Demo:** https://playground.allenai.org/
+
+ ## Installation
+
+ OLMo 2 will be supported in the next release of Transformers; until then, install it from the main branch using:
+ ```bash
+ pip install --upgrade git+https://github.com/huggingface/transformers.git
+ ```
+
+ ## Using the model
+
+ ### Loading with HuggingFace
+
+ To load the model with HuggingFace, use the following snippet:
+ ```python
+ from transformers import AutoModelForCausalLM
+
+ olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B-Instruct")
+ ```
+
+ ### Chat template
+
+ The chat template for our models is formatted as:
+ ```
+ <|endoftext|><|user|>\nHow are you doing?\n<|assistant|>\nI'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
+ ```
+ Or with new lines expanded:
+ ```
+ <|endoftext|><|user|>
+ How are you doing?
+ <|assistant|>
+ I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
+ ```
+ It is embedded within the tokenizer as well, for `tokenizer.apply_chat_template`.
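The documented format above can be reproduced with a small, self-contained helper. This is an illustrative sketch only (the function name is ours); the authoritative template is the one embedded in the tokenizer and applied via `tokenizer.apply_chat_template`:

```python
def format_olmo_chat(messages, eos="<|endoftext|>"):
    """Render {role, content} dicts into the chat format documented above.

    Illustrative only: the tokenizer's embedded chat template is the
    source of truth for the exact rendering.
    """
    out = eos  # the conversation opens with the <|endoftext|> marker
    for m in messages:
        out += f"<|{m['role']}|>\n{m['content']}"
        # user turns end with a newline; assistant turns end with EOS
        out += "\n" if m["role"] == "user" else eos
    return out
```

Feeding it the example conversation from the template section yields the same string shown above.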
+
+ ### System prompt
+
+ In Ai2 demos, we use this system prompt by default:
+ ```
+ You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.
+ ```
+ The model has not been trained with a specific system prompt in mind.
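Since the model was not trained against a fixed system prompt, one common pattern is simply to prepend the demo prompt above as a `system` message before applying the chat template. This hypothetical helper (not part of the repo) shows that pattern; whether and how a system role is rendered is determined by the tokenizer's embedded template:

```python
# The Ai2 demo default, quoted from the section above.
SYSTEM_PROMPT = (
    "You are OLMo 2, a helpful and harmless AI Assistant built by the "
    "Allen Institute for AI."
)

def with_system_prompt(messages, system=SYSTEM_PROMPT):
    """Return a copy of `messages` with the system prompt prepended."""
    return [{"role": "system", "content": system}] + list(messages)
```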
+
+ ### Bias, Risks, and Limitations
+
+ The OLMo 2 models have limited safety training and, unlike ChatGPT, are not deployed with in-the-loop filtering of responses, so the model can produce problematic outputs (especially when prompted to do so).
+ See the Falcon 180B model card for an example of this.
+
+ ## Performance
+
+ | Model | Average | AlpacaEval | BBH | DROP | GSM8k | IFEval | MATH | MMLU | Safety | PopQA | TruthQA |
+ |-------|---------|------------|-----|------|-------|--------|------|------|--------|-------|---------|
+ | **Open weights models** |
+ | Gemma-2-9B-it | 51.9 | 43.7 | 2.5 | 58.8 | 79.7 | 69.9 | 29.8 | 69.1 | 75.5 | 28.3 | 61.4 |
+ | Ministral-8B-Instruct | 52.1 | 31.4 | 56.2 | 56.2 | 80.0 | 56.4 | 40.0 | 68.5 | 56.2 | 20.2 | 55.5 |
+ | Mistral-Nemo-Instruct-2407 | 50.9 | 45.8 | 54.6 | 23.6 | 81.4 | 64.5 | 31.9 | 70.0 | 52.7 | 26.9 | 57.7 |
+ | Qwen-2.5-7B-Instruct | 57.1 | 29.7 | 25.3 | 54.4 | 83.8 | 74.7 | 69.9 | 76.6 | 75.0 | 18.1 | 63.1 |
+ | Llama-3.1-8B-Instruct | 58.9 | 25.8 | 69.7 | 61.7 | 83.4 | 80.6 | 42.5 | 71.3 | 70.2 | 28.4 | 55.1 |
+ | Tülu 3 8B | 60.4 | 34.0 | 66.0 | 62.6 | 87.6 | 82.4 | 43.7 | 68.2 | 75.4 | 29.1 | 55.0 |
+ | Qwen-2.5-14B-Instruct | 60.8 | 34.6 | 34.0 | 50.5 | 83.9 | 82.4 | 70.6 | 81.1 | 79.3 | 21.1 | 70.8 |
+ | **Fully open models** |
+ | OLMo-7B-Instruct | 28.2 | 5.2 | 35.3 | 30.7 | 14.3 | 32.2 | 2.1 | 46.3 | 54.0 | 17.1 | 44.5 |
+ | OLMo-7B-0424-Instruct | 33.1 | 8.5 | 34.4 | 47.9 | 23.2 | 39.2 | 5.2 | 48.9 | 49.3 | 18.9 | 55.2 |
+ | OLMoE-1B-7B-0924-Instruct | 35.5 | 8.5 | 37.2 | 34.3 | 47.2 | 46.2 | 8.4 | 51.6 | 51.6 | 20.6 | 49.1 |
+ | MAP-Neo-7B-Instruct | 42.9 | 17.6 | 26.4 | 48.2 | 69.4 | 35.9 | 31.5 | 56.5 | 73.7 | 18.4 | 51.6 |
+ | *OLMo-2-7B-SFT* | 50.2 | 10.2 | 49.7 | 59.6 | 74.6 | 66.9 | 25.3 | 61.1 | 82.1 | 23.6 | 48.6 |
+ | *OLMo-2-7B-DPO* | 54.2 | 27.9 | 46.7 | 60.2 | 82.6 | 73.0 | 30.3 | 60.8 | 81.0 | 23.5 | 56.0 |
+ | *OLMo-2-13B-SFT* | 55.3 | 11.5 | 59.6 | 71.3 | 76.3 | 68.6 | 29.5 | 68.0 | 82.3 | 29.4 | 57.1 |
+ | *OLMo-2-13B-DPO* | 60.6 | 38.3 | 57.9 | 71.5 | 82.3 | 80.2 | 35.2 | 67.9 | 79.7 | 29.0 | 63.9 |
+ | **OLMo-2-7B-1124-Instruct** | 54.8 | 29.1 | 46.6 | 60.5 | 85.1 | 72.3 | 32.5 | 61.3 | 80.6 | 23.2 | 56.5 |
+ | **OLMo-2-13B-1124-Instruct** | 62.0 | 39.5 | 58.8 | 71.5 | 87.4 | 82.6 | 39.2 | 68.5 | 79.1 | 28.8 | 64.3 |
+
+ ## License and use
+
+ OLMo 2 is licensed under the Apache 2.0 license.
+ OLMo 2 is intended for research and educational use.
+ For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
+ This model has been fine-tuned using a dataset mix with outputs generated from third-party models and is subject to additional terms: [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
+
+ ## Citation
+
+ ```bibtex
+ @article{olmo20242olmo2furious,
+   title={2 OLMo 2 Furious},
+   author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
+   year={2024},
+   eprint={2501.00656},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2501.00656},
+ }
+ ```
+
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "_name_or_path": "allenai/open_instruct_dev",
+   "architectures": [
+     "Olmo2ForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "eos_token_id": 100257,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 11008,
+   "max_position_embeddings": 4096,
+   "model_type": "olmo2",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 32,
+   "pad_token_id": 100277,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 500000,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.47.0.dev0",
+   "use_cache": false,
+   "vocab_size": 100352
+ }
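As an illustrative sanity check (not part of the repository), the attention geometry implied by the config values above can be derived in a few lines; the variable names here are local, not fields of any library:

```python
# Values quoted from config.json above.
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 32

# Per-head dimension of the attention projections.
head_dim = hidden_size // num_attention_heads  # 4096 / 32 = 128

# num_key_value_heads equals num_attention_heads, so this is standard
# multi-head attention rather than grouped-query attention.
is_mha = num_key_value_heads == num_attention_heads
```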
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "eos_token_id": 100257,
+   "pad_token_id": 100277,
+   "transformers_version": "4.47.0.dev0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05ae91e1171f1f1753b23df3a5a6740d89da377932b33a1661bb7485d6ec530e
+ size 4970591184
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d41b81fe0d7e4c1d673a77beded5ee77080e39d6a6940b523e82f61eaa1dcbd
+ size 4981161496
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51d701cc400703094ae4b6434b067ee8810056170ce4703e93b2f5b2cb93a51d
+ size 4645523448
model.safetensors.index.json ADDED
@@ -0,0 +1,362 @@
+ {
+ "metadata": {
+ "total_size": 14597234688
+ },
+ "weight_map": {
+ "lm_head.weight": "model-00003-of-00003.safetensors",
+ "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.22.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.22.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
+ "model.layers.27.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
237
+ "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
238
+ "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
239
+ "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
240
+ "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
241
+ "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
242
+ "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
243
+ "model.layers.28.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
244
+ "model.layers.28.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
245
+ "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
246
+ "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
247
+ "model.layers.28.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
248
+ "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
249
+ "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
250
+ "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
251
+ "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
252
+ "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
253
+ "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
254
+ "model.layers.29.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
255
+ "model.layers.29.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
256
+ "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
257
+ "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
258
+ "model.layers.29.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
259
+ "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
260
+ "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
261
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
262
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
263
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
264
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
265
+ "model.layers.3.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
266
+ "model.layers.3.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
267
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
268
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
269
+ "model.layers.3.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
270
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
271
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
272
+ "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
273
+ "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
274
+ "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
275
+ "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
276
+ "model.layers.30.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
277
+ "model.layers.30.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
278
+ "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
279
+ "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
280
+ "model.layers.30.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
281
+ "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
282
+ "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
283
+ "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
284
+ "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
285
+ "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
286
+ "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
287
+ "model.layers.31.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
288
+ "model.layers.31.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
289
+ "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
290
+ "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
291
+ "model.layers.31.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
292
+ "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
293
+ "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
294
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
295
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
296
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
297
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
298
+ "model.layers.4.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
299
+ "model.layers.4.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
300
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
301
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
302
+ "model.layers.4.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
303
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
304
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
305
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
306
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
307
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
308
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
309
+ "model.layers.5.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
310
+ "model.layers.5.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
311
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
312
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
313
+ "model.layers.5.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
314
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
315
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
316
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
317
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
318
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
319
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
320
+ "model.layers.6.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
321
+ "model.layers.6.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
322
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
323
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
324
+ "model.layers.6.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
325
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
326
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
327
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
328
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
329
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
330
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
331
+ "model.layers.7.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
332
+ "model.layers.7.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
333
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
334
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
335
+ "model.layers.7.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
336
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
337
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
338
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
339
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
340
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
341
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
342
+ "model.layers.8.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
343
+ "model.layers.8.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
344
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
345
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
346
+ "model.layers.8.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
347
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
348
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
349
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
350
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
351
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
352
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
353
+ "model.layers.9.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
354
+ "model.layers.9.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
355
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
356
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
357
+ "model.layers.9.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
358
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
359
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
360
+ "model.norm.weight": "model-00003-of-00003.safetensors"
361
+ }
362
+ }
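The `weight_map` above is the core of `model.safetensors.index.json`: it maps every tensor name to the shard file that stores it, which is how a loader knows which of the three `model-0000X-of-00003.safetensors` files to open for a given weight. A minimal sketch of that lookup, using a hypothetical `shard_for` helper and a three-entry excerpt of the map above (not the full index):

```python
import json

# Excerpt of the weight_map from model.safetensors.index.json above.
index_json = """
{
  "weight_map": {
    "model.layers.23.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.norm.weight": "model-00003-of-00003.safetensors"
  }
}
"""

def shard_for(index: dict, tensor_name: str) -> str:
    """Return the shard filename that holds `tensor_name`."""
    return index["weight_map"][tensor_name]

index = json.loads(index_json)
print(shard_for(index, "model.norm.weight"))  # model-00003-of-00003.safetensors
```

In practice `transformers` performs this resolution internally when loading a sharded checkpoint; the sketch only illustrates the index file's role.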
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
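A detail worth noticing in `special_tokens_map.json` above: `bos_token`, `eos_token`, and `unk_token` all alias the same string, `<|endoftext|>`, while only `pad_token` is distinct (`<|pad|>`). A small sketch that parses the file's structure and checks this (the file content is inlined here for illustration):

```python
import json

# Condensed copy of special_tokens_map.json above; only "content" matters here.
raw = """
{
  "bos_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "eos_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
"""

# Reduce each token spec to its surface string.
special = {name: spec["content"] for name, spec in json.loads(raw).items()}
print(special["bos_token"], special["pad_token"])  # <|endoftext|> <|pad|>
```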
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,190 @@
+ {
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "100256": {
+ "content": "<|extra_id_0|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100257": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100258": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100259": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100260": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100261": {
+ "content": "|||PHONE_NUMBER|||",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100262": {
+ "content": "|||EMAIL_ADDRESS|||",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100263": {
+ "content": "|||IP_ADDRESS|||",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100264": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100265": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100266": {
+ "content": "<|extra_id_1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100267": {
+ "content": "<|extra_id_2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100268": {
+ "content": "<|extra_id_3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100269": {
+ "content": "<|extra_id_4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100270": {
+ "content": "<|extra_id_5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100271": {
+ "content": "<|extra_id_6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100272": {
+ "content": "<|extra_id_7|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100273": {
+ "content": "<|extra_id_8|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100274": {
+ "content": "<|extra_id_9|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100275": {
+ "content": "<|extra_id_10|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "100276": {
+ "content": "<|endofprompt|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100277": {
+ "content": "<|pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<|endoftext|>",
+ "chat_template": "{{ bos_token }}{% for message in messages %}{% if message['role'] == 'system' %}{{ '<|system|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'user' %}{{ '<|user|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'assistant' %}{% if not loop.last %}{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}{% else %}{{ '<|assistant|>\n' + message['content'] + eos_token }}{% endif %}{% endif %}{% if loop.last and add_generation_prompt %}{{ '<|assistant|>\n' }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "extra_special_tokens": {},
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "<|pad|>",
+ "tokenizer_class": "GPT2Tokenizer",
+ "unk_token": "<|endoftext|>"
+ }
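The `chat_template` field in `tokenizer_config.json` above is a Jinja template that wraps each message in `<|system|>`/`<|user|>`/`<|assistant|>` headers, appends the EOS token after assistant turns, and optionally emits a trailing `<|assistant|>\n` generation prompt. In practice you would render it with `tokenizer.apply_chat_template(...)`; the following is a plain-Python re-implementation of the template's logic, for illustration only, with the token strings taken from the config above:

```python
# Plain-Python mirror of the Jinja chat template; bos/eos strings come
# from tokenizer_config.json ("<|endoftext|>" for both).
BOS = EOS = "<|endoftext|>"

def render_chat(messages, add_generation_prompt=False):
    out = BOS
    for i, m in enumerate(messages):
        last = i == len(messages) - 1
        if m["role"] == "system":
            out += "<|system|>\n" + m["content"] + "\n"
        elif m["role"] == "user":
            out += "<|user|>\n" + m["content"] + "\n"
        elif m["role"] == "assistant":
            # Assistant turns are terminated by EOS; a newline follows
            # only when another message comes after.
            out += "<|assistant|>\n" + m["content"] + EOS
            if not last:
                out += "\n"
        if last and add_generation_prompt:
            out += "<|assistant|>\n"
    return out

print(render_chat([{"role": "user", "content": "Hi"}], add_generation_prompt=True))
```

Rendering a single user turn with `add_generation_prompt=True` yields `<|endoftext|><|user|>\nHi\n<|assistant|>\n`, which is the prompt shape the model expects at inference time.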
vocab.json ADDED
The diff for this file is too large to render. See raw diff