zeekay committed
Commit ce2034f · verified · 1 Parent(s): 2f00467

Upload zen-eco with MLX formats

.gitattributes CHANGED
@@ -36,3 +36,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  checkpoint-10/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-15/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ mlx/tokenizer.json filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,54 @@
+ Qwen RESEARCH LICENSE AGREEMENT
+
+ Qwen RESEARCH LICENSE AGREEMENT Release Date: September 19, 2024
+
+ By clicking to agree or by using or distributing any portion or element of the Qwen Materials, you will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
+
+ 1. Definitions
+ a. This Qwen RESEARCH LICENSE AGREEMENT (this "Agreement") shall mean the terms and conditions for use, reproduction, distribution and modification of the Materials as defined by this Agreement.
+ b. "We" (or "Us") shall mean Alibaba Cloud.
+ c. "You" (or "Your") shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Materials for any purpose and in any field of use.
+ d. "Third Parties" shall mean individuals or legal entities that are not under common control with us or you.
+ e. "Qwen" shall mean the large language models, and software and algorithms, consisting of trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by us.
+ f. "Materials" shall mean, collectively, Alibaba Cloud's proprietary Qwen and Documentation (and any portion thereof) made available under this Agreement.
+ g. "Source" form shall mean the preferred form for making modifications, including but not limited to model source code, documentation source, and configuration files.
+ h. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
+ i. "Non-Commercial" shall mean for research or evaluation purposes only.
+
+ 2. Grant of Rights
+ a. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Alibaba Cloud's intellectual property or other rights owned by us embodied in the Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Materials FOR NON-COMMERCIAL PURPOSES ONLY.
+ b. If you are commercially using the Materials, you shall request a license from us.
+
+ 3. Redistribution
+ You may distribute copies or make the Materials, or derivative works thereof, available as part of a product or service that contains any of them, with or without modifications, and in Source or Object form, provided that you meet the following conditions:
+ a. You shall give any other recipients of the Materials or derivative works a copy of this Agreement;
+ b. You shall cause any modified files to carry prominent notices stating that you changed the files;
+ c. You shall retain in all copies of the Materials that you distribute the following attribution notices within a "Notice" text file distributed as a part of such copies: "Qwen is licensed under the Qwen RESEARCH LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved."; and
+ d. You may add your own copyright statement to your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of your modifications, or for any such derivative works as a whole, provided your use, reproduction, and distribution of the work otherwise complies with the terms and conditions of this Agreement.
+
+ 4. Rules of use
+ a. The Materials may be subject to export controls or restrictions in China, the United States or other countries or regions. You shall comply with applicable laws and regulations in your use of the Materials.
+ b. If you use the Materials or any outputs or results therefrom to create, train, fine-tune, or improve an AI model that is distributed or made available, you shall prominently display "Built with Qwen" or "Improved using Qwen" in the related product documentation.
+
+ 5. Intellectual Property
+ a. We retain ownership of all intellectual property rights in and to the Materials and derivatives made by or for us. Conditioned upon compliance with the terms and conditions of this Agreement, with respect to any derivative works and modifications of the Materials that are made by you, you are and will be the owner of such derivative works and modifications.
+ b. No trademark license is granted to use the trade names, trademarks, service marks, or product names of us, except as required to fulfill notice requirements under this Agreement or as required for reasonable and customary use in describing and redistributing the Materials.
+ c. If you commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against us or any entity alleging that the Materials or any output therefrom, or any part of the foregoing, infringe any intellectual property or other right owned or licensable by you, then all licenses granted to you under this Agreement shall terminate as of the date such lawsuit or other proceeding is commenced or brought.
+
+ 6. Disclaimer of Warranty and Limitation of Liability
+ a. We are not obligated to support, update, provide training for, or develop any further version of the Qwen Materials or to grant any license thereto.
+ b. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTY AND ASSUME NO RESPONSIBILITY FOR THE SAFETY OR STABILITY OF THE MATERIALS AND ANY OUTPUT THEREFROM.
+ c. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT OF IT, NO MATTER HOW IT'S CAUSED.
+ d. You will defend, indemnify and hold harmless us from and against any claim by any third party arising out of or related to your use or distribution of the Materials.
+
+ 7. Survival and Termination.
+ a. The term of this Agreement shall commence upon your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
+ b. We may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you must delete and cease use of the Materials. Sections 6 and 8 shall survive the termination of this Agreement.
+
+ 8. Governing Law and Jurisdiction.
+ a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
+ b. The People's Courts in Hangzhou City shall have exclusive jurisdiction over any dispute arising out of this Agreement.
+
+ 9. Other Terms and Conditions.
+ a. Any arrangements, understandings, or agreements regarding the Material not stated herein are separate from and independent of the terms and conditions of this Agreement. You shall request a separate license from us, if you use the Materials in ways not expressly agreed to in this Agreement.
+ b. We shall not be bound by any additional or different terms or conditions communicated by you unless expressly agreed.
README.md CHANGED
@@ -1,207 +1,82 @@
  ---
- base_model: Qwen/Qwen2.5-Coder-3B-Instruct
- library_name: peft
- pipeline_tag: text-generation
+ license: apache-2.0
  tags:
- - base_model:adapter:Qwen/Qwen2.5-Coder-3B-Instruct
- - lora
- - transformers
+ - zen
+ - mlx
+ - gguf
+ - safetensors
+ language:
+ - en
+ pipeline_tag: text-generation
  ---
 
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
- ### Framework versions
- - PEFT 0.17.1
+ # Zen Eco v1.0.1
+
+ ## Available Formats
+
+ This model is available in multiple formats for different platforms:
+
+ ### SafeTensors (Base Format)
+ - Standard HuggingFace format
+ - Compatible with Transformers library
+ - Use for training and fine-tuning
+
+ ### MLX Format (Apple Silicon Optimized)
+ - `/mlx/` - Full precision MLX format
+ - `/mlx-4bit/` - 4-bit quantized (fastest on Mac)
+
+ ### GGUF Format (Coming Soon)
+ - Will be added for llama.cpp compatibility
+ - CPU-optimized for all platforms
+
+ ## Quick Start
+
+ ### Using Transformers
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("zenlm/zen-eco-4b-instruct")
+ tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-instruct")
+ ```
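
The quick start above only loads the weights. For an actual chat completion the usual Transformers pattern applies; here is a minimal sketch, assuming the repo's tokenizer carries the Qwen-style chat template (the `mlx/chat_template.jinja` added in this commit suggests it does):

```python
# Sketch: one chat turn via the standard Transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("zenlm/zen-eco-4b-instruct", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-instruct")

messages = [{"role": "user", "content": "Write a haiku about autumn."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```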
+
+ ### Using MLX (Apple Silicon)
+ ```python
+ from mlx_lm import load, generate
+
+ # Load 4-bit model (fastest)
+ model, tokenizer = load("zenlm/zen-eco-4b-instruct", adapter_path="mlx-4bit")
+
+ # Generate
+ response = generate(model, tokenizer, prompt="Your prompt", max_tokens=256)
+ print(response)
+ ```
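
Note that `adapter_path` in `mlx_lm.load` is normally used for LoRA adapters rather than for selecting a repo subfolder. If the call above does not resolve `mlx-4bit/`, a safer route is to download just that subfolder and load it by path; a sketch, assuming `huggingface_hub` is installed:

```python
# Sketch: fetch only the 4-bit MLX subfolder and point load() at it.
from huggingface_hub import snapshot_download
from mlx_lm import load, generate

local = snapshot_download("zenlm/zen-eco-4b-instruct", allow_patterns=["mlx-4bit/*"])
model, tokenizer = load(f"{local}/mlx-4bit")

print(generate(model, tokenizer, prompt="Your prompt", max_tokens=256))
```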
+
+ ### Using llama.cpp (GGUF - Coming Soon)
+ ```bash
+ llama-cli -m gguf/zen-eco-q4_k_m.gguf -p "Your prompt"
+ ```
+
+ ## Training with Zoo-Gym
+
+ ```bash
+ pip install zoo-gym
+ zoo-gym train --model zenlm/zen-eco-4b-instruct --data your_data.jsonl
+ ```
+
+ ## Model Details
+
+ - **Architecture**: Based on Qwen 2.5
+ - **Training**: Zoo-Gym with RAIS (Recursive AI Self-Improvement System)
+ - **License**: Apache 2.0
+ - **Partnership**: Hanzo AI x Zoo Labs Foundation
+
+ ## Citation
+
+ ```bibtex
+ @misc{zen_zen_eco_2025,
+ title={Zen Eco v1.0.1},
+ author={Hanzo AI and Zoo Labs Foundation},
+ year={2025},
+ version={1.0.1}
+ }
+ ```
config.json CHANGED
@@ -10,7 +10,7 @@
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 32768,
- "max_window_layers": 36,
+ "max_window_layers": 70,
  "model_type": "qwen2",
  "num_attention_heads": 16,
  "num_hidden_layers": 36,
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "bos_token_id": 151643,
+ "pad_token_id": 151643,
+ "do_sample": true,
+ "eos_token_id": [
+ 151645,
+ 151643
+ ],
+ "repetition_penalty": 1.05,
+ "temperature": 0.7,
+ "top_p": 0.8,
+ "top_k": 20,
+ "transformers_version": "4.37.0"
+ }
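
These are the stock Qwen2.5-instruct sampling defaults; `generate()` picks them up automatically from `generation_config.json`, and each one can be overridden per call. Spelled out as explicit arguments (a sketch, continuing from the Transformers quick-start snippet above):

```python
# Sketch: the generation_config defaults as explicit generate() kwargs.
# `model` and `inputs` come from the Transformers quick-start snippet above.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
    eos_token_id=[151645, 151643],  # <|im_end|>, <|endoftext|>
    pad_token_id=151643,
    max_new_tokens=256,
)
```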
merges.txt CHANGED
@@ -1,4 +1,3 @@
- #version: 0.2
  Ġ Ġ
  ĠĠ ĠĠ
  i n
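
`merges.txt` stores the byte-level BPE merge rules (`Ġ` encodes a leading space); this change only drops the `#version: 0.2` header line. A round-trip check that the GPT-2-style vocab/merges files still load, as a sketch using the slow tokenizer, which is the code path that actually reads `merges.txt`:

```python
# Sketch: byte-level BPE round-trip via the slow (merges.txt-based) tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-instruct", use_fast=False)
ids = tok.encode("hello world")
assert tok.decode(ids) == "hello world"
```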
mlx/README.md ADDED
@@ -0,0 +1,12 @@
+ ---
+ license: other
+ license_name: qwen-research
+ license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-3B
+ tags:
+ - mlx
+ library_name: mlx
+ ---
mlx/added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "</tool_call>": 151658,
+ "<tool_call>": 151657,
+ "<|box_end|>": 151649,
+ "<|box_start|>": 151648,
+ "<|endoftext|>": 151643,
+ "<|file_sep|>": 151664,
+ "<|fim_middle|>": 151660,
+ "<|fim_pad|>": 151662,
+ "<|fim_prefix|>": 151659,
+ "<|fim_suffix|>": 151661,
+ "<|im_end|>": 151645,
+ "<|im_start|>": 151644,
+ "<|image_pad|>": 151655,
+ "<|object_ref_end|>": 151647,
+ "<|object_ref_start|>": 151646,
+ "<|quad_end|>": 151651,
+ "<|quad_start|>": 151650,
+ "<|repo_name|>": 151663,
+ "<|video_pad|>": 151656,
+ "<|vision_end|>": 151653,
+ "<|vision_pad|>": 151654,
+ "<|vision_start|>": 151652
+ }
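
These are the standard Qwen2.5 special tokens: ChatML markers (`<|im_start|>`, `<|im_end|>`), tool-call tags, and fill-in-the-middle and vision placeholders. The ids can be cross-checked from the tokenizer, as a sketch:

```python
# Sketch: special-token ids should match added_tokens.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-instruct")
print(tok.convert_tokens_to_ids("<|im_start|>"))   # expected 151644
print(tok.convert_tokens_to_ids("<|im_end|>"))     # expected 151645
print(tok.convert_tokens_to_ids("<|endoftext|>"))  # expected 151643
```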
mlx/chat_template.jinja ADDED
@@ -0,0 +1,54 @@
+ {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- messages[0]['content'] }}
+ {%- else %}
+ {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
+ {%- endif %}
+ {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
+ {%- else %}
+ {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- for message in messages %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {{- '<|im_start|>' + message.role }}
+ {%- if message.content %}
+ {{- '\n' + message.content }}
+ {%- endif %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '\n<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
+ {{- tool_call.arguments | tojson }}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- message.content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- endif %}
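
This is the standard Qwen2.5 ChatML template with a tool-calling branch. Recent Transformers versions (4.42+) can render it directly, including the `tools` path; a sketch, where the `get_weather` schema is a made-up example and not something shipped in this repo:

```python
# Sketch: render the ChatML template above, exercising the tools branch.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-instruct")
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]
messages = [{"role": "user", "content": "Weather in Paris?"}]
prompt = tok.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
print(prompt)  # <|im_start|>system ... <tools> ... <|im_start|>assistant
```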
mlx/config.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 151643,
+ "eos_token_id": 151645,
+ "hidden_act": "silu",
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": 11008,
+ "max_position_embeddings": 32768,
+ "max_window_layers": 70,
+ "model_type": "qwen2",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 36,
+ "num_key_value_heads": 2,
+ "quantization": {
+ "group_size": 64,
+ "bits": 4,
+ "mode": "affine"
+ },
+ "quantization_config": {
+ "group_size": 64,
+ "bits": 4,
+ "mode": "affine"
+ },
+ "rms_norm_eps": 1e-06,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "tie_word_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.43.1",
+ "use_cache": true,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ }
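
The `quantization` / `quantization_config` blocks (4-bit affine, group size 64) are what `mlx_lm`'s converter writes when quantizing. An export with the same settings is produced roughly as follows; a sketch, assuming a recent `mlx-lm`:

```python
# Sketch: 4-bit MLX export with the same quantization settings as above.
from mlx_lm import convert

convert(
    "Qwen/Qwen2.5-3B",   # or the fine-tuned safetensors checkpoint
    mlx_path="mlx-4bit",
    quantize=True,
    q_bits=4,
    q_group_size=64,
)
```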
mlx/generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "bos_token_id": 151643,
+ "pad_token_id": 151643,
+ "do_sample": true,
+ "eos_token_id": [
+ 151645,
+ 151643
+ ],
+ "repetition_penalty": 1.05,
+ "temperature": 0.7,
+ "top_p": 0.8,
+ "top_k": 20,
+ "transformers_version": "4.37.0"
+ }
mlx/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
mlx/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8e4560e7d712377ddbe97e3cbe266477848df13b0f885dbcb825723b523031a
+ size 1736293583
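
What is stored in git is only this LFS pointer; the ~1.7 GB blob itself is fetched by `git lfs` on checkout or by the Hub download APIs. The `oid` can double as an integrity check, as a sketch:

```python
# Sketch: verify a downloaded shard against the LFS pointer's sha256 oid.
import hashlib

EXPECTED = "b8e4560e7d712377ddbe97e3cbe266477848df13b0f885dbcb825723b523031a"

h = hashlib.sha256()
with open("mlx/model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == EXPECTED, "hash mismatch: incomplete or corrupted download"
```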
mlx/model.safetensors.index.json ADDED
@@ -0,0 +1,948 @@
+ {
+ "metadata": {
+ "total_size": 1736187904,
+ "total_parameters": 3085938688
+ },
+ "weight_map": {
+ "model.embed_tokens.biases": "model.safetensors",
+ "model.embed_tokens.scales": "model.safetensors",
+ "model.embed_tokens.weight": "model.safetensors",
+ "model.layers.0.input_layernorm.weight": "model.safetensors",
+ "model.layers.0.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.0.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.0.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.0.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.0.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.0.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.0.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.0.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.0.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.0.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.0.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.0.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.0.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.0.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.1.input_layernorm.weight": "model.safetensors",
+ "model.layers.1.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.1.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.1.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.1.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.1.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.1.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.1.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.1.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.1.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.1.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.1.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.1.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.1.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.1.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.10.input_layernorm.weight": "model.safetensors",
+ "model.layers.10.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.10.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.10.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.10.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.10.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.10.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.10.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.10.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.10.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.10.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.10.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.10.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.10.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.10.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.10.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.10.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.10.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.11.input_layernorm.weight": "model.safetensors",
+ "model.layers.11.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.11.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.11.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.11.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.11.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.11.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.11.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.11.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.11.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.11.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.11.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.11.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.11.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.11.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.11.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.11.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.11.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.12.input_layernorm.weight": "model.safetensors",
+ "model.layers.12.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.12.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.12.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.12.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.12.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.12.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.12.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.12.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.12.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.12.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.12.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.12.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.12.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.12.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.12.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.12.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.12.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.13.input_layernorm.weight": "model.safetensors",
+ "model.layers.13.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.13.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.13.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.13.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.13.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.13.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.13.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.13.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.13.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.13.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.13.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.13.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.13.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.13.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.13.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.13.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.13.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.14.input_layernorm.weight": "model.safetensors",
+ "model.layers.14.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.14.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.14.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.14.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.14.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.14.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.14.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.14.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.14.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.14.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.14.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.14.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.14.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.14.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.14.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.14.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.14.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.15.input_layernorm.weight": "model.safetensors",
+ "model.layers.15.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.15.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.15.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.15.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.15.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.15.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.15.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.15.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.15.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.15.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.15.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.15.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.15.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.15.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.15.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.15.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.15.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.16.input_layernorm.weight": "model.safetensors",
+ "model.layers.16.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.16.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.16.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.16.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.16.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.16.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.16.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.16.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.16.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.16.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.16.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.16.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.16.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.16.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.16.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.16.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.16.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.17.input_layernorm.weight": "model.safetensors",
+ "model.layers.17.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.17.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.17.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.17.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.17.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.17.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.17.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.17.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.17.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.17.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.17.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.17.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.17.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.17.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.17.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.17.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.17.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.18.input_layernorm.weight": "model.safetensors",
+ "model.layers.18.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.18.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.18.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.18.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.18.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.18.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.18.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.18.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.18.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.18.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.18.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.18.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.18.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.18.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.18.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.18.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.18.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.19.input_layernorm.weight": "model.safetensors",
+ "model.layers.19.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.19.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.19.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.19.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.19.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.19.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.19.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.19.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.19.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.19.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.19.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.19.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.19.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.19.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.19.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.19.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.19.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.2.input_layernorm.weight": "model.safetensors",
+ "model.layers.2.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.2.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.2.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.2.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.2.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.2.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.2.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.2.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.2.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.2.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.2.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.2.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.2.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.2.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.2.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.2.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.2.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.20.input_layernorm.weight": "model.safetensors",
+ "model.layers.20.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.20.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.20.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.20.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.20.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.20.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.20.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.20.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.20.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.20.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.20.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.20.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.20.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.20.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.20.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.20.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.20.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.21.input_layernorm.weight": "model.safetensors",
+ "model.layers.21.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.21.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.21.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.21.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.21.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.21.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.21.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.21.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.21.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.21.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.21.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.21.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.21.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.21.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.21.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.21.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.21.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.22.input_layernorm.weight": "model.safetensors",
+ "model.layers.22.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.22.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.22.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.22.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.22.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.22.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.22.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.22.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.22.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.22.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.22.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.22.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.22.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.22.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.22.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.22.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.22.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.23.input_layernorm.weight": "model.safetensors",
+ "model.layers.23.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.23.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.23.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.23.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.23.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.23.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.23.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.23.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.23.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.23.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.23.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.23.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.23.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.23.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.23.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.23.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.23.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.24.input_layernorm.weight": "model.safetensors",
+ "model.layers.24.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.24.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.24.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.24.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.24.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.24.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.24.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.24.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.24.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.24.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.24.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.24.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.24.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.24.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.24.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.24.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.24.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.25.input_layernorm.weight": "model.safetensors",
+ "model.layers.25.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.25.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.25.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.25.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.25.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.25.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.25.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.25.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.25.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model.safetensors",
+ "model.layers.25.self_attn.o_proj.biases": "model.safetensors",
+ "model.layers.25.self_attn.o_proj.scales": "model.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model.safetensors",
+ "model.layers.25.self_attn.q_proj.bias": "model.safetensors",
+ "model.layers.25.self_attn.q_proj.biases": "model.safetensors",
+ "model.layers.25.self_attn.q_proj.scales": "model.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model.safetensors",
+ "model.layers.25.self_attn.v_proj.bias": "model.safetensors",
+ "model.layers.25.self_attn.v_proj.biases": "model.safetensors",
+ "model.layers.25.self_attn.v_proj.scales": "model.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model.safetensors",
+ "model.layers.26.input_layernorm.weight": "model.safetensors",
+ "model.layers.26.mlp.down_proj.biases": "model.safetensors",
+ "model.layers.26.mlp.down_proj.scales": "model.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model.safetensors",
+ "model.layers.26.mlp.gate_proj.biases": "model.safetensors",
+ "model.layers.26.mlp.gate_proj.scales": "model.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model.safetensors",
+ "model.layers.26.mlp.up_proj.biases": "model.safetensors",
+ "model.layers.26.mlp.up_proj.scales": "model.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model.safetensors",
+ "model.layers.26.self_attn.k_proj.bias": "model.safetensors",
+ "model.layers.26.self_attn.k_proj.biases": "model.safetensors",
+ "model.layers.26.self_attn.k_proj.scales": "model.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model.safetensors",
519
+ "model.layers.26.self_attn.o_proj.biases": "model.safetensors",
520
+ "model.layers.26.self_attn.o_proj.scales": "model.safetensors",
521
+ "model.layers.26.self_attn.o_proj.weight": "model.safetensors",
522
+ "model.layers.26.self_attn.q_proj.bias": "model.safetensors",
523
+ "model.layers.26.self_attn.q_proj.biases": "model.safetensors",
524
+ "model.layers.26.self_attn.q_proj.scales": "model.safetensors",
525
+ "model.layers.26.self_attn.q_proj.weight": "model.safetensors",
526
+ "model.layers.26.self_attn.v_proj.bias": "model.safetensors",
527
+ "model.layers.26.self_attn.v_proj.biases": "model.safetensors",
528
+ "model.layers.26.self_attn.v_proj.scales": "model.safetensors",
529
+ "model.layers.26.self_attn.v_proj.weight": "model.safetensors",
530
+ "model.layers.27.input_layernorm.weight": "model.safetensors",
531
+ "model.layers.27.mlp.down_proj.biases": "model.safetensors",
532
+ "model.layers.27.mlp.down_proj.scales": "model.safetensors",
533
+ "model.layers.27.mlp.down_proj.weight": "model.safetensors",
534
+ "model.layers.27.mlp.gate_proj.biases": "model.safetensors",
535
+ "model.layers.27.mlp.gate_proj.scales": "model.safetensors",
536
+ "model.layers.27.mlp.gate_proj.weight": "model.safetensors",
537
+ "model.layers.27.mlp.up_proj.biases": "model.safetensors",
538
+ "model.layers.27.mlp.up_proj.scales": "model.safetensors",
539
+ "model.layers.27.mlp.up_proj.weight": "model.safetensors",
540
+ "model.layers.27.post_attention_layernorm.weight": "model.safetensors",
541
+ "model.layers.27.self_attn.k_proj.bias": "model.safetensors",
542
+ "model.layers.27.self_attn.k_proj.biases": "model.safetensors",
543
+ "model.layers.27.self_attn.k_proj.scales": "model.safetensors",
544
+ "model.layers.27.self_attn.k_proj.weight": "model.safetensors",
545
+ "model.layers.27.self_attn.o_proj.biases": "model.safetensors",
546
+ "model.layers.27.self_attn.o_proj.scales": "model.safetensors",
547
+ "model.layers.27.self_attn.o_proj.weight": "model.safetensors",
548
+ "model.layers.27.self_attn.q_proj.bias": "model.safetensors",
549
+ "model.layers.27.self_attn.q_proj.biases": "model.safetensors",
550
+ "model.layers.27.self_attn.q_proj.scales": "model.safetensors",
551
+ "model.layers.27.self_attn.q_proj.weight": "model.safetensors",
552
+ "model.layers.27.self_attn.v_proj.bias": "model.safetensors",
553
+ "model.layers.27.self_attn.v_proj.biases": "model.safetensors",
554
+ "model.layers.27.self_attn.v_proj.scales": "model.safetensors",
555
+ "model.layers.27.self_attn.v_proj.weight": "model.safetensors",
556
+ "model.layers.28.input_layernorm.weight": "model.safetensors",
557
+ "model.layers.28.mlp.down_proj.biases": "model.safetensors",
558
+ "model.layers.28.mlp.down_proj.scales": "model.safetensors",
559
+ "model.layers.28.mlp.down_proj.weight": "model.safetensors",
560
+ "model.layers.28.mlp.gate_proj.biases": "model.safetensors",
561
+ "model.layers.28.mlp.gate_proj.scales": "model.safetensors",
562
+ "model.layers.28.mlp.gate_proj.weight": "model.safetensors",
563
+ "model.layers.28.mlp.up_proj.biases": "model.safetensors",
564
+ "model.layers.28.mlp.up_proj.scales": "model.safetensors",
565
+ "model.layers.28.mlp.up_proj.weight": "model.safetensors",
566
+ "model.layers.28.post_attention_layernorm.weight": "model.safetensors",
567
+ "model.layers.28.self_attn.k_proj.bias": "model.safetensors",
568
+ "model.layers.28.self_attn.k_proj.biases": "model.safetensors",
569
+ "model.layers.28.self_attn.k_proj.scales": "model.safetensors",
570
+ "model.layers.28.self_attn.k_proj.weight": "model.safetensors",
571
+ "model.layers.28.self_attn.o_proj.biases": "model.safetensors",
572
+ "model.layers.28.self_attn.o_proj.scales": "model.safetensors",
573
+ "model.layers.28.self_attn.o_proj.weight": "model.safetensors",
574
+ "model.layers.28.self_attn.q_proj.bias": "model.safetensors",
575
+ "model.layers.28.self_attn.q_proj.biases": "model.safetensors",
576
+ "model.layers.28.self_attn.q_proj.scales": "model.safetensors",
577
+ "model.layers.28.self_attn.q_proj.weight": "model.safetensors",
578
+ "model.layers.28.self_attn.v_proj.bias": "model.safetensors",
579
+ "model.layers.28.self_attn.v_proj.biases": "model.safetensors",
580
+ "model.layers.28.self_attn.v_proj.scales": "model.safetensors",
581
+ "model.layers.28.self_attn.v_proj.weight": "model.safetensors",
582
+ "model.layers.29.input_layernorm.weight": "model.safetensors",
583
+ "model.layers.29.mlp.down_proj.biases": "model.safetensors",
584
+ "model.layers.29.mlp.down_proj.scales": "model.safetensors",
585
+ "model.layers.29.mlp.down_proj.weight": "model.safetensors",
586
+ "model.layers.29.mlp.gate_proj.biases": "model.safetensors",
587
+ "model.layers.29.mlp.gate_proj.scales": "model.safetensors",
588
+ "model.layers.29.mlp.gate_proj.weight": "model.safetensors",
589
+ "model.layers.29.mlp.up_proj.biases": "model.safetensors",
590
+ "model.layers.29.mlp.up_proj.scales": "model.safetensors",
591
+ "model.layers.29.mlp.up_proj.weight": "model.safetensors",
592
+ "model.layers.29.post_attention_layernorm.weight": "model.safetensors",
593
+ "model.layers.29.self_attn.k_proj.bias": "model.safetensors",
594
+ "model.layers.29.self_attn.k_proj.biases": "model.safetensors",
595
+ "model.layers.29.self_attn.k_proj.scales": "model.safetensors",
596
+ "model.layers.29.self_attn.k_proj.weight": "model.safetensors",
597
+ "model.layers.29.self_attn.o_proj.biases": "model.safetensors",
598
+ "model.layers.29.self_attn.o_proj.scales": "model.safetensors",
599
+ "model.layers.29.self_attn.o_proj.weight": "model.safetensors",
600
+ "model.layers.29.self_attn.q_proj.bias": "model.safetensors",
601
+ "model.layers.29.self_attn.q_proj.biases": "model.safetensors",
602
+ "model.layers.29.self_attn.q_proj.scales": "model.safetensors",
603
+ "model.layers.29.self_attn.q_proj.weight": "model.safetensors",
604
+ "model.layers.29.self_attn.v_proj.bias": "model.safetensors",
605
+ "model.layers.29.self_attn.v_proj.biases": "model.safetensors",
606
+ "model.layers.29.self_attn.v_proj.scales": "model.safetensors",
607
+ "model.layers.29.self_attn.v_proj.weight": "model.safetensors",
608
+ "model.layers.3.input_layernorm.weight": "model.safetensors",
609
+ "model.layers.3.mlp.down_proj.biases": "model.safetensors",
610
+ "model.layers.3.mlp.down_proj.scales": "model.safetensors",
611
+ "model.layers.3.mlp.down_proj.weight": "model.safetensors",
612
+ "model.layers.3.mlp.gate_proj.biases": "model.safetensors",
613
+ "model.layers.3.mlp.gate_proj.scales": "model.safetensors",
614
+ "model.layers.3.mlp.gate_proj.weight": "model.safetensors",
615
+ "model.layers.3.mlp.up_proj.biases": "model.safetensors",
616
+ "model.layers.3.mlp.up_proj.scales": "model.safetensors",
617
+ "model.layers.3.mlp.up_proj.weight": "model.safetensors",
618
+ "model.layers.3.post_attention_layernorm.weight": "model.safetensors",
619
+ "model.layers.3.self_attn.k_proj.bias": "model.safetensors",
620
+ "model.layers.3.self_attn.k_proj.biases": "model.safetensors",
621
+ "model.layers.3.self_attn.k_proj.scales": "model.safetensors",
622
+ "model.layers.3.self_attn.k_proj.weight": "model.safetensors",
623
+ "model.layers.3.self_attn.o_proj.biases": "model.safetensors",
624
+ "model.layers.3.self_attn.o_proj.scales": "model.safetensors",
625
+ "model.layers.3.self_attn.o_proj.weight": "model.safetensors",
626
+ "model.layers.3.self_attn.q_proj.bias": "model.safetensors",
627
+ "model.layers.3.self_attn.q_proj.biases": "model.safetensors",
628
+ "model.layers.3.self_attn.q_proj.scales": "model.safetensors",
629
+ "model.layers.3.self_attn.q_proj.weight": "model.safetensors",
630
+ "model.layers.3.self_attn.v_proj.bias": "model.safetensors",
631
+ "model.layers.3.self_attn.v_proj.biases": "model.safetensors",
632
+ "model.layers.3.self_attn.v_proj.scales": "model.safetensors",
633
+ "model.layers.3.self_attn.v_proj.weight": "model.safetensors",
634
+ "model.layers.30.input_layernorm.weight": "model.safetensors",
635
+ "model.layers.30.mlp.down_proj.biases": "model.safetensors",
636
+ "model.layers.30.mlp.down_proj.scales": "model.safetensors",
637
+ "model.layers.30.mlp.down_proj.weight": "model.safetensors",
638
+ "model.layers.30.mlp.gate_proj.biases": "model.safetensors",
639
+ "model.layers.30.mlp.gate_proj.scales": "model.safetensors",
640
+ "model.layers.30.mlp.gate_proj.weight": "model.safetensors",
641
+ "model.layers.30.mlp.up_proj.biases": "model.safetensors",
642
+ "model.layers.30.mlp.up_proj.scales": "model.safetensors",
643
+ "model.layers.30.mlp.up_proj.weight": "model.safetensors",
644
+ "model.layers.30.post_attention_layernorm.weight": "model.safetensors",
645
+ "model.layers.30.self_attn.k_proj.bias": "model.safetensors",
646
+ "model.layers.30.self_attn.k_proj.biases": "model.safetensors",
647
+ "model.layers.30.self_attn.k_proj.scales": "model.safetensors",
648
+ "model.layers.30.self_attn.k_proj.weight": "model.safetensors",
649
+ "model.layers.30.self_attn.o_proj.biases": "model.safetensors",
650
+ "model.layers.30.self_attn.o_proj.scales": "model.safetensors",
651
+ "model.layers.30.self_attn.o_proj.weight": "model.safetensors",
652
+ "model.layers.30.self_attn.q_proj.bias": "model.safetensors",
653
+ "model.layers.30.self_attn.q_proj.biases": "model.safetensors",
654
+ "model.layers.30.self_attn.q_proj.scales": "model.safetensors",
655
+ "model.layers.30.self_attn.q_proj.weight": "model.safetensors",
656
+ "model.layers.30.self_attn.v_proj.bias": "model.safetensors",
657
+ "model.layers.30.self_attn.v_proj.biases": "model.safetensors",
658
+ "model.layers.30.self_attn.v_proj.scales": "model.safetensors",
659
+ "model.layers.30.self_attn.v_proj.weight": "model.safetensors",
660
+ "model.layers.31.input_layernorm.weight": "model.safetensors",
661
+ "model.layers.31.mlp.down_proj.biases": "model.safetensors",
662
+ "model.layers.31.mlp.down_proj.scales": "model.safetensors",
663
+ "model.layers.31.mlp.down_proj.weight": "model.safetensors",
664
+ "model.layers.31.mlp.gate_proj.biases": "model.safetensors",
665
+ "model.layers.31.mlp.gate_proj.scales": "model.safetensors",
666
+ "model.layers.31.mlp.gate_proj.weight": "model.safetensors",
667
+ "model.layers.31.mlp.up_proj.biases": "model.safetensors",
668
+ "model.layers.31.mlp.up_proj.scales": "model.safetensors",
669
+ "model.layers.31.mlp.up_proj.weight": "model.safetensors",
670
+ "model.layers.31.post_attention_layernorm.weight": "model.safetensors",
671
+ "model.layers.31.self_attn.k_proj.bias": "model.safetensors",
672
+ "model.layers.31.self_attn.k_proj.biases": "model.safetensors",
673
+ "model.layers.31.self_attn.k_proj.scales": "model.safetensors",
674
+ "model.layers.31.self_attn.k_proj.weight": "model.safetensors",
675
+ "model.layers.31.self_attn.o_proj.biases": "model.safetensors",
676
+ "model.layers.31.self_attn.o_proj.scales": "model.safetensors",
677
+ "model.layers.31.self_attn.o_proj.weight": "model.safetensors",
678
+ "model.layers.31.self_attn.q_proj.bias": "model.safetensors",
679
+ "model.layers.31.self_attn.q_proj.biases": "model.safetensors",
680
+ "model.layers.31.self_attn.q_proj.scales": "model.safetensors",
681
+ "model.layers.31.self_attn.q_proj.weight": "model.safetensors",
682
+ "model.layers.31.self_attn.v_proj.bias": "model.safetensors",
683
+ "model.layers.31.self_attn.v_proj.biases": "model.safetensors",
684
+ "model.layers.31.self_attn.v_proj.scales": "model.safetensors",
685
+ "model.layers.31.self_attn.v_proj.weight": "model.safetensors",
686
+ "model.layers.32.input_layernorm.weight": "model.safetensors",
687
+ "model.layers.32.mlp.down_proj.biases": "model.safetensors",
688
+ "model.layers.32.mlp.down_proj.scales": "model.safetensors",
689
+ "model.layers.32.mlp.down_proj.weight": "model.safetensors",
690
+ "model.layers.32.mlp.gate_proj.biases": "model.safetensors",
691
+ "model.layers.32.mlp.gate_proj.scales": "model.safetensors",
692
+ "model.layers.32.mlp.gate_proj.weight": "model.safetensors",
693
+ "model.layers.32.mlp.up_proj.biases": "model.safetensors",
694
+ "model.layers.32.mlp.up_proj.scales": "model.safetensors",
695
+ "model.layers.32.mlp.up_proj.weight": "model.safetensors",
696
+ "model.layers.32.post_attention_layernorm.weight": "model.safetensors",
697
+ "model.layers.32.self_attn.k_proj.bias": "model.safetensors",
698
+ "model.layers.32.self_attn.k_proj.biases": "model.safetensors",
699
+ "model.layers.32.self_attn.k_proj.scales": "model.safetensors",
700
+ "model.layers.32.self_attn.k_proj.weight": "model.safetensors",
701
+ "model.layers.32.self_attn.o_proj.biases": "model.safetensors",
702
+ "model.layers.32.self_attn.o_proj.scales": "model.safetensors",
703
+ "model.layers.32.self_attn.o_proj.weight": "model.safetensors",
704
+ "model.layers.32.self_attn.q_proj.bias": "model.safetensors",
705
+ "model.layers.32.self_attn.q_proj.biases": "model.safetensors",
706
+ "model.layers.32.self_attn.q_proj.scales": "model.safetensors",
707
+ "model.layers.32.self_attn.q_proj.weight": "model.safetensors",
708
+ "model.layers.32.self_attn.v_proj.bias": "model.safetensors",
709
+ "model.layers.32.self_attn.v_proj.biases": "model.safetensors",
710
+ "model.layers.32.self_attn.v_proj.scales": "model.safetensors",
711
+ "model.layers.32.self_attn.v_proj.weight": "model.safetensors",
712
+ "model.layers.33.input_layernorm.weight": "model.safetensors",
713
+ "model.layers.33.mlp.down_proj.biases": "model.safetensors",
714
+ "model.layers.33.mlp.down_proj.scales": "model.safetensors",
715
+ "model.layers.33.mlp.down_proj.weight": "model.safetensors",
716
+ "model.layers.33.mlp.gate_proj.biases": "model.safetensors",
717
+ "model.layers.33.mlp.gate_proj.scales": "model.safetensors",
718
+ "model.layers.33.mlp.gate_proj.weight": "model.safetensors",
719
+ "model.layers.33.mlp.up_proj.biases": "model.safetensors",
720
+ "model.layers.33.mlp.up_proj.scales": "model.safetensors",
721
+ "model.layers.33.mlp.up_proj.weight": "model.safetensors",
722
+ "model.layers.33.post_attention_layernorm.weight": "model.safetensors",
723
+ "model.layers.33.self_attn.k_proj.bias": "model.safetensors",
724
+ "model.layers.33.self_attn.k_proj.biases": "model.safetensors",
725
+ "model.layers.33.self_attn.k_proj.scales": "model.safetensors",
726
+ "model.layers.33.self_attn.k_proj.weight": "model.safetensors",
727
+ "model.layers.33.self_attn.o_proj.biases": "model.safetensors",
728
+ "model.layers.33.self_attn.o_proj.scales": "model.safetensors",
729
+ "model.layers.33.self_attn.o_proj.weight": "model.safetensors",
730
+ "model.layers.33.self_attn.q_proj.bias": "model.safetensors",
731
+ "model.layers.33.self_attn.q_proj.biases": "model.safetensors",
732
+ "model.layers.33.self_attn.q_proj.scales": "model.safetensors",
733
+ "model.layers.33.self_attn.q_proj.weight": "model.safetensors",
734
+ "model.layers.33.self_attn.v_proj.bias": "model.safetensors",
735
+ "model.layers.33.self_attn.v_proj.biases": "model.safetensors",
736
+ "model.layers.33.self_attn.v_proj.scales": "model.safetensors",
737
+ "model.layers.33.self_attn.v_proj.weight": "model.safetensors",
738
+ "model.layers.34.input_layernorm.weight": "model.safetensors",
739
+ "model.layers.34.mlp.down_proj.biases": "model.safetensors",
740
+ "model.layers.34.mlp.down_proj.scales": "model.safetensors",
741
+ "model.layers.34.mlp.down_proj.weight": "model.safetensors",
742
+ "model.layers.34.mlp.gate_proj.biases": "model.safetensors",
743
+ "model.layers.34.mlp.gate_proj.scales": "model.safetensors",
744
+ "model.layers.34.mlp.gate_proj.weight": "model.safetensors",
745
+ "model.layers.34.mlp.up_proj.biases": "model.safetensors",
746
+ "model.layers.34.mlp.up_proj.scales": "model.safetensors",
747
+ "model.layers.34.mlp.up_proj.weight": "model.safetensors",
748
+ "model.layers.34.post_attention_layernorm.weight": "model.safetensors",
749
+ "model.layers.34.self_attn.k_proj.bias": "model.safetensors",
750
+ "model.layers.34.self_attn.k_proj.biases": "model.safetensors",
751
+ "model.layers.34.self_attn.k_proj.scales": "model.safetensors",
752
+ "model.layers.34.self_attn.k_proj.weight": "model.safetensors",
753
+ "model.layers.34.self_attn.o_proj.biases": "model.safetensors",
754
+ "model.layers.34.self_attn.o_proj.scales": "model.safetensors",
755
+ "model.layers.34.self_attn.o_proj.weight": "model.safetensors",
756
+ "model.layers.34.self_attn.q_proj.bias": "model.safetensors",
757
+ "model.layers.34.self_attn.q_proj.biases": "model.safetensors",
758
+ "model.layers.34.self_attn.q_proj.scales": "model.safetensors",
759
+ "model.layers.34.self_attn.q_proj.weight": "model.safetensors",
760
+ "model.layers.34.self_attn.v_proj.bias": "model.safetensors",
761
+ "model.layers.34.self_attn.v_proj.biases": "model.safetensors",
762
+ "model.layers.34.self_attn.v_proj.scales": "model.safetensors",
763
+ "model.layers.34.self_attn.v_proj.weight": "model.safetensors",
764
+ "model.layers.35.input_layernorm.weight": "model.safetensors",
765
+ "model.layers.35.mlp.down_proj.biases": "model.safetensors",
766
+ "model.layers.35.mlp.down_proj.scales": "model.safetensors",
767
+ "model.layers.35.mlp.down_proj.weight": "model.safetensors",
768
+ "model.layers.35.mlp.gate_proj.biases": "model.safetensors",
769
+ "model.layers.35.mlp.gate_proj.scales": "model.safetensors",
770
+ "model.layers.35.mlp.gate_proj.weight": "model.safetensors",
771
+ "model.layers.35.mlp.up_proj.biases": "model.safetensors",
772
+ "model.layers.35.mlp.up_proj.scales": "model.safetensors",
773
+ "model.layers.35.mlp.up_proj.weight": "model.safetensors",
774
+ "model.layers.35.post_attention_layernorm.weight": "model.safetensors",
775
+ "model.layers.35.self_attn.k_proj.bias": "model.safetensors",
776
+ "model.layers.35.self_attn.k_proj.biases": "model.safetensors",
777
+ "model.layers.35.self_attn.k_proj.scales": "model.safetensors",
778
+ "model.layers.35.self_attn.k_proj.weight": "model.safetensors",
779
+ "model.layers.35.self_attn.o_proj.biases": "model.safetensors",
780
+ "model.layers.35.self_attn.o_proj.scales": "model.safetensors",
781
+ "model.layers.35.self_attn.o_proj.weight": "model.safetensors",
782
+ "model.layers.35.self_attn.q_proj.bias": "model.safetensors",
783
+ "model.layers.35.self_attn.q_proj.biases": "model.safetensors",
784
+ "model.layers.35.self_attn.q_proj.scales": "model.safetensors",
785
+ "model.layers.35.self_attn.q_proj.weight": "model.safetensors",
786
+ "model.layers.35.self_attn.v_proj.bias": "model.safetensors",
787
+ "model.layers.35.self_attn.v_proj.biases": "model.safetensors",
788
+ "model.layers.35.self_attn.v_proj.scales": "model.safetensors",
789
+ "model.layers.35.self_attn.v_proj.weight": "model.safetensors",
790
+ "model.layers.4.input_layernorm.weight": "model.safetensors",
791
+ "model.layers.4.mlp.down_proj.biases": "model.safetensors",
792
+ "model.layers.4.mlp.down_proj.scales": "model.safetensors",
793
+ "model.layers.4.mlp.down_proj.weight": "model.safetensors",
794
+ "model.layers.4.mlp.gate_proj.biases": "model.safetensors",
795
+ "model.layers.4.mlp.gate_proj.scales": "model.safetensors",
796
+ "model.layers.4.mlp.gate_proj.weight": "model.safetensors",
797
+ "model.layers.4.mlp.up_proj.biases": "model.safetensors",
798
+ "model.layers.4.mlp.up_proj.scales": "model.safetensors",
799
+ "model.layers.4.mlp.up_proj.weight": "model.safetensors",
800
+ "model.layers.4.post_attention_layernorm.weight": "model.safetensors",
801
+ "model.layers.4.self_attn.k_proj.bias": "model.safetensors",
802
+ "model.layers.4.self_attn.k_proj.biases": "model.safetensors",
803
+ "model.layers.4.self_attn.k_proj.scales": "model.safetensors",
804
+ "model.layers.4.self_attn.k_proj.weight": "model.safetensors",
805
+ "model.layers.4.self_attn.o_proj.biases": "model.safetensors",
806
+ "model.layers.4.self_attn.o_proj.scales": "model.safetensors",
807
+ "model.layers.4.self_attn.o_proj.weight": "model.safetensors",
808
+ "model.layers.4.self_attn.q_proj.bias": "model.safetensors",
809
+ "model.layers.4.self_attn.q_proj.biases": "model.safetensors",
810
+ "model.layers.4.self_attn.q_proj.scales": "model.safetensors",
811
+ "model.layers.4.self_attn.q_proj.weight": "model.safetensors",
812
+ "model.layers.4.self_attn.v_proj.bias": "model.safetensors",
813
+ "model.layers.4.self_attn.v_proj.biases": "model.safetensors",
814
+ "model.layers.4.self_attn.v_proj.scales": "model.safetensors",
815
+ "model.layers.4.self_attn.v_proj.weight": "model.safetensors",
816
+ "model.layers.5.input_layernorm.weight": "model.safetensors",
817
+ "model.layers.5.mlp.down_proj.biases": "model.safetensors",
818
+ "model.layers.5.mlp.down_proj.scales": "model.safetensors",
819
+ "model.layers.5.mlp.down_proj.weight": "model.safetensors",
820
+ "model.layers.5.mlp.gate_proj.biases": "model.safetensors",
821
+ "model.layers.5.mlp.gate_proj.scales": "model.safetensors",
822
+ "model.layers.5.mlp.gate_proj.weight": "model.safetensors",
823
+ "model.layers.5.mlp.up_proj.biases": "model.safetensors",
824
+ "model.layers.5.mlp.up_proj.scales": "model.safetensors",
825
+ "model.layers.5.mlp.up_proj.weight": "model.safetensors",
826
+ "model.layers.5.post_attention_layernorm.weight": "model.safetensors",
827
+ "model.layers.5.self_attn.k_proj.bias": "model.safetensors",
828
+ "model.layers.5.self_attn.k_proj.biases": "model.safetensors",
829
+ "model.layers.5.self_attn.k_proj.scales": "model.safetensors",
830
+ "model.layers.5.self_attn.k_proj.weight": "model.safetensors",
831
+ "model.layers.5.self_attn.o_proj.biases": "model.safetensors",
832
+ "model.layers.5.self_attn.o_proj.scales": "model.safetensors",
833
+ "model.layers.5.self_attn.o_proj.weight": "model.safetensors",
834
+ "model.layers.5.self_attn.q_proj.bias": "model.safetensors",
835
+ "model.layers.5.self_attn.q_proj.biases": "model.safetensors",
836
+ "model.layers.5.self_attn.q_proj.scales": "model.safetensors",
837
+ "model.layers.5.self_attn.q_proj.weight": "model.safetensors",
838
+ "model.layers.5.self_attn.v_proj.bias": "model.safetensors",
839
+ "model.layers.5.self_attn.v_proj.biases": "model.safetensors",
840
+ "model.layers.5.self_attn.v_proj.scales": "model.safetensors",
841
+ "model.layers.5.self_attn.v_proj.weight": "model.safetensors",
842
+ "model.layers.6.input_layernorm.weight": "model.safetensors",
843
+ "model.layers.6.mlp.down_proj.biases": "model.safetensors",
844
+ "model.layers.6.mlp.down_proj.scales": "model.safetensors",
845
+ "model.layers.6.mlp.down_proj.weight": "model.safetensors",
846
+ "model.layers.6.mlp.gate_proj.biases": "model.safetensors",
847
+ "model.layers.6.mlp.gate_proj.scales": "model.safetensors",
848
+ "model.layers.6.mlp.gate_proj.weight": "model.safetensors",
849
+ "model.layers.6.mlp.up_proj.biases": "model.safetensors",
850
+ "model.layers.6.mlp.up_proj.scales": "model.safetensors",
851
+ "model.layers.6.mlp.up_proj.weight": "model.safetensors",
852
+ "model.layers.6.post_attention_layernorm.weight": "model.safetensors",
853
+ "model.layers.6.self_attn.k_proj.bias": "model.safetensors",
854
+ "model.layers.6.self_attn.k_proj.biases": "model.safetensors",
855
+ "model.layers.6.self_attn.k_proj.scales": "model.safetensors",
856
+ "model.layers.6.self_attn.k_proj.weight": "model.safetensors",
857
+ "model.layers.6.self_attn.o_proj.biases": "model.safetensors",
858
+ "model.layers.6.self_attn.o_proj.scales": "model.safetensors",
859
+ "model.layers.6.self_attn.o_proj.weight": "model.safetensors",
860
+ "model.layers.6.self_attn.q_proj.bias": "model.safetensors",
861
+ "model.layers.6.self_attn.q_proj.biases": "model.safetensors",
862
+ "model.layers.6.self_attn.q_proj.scales": "model.safetensors",
863
+ "model.layers.6.self_attn.q_proj.weight": "model.safetensors",
864
+ "model.layers.6.self_attn.v_proj.bias": "model.safetensors",
865
+ "model.layers.6.self_attn.v_proj.biases": "model.safetensors",
866
+ "model.layers.6.self_attn.v_proj.scales": "model.safetensors",
867
+ "model.layers.6.self_attn.v_proj.weight": "model.safetensors",
868
+ "model.layers.7.input_layernorm.weight": "model.safetensors",
869
+ "model.layers.7.mlp.down_proj.biases": "model.safetensors",
870
+ "model.layers.7.mlp.down_proj.scales": "model.safetensors",
871
+ "model.layers.7.mlp.down_proj.weight": "model.safetensors",
872
+ "model.layers.7.mlp.gate_proj.biases": "model.safetensors",
873
+ "model.layers.7.mlp.gate_proj.scales": "model.safetensors",
874
+ "model.layers.7.mlp.gate_proj.weight": "model.safetensors",
875
+ "model.layers.7.mlp.up_proj.biases": "model.safetensors",
876
+ "model.layers.7.mlp.up_proj.scales": "model.safetensors",
877
+ "model.layers.7.mlp.up_proj.weight": "model.safetensors",
878
+ "model.layers.7.post_attention_layernorm.weight": "model.safetensors",
879
+ "model.layers.7.self_attn.k_proj.bias": "model.safetensors",
880
+ "model.layers.7.self_attn.k_proj.biases": "model.safetensors",
881
+ "model.layers.7.self_attn.k_proj.scales": "model.safetensors",
882
+ "model.layers.7.self_attn.k_proj.weight": "model.safetensors",
883
+ "model.layers.7.self_attn.o_proj.biases": "model.safetensors",
884
+ "model.layers.7.self_attn.o_proj.scales": "model.safetensors",
885
+ "model.layers.7.self_attn.o_proj.weight": "model.safetensors",
886
+ "model.layers.7.self_attn.q_proj.bias": "model.safetensors",
887
+ "model.layers.7.self_attn.q_proj.biases": "model.safetensors",
888
+ "model.layers.7.self_attn.q_proj.scales": "model.safetensors",
889
+ "model.layers.7.self_attn.q_proj.weight": "model.safetensors",
890
+ "model.layers.7.self_attn.v_proj.bias": "model.safetensors",
891
+ "model.layers.7.self_attn.v_proj.biases": "model.safetensors",
892
+ "model.layers.7.self_attn.v_proj.scales": "model.safetensors",
893
+ "model.layers.7.self_attn.v_proj.weight": "model.safetensors",
894
+ "model.layers.8.input_layernorm.weight": "model.safetensors",
895
+ "model.layers.8.mlp.down_proj.biases": "model.safetensors",
896
+ "model.layers.8.mlp.down_proj.scales": "model.safetensors",
897
+ "model.layers.8.mlp.down_proj.weight": "model.safetensors",
898
+ "model.layers.8.mlp.gate_proj.biases": "model.safetensors",
899
+ "model.layers.8.mlp.gate_proj.scales": "model.safetensors",
900
+ "model.layers.8.mlp.gate_proj.weight": "model.safetensors",
901
+ "model.layers.8.mlp.up_proj.biases": "model.safetensors",
902
+ "model.layers.8.mlp.up_proj.scales": "model.safetensors",
903
+ "model.layers.8.mlp.up_proj.weight": "model.safetensors",
904
+ "model.layers.8.post_attention_layernorm.weight": "model.safetensors",
905
+ "model.layers.8.self_attn.k_proj.bias": "model.safetensors",
906
+ "model.layers.8.self_attn.k_proj.biases": "model.safetensors",
907
+ "model.layers.8.self_attn.k_proj.scales": "model.safetensors",
908
+ "model.layers.8.self_attn.k_proj.weight": "model.safetensors",
909
+ "model.layers.8.self_attn.o_proj.biases": "model.safetensors",
910
+ "model.layers.8.self_attn.o_proj.scales": "model.safetensors",
911
+ "model.layers.8.self_attn.o_proj.weight": "model.safetensors",
912
+ "model.layers.8.self_attn.q_proj.bias": "model.safetensors",
913
+ "model.layers.8.self_attn.q_proj.biases": "model.safetensors",
914
+ "model.layers.8.self_attn.q_proj.scales": "model.safetensors",
915
+ "model.layers.8.self_attn.q_proj.weight": "model.safetensors",
916
+ "model.layers.8.self_attn.v_proj.bias": "model.safetensors",
917
+ "model.layers.8.self_attn.v_proj.biases": "model.safetensors",
918
+ "model.layers.8.self_attn.v_proj.scales": "model.safetensors",
919
+ "model.layers.8.self_attn.v_proj.weight": "model.safetensors",
920
+ "model.layers.9.input_layernorm.weight": "model.safetensors",
921
+ "model.layers.9.mlp.down_proj.biases": "model.safetensors",
922
+ "model.layers.9.mlp.down_proj.scales": "model.safetensors",
923
+ "model.layers.9.mlp.down_proj.weight": "model.safetensors",
924
+ "model.layers.9.mlp.gate_proj.biases": "model.safetensors",
925
+ "model.layers.9.mlp.gate_proj.scales": "model.safetensors",
926
+ "model.layers.9.mlp.gate_proj.weight": "model.safetensors",
927
+ "model.layers.9.mlp.up_proj.biases": "model.safetensors",
928
+ "model.layers.9.mlp.up_proj.scales": "model.safetensors",
929
+ "model.layers.9.mlp.up_proj.weight": "model.safetensors",
930
+ "model.layers.9.post_attention_layernorm.weight": "model.safetensors",
931
+ "model.layers.9.self_attn.k_proj.bias": "model.safetensors",
932
+ "model.layers.9.self_attn.k_proj.biases": "model.safetensors",
933
+ "model.layers.9.self_attn.k_proj.scales": "model.safetensors",
934
+ "model.layers.9.self_attn.k_proj.weight": "model.safetensors",
935
+ "model.layers.9.self_attn.o_proj.biases": "model.safetensors",
936
+ "model.layers.9.self_attn.o_proj.scales": "model.safetensors",
937
+ "model.layers.9.self_attn.o_proj.weight": "model.safetensors",
938
+ "model.layers.9.self_attn.q_proj.bias": "model.safetensors",
939
+ "model.layers.9.self_attn.q_proj.biases": "model.safetensors",
940
+ "model.layers.9.self_attn.q_proj.scales": "model.safetensors",
941
+ "model.layers.9.self_attn.q_proj.weight": "model.safetensors",
942
+ "model.layers.9.self_attn.v_proj.bias": "model.safetensors",
943
+ "model.layers.9.self_attn.v_proj.biases": "model.safetensors",
944
+ "model.layers.9.self_attn.v_proj.scales": "model.safetensors",
945
+ "model.layers.9.self_attn.v_proj.weight": "model.safetensors",
946
+ "model.norm.weight": "model.safetensors"
947
+ }
948
+ }
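Note: the per-tensor "biases" and "scales" entries that accompany each "weight" in this index are characteristic of MLX's group-quantized linear layers, so the mlx/ folder should load directly with the mlx-lm tooling. A minimal sketch, assuming the mlx-lm package is installed and the repository is cloned locally ("zen-eco/mlx" below is an illustrative local path, not a confirmed one):

# Minimal sketch: load the quantized MLX weights from the mlx/ folder.
# Assumes `pip install mlx-lm`; "zen-eco/mlx" is a hypothetical local path.
from mlx_lm import load, generate

model, tokenizer = load("zen-eco/mlx")
print(generate(model, tokenizer, prompt="Hello", max_tokens=32))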
mlx/special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "eos_token": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
mlx/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+ size 11421896
mlx/tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
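Note: the "added_tokens_decoder" above pins the ChatML control tokens to fixed ids (<|im_start|> = 151644 and <|im_end|> = 151645, with <|im_end|> doubling as the eos token). A minimal sketch, assuming the transformers library and a local copy of the mlx/ folder (path illustrative), checking that the tokenizer resolves these markers to the ids declared here:

# Minimal sketch: confirm the special-token ids declared in tokenizer_config.json.
# Assumes `pip install transformers`; "zen-eco/mlx" is a hypothetical local path.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zen-eco/mlx")
assert tok.convert_tokens_to_ids("<|im_start|>") == 151644
assert tok.convert_tokens_to_ids("<|im_end|>") == 151645
print(tok.eos_token, tok.pad_token)  # expect: <|im_end|> <|endoftext|>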
mlx/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67347b23fb4165b652eb6611f5e1f2a06dfcddba8e909df1b2b0b1857bee06c2
+ size 3968658944
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a40d941d0e7e0b966ad8b62bb6d6b7c88cce1299197b599d9d0a4ce59aabfc1d
+ size 2203268048
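Note: both shards are stored as Git LFS pointers, so a clone without LFS only materializes the three-line stubs above. A minimal sketch for checking that a fetched shard matches the sha256 recorded in its pointer (file name and hash taken from this diff):

# Minimal sketch: verify a downloaded shard against its LFS pointer's sha256.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

expected = "67347b23fb4165b652eb6611f5e1f2a06dfcddba8e909df1b2b0b1857bee06c2"
print(sha256_of("model-00001-of-00002.safetensors") == expected)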
model.safetensors.index.json ADDED
@@ -0,0 +1,441 @@
+ {
+ "metadata": {
+ "total_size": 6171877376
+ },
+ "weight_map": {
+ "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
245
+ "model.layers.26.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
246
+ "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
247
+ "model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
248
+ "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
249
+ "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
250
+ "model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
251
+ "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
252
+ "model.layers.27.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
253
+ "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
254
+ "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
255
+ "model.layers.27.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
256
+ "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
257
+ "model.layers.27.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
258
+ "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
259
+ "model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
260
+ "model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
261
+ "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
262
+ "model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
263
+ "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
264
+ "model.layers.28.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
265
+ "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
266
+ "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
267
+ "model.layers.28.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
268
+ "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
269
+ "model.layers.28.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
270
+ "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
271
+ "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
272
+ "model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
273
+ "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
274
+ "model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
275
+ "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
276
+ "model.layers.29.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
277
+ "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
278
+ "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
279
+ "model.layers.29.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
280
+ "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
281
+ "model.layers.29.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
282
+ "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
283
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
284
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
285
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
286
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
287
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
288
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
289
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
290
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
291
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
292
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
293
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
294
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
295
+ "model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
296
+ "model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
297
+ "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
298
+ "model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
299
+ "model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
300
+ "model.layers.30.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
301
+ "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
302
+ "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
303
+ "model.layers.30.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
304
+ "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
305
+ "model.layers.30.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
306
+ "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
307
+ "model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
308
+ "model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
309
+ "model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
310
+ "model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
311
+ "model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
312
+ "model.layers.31.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
313
+ "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
314
+ "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
315
+ "model.layers.31.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
316
+ "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
317
+ "model.layers.31.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
318
+ "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
319
+ "model.layers.32.input_layernorm.weight": "model-00002-of-00002.safetensors",
320
+ "model.layers.32.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
321
+ "model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
322
+ "model.layers.32.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
323
+ "model.layers.32.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
324
+ "model.layers.32.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
325
+ "model.layers.32.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
326
+ "model.layers.32.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
327
+ "model.layers.32.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
328
+ "model.layers.32.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
329
+ "model.layers.32.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
330
+ "model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
331
+ "model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
332
+ "model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
333
+ "model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
334
+ "model.layers.33.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
335
+ "model.layers.33.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
336
+ "model.layers.33.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
337
+ "model.layers.33.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
338
+ "model.layers.33.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
339
+ "model.layers.33.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
340
+ "model.layers.33.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
341
+ "model.layers.33.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
342
+ "model.layers.33.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
343
+ "model.layers.34.input_layernorm.weight": "model-00002-of-00002.safetensors",
344
+ "model.layers.34.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
345
+ "model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
346
+ "model.layers.34.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
347
+ "model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
348
+ "model.layers.34.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
349
+ "model.layers.34.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
350
+ "model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
351
+ "model.layers.34.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
352
+ "model.layers.34.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
353
+ "model.layers.34.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
354
+ "model.layers.34.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
355
+ "model.layers.35.input_layernorm.weight": "model-00002-of-00002.safetensors",
356
+ "model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
357
+ "model.layers.35.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
358
+ "model.layers.35.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
359
+ "model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
360
+ "model.layers.35.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
361
+ "model.layers.35.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
362
+ "model.layers.35.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
363
+ "model.layers.35.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
364
+ "model.layers.35.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
365
+ "model.layers.35.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
366
+ "model.layers.35.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
367
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
368
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
369
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
370
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
371
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
372
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
373
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
374
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
375
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
376
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
377
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
378
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
379
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
380
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
381
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
382
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
383
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
384
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
385
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
386
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
387
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
388
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
389
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
390
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
391
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
392
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
393
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
394
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
395
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
396
+ "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
397
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
398
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
399
+ "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
400
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
401
+ "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
402
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
403
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
404
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
405
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
406
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
407
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
408
+ "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
409
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
410
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
411
+ "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
412
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
413
+ "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
414
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
415
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
416
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
417
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
418
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
419
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
420
+ "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
421
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
422
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
423
+ "model.layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
424
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
425
+ "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
426
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
427
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
428
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
429
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
430
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
431
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
432
+ "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
433
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
434
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
435
+ "model.layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
436
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
437
+ "model.layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
438
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
439
+ "model.norm.weight": "model-00002-of-00002.safetensors"
440
+ }
441
+ }
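The `weight_map` above is the lookup table that sharded-checkpoint loaders consult: each tensor name resolves to the shard file that stores it. A minimal sketch of that lookup in Python, assuming local copies of the index and shards; the `load_tensor` helper is illustrative, not part of this repo:

```python
import json
from safetensors import safe_open

# Minimal sketch, assuming the two-shard layout indexed above: resolve a
# tensor name to its shard via weight_map, then read only that tensor.
def load_tensor(name: str, index_path: str = "model.safetensors.index.json"):
    with open(index_path) as f:
        index = json.load(f)
    shard = index["weight_map"][name]  # e.g. "model-00002-of-00002.safetensors"
    with safe_open(shard, framework="pt", device="cpu") as st:
        return st.get_tensor(name)

# Layer 21 straddles the shard boundary: its mlp.down_proj sits in shard 2
# while the rest of the layer is in shard 1; the index makes that transparent.
# w = load_tensor("model.layers.21.mlp.down_proj.weight")
```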
model_info.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "zen_model": "zen-eco",
+ "base_model": "Qwen/Qwen2.5-3B-Instruct",
+ "size": "4B",
+ "type": "language",
+ "downloaded": true
+ }
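`model_info.json` is a small Zen-specific metadata file tying this checkpoint to its Qwen2.5 base. A hedged sketch of consuming it; no schema beyond the fields shown above is assumed:

```python
import json

# Illustrative read of the metadata file added above.
with open("model_info.json") as f:
    info = json.load(f)

print(f"{info['zen_model']} ({info['size']}), fine-tuned from {info['base_model']}")
```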
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85acc0ed1a93f8b0e6c803b53edf0fe4898ac19a3fff657f21020d280364a0cf
- size 11422174
+ oid sha256:c0382117ea329cdf097041132f6d735924b697924d6f6fc3945713e96ce87539
+ size 7031645
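`tokenizer.json` is stored through Git LFS, so the diff only swaps the pointer: the underlying blob changes from an 11,422,174-byte file to a 7,031,645-byte one with the new SHA-256 shown above. A small sketch for verifying a materialized copy against the pointer's oid; the local path is illustrative:

```python
import hashlib

# Sketch: recompute the LFS oid (SHA-256 of the blob) for a downloaded
# file and compare it to the pointer committed above.
def lfs_oid(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "c0382117ea329cdf097041132f6d735924b697924d6f6fc3945713e96ce87539"
assert lfs_oid("tokenizer.json") == expected, "blob does not match its LFS pointer"
```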
tokenizer_config.json CHANGED
@@ -195,13 +195,13 @@
    "<|video_pad|>"
  ],
  "bos_token": null,
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
- "extra_special_tokens": {},
- "model_max_length": 32768,
+ "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
- }
+ }
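The added `chat_template` is the standard Qwen2.5 ChatML template, including the `<tools>`/`<tool_call>` tool-calling protocol, and `model_max_length` is raised from 32768 to 131072. With `transformers`, the template is rendered by `apply_chat_template`; a minimal sketch, where the repo id is hypothetical and should be replaced with wherever this checkpoint actually lives:

```python
from transformers import AutoTokenizer

# Sketch of rendering the template above; "zenlm/zen-eco" is a hypothetical
# repo id -- point from_pretrained at a local path or the real repo instead.
tok = AutoTokenizer.from_pretrained("zenlm/zen-eco")
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Say hi."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # ends with "<|im_start|>assistant\n", ready for generation
```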