busybisi committed · Commit 9e0009b · verified · Parent(s): a578361

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,282 +1,27 @@
- ---
- library_name: transformers
- license: mit
- base_model: Qwen/Qwen2.5-3B-Instruct
- tags:
- - text-generation
- - conversational
- - immigration-law
- - legal-assistant
- - qwen
- - lora
- ---

- # Model Card for DoloresAI-Merged

- ## Model Summary
-
- **DoloresAI-Merged** is a fine-tuned conversational AI assistant specialized in U.S. immigration law. This model is a merged version of a LoRA adapter trained on the base model `Qwen/Qwen2.5-3B-Instruct`. It provides accurate, context-aware responses to immigration-related questions and assists with form completion, case management, and legal guidance.

  ## Model Details
-
- ### Model Description
-
- DoloresAI-Merged is a merged model that combines the base Qwen2.5-3B-Instruct model with fine-tuned LoRA weights. The model has been specifically trained to understand and respond to immigration law queries, USCIS form questions, and provide legal assistance for immigrants navigating the U.S. immigration system.
-
- - **Developed by:** JustiGuide
- - **Model type:** Causal Language Model (Decoder-only)
- - **Language(s):** English (primary), with support for multilingual queries
- - **License:** MIT
- - **Finetuned from:** `Qwen/Qwen2.5-3B-Instruct`
- - **Merged from:** `JustiGuide/DoloresAI` (LoRA adapter)
-
- ### Model Architecture
-
- - **Base Model:** Qwen/Qwen2.5-3B-Instruct (3B parameters)
- - **Architecture:** Transformer-based decoder
- - **Context Length:** 32,768 tokens
- - **Model Format:** Merged (LoRA weights integrated into base model)
-
- ### Model Sources
-
- - **Repository:** https://huggingface.co/JustiGuide/DoloresAI-Merged
- - **Base Model:** https://huggingface.co/Qwen/Qwen2.5-3B-Instruct
- - **Original LoRA Adapter:** https://huggingface.co/JustiGuide/DoloresAI
-
- ## Uses
-
- ### Direct Use
-
- This model is intended for use as an immigration law assistant that can:
-
- - Answer questions about U.S. immigration law and procedures
- - Assist with USCIS form completion (I-130, I-765, I-589, I-129, N-400)
- - Provide guidance on immigration processes and requirements
- - Help users understand legal terminology and requirements
- - Support case management and document preparation
-
- ### Intended Use Cases
-
- 1. **Immigration Form Assistance:** Help users complete USCIS forms accurately
- 2. **Legal Q&A:** Answer questions about immigration law, processes, and requirements
- 3. **Case Management:** Assist with tracking immigration cases and deadlines
- 4. **Educational Support:** Provide explanations of immigration concepts and procedures
-
- ### Out-of-Scope Use
-
- This model should NOT be used for:
-
- - Providing definitive legal advice (users should consult licensed attorneys)
- - Making final legal decisions
- - Replacing professional legal counsel
- - Handling emergency legal situations
- - Providing advice on non-U.S. immigration systems
-
- ## Bias, Risks, and Limitations
-
- ### Limitations
-
- 1. **Not Legal Advice:** This model provides information and assistance but does not constitute legal advice. Users should consult licensed immigration attorneys for legal representation.
-
- 2. **Training Data Limitations:** The model's knowledge is based on training data and may not reflect the most recent changes in immigration law or policy.
-
- 3. **Context Window:** Limited to 32,768 tokens, which may not capture all relevant context for complex cases.
-
- 4. **Language:** Primarily trained on English; performance may vary for other languages.
-
- 5. **Accuracy:** While trained on immigration law data, responses should be verified with official sources and legal professionals.
-
- ### Recommendations
-
- - Always verify information with official USCIS sources
- - Consult licensed immigration attorneys for legal representation
- - Use this model as a tool to assist, not replace, professional legal services
- - Keep in mind that immigration law changes frequently
-
- ## How to Get Started with the Model
-
- ### Using Transformers
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_name = "JustiGuide/DoloresAI-Merged"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name)
-
- # Format prompt for Qwen2.5 chat template
- messages = [
- {"role": "system", "content": "You are Dolores, an immigration law assistant."},
- {"role": "user", "content": "What is an H-1B visa?"}
- ]
-
- # Apply chat template
- prompt = tokenizer.apply_chat_template(
- messages,
- tokenize=False,
- add_generation_prompt=True
- )
-
- # Generate response
- inputs = tokenizer(prompt, return_tensors="pt")
- outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
- response = tokenizer.decode(outputs[0], skip_special_tokens=True)
- print(response)
- ```
-
- ### Using Hugging Face Inference Endpoint
-
- ```python
- import requests
-
- endpoint_url = "YOUR_ENDPOINT_URL"
- headers = {
- "Authorization": "Bearer YOUR_API_KEY",
- "Content-Type": "application/json"
- }
-
- payload = {
- "inputs": "User: What is an H-1B visa?\nAssistant:",
- "parameters": {
- "max_new_tokens": 256,
- "temperature": 0.7,
- "top_p": 0.9
- }
- }
-
- response = requests.post(endpoint_url, json=payload, headers=headers)
- print(response.json())
- ```
-
- ## Training Details
-
- ### Training Data
-
- The model was fine-tuned on a custom dataset of:
- - Immigration law questions and answers
- - USCIS form instructions and examples
- - Legal terminology and definitions
- - Case management scenarios
- - Immigration process documentation
-
- **Dataset Size:** 338+ training examples (as of training)
-
- ### Training Procedure
-
- #### Preprocessing
-
- - Training data was formatted using Qwen2.5 chat template
- - System prompts included role definitions and instructions
- - Context and examples were included in training format
-
- #### Training Hyperparameters
-
- - **Training Type:** LoRA (Low-Rank Adaptation)
- - **Base Model:** Qwen/Qwen2.5-3B-Instruct
- - **LoRA Rank (r):** 16
- - **LoRA Alpha:** 32
- - **LoRA Dropout:** 0.1
- - **Target Modules:** q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- - **Learning Rate:** 2e-4 (0.0002)
- - **Batch Size:** 4
- - **Gradient Accumulation Steps:** 4
- - **Effective Batch Size:** 16
- - **Epochs:** 3
- - **Max Sequence Length:** 1024 tokens
- - **Warmup Steps:** 50
- - **Mixed Precision:** FP16
-
- #### Training Infrastructure
-
- - **Platform:** Hugging Face AutoTrain
- - **Hardware:** GPU (specific GPU type may vary)
- - **Training Time:** 1-3 hours (depending on hardware)
-
- ### Model Merging
-
- The LoRA adapter was merged with the base model to create a single, unified model file. This process:
- - Integrates LoRA weights into the base model
- - Creates a standalone model that doesn't require adapter loading
- - Improves inference stability and reduces CUDA errors
- - Maintains all fine-tuning benefits
-
- ## Evaluation
-
- ### Testing Data
-
- The model was evaluated on:
- - Immigration law Q&A accuracy
- - Form completion assistance quality
- - Legal terminology understanding
- - Response coherence and relevance
-
- ### Metrics
-
- - **Training Loss:** Decreased from ~2.5 to ~1.5-2.0
- - **Response Quality:** Improved context awareness and accuracy
- - **Form Assistance:** Accurate guidance on USCIS forms
-
- ### Results
-
- The merged model maintains the fine-tuning benefits while providing:
- - ✅ Stable inference (no CUDA errors from adapter loading)
- - ✅ Faster inference (single model file)
- - ✅ Better compatibility with inference endpoints
- - ✅ Preserved training quality
-
- ## Technical Specifications
-
- ### Model Architecture
-
- - **Architecture:** Transformer Decoder
- - **Parameters:** ~3B
- - **Layers:** Based on Qwen2.5-3B-Instruct architecture
- - **Attention Mechanism:** Multi-head self-attention
- - **Activation:** SwiGLU
-
- ### Compute Infrastructure
-
- #### Hardware
-
- - **Training:** GPU (via Hugging Face AutoTrain)
- - **Inference:** GPU recommended (T4, A10G, or A100)
- - **Minimum:** CPU inference possible but slower
-
- #### Software
-
- - **Framework:** PyTorch
- - **Transformers:** Hugging Face Transformers library
- - **LoRA:** PEFT (Parameter-Efficient Fine-Tuning)
-
- ## Citation
-
- If you use this model, please cite:
-
- ```bibtex
- @model{JustiGuide/DoloresAI-Merged,
- title={DoloresAI-Merged: Immigration Law Assistant},
- author={JustiGuide},
- year={2025},
- url={https://huggingface.co/JustiGuide/DoloresAI-Merged},
- base_model={Qwen/Qwen2.5-3B-Instruct}
- }
- ```
-
- ## Model Card Contact
-
- For questions or issues, please contact:
- - **Organization:** JustiGuide
- - **Model Repository:** https://huggingface.co/JustiGuide/DoloresAI-Merged
-
- ## Acknowledgments
-
- - Base model: Qwen/Qwen2.5-3B-Instruct by Alibaba Cloud
- - Training platform: Hugging Face AutoTrain
- - Framework: Hugging Face Transformers and PEFT
-
- ## Version History
-
- - **v1.0 (Merged):** Initial merged model release
- - Merged LoRA adapter with base model
- - Optimized for inference endpoints
- - Fixed CUDA compatibility issues

+ # DoloresAI-Merged (Fixed)
+
+ This is a fixed version of the DoloresAI merged model with the vocabulary mismatch resolved.
+
+ ## Changes
+ - Fixed the vocabulary size mismatch between the model (151936) and the tokenizer (151665)
+ - Model embeddings resized to match the tokenizer: 151665 tokens (see the sketch below)
+ - Ready for deployment on HuggingFace Inference Endpoints
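+
+ A minimal sketch of how such a resize can be done with the standard transformers API (the local path is illustrative, not part of this repo):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ path = "./dolores-merged"  # illustrative local checkout of the merged model
+
+ tokenizer = AutoTokenizer.from_pretrained(path)
+ model = AutoModelForCausalLM.from_pretrained(path)
+
+ # Shrink the embedding matrix (and tied LM head) from 151936 rows
+ # down to the tokenizer's 151665 entries.
+ model.resize_token_embeddings(len(tokenizer))
+
+ model.save_pretrained(path)
+ tokenizer.save_pretrained(path)
+ ```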

  ## Model Details
+ - Base Model: Qwen/Qwen2.5-3B-Instruct
+ - Fine-tuned for: Immigration law assistance
+ - Fixed on: 2026-01-11 00:38:12
+
+ ## Deployment
+ This model is ready to deploy on HuggingFace Inference Endpoints without CUDA errors.
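+
+ Note that this checkpoint is stored in float32 (roughly 12 GB across the three shards below), so here is a sketch of loading it in half precision to cut GPU memory, assuming bfloat16 is acceptable for inference:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "JustiGuide/DoloresAI-Merged"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ # Shards are stored in float32; casting at load time roughly halves memory.
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
+ ```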
+
+ ## Testing
+ The vocabulary sizes have been verified to match (check sketched below):
+ - Model vocab size: 151665
+ - Tokenizer vocab size: 151665
+ - Match: ✅
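+
+ A sketch of that check (the path is illustrative):
+
+ ```python
+ from transformers import AutoConfig, AutoTokenizer
+
+ path = "./dolores-merged"  # illustrative local path
+ config = AutoConfig.from_pretrained(path)
+ tokenizer = AutoTokenizer.from_pretrained(path)
+
+ # Both values should be 151665 after the fix.
+ assert config.vocab_size == len(tokenizer), (
+     f"mismatch: model={config.vocab_size}, tokenizer={len(tokenizer)}"
+ )
+ print("vocab sizes match:", config.vocab_size)
+ ```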
+
+ ## Next Steps
+ 1. Upload to HuggingFace: `huggingface-cli upload JustiGuide/DoloresAI-Merged ./dolores-merged-fixed --repo-type model` (scripted equivalent below)
+ 2. Deploy a new inference endpoint
+ 3. Update backend secrets with the new endpoint URL
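+
+ Step 1 can equivalently be scripted with huggingface_hub (a sketch; assumes a prior `huggingface-cli login` with write access to the repo):
+
+ ```python
+ from huggingface_hub import upload_folder
+
+ # Same upload as the CLI command above.
+ upload_folder(
+     repo_id="JustiGuide/DoloresAI-Merged",
+     folder_path="./dolores-merged-fixed",
+     repo_type="model",
+ )
+ ```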
config.json CHANGED
@@ -4,7 +4,7 @@
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
- "dtype": "bfloat16",
+ "dtype": "float32",
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 2048,
@@ -62,5 +62,5 @@
  "transformers_version": "4.57.3",
  "use_cache": true,
  "use_sliding_window": false,
- "vocab_size": 151936
+ "vocab_size": 151665
  }
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d8416fb083c2aa9716f973bdd94d5f8ae6be8253fb743aea93652699363b11f
+ size 4979911504
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d11fc85b6c2c71b9f26ac54dbb4b8c9737dbdcdc413e71b23b95d0bd75db2cc
+ size 4932949336
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:419889ec7da51ed6ae1d335b35ee1e0add54a5d37989130cf71e764ed7e5e05d
+ size 2428723160
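
The three files above are Git LFS pointers: each records the spec version, the SHA-256 (`oid`) of the actual shard, and its size in bytes. A sketch of verifying a downloaded shard against its pointer (file name taken from the first pointer above; the path is illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file so multi-GB shards don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

expected = "7d8416fb083c2aa9716f973bdd94d5f8ae6be8253fb743aea93652699363b11f"
assert sha256_of("model-00001-of-00003.safetensors") == expected
```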
model.safetensors.index.json CHANGED
@@ -1,442 +1,442 @@
  {
  "metadata": {
- "total_parameters": 3085938688,
- "total_size": 6171877376
  },
  "weight_map": {
- "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.27.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.28.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.28.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.28.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.28.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.28.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.28.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.28.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.33.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.34.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.input_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
- "model.layers.35.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
- "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
- "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
- "model.norm.weight": "model-00002-of-00002.safetensors"
  }
  }

  {
  "metadata": {
+ "total_parameters": 3085383680,
+ "total_size": 12341534720
  },
  "weight_map": {
+ "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
138
+ "model.layers.18.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
139
+ "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
140
+ "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
141
+ "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
142
+ "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
143
+ "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
144
+ "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
145
+ "model.layers.19.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
146
+ "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
147
+ "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
148
+ "model.layers.19.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
149
+ "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
150
+ "model.layers.19.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
151
+ "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
152
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
153
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
154
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
155
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
156
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
157
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
158
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
159
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
160
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
161
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
162
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
163
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
164
+ "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
165
+ "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
166
+ "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
167
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
168
+ "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
169
+ "model.layers.20.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
170
+ "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
171
+ "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
172
+ "model.layers.20.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
173
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
174
+ "model.layers.20.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
175
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
176
+ "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
177
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
178
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
179
+ "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
180
+ "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
181
+ "model.layers.21.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
182
+ "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
183
+ "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
184
+ "model.layers.21.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
185
+ "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
186
+ "model.layers.21.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
187
+ "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
188
+ "model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
189
+ "model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
190
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
191
+ "model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
192
+ "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
193
+ "model.layers.22.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
194
+ "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
195
+ "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
196
+ "model.layers.22.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
197
+ "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
198
+ "model.layers.22.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
199
+ "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
200
+ "model.layers.23.input_layernorm.weight": "model-00002-of-00003.safetensors",
201
+ "model.layers.23.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
202
+ "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
203
+ "model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
204
+ "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
205
+ "model.layers.23.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
206
+ "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
207
+ "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
208
+ "model.layers.23.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
209
+ "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
210
+ "model.layers.23.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
211
+ "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
212
+ "model.layers.24.input_layernorm.weight": "model-00002-of-00003.safetensors",
213
+ "model.layers.24.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
214
+ "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
215
+ "model.layers.24.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
216
+ "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
217
+ "model.layers.24.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
218
+ "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
219
+ "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
220
+ "model.layers.24.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
221
+ "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
222
+ "model.layers.24.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
223
+ "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
224
+ "model.layers.25.input_layernorm.weight": "model-00002-of-00003.safetensors",
225
+ "model.layers.25.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
226
+ "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
227
+ "model.layers.25.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
228
+ "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
229
+ "model.layers.25.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
230
+ "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
231
+ "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
232
+ "model.layers.25.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
233
+ "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
234
+ "model.layers.25.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
235
+ "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
236
+ "model.layers.26.input_layernorm.weight": "model-00002-of-00003.safetensors",
237
+ "model.layers.26.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
238
+ "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
239
+ "model.layers.26.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
240
+ "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
241
+ "model.layers.26.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
242
+ "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
243
+ "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
244
+ "model.layers.26.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
245
+ "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
246
+ "model.layers.26.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
247
+ "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
248
+ "model.layers.27.input_layernorm.weight": "model-00002-of-00003.safetensors",
249
+ "model.layers.27.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
250
+ "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
251
+ "model.layers.27.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
252
+ "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
253
+ "model.layers.27.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
254
+ "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
255
+ "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
256
+ "model.layers.27.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
257
+ "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
258
+ "model.layers.27.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
259
+ "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
260
+ "model.layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
261
+ "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
262
+ "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
263
+ "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
264
+ "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
265
+ "model.layers.28.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
266
+ "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
267
+ "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
268
+ "model.layers.28.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
269
+ "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
270
+ "model.layers.28.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
271
+ "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
272
+ "model.layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
273
+ "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
274
+ "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
275
+ "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
276
+ "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
277
+ "model.layers.29.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
278
+ "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
279
+ "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
280
+ "model.layers.29.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
281
+ "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
282
+ "model.layers.29.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
283
+ "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
284
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
285
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
286
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
287
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
288
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
289
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
290
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
291
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
292
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
293
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
294
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
295
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
296
+ "model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
297
+ "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
298
+ "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
299
+ "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
300
+ "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
301
+ "model.layers.30.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
302
+ "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
303
+ "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
304
+ "model.layers.30.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
305
+ "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
306
+ "model.layers.30.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
307
+ "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
308
+ "model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
309
+ "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
310
+ "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
311
+ "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
312
+ "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
313
+ "model.layers.31.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
314
+ "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
315
+ "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
316
+ "model.layers.31.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
317
+ "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
318
+ "model.layers.31.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
319
+ "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
320
+ "model.layers.32.input_layernorm.weight": "model-00003-of-00003.safetensors",
321
+ "model.layers.32.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
322
+ "model.layers.32.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
323
+ "model.layers.32.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
324
+ "model.layers.32.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
325
+ "model.layers.32.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
326
+ "model.layers.32.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
327
+ "model.layers.32.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
328
+ "model.layers.32.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
329
+ "model.layers.32.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
330
+ "model.layers.32.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
331
+ "model.layers.32.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
332
+ "model.layers.33.input_layernorm.weight": "model-00003-of-00003.safetensors",
333
+ "model.layers.33.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
334
+ "model.layers.33.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
335
+ "model.layers.33.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
336
+ "model.layers.33.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
337
+ "model.layers.33.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
338
+ "model.layers.33.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
339
+ "model.layers.33.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
340
+ "model.layers.33.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
341
+ "model.layers.33.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
342
+ "model.layers.33.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
343
+ "model.layers.33.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
344
+ "model.layers.34.input_layernorm.weight": "model-00003-of-00003.safetensors",
345
+ "model.layers.34.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
346
+ "model.layers.34.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
347
+ "model.layers.34.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
348
+ "model.layers.34.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
349
+ "model.layers.34.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
350
+ "model.layers.34.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
351
+ "model.layers.34.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
352
+ "model.layers.34.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
353
+ "model.layers.34.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
354
+ "model.layers.34.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
355
+ "model.layers.34.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
356
+ "model.layers.35.input_layernorm.weight": "model-00003-of-00003.safetensors",
357
+ "model.layers.35.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
358
+ "model.layers.35.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
359
+ "model.layers.35.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
360
+ "model.layers.35.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
361
+ "model.layers.35.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
362
+ "model.layers.35.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
363
+ "model.layers.35.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
364
+ "model.layers.35.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
365
+ "model.layers.35.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
366
+ "model.layers.35.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
367
+ "model.layers.35.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
368
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
369
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
370
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
371
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
372
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
373
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
374
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
375
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
376
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
377
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
378
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
379
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
380
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
381
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
382
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
383
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
384
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
385
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
386
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
387
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
388
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
389
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
390
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
391
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
392
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
393
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
394
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
395
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
396
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
397
+ "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
398
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
399
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
400
+ "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
401
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
402
+ "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
403
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
404
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
405
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
406
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
407
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
408
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
409
+ "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
410
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
411
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
412
+ "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
413
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
414
+ "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
415
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
416
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
417
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
418
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
419
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
420
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
421
+ "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
422
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
423
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
424
+ "model.layers.8.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
425
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
426
+ "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
427
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
428
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
429
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
430
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
431
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
432
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
433
+ "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
434
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
435
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
436
+ "model.layers.9.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
437
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
438
+ "model.layers.9.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
439
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
440
+ "model.norm.weight": "model-00003-of-00003.safetensors"
441
  }
442
  }
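
This `weight_map` is the standard sharded-safetensors index: each tensor name maps to the shard file that stores it, so a loader can open only the shards it needs. A minimal sketch of resolving a tensor through the index, assuming a local checkout of this repo (the `checkpoint_dir` path and example tensor name are illustrative):

```python
import json
from pathlib import Path

from safetensors import safe_open  # pip install safetensors

# Assumed location of a local checkout containing the index and shard files.
checkpoint_dir = Path("DoloresAI-Merged")

# The index maps each tensor name to the shard file that stores it.
with open(checkpoint_dir / "model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

tensor_name = "model.layers.12.self_attn.q_proj.weight"
shard_file = weight_map[tensor_name]  # -> "model-00001-of-00003.safetensors"

# Open only the shard that holds this tensor; safetensors reads it lazily.
with safe_open(str(checkpoint_dir / shard_file), framework="pt", device="cpu") as shard:
    tensor = shard.get_tensor(tensor_name)

print(tensor_name, "->", shard_file, tuple(tensor.shape))
```

In normal use you would not read the index by hand; `AutoModelForCausalLM.from_pretrained("JustiGuide/DoloresAI-Merged")` consumes it automatically to assemble the three shards into one model.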