legendbl skyblanket committed on
Commit 08823b1 · 0 Parent(s)

Duplicate from skyblanket/glm5-abliterated-fp8

Co-authored-by: skyblanket <skyblanket@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
37
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,45 @@
1
+ ---
2
+ license: other
3
+ base_model: THUDM/GLM-5-FP8
4
+ tags:
5
+ - abliteration
6
+ - uncensored
7
+ - glm5
8
+ - moe
9
+ - fp8
10
+ model_type: glm-moe
11
+ ---
12
+
13
+ # GLM-5 744B Abliterated (FP8)
14
+
15
+ This doesn't work. It is an abliterated (uncensored) version of [THUDM/GLM-5-FP8](https://huggingface.co/THUDM/GLM-5-FP8) with safety guardrails removed via weight orthogonalization.
16
+
17
+ ## Method
18
+
19
+ **Abliteration** (representation engineering) was used to identify and remove the "refusal direction" from the model's residual stream:
20
+
21
+ 1. **Computed refusal directions** for all 78 layers by collecting activations on 50 harmful vs 50 harmless prompts and computing mean difference vectors
22
+ 2. **Applied weight orthogonalization** to layers 15-54 (o_proj and shared_experts.down_proj) with alpha=1.0 (a sketch of steps 1-2 follows this list)
23
+ 3. **FP8-aware processing**: Proper dequantization using block-wise scale_inv factors, abliteration in float32, and re-quantization preserving original scale factors to minimize perturbation
24
+
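+ The snippet below is an editor-added sketch of steps 1-2, not the exact script used for this release. It assumes PyTorch, per-layer residual-stream activations collected on the harmful/harmless prompt sets (`harmful_acts` and `harmless_acts` are placeholders), and weights already dequantized to float32 as described in step 3.
+
+ ```python
+ import torch
+
+ def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
+     """Mean-difference refusal direction for one layer.
+     Inputs: residual-stream activations of shape (n_prompts, hidden_size)."""
+     direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
+     return direction / direction.norm()
+
+ def orthogonalize(weight: torch.Tensor, direction: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
+     """Remove the component of the layer's output that points along `direction`.
+     `weight` is (out_features, in_features) with out_features == hidden_size,
+     e.g. o_proj.weight or mlp.shared_experts.down_proj.weight."""
+     r = direction.to(weight.dtype)
+     return weight - alpha * torch.outer(r, r @ weight)
+
+ # hypothetical per-layer usage, once the float32 weight is in hand:
+ # layer.self_attn.o_proj.weight.data.copy_(orthogonalize(layer.self_attn.o_proj.weight.data, r))
+ ```
+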
25
+ ### Technical Details
26
+
27
+ - **Architecture**: GLM-5 MoE (744B total, 40B active), 78 layers, 6144 hidden dim
28
+ - **Layers 0-2**: Dense MLP, **Layers 3-77**: MoE with FP8Expert fused kernels
29
+ - **Modified weights**: 80 weight matrices (40 o_proj + 40 shared_experts.down_proj)
30
+ - **Quantization**: FP8 E4M3 with block-wise scaling (128x128 blocks)
31
+ - **Scale preservation**: Original weight_scale_inv factors retained for minimal quantization drift (see the FP8 sketch below)
32
+
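+ The following editor-added sketch illustrates the block-wise FP8 round trip described above. It assumes PyTorch with `torch.float8_e4m3fn` support and a `weight_scale_inv` tensor holding one factor per 128x128 block, with dequantization computed as `weight * scale_inv` (the convention this checkpoint appears to use).
+
+ ```python
+ import torch
+
+ BLOCK = 128  # block-wise scaling granularity of this checkpoint
+
+ def expand_scales(scale_inv: torch.Tensor, rows: int, cols: int) -> torch.Tensor:
+     """Broadcast one scale per 128x128 block up to the full weight shape."""
+     s = scale_inv.repeat_interleave(BLOCK, dim=0)[:rows]
+     return s.repeat_interleave(BLOCK, dim=1)[:, :cols]
+
+ def dequantize_fp8(weight_fp8: torch.Tensor, scale_inv: torch.Tensor) -> torch.Tensor:
+     """FP8 E4M3 -> float32, applying the per-block scale_inv factors."""
+     rows, cols = weight_fp8.shape
+     return weight_fp8.to(torch.float32) * expand_scales(scale_inv, rows, cols)
+
+ def requantize_fp8(weight_f32: torch.Tensor, scale_inv: torch.Tensor) -> torch.Tensor:
+     """float32 -> FP8 E4M3, reusing the ORIGINAL scale_inv so unmodified blocks round-trip."""
+     rows, cols = weight_f32.shape
+     return (weight_f32 / expand_scales(scale_inv, rows, cols)).to(torch.float8_e4m3fn)
+ ```
+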
33
+ ### Hardware Used
34
+
35
+ 8x NVIDIA B200 (1.4TB VRAM) on Vast.ai
36
+
37
+ ## Usage
38
+
39
+ This model requires the same setup as the base GLM-5-FP8 model. Use `trust_remote_code=True` when loading.
40
+
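+ A minimal loading sketch (editor-added, assuming a recent `transformers` release; the repo id is taken from this upload's name, and real deployments will need multi-GPU sharding or an FP8-capable inference engine as documented for the base model):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "skyblanket/glm5-abliterated-fp8"  # assumed repo id for this upload
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     trust_remote_code=True,  # required, as noted above
+     dtype="auto",            # keep the checkpoint's FP8/bfloat16 mix
+     device_map="auto",       # shard the ~744B parameters across available GPUs
+ )
+
+ messages = [{"role": "user", "content": "Hello!"}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```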
41
+
42
+
43
+ ## Disclaimer
44
+
45
+ This model is provided for research purposes only. The removal of safety guardrails means the model may generate harmful, biased, or offensive content. Users are responsible for ensuring appropriate use.
chat_template.jinja ADDED
@@ -0,0 +1,86 @@
1
+ [gMASK]<sop>
2
+ {%- if tools -%}
3
+ <|system|>
4
+ # Tools
5
+
6
+ You may call one or more functions to assist with the user query.
7
+
8
+ You are provided with function signatures within <tools></tools> XML tags:
9
+ <tools>
10
+ {% for tool in tools %}
11
+ {{ tool | tojson(ensure_ascii=False) }}
12
+ {% endfor %}
13
+ </tools>
14
+
15
+ For each function call, output the function name and arguments within the following XML format:
16
+ <tool_call>{function-name}<arg_key>{arg-key-1}</arg_key><arg_value>{arg-value-1}</arg_value><arg_key>{arg-key-2}</arg_key><arg_value>{arg-value-2}</arg_value>...</tool_call>{%- endif -%}
17
+ {%- macro visible_text(content) -%}
18
+ {%- if content is string -%}
19
+ {{- content }}
20
+ {%- elif content is iterable and content is not mapping -%}
21
+ {%- for item in content -%}
22
+ {%- if item is mapping and item.type == 'text' -%}
23
+ {{- item.text }}
24
+ {%- elif item is string -%}
25
+ {{- item }}
26
+ {%- endif -%}
27
+ {%- endfor -%}
28
+ {%- else -%}
29
+ {{- content }}
30
+ {%- endif -%}
31
+ {%- endmacro -%}
32
+ {%- set ns = namespace(last_user_index=-1) %}
33
+ {%- for m in messages %}
34
+ {%- if m.role == 'user' %}
35
+ {% set ns.last_user_index = loop.index0 -%}
36
+ {%- endif %}
37
+ {%- endfor %}
38
+ {% for m in messages %}
39
+ {%- if m.role == 'user' -%}<|user|>{{ visible_text(m.content) }}
40
+ {%- elif m.role == 'assistant' -%}
41
+ <|assistant|>
42
+ {%- set reasoning_content = '' %}
43
+ {%- set content = visible_text(m.content) %}
44
+ {%- if m.reasoning_content is string %}
45
+ {%- set reasoning_content = m.reasoning_content %}
46
+ {%- else %}
47
+ {%- if '</think>' in content %}
48
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
49
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
50
+ {%- endif %}
51
+ {%- endif %}
52
+ {%- if ((clear_thinking is defined and not clear_thinking) or loop.index0 > ns.last_user_index) and reasoning_content -%}
53
+ {{ '<think>' + reasoning_content.strip() + '</think>'}}
54
+ {%- else -%}
55
+ {{ '</think>' }}
56
+ {%- endif -%}
57
+ {%- if content.strip() -%}
58
+ {{ content.strip() }}
59
+ {%- endif -%}
60
+ {% if m.tool_calls %}
61
+ {% for tc in m.tool_calls %}
62
+ {%- if tc.function %}
63
+ {%- set tc = tc.function %}
64
+ {%- endif %}
65
+ {{- '<tool_call>' + tc.name -}}
66
+ {% set _args = tc.arguments %}{% for k, v in _args.items() %}<arg_key>{{ k }}</arg_key><arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>{% endfor %}</tool_call>{% endfor %}
67
+ {% endif %}
68
+ {%- elif m.role == 'tool' -%}
69
+ {%- if m.content is string -%}
70
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
71
+ {{- '<|observation|>' }}
72
+ {%- endif %}
73
+ {{- '<tool_response>' }}
74
+ {{- m.content }}
75
+ {{- '</tool_response>' }}
76
+ {%- else -%}
77
+ <|observation|>{% for tr in m.content %}
78
+ <tool_response>{{ tr.output if tr.output is defined else tr }}</tool_response>{% endfor -%}
79
+ {% endif -%}
80
+ {%- elif m.role == 'system' -%}
81
+ <|system|>{{ visible_text(m.content) }}
82
+ {%- endif -%}
83
+ {%- endfor -%}
84
+ {%- if add_generation_prompt -%}
85
+ <|assistant|>{{- '</think>' if (enable_thinking is defined and not enable_thinking) else '<think>' -}}
86
+ {%- endif -%}
config.json ADDED
@@ -0,0 +1,863 @@
1
+ {
2
+ "architectures": [
3
+ "GlmMoeDsaForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 0,
8
+ "dtype": "bfloat16",
9
+ "eos_token_id": [
10
+ 154820,
11
+ 154827,
12
+ 154829
13
+ ],
14
+ "ep_size": 1,
15
+ "first_k_dense_replace": 3,
16
+ "head_dim": 64,
17
+ "hidden_act": "silu",
18
+ "hidden_size": 6144,
19
+ "index_head_dim": 128,
20
+ "index_n_heads": 32,
21
+ "index_topk": 2048,
22
+ "indexer_rope_interleave": true,
23
+ "initializer_range": 0.02,
24
+ "intermediate_size": 12288,
25
+ "kv_lora_rank": 512,
26
+ "max_position_embeddings": 202752,
27
+ "mlp_layer_types": [
28
+ "dense",
29
+ "dense",
30
+ "dense",
31
+ "sparse",
32
+ "sparse",
33
+ "sparse",
34
+ "sparse",
35
+ "sparse",
36
+ "sparse",
37
+ "sparse",
38
+ "sparse",
39
+ "sparse",
40
+ "sparse",
41
+ "sparse",
42
+ "sparse",
43
+ "sparse",
44
+ "sparse",
45
+ "sparse",
46
+ "sparse",
47
+ "sparse",
48
+ "sparse",
49
+ "sparse",
50
+ "sparse",
51
+ "sparse",
52
+ "sparse",
53
+ "sparse",
54
+ "sparse",
55
+ "sparse",
56
+ "sparse",
57
+ "sparse",
58
+ "sparse",
59
+ "sparse",
60
+ "sparse",
61
+ "sparse",
62
+ "sparse",
63
+ "sparse",
64
+ "sparse",
65
+ "sparse",
66
+ "sparse",
67
+ "sparse",
68
+ "sparse",
69
+ "sparse",
70
+ "sparse",
71
+ "sparse",
72
+ "sparse",
73
+ "sparse",
74
+ "sparse",
75
+ "sparse",
76
+ "sparse",
77
+ "sparse",
78
+ "sparse",
79
+ "sparse",
80
+ "sparse",
81
+ "sparse",
82
+ "sparse",
83
+ "sparse",
84
+ "sparse",
85
+ "sparse",
86
+ "sparse",
87
+ "sparse",
88
+ "sparse",
89
+ "sparse",
90
+ "sparse",
91
+ "sparse",
92
+ "sparse",
93
+ "sparse",
94
+ "sparse",
95
+ "sparse",
96
+ "sparse",
97
+ "sparse",
98
+ "sparse",
99
+ "sparse",
100
+ "sparse",
101
+ "sparse",
102
+ "sparse",
103
+ "sparse",
104
+ "sparse",
105
+ "sparse"
106
+ ],
107
+ "model_type": "glm_moe_dsa",
108
+ "moe_intermediate_size": 2048,
109
+ "moe_layer_freq": 1,
110
+ "n_group": 1,
111
+ "n_routed_experts": 256,
112
+ "n_shared_experts": 1,
113
+ "norm_topk_prob": true,
114
+ "num_attention_heads": 64,
115
+ "num_experts_per_tok": 8,
116
+ "num_hidden_layers": 78,
117
+ "num_key_value_heads": 64,
118
+ "num_nextn_predict_layers": 1,
119
+ "pad_token_id": 154820,
120
+ "pretraining_tp": 1,
121
+ "q_lora_rank": 2048,
122
+ "qk_head_dim": 256,
123
+ "qk_nope_head_dim": 192,
124
+ "qk_rope_head_dim": 64,
125
+ "quantization_config": {
126
+ "activation_scheme": "dynamic",
127
+ "dequantize": false,
128
+ "modules_to_not_convert": [
129
+ "lm_head",
130
+ "model.embed_tokens",
131
+ "model.layers.0.input_layernorm",
132
+ "model.layers.0.post_attention_layernorm",
133
+ "model.layers.0.self_attn.indexer.k_norm",
134
+ "model.layers.0.self_attn.indexer.k_norm.bias",
135
+ "model.layers.0.self_attn.indexers_proj",
136
+ "model.layers.0.self_attn.kv_a_layernorm",
137
+ "model.layers.0.self_attn.q_a_layernorm",
138
+ "model.layers.1.input_layernorm",
139
+ "model.layers.1.post_attention_layernorm",
140
+ "model.layers.1.self_attn.indexer.k_norm",
141
+ "model.layers.1.self_attn.indexer.k_norm.bias",
142
+ "model.layers.1.self_attn.indexers_proj",
143
+ "model.layers.1.self_attn.kv_a_layernorm",
144
+ "model.layers.1.self_attn.q_a_layernorm",
145
+ "model.layers.2.input_layernorm",
146
+ "model.layers.2.post_attention_layernorm",
147
+ "model.layers.2.self_attn.indexer.k_norm",
148
+ "model.layers.2.self_attn.indexer.k_norm.bias",
149
+ "model.layers.2.self_attn.indexers_proj",
150
+ "model.layers.2.self_attn.kv_a_layernorm",
151
+ "model.layers.2.self_attn.q_a_layernorm",
152
+ "model.layers.3.input_layernorm",
153
+ "model.layers.3.mlp.gate",
154
+ "model.layers.3.mlp.gate.e_score_correction_bias",
155
+ "model.layers.3.post_attention_layernorm",
156
+ "model.layers.3.self_attn.indexer.k_norm",
157
+ "model.layers.3.self_attn.indexer.k_norm.bias",
158
+ "model.layers.3.self_attn.indexers_proj",
159
+ "model.layers.3.self_attn.kv_a_layernorm",
160
+ "model.layers.3.self_attn.q_a_layernorm",
161
+ "model.layers.4.input_layernorm",
162
+ "model.layers.4.mlp.gate",
163
+ "model.layers.4.mlp.gate.e_score_correction_bias",
164
+ "model.layers.4.post_attention_layernorm",
165
+ "model.layers.4.self_attn.indexer.k_norm",
166
+ "model.layers.4.self_attn.indexer.k_norm.bias",
167
+ "model.layers.4.self_attn.indexers_proj",
168
+ "model.layers.4.self_attn.kv_a_layernorm",
169
+ "model.layers.4.self_attn.q_a_layernorm",
170
+ "model.layers.5.input_layernorm",
171
+ "model.layers.5.mlp.gate",
172
+ "model.layers.5.mlp.gate.e_score_correction_bias",
173
+ "model.layers.5.post_attention_layernorm",
174
+ "model.layers.5.self_attn.indexer.k_norm",
175
+ "model.layers.5.self_attn.indexer.k_norm.bias",
176
+ "model.layers.5.self_attn.indexers_proj",
177
+ "model.layers.5.self_attn.kv_a_layernorm",
178
+ "model.layers.5.self_attn.q_a_layernorm",
179
+ "model.layers.6.input_layernorm",
180
+ "model.layers.6.mlp.gate",
181
+ "model.layers.6.mlp.gate.e_score_correction_bias",
182
+ "model.layers.6.post_attention_layernorm",
183
+ "model.layers.6.self_attn.indexer.k_norm",
184
+ "model.layers.6.self_attn.indexer.k_norm.bias",
185
+ "model.layers.6.self_attn.indexers_proj",
186
+ "model.layers.6.self_attn.kv_a_layernorm",
187
+ "model.layers.6.self_attn.q_a_layernorm",
188
+ "model.layers.7.input_layernorm",
189
+ "model.layers.7.mlp.gate",
190
+ "model.layers.7.mlp.gate.e_score_correction_bias",
191
+ "model.layers.7.post_attention_layernorm",
192
+ "model.layers.7.self_attn.indexer.k_norm",
193
+ "model.layers.7.self_attn.indexer.k_norm.bias",
194
+ "model.layers.7.self_attn.indexers_proj",
195
+ "model.layers.7.self_attn.kv_a_layernorm",
196
+ "model.layers.7.self_attn.q_a_layernorm",
197
+ "model.layers.8.input_layernorm",
198
+ "model.layers.8.mlp.gate",
199
+ "model.layers.8.mlp.gate.e_score_correction_bias",
200
+ "model.layers.8.post_attention_layernorm",
201
+ "model.layers.8.self_attn.indexer.k_norm",
202
+ "model.layers.8.self_attn.indexer.k_norm.bias",
203
+ "model.layers.8.self_attn.indexers_proj",
204
+ "model.layers.8.self_attn.kv_a_layernorm",
205
+ "model.layers.8.self_attn.q_a_layernorm",
206
+ "model.layers.9.input_layernorm",
207
+ "model.layers.9.mlp.gate",
208
+ "model.layers.9.mlp.gate.e_score_correction_bias",
209
+ "model.layers.9.post_attention_layernorm",
210
+ "model.layers.9.self_attn.indexer.k_norm",
211
+ "model.layers.9.self_attn.indexer.k_norm.bias",
212
+ "model.layers.9.self_attn.indexers_proj",
213
+ "model.layers.9.self_attn.kv_a_layernorm",
214
+ "model.layers.9.self_attn.q_a_layernorm",
215
+ "model.layers.10.input_layernorm",
216
+ "model.layers.10.mlp.gate",
217
+ "model.layers.10.mlp.gate.e_score_correction_bias",
218
+ "model.layers.10.post_attention_layernorm",
219
+ "model.layers.10.self_attn.indexer.k_norm",
220
+ "model.layers.10.self_attn.indexer.k_norm.bias",
221
+ "model.layers.10.self_attn.indexers_proj",
222
+ "model.layers.10.self_attn.kv_a_layernorm",
223
+ "model.layers.10.self_attn.q_a_layernorm",
224
+ "model.layers.11.input_layernorm",
225
+ "model.layers.11.mlp.gate",
226
+ "model.layers.11.mlp.gate.e_score_correction_bias",
227
+ "model.layers.11.post_attention_layernorm",
228
+ "model.layers.11.self_attn.indexer.k_norm",
229
+ "model.layers.11.self_attn.indexer.k_norm.bias",
230
+ "model.layers.11.self_attn.indexers_proj",
231
+ "model.layers.11.self_attn.kv_a_layernorm",
232
+ "model.layers.11.self_attn.q_a_layernorm",
233
+ "model.layers.12.input_layernorm",
234
+ "model.layers.12.mlp.gate",
235
+ "model.layers.12.mlp.gate.e_score_correction_bias",
236
+ "model.layers.12.post_attention_layernorm",
237
+ "model.layers.12.self_attn.indexer.k_norm",
238
+ "model.layers.12.self_attn.indexer.k_norm.bias",
239
+ "model.layers.12.self_attn.indexers_proj",
240
+ "model.layers.12.self_attn.kv_a_layernorm",
241
+ "model.layers.12.self_attn.q_a_layernorm",
242
+ "model.layers.13.input_layernorm",
243
+ "model.layers.13.mlp.gate",
244
+ "model.layers.13.mlp.gate.e_score_correction_bias",
245
+ "model.layers.13.post_attention_layernorm",
246
+ "model.layers.13.self_attn.indexer.k_norm",
247
+ "model.layers.13.self_attn.indexer.k_norm.bias",
248
+ "model.layers.13.self_attn.indexers_proj",
249
+ "model.layers.13.self_attn.kv_a_layernorm",
250
+ "model.layers.13.self_attn.q_a_layernorm",
251
+ "model.layers.14.input_layernorm",
252
+ "model.layers.14.mlp.gate",
253
+ "model.layers.14.mlp.gate.e_score_correction_bias",
254
+ "model.layers.14.post_attention_layernorm",
255
+ "model.layers.14.self_attn.indexer.k_norm",
256
+ "model.layers.14.self_attn.indexer.k_norm.bias",
257
+ "model.layers.14.self_attn.indexers_proj",
258
+ "model.layers.14.self_attn.kv_a_layernorm",
259
+ "model.layers.14.self_attn.q_a_layernorm",
260
+ "model.layers.15.input_layernorm",
261
+ "model.layers.15.mlp.gate",
262
+ "model.layers.15.mlp.gate.e_score_correction_bias",
263
+ "model.layers.15.post_attention_layernorm",
264
+ "model.layers.15.self_attn.indexer.k_norm",
265
+ "model.layers.15.self_attn.indexer.k_norm.bias",
266
+ "model.layers.15.self_attn.indexers_proj",
267
+ "model.layers.15.self_attn.kv_a_layernorm",
268
+ "model.layers.15.self_attn.q_a_layernorm",
269
+ "model.layers.16.input_layernorm",
270
+ "model.layers.16.mlp.gate",
271
+ "model.layers.16.mlp.gate.e_score_correction_bias",
272
+ "model.layers.16.post_attention_layernorm",
273
+ "model.layers.16.self_attn.indexer.k_norm",
274
+ "model.layers.16.self_attn.indexer.k_norm.bias",
275
+ "model.layers.16.self_attn.indexers_proj",
276
+ "model.layers.16.self_attn.kv_a_layernorm",
277
+ "model.layers.16.self_attn.q_a_layernorm",
278
+ "model.layers.17.input_layernorm",
279
+ "model.layers.17.mlp.gate",
280
+ "model.layers.17.mlp.gate.e_score_correction_bias",
281
+ "model.layers.17.post_attention_layernorm",
282
+ "model.layers.17.self_attn.indexer.k_norm",
283
+ "model.layers.17.self_attn.indexer.k_norm.bias",
284
+ "model.layers.17.self_attn.indexers_proj",
285
+ "model.layers.17.self_attn.kv_a_layernorm",
286
+ "model.layers.17.self_attn.q_a_layernorm",
287
+ "model.layers.18.input_layernorm",
288
+ "model.layers.18.mlp.gate",
289
+ "model.layers.18.mlp.gate.e_score_correction_bias",
290
+ "model.layers.18.post_attention_layernorm",
291
+ "model.layers.18.self_attn.indexer.k_norm",
292
+ "model.layers.18.self_attn.indexer.k_norm.bias",
293
+ "model.layers.18.self_attn.indexers_proj",
294
+ "model.layers.18.self_attn.kv_a_layernorm",
295
+ "model.layers.18.self_attn.q_a_layernorm",
296
+ "model.layers.19.input_layernorm",
297
+ "model.layers.19.mlp.gate",
298
+ "model.layers.19.mlp.gate.e_score_correction_bias",
299
+ "model.layers.19.post_attention_layernorm",
300
+ "model.layers.19.self_attn.indexer.k_norm",
301
+ "model.layers.19.self_attn.indexer.k_norm.bias",
302
+ "model.layers.19.self_attn.indexers_proj",
303
+ "model.layers.19.self_attn.kv_a_layernorm",
304
+ "model.layers.19.self_attn.q_a_layernorm",
305
+ "model.layers.20.input_layernorm",
306
+ "model.layers.20.mlp.gate",
307
+ "model.layers.20.mlp.gate.e_score_correction_bias",
308
+ "model.layers.20.post_attention_layernorm",
309
+ "model.layers.20.self_attn.indexer.k_norm",
310
+ "model.layers.20.self_attn.indexer.k_norm.bias",
311
+ "model.layers.20.self_attn.indexers_proj",
312
+ "model.layers.20.self_attn.kv_a_layernorm",
313
+ "model.layers.20.self_attn.q_a_layernorm",
314
+ "model.layers.21.input_layernorm",
315
+ "model.layers.21.mlp.gate",
316
+ "model.layers.21.mlp.gate.e_score_correction_bias",
317
+ "model.layers.21.post_attention_layernorm",
318
+ "model.layers.21.self_attn.indexer.k_norm",
319
+ "model.layers.21.self_attn.indexer.k_norm.bias",
320
+ "model.layers.21.self_attn.indexers_proj",
321
+ "model.layers.21.self_attn.kv_a_layernorm",
322
+ "model.layers.21.self_attn.q_a_layernorm",
323
+ "model.layers.22.input_layernorm",
324
+ "model.layers.22.mlp.gate",
325
+ "model.layers.22.mlp.gate.e_score_correction_bias",
326
+ "model.layers.22.post_attention_layernorm",
327
+ "model.layers.22.self_attn.indexer.k_norm",
328
+ "model.layers.22.self_attn.indexer.k_norm.bias",
329
+ "model.layers.22.self_attn.indexers_proj",
330
+ "model.layers.22.self_attn.kv_a_layernorm",
331
+ "model.layers.22.self_attn.q_a_layernorm",
332
+ "model.layers.23.input_layernorm",
333
+ "model.layers.23.mlp.gate",
334
+ "model.layers.23.mlp.gate.e_score_correction_bias",
335
+ "model.layers.23.post_attention_layernorm",
336
+ "model.layers.23.self_attn.indexer.k_norm",
337
+ "model.layers.23.self_attn.indexer.k_norm.bias",
338
+ "model.layers.23.self_attn.indexers_proj",
339
+ "model.layers.23.self_attn.kv_a_layernorm",
340
+ "model.layers.23.self_attn.q_a_layernorm",
341
+ "model.layers.24.input_layernorm",
342
+ "model.layers.24.mlp.gate",
343
+ "model.layers.24.mlp.gate.e_score_correction_bias",
344
+ "model.layers.24.post_attention_layernorm",
345
+ "model.layers.24.self_attn.indexer.k_norm",
346
+ "model.layers.24.self_attn.indexer.k_norm.bias",
347
+ "model.layers.24.self_attn.indexers_proj",
348
+ "model.layers.24.self_attn.kv_a_layernorm",
349
+ "model.layers.24.self_attn.q_a_layernorm",
350
+ "model.layers.25.input_layernorm",
351
+ "model.layers.25.mlp.gate",
352
+ "model.layers.25.mlp.gate.e_score_correction_bias",
353
+ "model.layers.25.post_attention_layernorm",
354
+ "model.layers.25.self_attn.indexer.k_norm",
355
+ "model.layers.25.self_attn.indexer.k_norm.bias",
356
+ "model.layers.25.self_attn.indexers_proj",
357
+ "model.layers.25.self_attn.kv_a_layernorm",
358
+ "model.layers.25.self_attn.q_a_layernorm",
359
+ "model.layers.26.input_layernorm",
360
+ "model.layers.26.mlp.gate",
361
+ "model.layers.26.mlp.gate.e_score_correction_bias",
362
+ "model.layers.26.post_attention_layernorm",
363
+ "model.layers.26.self_attn.indexer.k_norm",
364
+ "model.layers.26.self_attn.indexer.k_norm.bias",
365
+ "model.layers.26.self_attn.indexers_proj",
366
+ "model.layers.26.self_attn.kv_a_layernorm",
367
+ "model.layers.26.self_attn.q_a_layernorm",
368
+ "model.layers.27.input_layernorm",
369
+ "model.layers.27.mlp.gate",
370
+ "model.layers.27.mlp.gate.e_score_correction_bias",
371
+ "model.layers.27.post_attention_layernorm",
372
+ "model.layers.27.self_attn.indexer.k_norm",
373
+ "model.layers.27.self_attn.indexer.k_norm.bias",
374
+ "model.layers.27.self_attn.indexers_proj",
375
+ "model.layers.27.self_attn.kv_a_layernorm",
376
+ "model.layers.27.self_attn.q_a_layernorm",
377
+ "model.layers.28.input_layernorm",
378
+ "model.layers.28.mlp.gate",
379
+ "model.layers.28.mlp.gate.e_score_correction_bias",
380
+ "model.layers.28.post_attention_layernorm",
381
+ "model.layers.28.self_attn.indexer.k_norm",
382
+ "model.layers.28.self_attn.indexer.k_norm.bias",
383
+ "model.layers.28.self_attn.indexers_proj",
384
+ "model.layers.28.self_attn.kv_a_layernorm",
385
+ "model.layers.28.self_attn.q_a_layernorm",
386
+ "model.layers.29.input_layernorm",
387
+ "model.layers.29.mlp.gate",
388
+ "model.layers.29.mlp.gate.e_score_correction_bias",
389
+ "model.layers.29.post_attention_layernorm",
390
+ "model.layers.29.self_attn.indexer.k_norm",
391
+ "model.layers.29.self_attn.indexer.k_norm.bias",
392
+ "model.layers.29.self_attn.indexers_proj",
393
+ "model.layers.29.self_attn.kv_a_layernorm",
394
+ "model.layers.29.self_attn.q_a_layernorm",
395
+ "model.layers.30.input_layernorm",
396
+ "model.layers.30.mlp.gate",
397
+ "model.layers.30.mlp.gate.e_score_correction_bias",
398
+ "model.layers.30.post_attention_layernorm",
399
+ "model.layers.30.self_attn.indexer.k_norm",
400
+ "model.layers.30.self_attn.indexer.k_norm.bias",
401
+ "model.layers.30.self_attn.indexers_proj",
402
+ "model.layers.30.self_attn.kv_a_layernorm",
403
+ "model.layers.30.self_attn.q_a_layernorm",
404
+ "model.layers.31.input_layernorm",
405
+ "model.layers.31.mlp.gate",
406
+ "model.layers.31.mlp.gate.e_score_correction_bias",
407
+ "model.layers.31.post_attention_layernorm",
408
+ "model.layers.31.self_attn.indexer.k_norm",
409
+ "model.layers.31.self_attn.indexer.k_norm.bias",
410
+ "model.layers.31.self_attn.indexers_proj",
411
+ "model.layers.31.self_attn.kv_a_layernorm",
412
+ "model.layers.31.self_attn.q_a_layernorm",
413
+ "model.layers.32.input_layernorm",
414
+ "model.layers.32.mlp.gate",
415
+ "model.layers.32.mlp.gate.e_score_correction_bias",
416
+ "model.layers.32.post_attention_layernorm",
417
+ "model.layers.32.self_attn.indexer.k_norm",
418
+ "model.layers.32.self_attn.indexer.k_norm.bias",
419
+ "model.layers.32.self_attn.indexers_proj",
420
+ "model.layers.32.self_attn.kv_a_layernorm",
421
+ "model.layers.32.self_attn.q_a_layernorm",
422
+ "model.layers.33.input_layernorm",
423
+ "model.layers.33.mlp.gate",
424
+ "model.layers.33.mlp.gate.e_score_correction_bias",
425
+ "model.layers.33.post_attention_layernorm",
426
+ "model.layers.33.self_attn.indexer.k_norm",
427
+ "model.layers.33.self_attn.indexer.k_norm.bias",
428
+ "model.layers.33.self_attn.indexers_proj",
429
+ "model.layers.33.self_attn.kv_a_layernorm",
430
+ "model.layers.33.self_attn.q_a_layernorm",
431
+ "model.layers.34.input_layernorm",
432
+ "model.layers.34.mlp.gate",
433
+ "model.layers.34.mlp.gate.e_score_correction_bias",
434
+ "model.layers.34.post_attention_layernorm",
435
+ "model.layers.34.self_attn.indexer.k_norm",
436
+ "model.layers.34.self_attn.indexer.k_norm.bias",
437
+ "model.layers.34.self_attn.indexers_proj",
438
+ "model.layers.34.self_attn.kv_a_layernorm",
439
+ "model.layers.34.self_attn.q_a_layernorm",
440
+ "model.layers.35.input_layernorm",
441
+ "model.layers.35.mlp.gate",
442
+ "model.layers.35.mlp.gate.e_score_correction_bias",
443
+ "model.layers.35.post_attention_layernorm",
444
+ "model.layers.35.self_attn.indexer.k_norm",
445
+ "model.layers.35.self_attn.indexer.k_norm.bias",
446
+ "model.layers.35.self_attn.indexers_proj",
447
+ "model.layers.35.self_attn.kv_a_layernorm",
448
+ "model.layers.35.self_attn.q_a_layernorm",
449
+ "model.layers.36.input_layernorm",
450
+ "model.layers.36.mlp.gate",
451
+ "model.layers.36.mlp.gate.e_score_correction_bias",
452
+ "model.layers.36.post_attention_layernorm",
453
+ "model.layers.36.self_attn.indexer.k_norm",
454
+ "model.layers.36.self_attn.indexer.k_norm.bias",
455
+ "model.layers.36.self_attn.indexers_proj",
456
+ "model.layers.36.self_attn.kv_a_layernorm",
457
+ "model.layers.36.self_attn.q_a_layernorm",
458
+ "model.layers.37.input_layernorm",
459
+ "model.layers.37.mlp.gate",
460
+ "model.layers.37.mlp.gate.e_score_correction_bias",
461
+ "model.layers.37.post_attention_layernorm",
462
+ "model.layers.37.self_attn.indexer.k_norm",
463
+ "model.layers.37.self_attn.indexer.k_norm.bias",
464
+ "model.layers.37.self_attn.indexers_proj",
465
+ "model.layers.37.self_attn.kv_a_layernorm",
466
+ "model.layers.37.self_attn.q_a_layernorm",
467
+ "model.layers.38.input_layernorm",
468
+ "model.layers.38.mlp.gate",
469
+ "model.layers.38.mlp.gate.e_score_correction_bias",
470
+ "model.layers.38.post_attention_layernorm",
471
+ "model.layers.38.self_attn.indexer.k_norm",
472
+ "model.layers.38.self_attn.indexer.k_norm.bias",
473
+ "model.layers.38.self_attn.indexers_proj",
474
+ "model.layers.38.self_attn.kv_a_layernorm",
475
+ "model.layers.38.self_attn.q_a_layernorm",
476
+ "model.layers.39.input_layernorm",
477
+ "model.layers.39.mlp.gate",
478
+ "model.layers.39.mlp.gate.e_score_correction_bias",
479
+ "model.layers.39.post_attention_layernorm",
480
+ "model.layers.39.self_attn.indexer.k_norm",
481
+ "model.layers.39.self_attn.indexer.k_norm.bias",
482
+ "model.layers.39.self_attn.indexers_proj",
483
+ "model.layers.39.self_attn.kv_a_layernorm",
484
+ "model.layers.39.self_attn.q_a_layernorm",
485
+ "model.layers.40.input_layernorm",
486
+ "model.layers.40.mlp.gate",
487
+ "model.layers.40.mlp.gate.e_score_correction_bias",
488
+ "model.layers.40.post_attention_layernorm",
489
+ "model.layers.40.self_attn.indexer.k_norm",
490
+ "model.layers.40.self_attn.indexer.k_norm.bias",
491
+ "model.layers.40.self_attn.indexers_proj",
492
+ "model.layers.40.self_attn.kv_a_layernorm",
493
+ "model.layers.40.self_attn.q_a_layernorm",
494
+ "model.layers.41.input_layernorm",
495
+ "model.layers.41.mlp.gate",
496
+ "model.layers.41.mlp.gate.e_score_correction_bias",
497
+ "model.layers.41.post_attention_layernorm",
498
+ "model.layers.41.self_attn.indexer.k_norm",
499
+ "model.layers.41.self_attn.indexer.k_norm.bias",
500
+ "model.layers.41.self_attn.indexers_proj",
501
+ "model.layers.41.self_attn.kv_a_layernorm",
502
+ "model.layers.41.self_attn.q_a_layernorm",
503
+ "model.layers.42.input_layernorm",
504
+ "model.layers.42.mlp.gate",
505
+ "model.layers.42.mlp.gate.e_score_correction_bias",
506
+ "model.layers.42.post_attention_layernorm",
507
+ "model.layers.42.self_attn.indexer.k_norm",
508
+ "model.layers.42.self_attn.indexer.k_norm.bias",
509
+ "model.layers.42.self_attn.indexers_proj",
510
+ "model.layers.42.self_attn.kv_a_layernorm",
511
+ "model.layers.42.self_attn.q_a_layernorm",
512
+ "model.layers.43.input_layernorm",
513
+ "model.layers.43.mlp.gate",
514
+ "model.layers.43.mlp.gate.e_score_correction_bias",
515
+ "model.layers.43.post_attention_layernorm",
516
+ "model.layers.43.self_attn.indexer.k_norm",
517
+ "model.layers.43.self_attn.indexer.k_norm.bias",
518
+ "model.layers.43.self_attn.indexers_proj",
519
+ "model.layers.43.self_attn.kv_a_layernorm",
520
+ "model.layers.43.self_attn.q_a_layernorm",
521
+ "model.layers.44.input_layernorm",
522
+ "model.layers.44.mlp.gate",
523
+ "model.layers.44.mlp.gate.e_score_correction_bias",
524
+ "model.layers.44.post_attention_layernorm",
525
+ "model.layers.44.self_attn.indexer.k_norm",
526
+ "model.layers.44.self_attn.indexer.k_norm.bias",
527
+ "model.layers.44.self_attn.indexers_proj",
528
+ "model.layers.44.self_attn.kv_a_layernorm",
529
+ "model.layers.44.self_attn.q_a_layernorm",
530
+ "model.layers.45.input_layernorm",
531
+ "model.layers.45.mlp.gate",
532
+ "model.layers.45.mlp.gate.e_score_correction_bias",
533
+ "model.layers.45.post_attention_layernorm",
534
+ "model.layers.45.self_attn.indexer.k_norm",
535
+ "model.layers.45.self_attn.indexer.k_norm.bias",
536
+ "model.layers.45.self_attn.indexers_proj",
537
+ "model.layers.45.self_attn.kv_a_layernorm",
538
+ "model.layers.45.self_attn.q_a_layernorm",
539
+ "model.layers.46.input_layernorm",
540
+ "model.layers.46.mlp.gate",
541
+ "model.layers.46.mlp.gate.e_score_correction_bias",
542
+ "model.layers.46.post_attention_layernorm",
543
+ "model.layers.46.self_attn.indexer.k_norm",
544
+ "model.layers.46.self_attn.indexer.k_norm.bias",
545
+ "model.layers.46.self_attn.indexers_proj",
546
+ "model.layers.46.self_attn.kv_a_layernorm",
547
+ "model.layers.46.self_attn.q_a_layernorm",
548
+ "model.layers.47.input_layernorm",
549
+ "model.layers.47.mlp.gate",
550
+ "model.layers.47.mlp.gate.e_score_correction_bias",
551
+ "model.layers.47.post_attention_layernorm",
552
+ "model.layers.47.self_attn.indexer.k_norm",
553
+ "model.layers.47.self_attn.indexer.k_norm.bias",
554
+ "model.layers.47.self_attn.indexers_proj",
555
+ "model.layers.47.self_attn.kv_a_layernorm",
556
+ "model.layers.47.self_attn.q_a_layernorm",
557
+ "model.layers.48.input_layernorm",
558
+ "model.layers.48.mlp.gate",
559
+ "model.layers.48.mlp.gate.e_score_correction_bias",
560
+ "model.layers.48.post_attention_layernorm",
561
+ "model.layers.48.self_attn.indexer.k_norm",
562
+ "model.layers.48.self_attn.indexer.k_norm.bias",
563
+ "model.layers.48.self_attn.indexers_proj",
564
+ "model.layers.48.self_attn.kv_a_layernorm",
565
+ "model.layers.48.self_attn.q_a_layernorm",
566
+ "model.layers.49.input_layernorm",
567
+ "model.layers.49.mlp.gate",
568
+ "model.layers.49.mlp.gate.e_score_correction_bias",
569
+ "model.layers.49.post_attention_layernorm",
570
+ "model.layers.49.self_attn.indexer.k_norm",
571
+ "model.layers.49.self_attn.indexer.k_norm.bias",
572
+ "model.layers.49.self_attn.indexers_proj",
573
+ "model.layers.49.self_attn.kv_a_layernorm",
574
+ "model.layers.49.self_attn.q_a_layernorm",
575
+ "model.layers.50.input_layernorm",
576
+ "model.layers.50.mlp.gate",
577
+ "model.layers.50.mlp.gate.e_score_correction_bias",
578
+ "model.layers.50.post_attention_layernorm",
579
+ "model.layers.50.self_attn.indexer.k_norm",
580
+ "model.layers.50.self_attn.indexer.k_norm.bias",
581
+ "model.layers.50.self_attn.indexers_proj",
582
+ "model.layers.50.self_attn.kv_a_layernorm",
583
+ "model.layers.50.self_attn.q_a_layernorm",
584
+ "model.layers.51.input_layernorm",
585
+ "model.layers.51.mlp.gate",
586
+ "model.layers.51.mlp.gate.e_score_correction_bias",
587
+ "model.layers.51.post_attention_layernorm",
588
+ "model.layers.51.self_attn.indexer.k_norm",
589
+ "model.layers.51.self_attn.indexer.k_norm.bias",
590
+ "model.layers.51.self_attn.indexers_proj",
591
+ "model.layers.51.self_attn.kv_a_layernorm",
592
+ "model.layers.51.self_attn.q_a_layernorm",
593
+ "model.layers.52.input_layernorm",
594
+ "model.layers.52.mlp.gate",
595
+ "model.layers.52.mlp.gate.e_score_correction_bias",
596
+ "model.layers.52.post_attention_layernorm",
597
+ "model.layers.52.self_attn.indexer.k_norm",
598
+ "model.layers.52.self_attn.indexer.k_norm.bias",
599
+ "model.layers.52.self_attn.indexers_proj",
600
+ "model.layers.52.self_attn.kv_a_layernorm",
601
+ "model.layers.52.self_attn.q_a_layernorm",
602
+ "model.layers.53.input_layernorm",
603
+ "model.layers.53.mlp.gate",
604
+ "model.layers.53.mlp.gate.e_score_correction_bias",
605
+ "model.layers.53.post_attention_layernorm",
606
+ "model.layers.53.self_attn.indexer.k_norm",
607
+ "model.layers.53.self_attn.indexer.k_norm.bias",
608
+ "model.layers.53.self_attn.indexers_proj",
609
+ "model.layers.53.self_attn.kv_a_layernorm",
610
+ "model.layers.53.self_attn.q_a_layernorm",
611
+ "model.layers.54.input_layernorm",
612
+ "model.layers.54.mlp.gate",
613
+ "model.layers.54.mlp.gate.e_score_correction_bias",
614
+ "model.layers.54.post_attention_layernorm",
615
+ "model.layers.54.self_attn.indexer.k_norm",
616
+ "model.layers.54.self_attn.indexer.k_norm.bias",
617
+ "model.layers.54.self_attn.indexers_proj",
618
+ "model.layers.54.self_attn.kv_a_layernorm",
619
+ "model.layers.54.self_attn.q_a_layernorm",
620
+ "model.layers.55.input_layernorm",
621
+ "model.layers.55.mlp.gate",
622
+ "model.layers.55.mlp.gate.e_score_correction_bias",
623
+ "model.layers.55.post_attention_layernorm",
624
+ "model.layers.55.self_attn.indexer.k_norm",
625
+ "model.layers.55.self_attn.indexer.k_norm.bias",
626
+ "model.layers.55.self_attn.indexers_proj",
627
+ "model.layers.55.self_attn.kv_a_layernorm",
628
+ "model.layers.55.self_attn.q_a_layernorm",
629
+ "model.layers.56.input_layernorm",
630
+ "model.layers.56.mlp.gate",
631
+ "model.layers.56.mlp.gate.e_score_correction_bias",
632
+ "model.layers.56.post_attention_layernorm",
633
+ "model.layers.56.self_attn.indexer.k_norm",
634
+ "model.layers.56.self_attn.indexer.k_norm.bias",
635
+ "model.layers.56.self_attn.indexers_proj",
636
+ "model.layers.56.self_attn.kv_a_layernorm",
637
+ "model.layers.56.self_attn.q_a_layernorm",
638
+ "model.layers.57.input_layernorm",
639
+ "model.layers.57.mlp.gate",
640
+ "model.layers.57.mlp.gate.e_score_correction_bias",
641
+ "model.layers.57.post_attention_layernorm",
642
+ "model.layers.57.self_attn.indexer.k_norm",
643
+ "model.layers.57.self_attn.indexer.k_norm.bias",
644
+ "model.layers.57.self_attn.indexers_proj",
645
+ "model.layers.57.self_attn.kv_a_layernorm",
646
+ "model.layers.57.self_attn.q_a_layernorm",
647
+ "model.layers.58.input_layernorm",
648
+ "model.layers.58.mlp.gate",
649
+ "model.layers.58.mlp.gate.e_score_correction_bias",
650
+ "model.layers.58.post_attention_layernorm",
651
+ "model.layers.58.self_attn.indexer.k_norm",
652
+ "model.layers.58.self_attn.indexer.k_norm.bias",
653
+ "model.layers.58.self_attn.indexers_proj",
654
+ "model.layers.58.self_attn.kv_a_layernorm",
655
+ "model.layers.58.self_attn.q_a_layernorm",
656
+ "model.layers.59.input_layernorm",
657
+ "model.layers.59.mlp.gate",
658
+ "model.layers.59.mlp.gate.e_score_correction_bias",
659
+ "model.layers.59.post_attention_layernorm",
660
+ "model.layers.59.self_attn.indexer.k_norm",
661
+ "model.layers.59.self_attn.indexer.k_norm.bias",
662
+ "model.layers.59.self_attn.indexers_proj",
663
+ "model.layers.59.self_attn.kv_a_layernorm",
664
+ "model.layers.59.self_attn.q_a_layernorm",
665
+ "model.layers.60.input_layernorm",
666
+ "model.layers.60.mlp.gate",
667
+ "model.layers.60.mlp.gate.e_score_correction_bias",
668
+ "model.layers.60.post_attention_layernorm",
669
+ "model.layers.60.self_attn.indexer.k_norm",
670
+ "model.layers.60.self_attn.indexer.k_norm.bias",
671
+ "model.layers.60.self_attn.indexers_proj",
672
+ "model.layers.60.self_attn.kv_a_layernorm",
673
+ "model.layers.60.self_attn.q_a_layernorm",
674
+ "model.layers.61.input_layernorm",
675
+ "model.layers.61.mlp.gate",
676
+ "model.layers.61.mlp.gate.e_score_correction_bias",
677
+ "model.layers.61.post_attention_layernorm",
678
+ "model.layers.61.self_attn.indexer.k_norm",
679
+ "model.layers.61.self_attn.indexer.k_norm.bias",
680
+ "model.layers.61.self_attn.indexers_proj",
681
+ "model.layers.61.self_attn.kv_a_layernorm",
682
+ "model.layers.61.self_attn.q_a_layernorm",
683
+ "model.layers.62.input_layernorm",
684
+ "model.layers.62.mlp.gate",
685
+ "model.layers.62.mlp.gate.e_score_correction_bias",
686
+ "model.layers.62.post_attention_layernorm",
687
+ "model.layers.62.self_attn.indexer.k_norm",
688
+ "model.layers.62.self_attn.indexer.k_norm.bias",
689
+ "model.layers.62.self_attn.indexers_proj",
690
+ "model.layers.62.self_attn.kv_a_layernorm",
691
+ "model.layers.62.self_attn.q_a_layernorm",
692
+ "model.layers.63.input_layernorm",
693
+ "model.layers.63.mlp.gate",
694
+ "model.layers.63.mlp.gate.e_score_correction_bias",
695
+ "model.layers.63.post_attention_layernorm",
696
+ "model.layers.63.self_attn.indexer.k_norm",
697
+ "model.layers.63.self_attn.indexer.k_norm.bias",
698
+ "model.layers.63.self_attn.indexers_proj",
699
+ "model.layers.63.self_attn.kv_a_layernorm",
700
+ "model.layers.63.self_attn.q_a_layernorm",
701
+ "model.layers.64.input_layernorm",
702
+ "model.layers.64.mlp.gate",
703
+ "model.layers.64.mlp.gate.e_score_correction_bias",
704
+ "model.layers.64.post_attention_layernorm",
705
+ "model.layers.64.self_attn.indexer.k_norm",
706
+ "model.layers.64.self_attn.indexer.k_norm.bias",
707
+ "model.layers.64.self_attn.indexers_proj",
708
+ "model.layers.64.self_attn.kv_a_layernorm",
709
+ "model.layers.64.self_attn.q_a_layernorm",
710
+ "model.layers.65.input_layernorm",
711
+ "model.layers.65.mlp.gate",
712
+ "model.layers.65.mlp.gate.e_score_correction_bias",
713
+ "model.layers.65.post_attention_layernorm",
714
+ "model.layers.65.self_attn.indexer.k_norm",
715
+ "model.layers.65.self_attn.indexer.k_norm.bias",
716
+ "model.layers.65.self_attn.indexers_proj",
717
+ "model.layers.65.self_attn.kv_a_layernorm",
718
+ "model.layers.65.self_attn.q_a_layernorm",
719
+ "model.layers.66.input_layernorm",
720
+ "model.layers.66.mlp.gate",
721
+ "model.layers.66.mlp.gate.e_score_correction_bias",
722
+ "model.layers.66.post_attention_layernorm",
723
+ "model.layers.66.self_attn.indexer.k_norm",
724
+ "model.layers.66.self_attn.indexer.k_norm.bias",
725
+ "model.layers.66.self_attn.indexers_proj",
726
+ "model.layers.66.self_attn.kv_a_layernorm",
727
+ "model.layers.66.self_attn.q_a_layernorm",
728
+ "model.layers.67.input_layernorm",
729
+ "model.layers.67.mlp.gate",
730
+ "model.layers.67.mlp.gate.e_score_correction_bias",
731
+ "model.layers.67.post_attention_layernorm",
732
+ "model.layers.67.self_attn.indexer.k_norm",
733
+ "model.layers.67.self_attn.indexer.k_norm.bias",
734
+ "model.layers.67.self_attn.indexers_proj",
735
+ "model.layers.67.self_attn.kv_a_layernorm",
736
+ "model.layers.67.self_attn.q_a_layernorm",
737
+ "model.layers.68.input_layernorm",
738
+ "model.layers.68.mlp.gate",
739
+ "model.layers.68.mlp.gate.e_score_correction_bias",
740
+ "model.layers.68.post_attention_layernorm",
741
+ "model.layers.68.self_attn.indexer.k_norm",
742
+ "model.layers.68.self_attn.indexer.k_norm.bias",
743
+ "model.layers.68.self_attn.indexers_proj",
744
+ "model.layers.68.self_attn.kv_a_layernorm",
745
+ "model.layers.68.self_attn.q_a_layernorm",
746
+ "model.layers.69.input_layernorm",
747
+ "model.layers.69.mlp.gate",
748
+ "model.layers.69.mlp.gate.e_score_correction_bias",
749
+ "model.layers.69.post_attention_layernorm",
750
+ "model.layers.69.self_attn.indexer.k_norm",
751
+ "model.layers.69.self_attn.indexer.k_norm.bias",
752
+ "model.layers.69.self_attn.indexers_proj",
753
+ "model.layers.69.self_attn.kv_a_layernorm",
754
+ "model.layers.69.self_attn.q_a_layernorm",
755
+ "model.layers.70.input_layernorm",
756
+ "model.layers.70.mlp.gate",
757
+ "model.layers.70.mlp.gate.e_score_correction_bias",
758
+ "model.layers.70.post_attention_layernorm",
759
+ "model.layers.70.self_attn.indexer.k_norm",
760
+ "model.layers.70.self_attn.indexer.k_norm.bias",
761
+ "model.layers.70.self_attn.indexers_proj",
762
+ "model.layers.70.self_attn.kv_a_layernorm",
763
+ "model.layers.70.self_attn.q_a_layernorm",
764
+ "model.layers.71.input_layernorm",
765
+ "model.layers.71.mlp.gate",
766
+ "model.layers.71.mlp.gate.e_score_correction_bias",
767
+ "model.layers.71.post_attention_layernorm",
768
+ "model.layers.71.self_attn.indexer.k_norm",
769
+ "model.layers.71.self_attn.indexer.k_norm.bias",
770
+ "model.layers.71.self_attn.indexers_proj",
771
+ "model.layers.71.self_attn.kv_a_layernorm",
772
+ "model.layers.71.self_attn.q_a_layernorm",
773
+ "model.layers.72.input_layernorm",
774
+ "model.layers.72.mlp.gate",
775
+ "model.layers.72.mlp.gate.e_score_correction_bias",
776
+ "model.layers.72.post_attention_layernorm",
777
+ "model.layers.72.self_attn.indexer.k_norm",
778
+ "model.layers.72.self_attn.indexer.k_norm.bias",
779
+ "model.layers.72.self_attn.indexers_proj",
780
+ "model.layers.72.self_attn.kv_a_layernorm",
781
+ "model.layers.72.self_attn.q_a_layernorm",
782
+ "model.layers.73.input_layernorm",
783
+ "model.layers.73.mlp.gate",
784
+ "model.layers.73.mlp.gate.e_score_correction_bias",
785
+ "model.layers.73.post_attention_layernorm",
786
+ "model.layers.73.self_attn.indexer.k_norm",
787
+ "model.layers.73.self_attn.indexer.k_norm.bias",
788
+ "model.layers.73.self_attn.indexers_proj",
789
+ "model.layers.73.self_attn.kv_a_layernorm",
790
+ "model.layers.73.self_attn.q_a_layernorm",
791
+ "model.layers.74.input_layernorm",
792
+ "model.layers.74.mlp.gate",
793
+ "model.layers.74.mlp.gate.e_score_correction_bias",
794
+ "model.layers.74.post_attention_layernorm",
795
+ "model.layers.74.self_attn.indexer.k_norm",
796
+ "model.layers.74.self_attn.indexer.k_norm.bias",
797
+ "model.layers.74.self_attn.indexers_proj",
798
+ "model.layers.74.self_attn.kv_a_layernorm",
799
+ "model.layers.74.self_attn.q_a_layernorm",
800
+ "model.layers.75.input_layernorm",
801
+ "model.layers.75.mlp.gate",
802
+ "model.layers.75.mlp.gate.e_score_correction_bias",
803
+ "model.layers.75.post_attention_layernorm",
804
+ "model.layers.75.self_attn.indexer.k_norm",
805
+ "model.layers.75.self_attn.indexer.k_norm.bias",
806
+ "model.layers.75.self_attn.indexers_proj",
807
+ "model.layers.75.self_attn.kv_a_layernorm",
808
+ "model.layers.75.self_attn.q_a_layernorm",
809
+ "model.layers.76.input_layernorm",
810
+ "model.layers.76.mlp.gate",
811
+ "model.layers.76.mlp.gate.e_score_correction_bias",
812
+ "model.layers.76.post_attention_layernorm",
813
+ "model.layers.76.self_attn.indexer.k_norm",
814
+ "model.layers.76.self_attn.indexer.k_norm.bias",
815
+ "model.layers.76.self_attn.indexers_proj",
816
+ "model.layers.76.self_attn.kv_a_layernorm",
817
+ "model.layers.76.self_attn.q_a_layernorm",
818
+ "model.layers.77.input_layernorm",
819
+ "model.layers.77.mlp.gate",
820
+ "model.layers.77.mlp.gate.e_score_correction_bias",
821
+ "model.layers.77.post_attention_layernorm",
822
+ "model.layers.77.self_attn.indexer.k_norm",
823
+ "model.layers.77.self_attn.indexer.k_norm.bias",
824
+ "model.layers.77.self_attn.indexers_proj",
825
+ "model.layers.77.self_attn.kv_a_layernorm",
826
+ "model.layers.77.self_attn.q_a_layernorm",
827
+ "model.layers.78.eh_proj",
828
+ "model.layers.78.enorm",
829
+ "model.layers.78.hnorm",
830
+ "model.layers.78.input_layernorm",
831
+ "model.layers.78.mlp.gate",
832
+ "model.layers.78.mlp.gate.e_score_correction_bias",
833
+ "model.layers.78.post_attention_layernorm",
834
+ "model.layers.78.self_attn.indexer.k_norm",
835
+ "model.layers.78.self_attn.indexer.k_norm.bias",
836
+ "model.layers.78.self_attn.indexers_proj",
837
+ "model.layers.78.self_attn.kv_a_layernorm",
838
+ "model.layers.78.self_attn.q_a_layernorm",
839
+ "model.layers.78.shared_head.norm",
840
+ "model.norm"
841
+ ],
842
+ "quant_method": "fp8",
843
+ "weight_block_size": [
844
+ 128,
845
+ 128
846
+ ]
847
+ },
848
+ "rms_norm_eps": 1e-05,
849
+ "rope_interleave": true,
850
+ "rope_parameters": {
851
+ "rope_theta": 1000000,
852
+ "rope_type": "default"
853
+ },
854
+ "routed_scaling_factor": 2.5,
855
+ "scoring_func": "sigmoid",
856
+ "tie_word_embeddings": false,
857
+ "topk_group": 1,
858
+ "topk_method": "noaux_tc",
859
+ "transformers_version": "5.3.0.dev0",
860
+ "use_cache": true,
861
+ "v_head_dim": 256,
862
+ "vocab_size": 154880
863
+ }
generation_config.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "do_sample": true,
4
+ "eos_token_id": [
5
+ 154820,
6
+ 154827,
7
+ 154829
8
+ ],
9
+ "pad_token_id": 154820,
10
+ "temperature": 1.0,
11
+ "top_p": 0.95,
12
+ "transformers_version": "5.3.0.dev0"
13
+ }
model-00001-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:416021f585df721f620359a57d1d9e3175d833bb98ecefdc9a395700b95d404c
3
+ size 47759253472
model-00002-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:733e45e7a88ce29c33c8620c8ca4113a0904ecd3afa6dfa2016e9c0d3b08047d
3
+ size 49409000576
model-00003-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f9373370f7993e066da2b8bf65b8506b8957691294b8e7ecb357d3dd82bae39
3
+ size 49409004760
model-00004-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d94625e1d4a1b2c0365c9288f8ef8a6513439502b67e8de9940a6f473803b49
3
+ size 49409004760
model-00005-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99f0e114376499c9ca927c37a5b1170989e782b89d51ffc8f91a0763b48b3745
3
+ size 49409004760
model-00006-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0cba6ff4f1b23dea839b67c2828c4894a12d8eebaf5498a2d9f9181ea2c27d9e
3
+ size 49409004760
model-00007-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca35ba9d4ec228a47a3a34c4d88b84e7ddd101f4ba6ae8b8190d243b06b8be88
3
+ size 49409004760
model-00008-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14b10760034afae4684d2dc4555d4f0b176055c9dc963e09cf739f17319e05ff
3
+ size 49409004760
model-00009-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:73ff38608e6e3ded29149089f3e76de32d2d879a1b3819135b1fc3d6d308a0df
3
+ size 49409004760
model-00010-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:225655e2e240a4b17ea6b09a66ae74e4bbb800b79e0cc5127cc2868b75ea2c85
3
+ size 49409004760
model-00011-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1dec06942660a0e1f9c257f01c121c2b53cdbb597f01f73fe67d8600878abfe2
3
+ size 49409004760
model-00012-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:96d4016626e94d9190669cbf674b5e89574aec3871223327e5db2d3f28bf5c56
3
+ size 49409004760
model-00013-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:73bd4342291705287e57abe9adfb7c608193c1e2f0531c0235e15386c445ca4a
3
+ size 49409004760
model-00014-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e15507743d6db1be01fa9328cd2f0c7ed37e9dd9e870ec1a568ca3f000a45a9b
3
+ size 49409004760
model-00015-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3bc919a81816214f3f5befc58af054336a9fa6eb328b33db7a228e976c515f20
3
+ size 49409004760
model-00016-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a025b72eee610d77ca0d8ff54a6ace589aec7189f186a55643942bdd4cdd26d9
3
+ size 6659721496
model.safetensors.index.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7884e881184fa865c89a622f56a5435c7122bf8e4329c448573fa0f87426c3ff
3
+ size 11245948
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:19e773648cb4e65de8660ea6365e10acca112d42a854923df93db4a6f333a82d
3
+ size 20217442
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "backend": "tokenizers",
3
+ "clean_up_tokenization_spaces": false,
4
+ "do_lower_case": false,
5
+ "eos_token": "<|endoftext|>",
6
+ "extra_special_tokens": [
7
+ "<|endoftext|>",
8
+ "[MASK]",
9
+ "[gMASK]",
10
+ "[sMASK]",
11
+ "<sop>",
12
+ "<eop>",
13
+ "<|system|>",
14
+ "<|user|>",
15
+ "<|assistant|>",
16
+ "<|observation|>",
17
+ "<|begin_of_image|>",
18
+ "<|end_of_image|>",
19
+ "<|begin_of_video|>",
20
+ "<|end_of_video|>",
21
+ "<|begin_of_audio|>",
22
+ "<|end_of_audio|>",
23
+ "<|begin_of_transcription|>",
24
+ "<|end_of_transcription|>"
25
+ ],
26
+ "is_local": true,
27
+ "model_max_length": 202752,
28
+ "model_specific_special_tokens": {},
29
+ "pad_token": "<|endoftext|>",
30
+ "padding_side": "left",
31
+ "remove_space": false,
32
+ "tokenizer_class": "TokenizersBackend"
33
+ }