Add files using upload-large-folder tool
Files added (50 binary files across two folders; sizes as reported by the diff viewer):

qwen3-4b-log-unary:
- model_layers_11_self_attn_v_proj_weight.scales (4.1 kB)
- model_layers_22_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_23_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_30_self_attn_v_proj_weight.scales (4.1 kB)
- model_layers_35_self_attn_q_norm_weight.fp16 (256 Bytes)
- model_layers_4_self_attn_v_proj_weight.scales (4.1 kB)
- model_layers_7_input_layernorm_weight.fp16 (5.12 kB)
- model_layers_7_post_attention_layernorm_weight.fp16 (5.12 kB)

qwen3-4b-thinking-unary:
- model_layers_0_mlp_up_proj_weight.scales (38.9 kB)
- model_layers_0_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_12_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_14_input_layernorm_weight.fp16 (5.12 kB)
- model_layers_14_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_14_self_attn_q_proj_weight.scales (16.4 kB)
- model_layers_17_mlp_down_proj_weight.scales (10.2 kB)
- model_layers_19_mlp_up_proj_weight.scales (38.9 kB)
- model_layers_19_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_19_self_attn_q_proj_weight.scales (16.4 kB)
- model_layers_1_mlp_up_proj_weight.scales (38.9 kB)
- model_layers_1_self_attn_q_norm_weight.fp16 (256 Bytes)
- model_layers_21_mlp_gate_proj_weight.scales (38.9 kB)
- model_layers_21_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_22_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_23_self_attn_k_norm_weight.fp16 (added as a 1-line text diff; the raw fp16 bytes are not reproducible as text)
- model_layers_23_self_attn_q_proj_weight.scales (16.4 kB)
- model_layers_23_self_attn_v_proj_weight.scales (4.1 kB)
- model_layers_24_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_24_self_attn_q_proj_weight.scales (16.4 kB)
- model_layers_25_self_attn_k_norm_weight.fp16 (256 Bytes)
- model_layers_26_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_27_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_27_self_attn_o_proj_weight.scales (10.2 kB)
- model_layers_28_mlp_down_proj_weight.scales (10.2 kB)
- model_layers_28_self_attn_k_norm_weight.fp16 (256 Bytes)
- model_layers_29_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_2_input_layernorm_weight.fp16 (5.12 kB)
- model_layers_2_self_attn_q_proj_weight.scales (16.4 kB)
- model_layers_31_self_attn_o_proj_weight.scales (10.2 kB)
- model_layers_33_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_33_self_attn_k_proj_weight.scales (4.1 kB)
- model_layers_34_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_34_self_attn_k_norm_weight.fp16 (256 Bytes)
- model_layers_35_mlp_gate_proj_weight.scales (38.9 kB)
- model_layers_35_self_attn_v_proj_weight.scales (4.1 kB)
- model_layers_3_self_attn_q_norm_weight.fp16 (256 Bytes)
- model_layers_4_self_attn_k_norm_weight.fp16 (256 Bytes)
- model_layers_4_self_attn_o_proj_weight.scales (10.2 kB)
- model_layers_7_self_attn_q_proj_weight.scales (16.4 kB)
- model_layers_8_post_attention_layernorm_weight.fp16 (5.12 kB)
- model_layers_9_self_attn_v_proj_weight.scales (4.1 kB)
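The flat file names and reported sizes carry recoverable structure. Below is a minimal sketch of how one might decode them, under assumptions the commit itself does not state: that a file stem is a dotted tensor name with "." flattened to "_", that `.fp16` files hold raw float16 values, that `.scales` files hold one float32 quantization scale per quantized output channel, and that the viewer's kB is decimal (so 5.12 kB = 5,120 bytes). Under those assumptions, 256 Bytes of fp16 is 128 values and 5.12 kB of fp16 is 2,560 values, which would line up with Qwen3's head dimension and Qwen3-4B's hidden size, though the commit does not confirm this. The `keep` list and both helper functions are hypothetical names introduced here for illustration.

```python
# Hedged sketch: decode the flattened tensor file names and infer element
# counts from raw file sizes. All format assumptions are stated above.

# Multi-word module names whose internal underscores must survive the
# underscore-to-dot conversion (assumption: this list is exhaustive here).
KEEP = ["self_attn", "q_proj", "k_proj", "v_proj", "o_proj",
        "up_proj", "down_proj", "gate_proj", "q_norm", "k_norm",
        "input_layernorm", "post_attention_layernorm"]

def tensor_name(filename: str) -> str:
    """Recover a dotted tensor name from a flattened file name, e.g.
    "model_layers_23_self_attn_k_norm_weight.fp16"
      -> "model.layers.23.self_attn.k_norm.weight"
    """
    stem = filename.rsplit(".", 1)[0]
    # Protect multi-word names with fixed-width placeholders, dot the
    # remaining underscores, then restore the protected names.
    for i, k in enumerate(KEEP):
        stem = stem.replace(k, f"<{i:02d}>")
    stem = stem.replace("_", ".")
    for i, k in enumerate(KEEP):
        stem = stem.replace(f"<{i:02d}>", k)
    return stem

def n_elements(size_bytes: int, ext: str) -> int:
    """Element count implied by a raw binary file of the given size."""
    bytes_per = {"fp16": 2, "scales": 4}[ext]  # assumption: fp32 scales
    if size_bytes % bytes_per:
        raise ValueError(f"{size_bytes} B is not a whole number of {ext} values")
    return size_bytes // bytes_per
```

For example, `n_elements(5120, "fp16")` gives 2,560 elements for each 5.12 kB layernorm file, and `n_elements(16384, "scales")` gives 4,096 scales for each 16.4 kB q_proj file, i.e. one scale per output row if the quantization is per-channel; that interpretation is a guess consistent with the sizes, not something the commit documents.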