OpenTransformer committed on
Commit 273663e · verified · 1 Parent(s): cc7ab1e

Add files using upload-large-folder tool

Files changed (20)
  1. deepseek-r1-1.5b-unary/model_layers_0_self_attn_v_proj_weight.sign +0 -0
  2. deepseek-r1-1.5b-unary/model_layers_19_mlp_up_proj_weight.scales +0 -0
  3. deepseek-r1-1.5b-unary/model_layers_22_post_attention_layernorm_weight.fp16 +0 -0
  4. deepseek-r1-1.5b-unary/model_layers_24_self_attn_k_proj_weight.scales +0 -0
  5. deepseek-r1-1.5b-unary/model_layers_24_self_attn_v_proj_weight.sign +0 -0
  6. deepseek-r1-1.5b-unary/model_layers_27_self_attn_o_proj_weight.scales +0 -0
  7. deepseek-r1-1.5b-unary/model_layers_4_self_attn_o_proj_weight.scales +0 -0
  8. qwen3-4b-thinking-unary/model_layers_10_self_attn_k_norm_weight.fp16 +1 -0
  9. qwen3-4b-thinking-unary/model_layers_10_self_attn_k_proj_weight.scales +0 -0
  10. qwen3-4b-thinking-unary/model_layers_16_mlp_down_proj_weight.scales +0 -0
  11. qwen3-4b-thinking-unary/model_layers_18_input_layernorm_weight.fp16 +0 -0
  12. qwen3-4b-thinking-unary/model_layers_19_self_attn_q_norm_weight.fp16 +0 -0
  13. qwen3-4b-thinking-unary/model_layers_21_mlp_up_proj_weight.scales +0 -0
  14. qwen3-4b-thinking-unary/model_layers_26_input_layernorm_weight.fp16 +0 -0
  15. qwen3-4b-thinking-unary/model_layers_26_mlp_gate_proj_weight.scales +0 -0
  16. qwen3-4b-thinking-unary/model_layers_27_self_attn_q_norm_weight.fp16 +0 -0
  17. qwen3-4b-thinking-unary/model_layers_28_self_attn_o_proj_weight.scales +0 -0
  18. qwen3-4b-thinking-unary/model_layers_29_self_attn_q_proj_weight.scales +0 -0
  19. qwen3-4b-thinking-unary/model_layers_2_mlp_up_proj_weight.scales +0 -0
  20. qwen3-4b-thinking-unary/model_layers_3_input_layernorm_weight.fp16 +0 -0
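The file extensions in this commit suggest a sign/scale split for the quantized projection weights: a `.sign` file holding one bit per weight, a matching `.scales` file holding per-group scale factors, and `.fp16` files for small norm vectors kept at half precision. The commit itself documents none of this, so the following is only a minimal sketch under those assumptions; the group size, dtypes, and function names (`pack_signs`, `quantize`, `dequantize`) are all hypothetical:

```python
import numpy as np

# Assumed (undocumented) layout: <name>.sign packs one sign bit per weight
# (1 = negative), and <name>.scales stores one fp16 magnitude per group.
GROUP = 32  # assumed group size, chosen for illustration only

def pack_signs(w: np.ndarray) -> bytes:
    """Pack the sign bit of each weight (1 = negative) into a byte stream."""
    bits = (w.ravel() < 0).astype(np.uint8)
    return np.packbits(bits).tobytes()

def quantize(w: np.ndarray):
    """Split weights into packed sign bits plus per-group fp16 scales."""
    flat = w.ravel()
    scales = np.abs(flat).reshape(-1, GROUP).mean(axis=1).astype(np.float16)
    return pack_signs(w), scales

def dequantize(sign_bytes: bytes, scales: np.ndarray, n: int) -> np.ndarray:
    """Reconstruct w ≈ ±scale from the sign stream and the group scales."""
    bits = np.unpackbits(np.frombuffer(sign_bytes, dtype=np.uint8))[:n]
    signs = np.where(bits == 1, -1.0, 1.0)
    groups = signs.reshape(-1, GROUP) * scales[:, None].astype(np.float32)
    return groups.ravel()
```

As a sanity check on the assumption, the 49.2 kB `.sign` files above hold about 393,216 bits, which is consistent with one sign bit per element of a 256 × 1536 `v_proj` matrix; this is an inference from the sizes, not something the commit states.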
deepseek-r1-1.5b-unary/model_layers_0_self_attn_v_proj_weight.sign ADDED
Binary file (49.2 kB)

deepseek-r1-1.5b-unary/model_layers_19_mlp_up_proj_weight.scales ADDED
Binary file (35.8 kB)

deepseek-r1-1.5b-unary/model_layers_22_post_attention_layernorm_weight.fp16 ADDED
Binary file (3.07 kB)

deepseek-r1-1.5b-unary/model_layers_24_self_attn_k_proj_weight.scales ADDED
Binary file (1.02 kB)

deepseek-r1-1.5b-unary/model_layers_24_self_attn_v_proj_weight.sign ADDED
Binary file (49.2 kB)

deepseek-r1-1.5b-unary/model_layers_27_self_attn_o_proj_weight.scales ADDED
Binary file (6.14 kB)

deepseek-r1-1.5b-unary/model_layers_4_self_attn_o_proj_weight.scales ADDED
Binary file (6.14 kB)
qwen3-4b-thinking-unary/model_layers_10_self_attn_k_norm_weight.fp16 ADDED
(binary fp16 data rendered as unreadable text in the diff; content omitted)
qwen3-4b-thinking-unary/model_layers_10_self_attn_k_proj_weight.scales ADDED
Binary file (4.1 kB)

qwen3-4b-thinking-unary/model_layers_16_mlp_down_proj_weight.scales ADDED
Binary file (10.2 kB)

qwen3-4b-thinking-unary/model_layers_18_input_layernorm_weight.fp16 ADDED
Binary file (5.12 kB)

qwen3-4b-thinking-unary/model_layers_19_self_attn_q_norm_weight.fp16 ADDED
Binary file (256 Bytes)

qwen3-4b-thinking-unary/model_layers_21_mlp_up_proj_weight.scales ADDED
Binary file (38.9 kB)

qwen3-4b-thinking-unary/model_layers_26_input_layernorm_weight.fp16 ADDED
Binary file (5.12 kB)

qwen3-4b-thinking-unary/model_layers_26_mlp_gate_proj_weight.scales ADDED
Binary file (38.9 kB)

qwen3-4b-thinking-unary/model_layers_27_self_attn_q_norm_weight.fp16 ADDED
Binary file (256 Bytes)

qwen3-4b-thinking-unary/model_layers_28_self_attn_o_proj_weight.scales ADDED
Binary file (10.2 kB)

qwen3-4b-thinking-unary/model_layers_29_self_attn_q_proj_weight.scales ADDED
Binary file (16.4 kB)

qwen3-4b-thinking-unary/model_layers_2_mlp_up_proj_weight.scales ADDED
Binary file (38.9 kB)

qwen3-4b-thinking-unary/model_layers_3_input_layernorm_weight.fp16 ADDED
Binary file (5.12 kB)