MiniMax-M2.7-GGUF/logs/convert-MiniMax-M2.7.log
# mainline llama.cpp master@ff5ef8278
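# Note: ${SOCKET} is a shell variable selecting the NUMA node passed to both
# the CPU bind (-N) and memory bind (-m) of numactl, so the conversion runs on
# one socket's cores with node-local RAM; e.g. (example value) SOCKET=0.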
numactl -N ${SOCKET} -m ${SOCKET} \
python \
convert_hf_to_gguf.py \
--outtype bf16 \
--split-max-size 50G \
--outfile /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/ \
/mnt/data/models/MiniMaxAI/MiniMax-M2.7/
INFO:hf-to-gguf:Loading model: MiniMax-M2.7
INFO:hf-to-gguf:Model architecture: MiniMaxM2ForCausalLM
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: indexing model part 'model-00000-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00001-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00002-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00003-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00004-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00005-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00006-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00007-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00008-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00009-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00010-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00011-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00012-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00013-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00014-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00015-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00016-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00017-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00018-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00019-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00020-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00021-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00022-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00023-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00024-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00025-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00026-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00027-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00028-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00029-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00030-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00031-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00032-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00033-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00034-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00035-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00036-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00037-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00038-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00039-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00040-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00041-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00042-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00043-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00044-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00045-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00046-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00047-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00048-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00049-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00050-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00051-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00052-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00053-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00054-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00055-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00056-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00057-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00058-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00059-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00060-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00061-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00062-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00063-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00064-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00065-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00066-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00067-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00068-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00069-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00070-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00071-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00072-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00073-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00074-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00075-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00076-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00077-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00078-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00079-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00080-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00081-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00082-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00083-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00084-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00085-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00086-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00087-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00088-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00089-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00090-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00091-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00092-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00093-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00094-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00095-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00096-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00097-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00098-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00099-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00100-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00101-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00102-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00103-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00104-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00105-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00106-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00107-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00108-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00109-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00110-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00111-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00112-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00113-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00114-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00115-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00116-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00117-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00118-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00119-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00120-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00121-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00122-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00123-of-00130.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00124-of-00130.safetensors'
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> BF16, shape = {3072, 200064}
INFO:hf-to-gguf:blk.0.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.0.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.0.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.0.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.0.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.0.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.0.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
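# Annotation: blk.0 above shows the per-layer tensor group that repeats for
# every block: attention q/k/v/output (plus q/k norms) written as BF16, norm
# weights as F32, and a 256-expert MoE FFN (gate/down/up expert stacks of
# shape {3072, 1536, 256}) routed by ffn_gate_inp with a per-expert
# exp_probs_b bias. GGUF logs dimensions innermost-first, so
# token_embd {3072, 200064} reads as n_embd = 3072 by n_vocab = 200064.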
INFO:hf-to-gguf:blk.1.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.1.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.1.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.1.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.1.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.1.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.1.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.1.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.1.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.1.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.1.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.1.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.1.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.2.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.2.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.2.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.2.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.2.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.2.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.2.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.2.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.2.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.2.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.2.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.2.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.2.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.3.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.3.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.3.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.3.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.3.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.3.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.3.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.3.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.3.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.3.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.3.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.3.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.3.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.4.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.4.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.4.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.4.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.4.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.4.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.4.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.4.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.4.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.4.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.4.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.4.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.4.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.5.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.5.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.5.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.5.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.5.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.5.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.5.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.5.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.5.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.5.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.5.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.5.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.5.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.6.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.6.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.6.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.6.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.6.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.6.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.6.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.6.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.6.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.6.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.6.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.6.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.6.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.7.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.7.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.7.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.7.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.7.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.7.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.7.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.7.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.7.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.7.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.7.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.7.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.7.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.8.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.8.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.8.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.8.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.8.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.8.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.8.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.8.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.8.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.8.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.8.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.8.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.8.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.9.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.9.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.9.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.9.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.9.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.9.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.9.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.9.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.9.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.9.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.9.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.9.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.9.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.10.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.10.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.10.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.10.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.10.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.10.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.10.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.10.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.10.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.10.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.10.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.10.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.10.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.11.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.11.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.11.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.11.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.11.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.11.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.11.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.11.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.11.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.11.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.11.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.11.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.11.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.12.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.12.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.12.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.12.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.12.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.12.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.12.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.12.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.12.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.12.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.12.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.12.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.12.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.13.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.13.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.13.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.13.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.13.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.13.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.13.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.13.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.13.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.13.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.13.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.13.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.13.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.14.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.14.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.14.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.14.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.14.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.14.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.14.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.14.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.14.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.14.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.14.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.14.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.14.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.15.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.15.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.15.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.15.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.15.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.15.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.15.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.15.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.15.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.15.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.15.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.15.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.15.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.16.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.16.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.16.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.16.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.16.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.16.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.16.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.16.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.16.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.16.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.16.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.16.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.16.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.17.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.17.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.17.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.17.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.17.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.17.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.17.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.17.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.17.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.17.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.17.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.17.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.17.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.18.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.18.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.18.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.18.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.18.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.18.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.18.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.18.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.18.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.18.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.18.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.18.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.18.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.19.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.19.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.19.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.19.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.19.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.19.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.19.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.19.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.19.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.19.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.19.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.19.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.19.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.20.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.20.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.20.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.20.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.20.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.20.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.20.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.20.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.20.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.20.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.20.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.20.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.20.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.21.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.21.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.21.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.21.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.21.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.21.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.21.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.21.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.21.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.21.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.21.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.21.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.21.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.22.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.22.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.22.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.22.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.22.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.22.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.22.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.22.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.22.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.22.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.22.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.22.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.22.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.23.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.23.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.23.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.23.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.23.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.23.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.23.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.23.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.23.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.23.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.23.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.23.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.23.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.24.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.24.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.24.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.24.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.24.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.24.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.24.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.24.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.24.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.24.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.24.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.24.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.24.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.25.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.25.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.25.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.25.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.25.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.25.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.25.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.25.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.25.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.25.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.25.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.25.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.25.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.26.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.26.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.26.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.26.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.26.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.26.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.26.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.26.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.26.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.26.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.26.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.26.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.26.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.27.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.27.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.27.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.27.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.27.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.27.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.27.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.27.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.27.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.27.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.27.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.27.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.27.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.28.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.28.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.28.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.28.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.28.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.28.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.28.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.28.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.28.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.28.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.28.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.28.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.28.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.29.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.29.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.29.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.29.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.29.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.29.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.29.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.29.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.29.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.29.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.29.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.29.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.29.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.30.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.30.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.30.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.30.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.30.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.30.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.30.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.30.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.30.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.30.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.30.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.30.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.30.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.31.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.31.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.31.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.31.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.31.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.31.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.31.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.31.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.31.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.31.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.31.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.31.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.31.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.32.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.32.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.32.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.32.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.32.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.32.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.32.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.32.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.32.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.32.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.32.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.32.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.32.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.33.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.33.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.33.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.33.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.33.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.33.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.33.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.33.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.33.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.33.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.33.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.33.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.33.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.34.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.34.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.34.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.34.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.34.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.34.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.34.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.34.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.34.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.34.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.34.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.34.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.34.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.35.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.35.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.35.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.35.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.35.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.35.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.35.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.35.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.35.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.35.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.35.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.35.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.35.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.36.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.36.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.36.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.36.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.36.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.36.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.36.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.36.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.36.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.36.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.36.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.36.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.36.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.37.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.37.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.37.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.37.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.37.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.37.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.37.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.37.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.37.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.37.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.37.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.37.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.37.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.38.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.38.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.38.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.38.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.38.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.38.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.38.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.38.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.38.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.38.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.38.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.38.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.38.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.39.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.39.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.39.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.39.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.39.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.39.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.39.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.39.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.39.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.39.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.39.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.39.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.39.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.40.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.40.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.40.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.40.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.40.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.40.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.40.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.40.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.40.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.40.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.40.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.40.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.40.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.41.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.41.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.41.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.41.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.41.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.41.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.41.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.41.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.41.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.41.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.41.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.41.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.41.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.42.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.42.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.42.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.42.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.42.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.42.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.42.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.42.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.42.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.42.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.42.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.42.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.42.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.43.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.43.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.43.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.43.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.43.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.43.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.43.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.43.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.43.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.43.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.43.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.43.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.43.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.44.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.44.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.44.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.44.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.44.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.44.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.44.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.44.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.44.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.44.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.44.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.44.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.44.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.45.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.45.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.45.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.45.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.45.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.45.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.45.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.45.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.45.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.45.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.45.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.45.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.45.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.46.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.46.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.46.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.46.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.46.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.46.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.46.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.46.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.46.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.46.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.46.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.46.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.46.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.47.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.47.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.47.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.47.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.47.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.47.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.47.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.47.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.47.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.47.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.47.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.47.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.47.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.48.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.48.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.48.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.48.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.48.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.48.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.48.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.48.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.48.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.48.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.48.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.48.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.48.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.49.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.49.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.49.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.49.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.49.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.49.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.49.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.49.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.49.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.49.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.49.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.49.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.49.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.50.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.50.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.50.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.50.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.50.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.50.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.50.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.50.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.50.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.50.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.50.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.50.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.50.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.51.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.51.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.51.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.51.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.51.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.51.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.51.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.51.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.51.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.51.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.51.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.51.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.51.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.52.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.52.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.52.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.52.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.52.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.52.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.52.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.52.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.52.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.52.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.52.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.52.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.52.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.53.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.53.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.53.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.53.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.53.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.53.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.53.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.53.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.53.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.53.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.53.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.53.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.53.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.54.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.54.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.54.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.54.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.54.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.54.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.54.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.54.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.54.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.54.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.54.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.54.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.54.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.55.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.55.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.55.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.55.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.55.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.55.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.55.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.55.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.55.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.55.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.55.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.55.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.55.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.56.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.56.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.56.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.56.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.56.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.56.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.56.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.56.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.56.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.56.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.56.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.56.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.56.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.57.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.57.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.57.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.57.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.57.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.57.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.57.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.57.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.57.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.57.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.57.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.57.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.57.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.58.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.58.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.58.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.58.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.58.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.58.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.58.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.58.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.58.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.58.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.58.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.58.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.58.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.59.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.59.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.59.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.59.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.59.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.59.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.59.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.59.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.59.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.59.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.59.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.59.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.59.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.60.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.60.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.60.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.60.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.60.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.60.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.60.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.60.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.60.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.60.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.60.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.60.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.60.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.61.exp_probs_b.bias, torch.float32 --> F32, shape = {256}
INFO:hf-to-gguf:blk.61.ffn_gate_inp.weight, torch.float32 --> F32, shape = {3072, 256}
INFO:hf-to-gguf:blk.61.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.61.ffn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.61.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {1024}
INFO:hf-to-gguf:blk.61.attn_k.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.61.attn_output.weight, torch.float32 --> BF16, shape = {6144, 3072}
INFO:hf-to-gguf:blk.61.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:blk.61.attn_q.weight, torch.float32 --> BF16, shape = {3072, 6144}
INFO:hf-to-gguf:blk.61.attn_v.weight, torch.float32 --> BF16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.61.ffn_gate_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:blk.61.ffn_down_exps.weight, torch.float32 --> BF16, shape = {1536, 3072, 256}
INFO:hf-to-gguf:blk.61.ffn_up_exps.weight, torch.float32 --> BF16, shape = {3072, 1536, 256}
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> BF16, shape = {3072, 200064}
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {3072}
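# Editor's note: the listing above shows the conversion policy: 1-D norm
# weights, router gates (ffn_gate_inp) and router biases (exp_probs_b) are
# kept at F32, while the large attention and expert matrices are cast to BF16
# per the --outtype bf16 flag. A minimal sketch for spot-checking the written
# dtypes, assuming the gguf Python package's GGUFReader API (pip install
# gguf); the shard path is taken from the writer output further down this log:
python - <<'PY'
from gguf import GGUFReader

reader = GGUFReader("/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00001-of-00010.gguf")
for t in reader.tensors[:12]:
    # each ReaderTensor carries a name, a ggml tensor_type enum, and a shape
    print(f"{t.name:40s} {t.tensor_type.name:6s} {list(t.shape)}")
PY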
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 196608
INFO:hf-to-gguf:gguf: embedding length = 3072
INFO:hf-to-gguf:gguf: feed forward length = 1536
INFO:hf-to-gguf:gguf: head count = 48
INFO:hf-to-gguf:gguf: key-value head count = 8
WARNING:hf-to-gguf:Unknown RoPE type: default
INFO:hf-to-gguf:gguf: rope scaling type = NONE
INFO:hf-to-gguf:gguf: rope theta = 5000000
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: expert count = 256
INFO:hf-to-gguf:gguf: experts used count = 8
INFO:hf-to-gguf:gguf: expert score gating function = sigmoid
INFO:hf-to-gguf:gguf: file type = 32
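# Editor's note: these parameters line up with the tensor shapes logged above:
# attn_q {3072, 6144} splits into 48 heads of dim 128, attn_k/attn_v
# {3072, 1024} into 8 KV heads of dim 128 (grouped-query attention), and each
# {3072, 1536, 256} expert tensor stacks 256 per-layer experts of FFN width
# 1536, of which 8 are routed per token. A one-line sanity check:
python -c "assert 48 * 128 == 6144 and 8 * 128 == 1024"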
INFO:hf-to-gguf:Set model quantization version
INFO:hf-to-gguf:Set model tokenizer
INFO:gguf.vocab:Adding 199744 merge(s).
INFO:gguf.vocab:Setting special token type bos to 200034
INFO:gguf.vocab:Setting special token type eos to 200020
INFO:gguf.vocab:Setting special token type unk to 200021
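# Editor's note: a hedged sketch for confirming which strings sit behind the
# bos/eos/unk ids registered above, assuming the source HF tokenizer path from
# the command at the top of this log:
python - <<'PY'
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("/mnt/data/models/MiniMaxAI/MiniMax-M2.7/")
# ids 200034 / 200020 / 200021 were set as bos / eos / unk respectively
print(tok.convert_ids_to_tokens([200034, 200020, 200021]))
PY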
INFO:gguf.vocab:Setting chat_template to {# ------------- special token variables ------------- #}
{%- set toolcall_begin_token = '<minimax:tool_call>' -%}
{%- set toolcall_end_token = '</minimax:tool_call>' -%}
{#- Tool Rendering Functions ============================================== -#}
{%- macro render_tool_namespace(namespace_name, tool_list) -%}
{%- for tool in tool_list -%}
<tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
{% endfor -%}
{%- endmacro -%}
{%- macro visible_text(content) -%}
{%- if content is string -%}
{{ content }}
{%- elif content is iterable and content is not mapping -%}
{%- for item in content -%}
{%- if item is mapping and item.type == 'text' -%}
{{- item.text }}
{%- elif item is string -%}
{{- item }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{- content }}
{%- endif -%}
{%- endmacro -%}
{#- System Message Construction ============================================ -#}
{%- macro build_system_message(system_message) -%}
{%- if system_message and system_message.content -%}
{{- visible_text(system_message.content) }}
{%- else -%}
{%- if model_identity is not defined -%}
{%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.7 and you are built by MiniMax." -%}
{%- endif -%}
{{- model_identity }}
{%- endif -%}
{#- Handle current_date -#}
{%- if system_message and system_message.current_date -%}
{{- '\n' ~ 'Current date: ' + system_message.current_date }}
{%- endif -%}
{#- Handle current_location -#}
{%- if system_message and system_message.current_location -%}
{{- '\n' ~ 'Current location: ' + system_message.current_location }}
{%- endif -%}
{%- endmacro -%}
{#- Main Template Logic ================================================= -#}
{#- Extract system message (only first message if it's system) -#}
{%- set system_message = none -%}
{%- set conversation_messages = messages -%}
{%- if messages and messages[0].role == "system" -%}
{%- set system_message = messages[0] -%}
{%- set conversation_messages = messages[1:] -%}
{%- endif -%}
{#- Get the last user message turn, for interleaved thinking -#}
{%- set ns = namespace(last_user_index=-1) %}
{% for m in conversation_messages %}
{%- if m.role == 'user' %}
{% set ns.last_user_index = loop.index0 -%}
{%- endif %}
{%- endfor %}
{#- Render system message -#}
{{- ']~!b[' ~ ']~b]system' ~ '\n' }}
{{- build_system_message(system_message) }}
{#- Render tools if available -#}
{%- if tools -%}
{{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
{{- '\n' ~ '<tools>' ~ '\n' }}
{{- render_tool_namespace("functions", tools) }}
{{- '</tools>' ~ '\n\n' }}
{{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
{{- '\n' ~ toolcall_begin_token }}
<invoke name="tool-name-1">
<parameter name="param-key-1">param-value-1</parameter>
<parameter name="param-key-2">param-value-2</parameter>
...
</invoke>
{{- '\n' ~ toolcall_end_token }}
{%- endif -%}
{{- '[e~[\n' }}
{#- Render messages -#}
{%- set last_tool_call = namespace(name=none) -%}
{%- for message in conversation_messages -%}
{%- if message.role == 'assistant' -%}
{#- Only render reasoning_content if no user message follows -#}
{{- ']~b]ai' ~ '\n' }}
{%- set reasoning_content = '' %}
{%- set content = visible_text(message.content) %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
{%- set content = content.split('</think>')[-1].strip('\n') %}
{%- endif %}
{%- endif %}
{%- if reasoning_content and loop.index0 > ns.last_user_index -%}
{{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
{%- endif -%}
{%- if content -%}
{{- content }}
{%- endif -%}
{%- if message.tool_calls -%}
{{- '\n' ~ toolcall_begin_token ~ '\n' }}
{%- for tool_call in message.tool_calls -%}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<invoke name="' + tool_call.name + '">' }}
{% set _args = tool_call.arguments %}
{%- for k, v in _args.items() %}
{{- '<parameter name="' + k + '">' }}
{{- v | tojson(ensure_ascii=False) if v is not string else v }}
{{- '</parameter>' }}
{% endfor %}
{{- '</invoke>' ~ '\n' }}
{%- endfor -%}
{{- toolcall_end_token}}
{%- set last_tool_call.name = message.tool_calls[-1].name -%}
{%- else -%}
{%- set last_tool_call.name = none -%}
{%- endif -%}
{{- '[e~[' ~ '\n' }}
{%- elif message.role == 'tool' -%}
{%- if last_tool_call.name is none -%}
{{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
{%- endif -%}
{%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
{{- ']~b]tool' }}
{%- endif -%}
{%- if message.content is string -%}
{{- '\n<response>' }}
{{- message.content }}
{{- '</response>' }}
{%- else -%}
{%- for tr in message.content -%}
{{- '\n<response>' }}
{{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
{{- '\n</response>' }}
{%- endfor -%}
{%- endif -%}
{%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
{{- '[e~[\n' -}}
{%- endif -%}
{%- elif message.role == 'user' -%}
{{- ']~b]user' ~ '\n' }}
{{- visible_text(message.content) }}
{{- '[e~[' ~ '\n' }}
{%- endif -%}
{%- endfor -%}
{#- Generation prompt -#}
{%- if add_generation_prompt -%}
{{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
{%- endif -%}
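# Editor's note: the template above is easiest to exercise through the
# tokenizer's own chat-template machinery rather than raw jinja2, since
# helpers such as raise_exception and the tojson(ensure_ascii=False) filter
# are supplied by transformers at render time. A minimal sketch, again
# assuming the local HF model path from the command at the top of this log:
python - <<'PY'
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("/mnt/data/models/MiniMaxAI/MiniMax-M2.7/")
messages = [
    {"role": "system", "content": "You are a terse assistant."},
    {"role": "user", "content": "Say hi."},
]
# tokenize=False returns the rendered prompt string; add_generation_prompt
# appends the ']~b]ai' + '<think>' opener from the template's final branch
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
PY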
INFO:gguf.gguf_writer:Writing the following files:
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00001-of-00010.gguf: n_tensors = 90, total_size = 47.8G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00002-of-00010.gguf: n_tensors = 90, total_size = 49.0G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00003-of-00010.gguf: n_tensors = 80, total_size = 48.9G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00004-of-00010.gguf: n_tensors = 90, total_size = 49.0G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00005-of-00010.gguf: n_tensors = 90, total_size = 49.0G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00006-of-00010.gguf: n_tensors = 80, total_size = 48.9G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00007-of-00010.gguf: n_tensors = 90, total_size = 49.0G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00008-of-00010.gguf: n_tensors = 90, total_size = 49.0G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00009-of-00010.gguf: n_tensors = 80, total_size = 48.9G
INFO:gguf.gguf_writer:/mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00010-of-00010.gguf: n_tensors = 29, total_size = 18.3G
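# Editor's note: the ten shard sizes above sum to roughly 457G, consistent
# with the tensor shapes at 2 bytes/param for BF16: 62 layers of three
# 3072x1536x256 expert tensors dominate, plus per-layer attention and the two
# 3072x200064 embedding/output matrices (the small F32 norm and router
# tensors are negligible). A quick check:
python -c "print((62*3*3072*1536*256 + 62*(2*3072*6144 + 2*3072*1024) + 2*3072*200064) * 2 / 1e9, 'GB')"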
Shard (10/10): 100%|██████████| 18.3G/18.3G [00:32<00:00, 558Mbyte/s]
Writing: 100%|██████████| 457G/457G [13:38<00:00, 559Mbyte/s]
INFO:hf-to-gguf:Model successfully exported to /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/