https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B

#1697
by Doctor-Chad-PhD - opened

Would appreciate quants of the model, thank you!
Heads up that support was only merged 8 hours ago, so a freshly built llama.cpp will be needed.

@nicoboss can you please queue this when you update llama.cpp?

It doesn't convert even without --outtype. I just commented on the original PR: https://github.com/ggml-org/llama.cpp/pull/18543#issuecomment-3749083396
Error:

venv/bin/python convert_hf_to_gguf.py /bpool/K-EXAONE-236B-A23B --outfile=/mradermacher/tmp/quant/K-EXAONE-236B-A23B
(...)
INFO:hf-to-gguf:blk.47.ffn_up_shexp.weight,           torch.bfloat16 --> BF16, shape = {6144, 2048}
INFO:hf-to-gguf:blk.47.ffn_norm.weight,               torch.bfloat16 --> F32, shape = {6144}
INFO:hf-to-gguf:output_norm.weight,                   torch.bfloat16 --> F32, shape = {6144}
Traceback (most recent call last):
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 11437, in <module>
    main()
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 11431, in main
    model_instance.write()
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 684, in write
    self.prepare_tensors()
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 8846, in prepare_tensors
    super().prepare_tensors()
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 555, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 8843, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/apool/llama.cpp/convert_hf_to_gguf.py", line 510, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.48.input_layernorm.weight'
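To illustrate what the converter is complaining about: the log shows per-layer tensors mapped up to `blk.47.*` and then `output_norm.weight`, yet the checkpoint still contains a `model.layers.48.*` tensor, i.e. a layer index past the expected block count. A minimal sketch of that failure mode (this is a simplified, hypothetical mapper, not the actual `convert_hf_to_gguf.py` code; the `BLOCK_COUNT` value and `PATTERNS` table are assumptions for illustration):

```python
import re

# Hypothetical simplified name mapper: layers 0..47, matching the
# blk.47.* tensors that were logged just before the crash.
BLOCK_COUNT = 48

# Map known HF tensor-name patterns to GGUF-style names.
PATTERNS = {
    r"model\.layers\.(\d+)\.input_layernorm\.weight": "blk.{bid}.attn_norm.weight",
    r"model\.norm\.weight": "output_norm.weight",
}

def map_tensor_name(name: str) -> str:
    for pattern, target in PATTERNS.items():
        m = re.fullmatch(pattern, name)
        if m is None:
            continue
        bid = int(m.group(1)) if m.groups() else None
        if bid is not None and bid >= BLOCK_COUNT:
            # A layer index beyond the expected block count (e.g. an
            # extra auxiliary layer) has no mapping target.
            raise ValueError(f"Can not map tensor {name!r}")
        return target.format(bid=bid)
    raise ValueError(f"Can not map tensor {name!r}")

print(map_tensor_name("model.layers.47.input_layernorm.weight"))  # blk.47.attn_norm.weight

try:
    map_tensor_name("model.layers.48.input_layernorm.weight")
except ValueError as e:
    print(e)  # Can not map tensor 'model.layers.48.input_layernorm.weight'
```

So even though the tensor name itself follows a known pattern, an out-of-range layer index leaves it with no GGUF target, which is why the fix has to land in the converter rather than in how it is invoked.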

Thank you, it looks like they need to make a PR to fix it first.