IQ2_KS model merging failed

#4
by lan0004 - opened

I encountered an error while merging the downloaded IQ2_KS model. The output is as follows:

gguf_merge: c:\Users\nova\Step-3.5-Flash-GGUF\smol-IQ2_KS\Step-3.5-Flash-smol-IQ2_KS-00001-of-00003.gguf -> d:\Step-3.5-Flash-smol-IQ2_KS.gguf
gguf_merge: reading metadata c:\Users\nova\Step-3.5-Flash-GGUF\smol-IQ2_KS\Step-3.5-Flash-smol-IQ2_KS-00001-of-00003.gguf done
gguf_merge: reading metadata c:\Users\nova\Step-3.5-Flash-GGUF\smol-IQ2_KS\Step-3.5-Flash-smol-IQ2_KS-00002-of-00003.gguf ...gguf_init_from_file_impl: tensor 'token_embd.weight' has invalid ggml type 139 (NONE)
gguf_init_from_file_impl: failed to read tensor info

gguf_merge: failed to load input GGUF from c:\Users\nova\Step-3.5-Flash-GGUF\smol-IQ2_KS\Step-3.5-Flash-smol-IQ2_KS-00001-of-00003.gguf

Same on IQ3_KS:

gguf_init_from_file_impl: tensor 'token_embd.weight' has invalid ggml type 139 (NONE)

@lan0004 @JoeSmith245

You need to use ikawrakow/ik_llama.cpp for most of these quants, including the IQ2_KS. Check the model card here for instructions on getting started with it: https://huggingface.co/ubergarm/Step-3.5-Flash-GGUF#quick-start

You can see the issue by grepping the code like so; ik_llama.cpp has the needed type definitions:

$ cd ik_llama.cpp/gguf-py/
$ git rev-parse --short HEAD
e22b2d12
$ grep -r 139
gguf/constants.py:    IQ4_K     = 139
gguf/constants.py:    MOSTLY_IQ3_K         = 139 #except 1d tensors

If you try that on mainline llama.cpp, you'll see it is missing the new ik_llama.cpp SOTA quants:

$ cd llama.cpp/gguf-py/
$ grep -r 139
# there is nothing
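In other words, the loader looks up each tensor's ggml type id in its known-types table, and a build that predates the new quants has no entry for id 139, so it reports the type as invalid. Here's a minimal Python sketch of that failure mode; the tables and the `check_tensor_type` helper are illustrative stand-ins, not llama.cpp's actual enum or API:

```python
# Hypothetical sketch of why a GGUF loader rejects an unknown quant type id.
# These tables are illustrative only, not the real ggml enums.

# An ik_llama.cpp-style table that includes the newer SOTA quant ids:
IK_GGML_TYPES = {0: "F32", 1: "F16", 139: "IQ4_K"}

# A mainline-style table where id 139 is simply absent:
MAINLINE_GGML_TYPES = {0: "F32", 1: "F16"}

def check_tensor_type(type_id, table):
    """Mimic the tensor-info validation step: unknown ids are reported as NONE."""
    name = table.get(type_id)
    if name is None:
        return f"tensor has invalid ggml type {type_id} (NONE)"
    return f"tensor type {type_id} ({name}) ok"

print(check_tensor_type(139, IK_GGML_TYPES))       # accepted: table knows IQ4_K
print(check_tensor_type(139, MAINLINE_GGML_TYPES)) # rejected: id 139 missing
```

Same file, same bytes: only the table the binary was built with decides whether type 139 resolves, which is why the fix is running ik_llama.cpp rather than re-downloading.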

Hope that helps! Let me know if you need more help getting ik_llama.cpp running; you can find Windows builds here: https://github.com/Thireus/ik_llama.cpp/releases
