missing tensor 'blk.92.attn_norm.weight'

#3
by AliceThirty - opened

I made a GGUF out of this model, and it says "llama_model_load: error loading model: missing tensor 'blk.92.attn_norm.weight'".
I downloaded your repo twice in case the first download was corrupted, but the problem remains.

That’s by design: we disabled the MTP layer in our quant for speed. If you’re intent on making your own GGUF, you’ll need to patch the configuration JSON for the MTP layer accordingly.

Do you know how this works? I tried changing the config.json file from your repo: I changed "num_hidden_layers" from 92 to 91.
Now when I try to load the model, the error is different: llama_model_load: error loading model: missing tensor 'blk.91.nextn.eh_proj.weight'

Try this…
Edit config.json:


{
"num_hidden_layers": 92, // KEEP THIS AS-IS
"num_nextn_predict_layers": 0 // CHANGE FROM 1 TO 0
}
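The same edit can be scripted rather than done by hand. A minimal sketch (the two field names are the ones from config.json above; the local path in the comment is hypothetical):

```python
import json

def disable_mtp(config: dict) -> dict:
    """Return a copy of a config.json dict with the MTP predictor disabled."""
    patched = dict(config)
    patched["num_nextn_predict_layers"] = 0  # was 1: drop the MTP head
    # num_hidden_layers stays at 92 -- the MTP block is counted separately
    # from the regular transformer layers, so the layer count must not change
    return patched

# Usage against a local copy of the repo (path is an assumption):
# cfg = json.loads(open("GLM-4.7-PRISM/config.json").read())
# open("GLM-4.7-PRISM/config.json", "w").write(json.dumps(disable_mtp(cfg), indent=2))
```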

EDIT: I'm sorry, I didn't see the '92'; I tried with 91 instead. I'll run a new attempt with 92 and keep you informed.

I don't know... this time it crashes during the creation of the bf16 GGUF. Previously it crashed during model loading, but the conversion itself succeeded.

INFO:hf-to-gguf:blk.90.ffn_gate_inp.weight,           torch.bfloat16 --> F32, shape = {5120, 160}
INFO:hf-to-gguf:blk.90.ffn_down_shexp.weight,         torch.bfloat16 --> BF16, shape = {1536, 5120}
INFO:hf-to-gguf:blk.90.ffn_gate_shexp.weight,         torch.bfloat16 --> BF16, shape = {5120, 1536}
INFO:hf-to-gguf:blk.90.ffn_up_shexp.weight,           torch.bfloat16 --> BF16, shape = {5120, 1536}
INFO:hf-to-gguf:blk.90.post_attention_norm.weight,    torch.bfloat16 --> F32, shape = {5120}
Traceback (most recent call last):
  File "C:\Users\me\Docs\IA\llama.cpp\convert_hf_to_gguf.py", line 10832, in <module>
    main()
  File "C:\Users\me\Docs\IA\llama.cpp\convert_hf_to_gguf.py", line 10826, in main
    model_instance.write()
  File "C:\Users\me\Docs\IA\llama.cpp\convert_hf_to_gguf.py", line 680, in write
    self.prepare_tensors()
  File "C:\Users\me\Docs\IA\llama.cpp\convert_hf_to_gguf.py", line 8004, in prepare_tensors
    super().prepare_tensors()
  File "C:\Users\me\Docs\IA\llama.cpp\convert_hf_to_gguf.py", line 551, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\Docs\IA\llama.cpp\convert_hf_to_gguf.py", line 7972, in modify_tensors
    self._experts[bid][name] = data_torch
    ~~~~~~~~~~~~~^^^^^
IndexError: list index out of range

It works, thank you!

AliceThirty changed discussion status to closed
AliceThirty changed discussion status to open
Ex0bit changed discussion status to closed

Sorry to bother you again. I have a problem with the tokenizer. I don't know if you're aware of it... I'm new to this.

I compared the tokenizer of unsloth's GLM 4.7 quant with the tokenizer of the GGUF I generated from GLM 4.7 PRISM.

pip install llama-cpp-python

from llama_cpp import Llama

models = [
    "GLM-4.7-PRISM-UD-Q4_K_XL.gguf",
    "GLM-4.7-UD-Q4_K_XL.gguf",
]

for model in models:
    llm = Llama(model_path=model, vocab_only=True)

    tokens = llm.tokenize(b"</think>")
    print(tokens)

    text = llm.detokenize(tokens)
    print(text.decode())
    input("Press enter to continue")

Prism says

[522, 26779, 29]
</think>

GLM 4.7 says

[151351]
</think>

Do you have an idea? Otherwise the model works and seems truly uncensored!

This is a known quirk in GLM4 models (and an error in the configuration file I uploaded). When "special": false, llama.cpp’s tokenizer treats </think> as regular text and applies BPE splitting:
∙ </think> → ["</", "think", ">"] → token IDs [522, 26779, 29]
When "special": true, it’s recognized as a single atomic token:
∙ </think> → [151351]

Try editing BOTH files before GGUF conversion:
In tokenizer.json:
Locate the added_tokens array and change "special": false to "special": true for tokens 151350 and 151351.
In tokenizer_config.json:

  • Change "special": false → "special": true
    in added_tokens_decoder for IDs 151350 and 151351
  • Add <think> and </think> to the additional_special_tokens array
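If hand-editing the JSON feels error-prone, the same flips can be sketched in a few lines of Python (the JSON layout is the standard HF tokenizer format; the assumption that IDs 151350/151351 are the think tokens comes from this thread):

```python
THINK_IDS = {151350, 151351}  # assumed <think> and </think> in GLM 4.7's vocab

def mark_tokens_special(tokenizer_json: dict, ids=THINK_IDS) -> dict:
    """Flip "special" to true for the given IDs in tokenizer.json's added_tokens."""
    for tok in tokenizer_json.get("added_tokens", []):
        if tok.get("id") in ids:
            tok["special"] = True
    return tokenizer_json

def mark_decoder_special(tokenizer_config: dict, ids=THINK_IDS) -> dict:
    """Same flip in tokenizer_config.json's added_tokens_decoder (keys are strings)."""
    for key, tok in tokenizer_config.get("added_tokens_decoder", {}).items():
        if int(key) in ids:
            tok["special"] = True
    return tokenizer_config
```

Load each file with json.loads, run it through the matching function, and write it back before re-running the conversion.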

If it works for you, let me know and I’ll apply a patch to the configs for wider community availability

I tried your patch (I re-downloaded the two files since you updated them), then I created the GGUF again, and PRISM still tokenizes </think> as [522, 26779, 29]. This is very weird because your patch should work.

I also tried to replace those two files with the ones in https://huggingface.co/unsloth/GLM-4.7, and the problem remains. By the way, this repo and the original GLM 4.7 repo both have "special": false for thinking tokens. This is weirder than I thought.

Also, each test takes a very long time. If you know a way to change the tokenizer in the GGUF without recreating the whole 705 GB bf16 file... I'll take it

I asked ChatGPT to write me a script to transfer the tokenizer metadata from GLM-4.7 to PRISM, and now it works. But I still don't understand why your fix didn't work for me.

#!/usr/bin/env python3
"""
Transfer the tokenizer.* metadata from a reference GGUF into a target GGUF,
keeping the target file's tensors and remaining metadata intact.
"""
from gguf import GGUFReader, GGUFWriter, ReaderField
from gguf.constants import GGUFValueType

def extract_gguf_value(field: ReaderField):
    """
    Convert a ReaderField into (value, vtype, subtype)
    suitable for GGUFWriter.add_key_value().
    """
    main_type = field.types[0]

    if main_type == GGUFValueType.ARRAY:
        sub_type = field.types[-1]
        value = field.contents()
        return value, GGUFValueType.ARRAY, sub_type

    value = field.contents()
    return value, main_type, None


def copy_tokenizer_kv(src: GGUFReader, dst: GGUFWriter):
    """
    Copy all tokenizer.* KV fields from src GGUF into dst GGUF writer.
    """
    for key, field in src.fields.items():
        if key.startswith("tokenizer."):
            value, vtype, subtype = extract_gguf_value(field)
            dst.add_key_value(key, value, vtype, sub_type=subtype)


def copy_non_tokenizer_kv(src: GGUFReader, dst: GGUFWriter):
    """
    Copy all non-tokenizer KV fields (architecture, rope, etc).
    Prevents missing metadata.
    """
    for key, field in src.fields.items():
        if key.startswith("tokenizer."):
            continue
        if key.startswith("GGUF."):
            continue  # internal fields

        value, vtype, subtype = extract_gguf_value(field)
        dst.add_key_value(key, value, vtype, sub_type=subtype)

def copy_tensors(src: GGUFReader, dst: GGUFWriter):
    """
    Copy tensors losslessly from src to dst.
    Works for quantized and non-quantized tensors.
    """
    for tensor in src.tensors:
        dst.add_tensor(
            name=tensor.name,
            tensor=tensor.data,              # raw (possibly quantized) tensor data
            raw_dtype=tensor.tensor_type,    # preserve the quant type
        )


def main():
    good_path = "GLM-4.7-UD-Q4_K_XL.gguf"      # File with GOOD tokenizer
    broken_path = "GLM-4.7-PRISM-UD-Q4_K_XL.gguf"      # File with GOOD weights but BROKEN tokenizer
    out_path = "GLM-4.7-PRISM-UD-Q4_K_XL-patched.gguf"

    print("Reading reference GGUF:", good_path)
    good = GGUFReader(good_path)

    print("Reading broken GGUF:", broken_path)
    broken = GGUFReader(broken_path)

    arch_field = broken.get_field("general.architecture")
    if arch_field is None:
        raise RuntimeError("Broken GGUF missing general.architecture")

    arch = arch_field.contents()

    writer = GGUFWriter(
        path=out_path,
        arch=arch,
        endianess=broken.endianess,
    )

    # 1. Copy all non-tokenizer metadata from broken file
    copy_non_tokenizer_kv(broken, writer)

    # 2. Overwrite tokenizer metadata with reference tokenizer
    copy_tokenizer_kv(good, writer)

    # 3. Copy tensors from broken file
    copy_tensors(broken, writer)

    # 4. Write GGUF
    writer.write_header_to_file()
    writer.write_kv_data_to_file()
    writer.write_tensors_to_file()
    writer.close()

    print("✅ Fixed GGUF written to:", out_path)


if __name__ == "__main__":
    main()
