"Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj"

#4
by MrRyukami

"Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj" I get this error when trying to use the fp8 version, how can i fix it?

I get this regardless of the Gemma version.

Comfy Org org

This was fixed in a commit a few days ago; you currently need to be on the nightly (latest master) version of ComfyUI to use these quantized text encoders.
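For a git-based install, updating to the latest master is typically enough to pick up the fix. A minimal sketch, assuming ComfyUI was cloned from the official repository and lives in a `ComfyUI` directory (paths are illustrative):

```shell
# Move into the existing ComfyUI checkout (path is an assumption)
cd ComfyUI

# Pull the latest commits from the master branch
git pull origin master

# Refresh Python dependencies in case requirements changed
pip install -r requirements.txt
```

Users of the Windows portable build can instead run the bundled `update_comfyui.bat` script, which performs the equivalent update.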
