Fix: add .model after language_model in quantization ignore/exclude_modules

#5
by zhiyucheng - opened

In both `config.json` (`quantization_config.ignore`) and `hf_quant_config.json` (`quantization.exclude_modules`), all entries starting with `language_model.` have been updated to `language_model.model.` so they correctly reference the submodule path.

For example:

  • `language_model.layers.*.self_attn*` → `language_model.model.layers.*.self_attn*`
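The rewrite above can be sketched as a small helper. This is a minimal illustration (the `fix_ignore_entries` name and the sample patterns are hypothetical, not from the PR); it prepends `model.` only when an entry starts with `language_model.` and does not already use the full submodule path:

```python
PREFIX_OLD = "language_model."
PREFIX_NEW = "language_model.model."

def fix_ignore_entries(entries):
    """Rewrite quantization ignore/exclude patterns to use the
    language_model.model.* submodule path (hypothetical helper)."""
    fixed = []
    for entry in entries:
        if entry.startswith(PREFIX_OLD) and not entry.startswith(PREFIX_NEW):
            # Replace the old prefix with the full submodule path.
            fixed.append(PREFIX_NEW + entry[len(PREFIX_OLD):])
        else:
            # Entries already correct, or unrelated (e.g. lm_head), pass through.
            fixed.append(entry)
    return fixed

# Example: one pattern needing the fix, one already fixed, one unrelated.
patterns = [
    "language_model.layers.*.self_attn*",
    "language_model.model.embed_tokens",
    "lm_head",
]
print(fix_ignore_entries(patterns))
```

The same helper could be applied to both the `quantization_config.ignore` list in `config.json` and the `quantization.exclude_modules` list in `hf_quant_config.json`.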
zhiyucheng changed pull request status to merged
