How to use bziemba/M3-quantized-qlora-4bit with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B-Base")
model = PeftModel.from_pretrained(base_model, "bziemba/M3-quantized-qlora-4bit")
```
How to fix it?
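The original error is not shown, so the exact fix is an assumption. One common issue with adapters whose name indicates QLoRA 4-bit training is that the base model is loaded in full precision, which can cause out-of-memory or dtype-mismatch failures. A hedged sketch of loading the base model in 4-bit before attaching the adapter, using typical QLoRA settings (`nf4`, double quantization, bfloat16 compute; these are common defaults, not confirmed settings from this adapter's training run):

```python
# Sketch: load the base model in 4-bit, matching a typical QLoRA setup,
# then attach the adapter. Quantization values are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the usual QLoRA choice
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B-Base",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "bziemba/M3-quantized-qlora-4bit")
```

Loading in 4-bit requires the `bitsandbytes` package and a supported GPU; if the failure was something else (e.g. a missing adapter file or version mismatch), the error message itself would be needed to diagnose further.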