How to use fzkun/llama3_lora with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the 4-bit quantized Llama 3 8B Instruct base model, then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-Instruct-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "fzkun/llama3_lora")
```
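Once the adapter is loaded, inference works like any other `transformers` causal LM. A minimal generation sketch (the prompt text and generation parameters here are illustrative assumptions, not part of the model card; running it requires downloading the base model weights):

```python
from transformers import AutoTokenizer

# The tokenizer comes from the base model repository, not the adapter repo
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct-bnb-4bit")

# Example prompt -- purely illustrative
inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt")

# max_new_tokens is an assumed setting; tune it for your use case
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you want a standalone model without the PEFT wrapper, `model.merge_and_unload()` folds the LoRA weights into the base model.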