---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
tags:
- lora
- peft
- sft
- transformers
- trl
license: mit
pipeline_tag: text-generation
---
# phi3-mr-lora-fixed-v3 (LoRA adapter)

This repository contains **LoRA (PEFT) adapter weights only**; it **does not** include the base model weights. To use it, load the base checkpoint and attach this adapter on top.
## Base model

[microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
## Usage

Load the base model first, then attach the adapter with PEFT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "YOUR_USERNAME/phi3-mr-lora-fixed-v3"  # replace with the repo's actual namespace

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_4bit=True,   # requires bitsandbytes; remove for full-precision loading
    device_map="auto",
    torch_dtype="auto",
)

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id, device_map="auto")
```