# wpu_semantic_10

> LoRA adapter uploaded automatically.

## Overview

- **Type:** LoRA adapter (PEFT)
- **Task type:** `CAUSAL_LM`
- **Base model:** `/home/praveen/coreset/outputs/llama_3_1_8b_finetuned`
- **LoRA r:** `8`
- **LoRA alpha:** `16`
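
For intuition on the two hyperparameters above: LoRA learns a low-rank weight update `B @ A` that is added to each frozen base weight, scaled by `alpha / r`; with `r = 8` and `alpha = 16` the scaling factor is 2. A minimal NumPy sketch (toy dimensions chosen for illustration, not the actual model shapes):

```python
import numpy as np

r, alpha = 8, 16
scaling = alpha / r  # 16 / 8 = 2.0

d_in, d_out = 32, 32            # toy layer dimensions for illustration
A = np.random.randn(r, d_in)    # LoRA "down" projection
B = np.zeros((d_out, r))        # LoRA "up" projection, zero-initialized

# Effective update added to the frozen base weight W; rank is at most r.
# Because B starts at zero, delta_W is zero and training starts from the
# unmodified base model.
delta_W = scaling * (B @ A)

print(scaling)        # 2.0
print(delta_W.shape)  # (32, 32)
```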

## Usage

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "coreset-selection/wpu_semantic_10"

# Resolve the base model path from the adapter config
cfg = PeftConfig.from_pretrained(peft_model_id)
base = cfg.base_model_name_or_path

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Attach the LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
```

## Files

- `adapter_config.json`
- `adapter_model.bin` or `adapter_model.safetensors`
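
For reference, `adapter_config.json` typically has the shape sketched below. The `r`, `lora_alpha`, `task_type`, and base-model values come from this card; the remaining fields (`target_modules`, `lora_dropout`) are common PEFT defaults shown for illustration only and may differ in the actual file:

```json
{
  "peft_type": "LORA",
  "task_type": "CAUSAL_LM",
  "base_model_name_or_path": "/home/praveen/coreset/outputs/llama_3_1_8b_finetuned",
  "r": 8,
  "lora_alpha": 16,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "v_proj"]
}
```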

## Notes

- This repo contains only the LoRA adapter weights, not the base model.
- Load it with the matching base model specified above.

_Uploaded from: `/home/praveen/coreset/outputs/gd_semantic_10_model`_