# mix_full_ret

> LoRA adapter uploaded automatically.

## Overview

- **Type:** LoRA adapter (PEFT)
- **Task type:** `CAUSAL_LM`
- **Base model:** `/home/praveen/coreset/outputs/unified_llama`
- **LoRA r:** `8`
- **LoRA alpha:** `16`

## Usage

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Read the adapter config to locate the matching base model
peft_model_id = "coreset-selection/mix_full_ret"
cfg = PeftConfig.from_pretrained(peft_model_id)
base = cfg.base_model_name_or_path

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, peft_model_id)
```

## Files

- `adapter_config.json`
- `adapter_model.bin` or `adapter_model.safetensors`

## Notes

- This repo contains only the LoRA adapter weights.
- Load it with the matching base model specified above.

_Uploaded from: `/home/praveen/coreset/outputs/unified/gd_full_retain_model`_
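For reference, `r=8` with `alpha=16` means the adapter's low-rank update is scaled by `alpha / r = 2`. A minimal numeric sketch of how a LoRA delta modifies one linear layer (shapes and values are illustrative, not this model's actual weights):

```python
import numpy as np

# LoRA hyperparameters from the Overview above.
r, alpha = 8, 16
scaling = alpha / r  # update is scaled by alpha / r = 2.0

# Illustrative shapes only; the real adapter wraps each targeted
# linear layer inside the base model.
d_out, d_in = 32, 64
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weight (stand-in)
A = rng.standard_normal((r, d_in)) * 0.01   # LoRA down-projection
B = np.zeros((d_out, r))                    # LoRA up-projection (zero-init)

# Effective weight at inference: W + (alpha / r) * (B @ A)
W_eff = W + scaling * (B @ A)
assert W_eff.shape == (d_out, d_in)
```

Because `B` is zero-initialized, the effective weight equals the base weight before any training; only the small `A`/`B` matrices are stored in this repo.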