
wpu_semantic_5

This repository contains a LoRA (PEFT) adapter, uploaded automatically from a training run.

Overview

  • Type: LoRA adapter (PEFT)
  • Task type: CAUSAL_LM
  • Base model: /home/praveen/coreset/outputs/llama_3_1_8b_finetuned (a local path on the training machine; substitute the matching checkpoint when loading elsewhere)
  • LoRA r: 8
  • LoRA alpha: 16
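
For intuition, a LoRA adapter of rank r with scaling alpha replaces each targeted weight W by W + (alpha/r)·B·A, where B and A are the small trainable factors stored in this repo. With r = 8 and alpha = 16 the update is scaled by 2. A minimal NumPy sketch (the layer shapes here are hypothetical; the real adapter targets specific Llama projection matrices):

```python
import numpy as np

d_out, d_in = 64, 64        # hypothetical layer shape, for illustration only
r, alpha = 8, 16            # values from the adapter config above

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # trainable low-rank factor
B = np.zeros((d_out, r))             # LoRA initializes B to zero

scaling = alpha / r                  # = 2.0 for this adapter
W_eff = W + scaling * (B @ A)        # effective weight at inference
```

Because B starts at zero, W_eff equals W before training; fine-tuning only ever moves the weight within a rank-8 subspace.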

Usage

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "coreset-selection/wpu_semantic_5"
cfg = PeftConfig.from_pretrained(peft_model_id)

# cfg.base_model_name_or_path is a local path from the training machine;
# point `base` at the matching checkpoint (local path or Hub ID) on your system.
base = cfg.base_model_name_or_path
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, peft_model_id)

inputs = tok("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))

Files

  • adapter_config.json
  • adapter_model.bin or adapter_model.safetensors

Notes

  • This repo contains only the LoRA adapter weights, not the base model.
  • Load it together with the matching base model listed above.

Uploaded from: /home/praveen/coreset/outputs/gd_semantic_5_model